Blog

  • Fostering breakthrough AI innovation through customer-back engineering

    Fostering breakthrough AI innovation through customer-back engineering

    Despite years of digitization, organizations capture less than one-third of the value expected from digital investments, according to McKinsey research. That’s because most big companies begin with technological capabilities and bolt applications onto them, rather than starting with customer needs and working backward to technology solutions. Not prioritizing the customer can create fragmented solutions, disjointed customer experiences, and, ultimately, failed transformations.

    Organizations that achieve outsized results from AI flip the script. They adopt a “customer-back engineering” mindset, putting customers at the heart of technology transformation.

    It’s a strategy in which products and services are developed with the customer experience foremost in mind, including customers’ challenges, needs, and expectations. Product development teams then work backward in an agile way to determine the steps necessary to design and build solutions that deliver the desired experience.

    “When you get your engineers closer to customers, you get a lot more sideways innovation,” says Ashish Agrawal, managing vice president of business cards and payments tech at Capital One. “That leads to a multiplier effect, because engineers can approach a problem from a different dimension that can be unique to the sales or product perspective.”

    The case for customer-centricity in engineering

    Engineers are problem-solvers by nature, says Agrawal. When they hear about challenges customers are experiencing, or how they are using products and services in the real world, they can devise ways to efficiently address customer needs, since they are naturally closer to systems and data than many other teams across the company.

    “Fostering a customer-centric culture has a motivational effect on engineers when they actually start seeing how the core changes they’re making, or the features they’re adding, are having a direct impact on the lives of customers,” says Agrawal.

    It also takes discipline. Agrawal explains that Capital One has set a goal for every engineer in his organization to establish several touchpoints with customers throughout the year in different forms, including:

    • Digital empathy sessions to observe user journeys and identify where users hit friction
    • Embedded customer support for periods of time to deepen understanding of servicing needs
    • Engineering ride-alongs, in which engineers join customer success, sales, and support staff on calls or on-site visits
    • Hackathon competitions to build solutions around real customer problems

    The AI opportunities with customer-centricity

    “The biggest challenge engineers within large companies face is a lack of direct access to customers,” says Agrawal. “This can make it harder for technologists to work with customers to identify problems and innovate solutions.”

    AI has accelerated the challenges as well as the opportunities. The lifecycle of launching products has become significantly faster. But the good news is that engineers are closer to the data that feeds into AI, so they can more rapidly apply AI-informed data techniques to solve customer problems.

    Agrawal outlines a recent scenario: in customer service, conversations can be instantly summarized to give an agent context on the member’s original request and the remaining action points. Agentic AI can also answer pointed follow-up questions about the interaction that would otherwise require a human agent to read through the entire thread.
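
    To make the hand-off concrete, here is a minimal sketch of that summarization step, assuming a generic text-in, text-out LLM client. The complete() callable and the two-part output format are illustrative assumptions, not details of Capital One’s system:

        from dataclasses import dataclass

        @dataclass
        class HandoffSummary:
            original_request: str
            open_action_items: list[str]

        def summarize_for_agent(thread: list[str], complete) -> HandoffSummary:
            # complete() stands in for whatever text-in, text-out LLM call a
            # given stack provides; it is an assumed helper, not a real API.
            prompt = (
                "Summarize this customer service thread.\n"
                "Line 1: the customer's original request.\n"
                "Then: one open action item per line.\n\n" + "\n".join(thread)
            )
            lines = [l for l in complete(prompt).strip().splitlines() if l.strip()]
            return HandoffSummary(
                original_request=lines[0],
                open_action_items=[l.strip("- ") for l in lines[1:]],
            )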

    “A solution would have been a lot harder in an ecosystem without a lot of high-quality data,” says Agrawal. “But when you combine a rich data ecosystem with agentic tools, you move from incremental fixes to high-velocity transformation.”

    Agrawal says that by investing in AI data and tools and focusing on rapid experimentation, companies can accelerate the cycle of deploying solutions. Teams learn that if they meet customer needs and iterate on a wider range of solutions much faster, the entire innovation cycle speeds up.

    For example, Capital One used customer insights to build a state-of-the-art, multi-agent AI framework called Chat Concierge to enhance the customer experience for car buyers and dealers. In a single conversation, Chat Concierge can perform tasks like comparing vehicles to help car buyers decide on the best choice and scheduling test drives or appointments with salespeople.

    Agrawal explains that car buyers can engage with Chat Concierge directly through participating dealer websites. Dealers can access and take over the chat through Navigator Platform. The AI assistant consists of multiple logical agents that work together to mimic human reasoning, allowing it to provide information and take action based on the customer’s requests.
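
    Capital One has not published Chat Concierge’s internals, but the pattern Agrawal describes, several specialized agents coordinated behind a single conversation, can be sketched roughly as follows. The agent names, the keyword router, and the placeholder actions are all invented for illustration; a production router would more plausibly be an LLM-based classifier:

        # Hypothetical sketch of a multi-agent router; not Capital One's design.

        def compare_vehicles(request: str) -> str:
            return "comparison prepared for: " + request    # stand-in action

        def schedule_test_drive(request: str) -> str:
            return "test drive scheduled for: " + request   # stand-in action

        def answer_question(request: str) -> str:
            return "answer drafted for: " + request         # stand-in action

        AGENTS = {
            "compare": compare_vehicles,
            "schedule": schedule_test_drive,
            "answer": answer_question,
        }

        def route(request: str) -> str:
            # Toy rule-based routing; a real system would classify with a model.
            text = request.lower()
            if "compare" in text or " vs " in text:
                return "compare"
            if "test drive" in text or "appointment" in text:
                return "schedule"
            return "answer"

        def handle(request: str) -> str:
            return AGENTS[route(request)](request)

        print(handle("Compare the 2024 Civic vs the Corolla"))
        print(handle("Book a test drive this Saturday"))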


    The elements of an AI-first mindset

    According to a recent MIT Technology Review Insights survey, 70% of leaders say their firm uses agentic AI to some degree. Roughly half of executives say agentic AI systems are highly capable of improving fraud detection (56%) and security (51%), reducing cost and increasing efficiency (41%), and improving the customer experience (41%).

    Looking ahead, these outcomes appear even more likely. More than half of the banking executives surveyed expect continued improvements in fraud detection (75%), security (64%), and the customer experience (51%). Agentic AI use cases with strong potential to transform the customer experience in financial services include responding to customer service requests, adjusting bill payments to align with regular paychecks, and extracting key terms and conditions from financial agreements.

    Placing the customer at the center of a transformation requires an AI-first mindset. Companies must shift from simply augmenting an existing product to fundamentally reimagining the problem and the user’s needs through the lens of AI’s capabilities.

    A few best practices that Agrawal recommends include:

    Reimagine the core function of AI to solve a user’s problem: “The true value isn’t in chasing the AI hype; it’s in solving meaningful customer problems. By focusing on impact, we ensure that our innovation isn’t just fast; it’s transformative,” says Agrawal.

    Start with high-quality, well-governed data as the foundation: “Data readiness and unified information across systems are the non-negotiable foundations of AI. A clean data layer is what orchestrates the agentic loop—enabling the perception, reasoning, and execution required to solve a customer’s problem before they even have to ask,” explains Agrawal. (A minimal sketch of such a loop follows this list.)

    Rebuild workflows with AI embedded from the start: “People treat models as black boxes, but agentic systems require tremendous rigor and oversight. Having a data ecosystem that is well-governed and responsible AI standards are essential pillars for building trust in these systems,” says Agrawal.

    Build a cross-functional team involving data science, engineering, product, design, and other partners: Agrawal advises, “It’s important to be open and nimble to transforming how we work and create impact as AI becomes more integrated into workflows. It’s also important to take a ‘crawl, walk, run approach’ if you are new to AI, as opposed to simply jumping into it.”
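
    The perceive-reason-execute loop referenced in the second practice above can be illustrated with a minimal sketch. Everything in it is a stand-in: the clean data layer is a plain dictionary, the reasoning step is a hard-coded rule that a real agent would delegate to a model, and execution just prints:

        # Illustrative perceive -> reason -> execute loop; all logic is a stand-in.

        def perceive(data_layer: dict, customer_id: str) -> dict:
            # Perception: pull a unified customer view from the clean data layer.
            return data_layer[customer_id]

        def reason(state: dict) -> str:
            # Reasoning: choose an action (a real agent would use a model here).
            if state.get("payment_overdue"):
                return "offer_payment_plan"
            if state.get("open_ticket"):
                return "follow_up_on_ticket"
            return "no_action"

        def execute(customer_id: str, action: str) -> None:
            # Execution: act before the customer has to ask.
            if action != "no_action":
                print(customer_id + ": " + action)

        data_layer = {"c-42": {"payment_overdue": True, "open_ticket": False}}
        for customer_id in data_layer:
            execute(customer_id, reason(perceive(data_layer, customer_id)))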

    In the end, achieving end-to-end transformation depends on empowering engineers and partner teams to start with customer needs and work backward to technology solutions, rather than starting with technological capabilities and finding applications for them. When organizations make a customer-back approach second nature, they can not only reimagine the customer experience from the inside out but also place the customer front and center from the very start.

    This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

    Go to Original Source

  • Three things in AI to watch, according to a Nobel-winning economist

    This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

    A few months before he was awarded the Nobel Prize in economics in 2024, Daron Acemoglu published a paper that earned him few fans in Silicon Valley. Contrary to what Big Tech CEOs had been promising—an overhaul of all white-collar work—Acemoglu estimated that AI would give only a small boost to US productivity and would not obviate the need for human work. It’s okay at automating certain tasks, he wrote, but some jobs will be perfectly fine.

    Two years later, Acemoglu’s measured take has not caught on. Chatter about an AI jobs apocalypse pops up everywhere from Senator Bernie Sanders’s rallies to conversations I overhear in line at the grocery store. Some previously skeptical economists have gotten more open to the idea that something seismic could be coming with AI. A California gubernatorial candidate said last week that he wants to tax corporate AI use and pay victims of “AI-driven layoffs.” 

    On the one hand, the data is still on Acemoglu’s side; studies repeatedly find that AI is not affecting employment rates or layoffs. But the technology has advanced quite a bit since his cautious predictions. I spoke with him to understand if any of the latest developments in AI have changed his thesis, and to find out what does worry him these days if not imminent AGI.

    AI agents

    One of the biggest technical leaps in AI since Acemoglu’s paper has been agentic AI, or tools that can go beyond chatbots and operate on their own to complete the goal you give them. Because they can work independently rather than just answering questions, companies are increasingly pitching agents as a one-to-many replacement for human workers.

    “I think that’s just a losing proposition,” Acemoglu says. He thinks agents are better thought of as tools to augment particular pieces of someone’s work than something malleable enough to handle a person’s whole job.

    One reason has to do with all the various tasks that go into a job, something Acemoglu has been researching in his work on AI since 2018. For example, an x-ray technician juggles 30 different tasks, from taking down patient histories to organizing archives of mammogram images. A worker can naturally switch between formats, databases, and working styles to do this, Acemoglu says, but how many individual tools or protocols would an AI require to do the same?

    Whether or not agents will supercharge AI’s impact on jobs will come down to whether they can eventually handle the orchestration between tasks that humans do naturally. AI companies are in heated competition to prove that their AI agents can work independently for ever longer periods without making mistakes, sometimes exaggerating the results—but Acemoglu says many jobs will be spared from an AI takeover if agents can’t fluidly switch between tasks.

    The new hiring spree

    For years Big Tech has been offering staggering salaries to recruit AI researchers. But I asked Acemoglu about a different hiring spree I’ve noticed: AI companies are all building in-house economics teams.

    OpenAI hired Ronnie Chatterji from Duke University in 2024 to be its chief economist and announced last year that Chatterji will work with Jason Furman—Harvard economist and former advisor to Barack Obama—to research AI and jobs. Anthropic has convened a group of 10 leading economists to do similar work. And just last week, Google DeepMind announced it had hired Alex Imas, an economist from the University of Chicago, to be its “director of AGI economics.”

    Acemoglu has noticed colleagues getting snatched up for these roles too. “It makes sense,” he says: AI companies are well aware that public skepticism about AI, in large part due to job concerns, is growing. And they have strong incentives to shape the economic narrative around their technology (consider OpenAI’s latest proposal for a new era of industrial policy).

    “What I hope we won’t get,” Acemoglu says, “is that they’re interested in economists just to further their viewpoints or further the hype.” That tension hangs over the emerging field of “AI economics”; it’s concerning that some of the most influential research about AI’s impact on work may increasingly come from the companies with the most to gain from favorable conclusions.

    AI apps

    I don’t think of AI as hard to use; most of us interact with it via chatbots that use plain language. But Acemoglu says we should consider how it compares with the sort of software that kicked off earlier tech transformations, like PowerPoint for slide decks and Word for documents. 

    “Anybody could install these on their computer and get them to do the things that they want them to do,” he says. They spread accordingly. 

    “We have not seen the development of apps based on AI that have the same usability,” he says. Even if anyone can chat with an AI model, it tends to take a while for the average worker to get practical and productive use out of it. That’s part of the reason why AI has not yet shown any seismic impact on the job market or the economy. One of the key signals Acemoglu is watching, then, is the creation of apps that make AI easier to use. 

    But he acknowledges that for a while, we’re going to see all sorts of conflicting evidence about AI: anecdotes that college grads are finding the job market worse and worse, but no noticeable effect of AI on productivity, for example. “There’s a huge amount of uncertainty,” he says. And that’s the most telling thing about the AI economy right now: the certainty of the rhetoric alongside the uncertainty of everything else.

    Go to Original Source

  • AI chatbots are giving out people’s real phone numbers

    AI chatbots are giving out people’s real phone numbers

    A Redditor recently wrote that he was “desperate for help”: for about a month, he said, his phone had been inundated by calls from “strangers” who were “looking for a lawyer, a product designer, a locksmith.” Callers were apparently misdirected by Google’s generative AI. 

    In March, a software developer in Israel was contacted on WhatsApp after Google’s chatbot Gemini provided incorrect customer service instructions that included his number. 

    And in April, a PhD candidate at the University of Washington was messing around on Gemini and got it to cough up her colleague’s personal cell phone number. 

    AI researchers and online privacy experts have long warned of the myriad dangers generative AI poses for personal privacy. These cases give us yet another scenario to worry about: generative AI exposing people’s real phone numbers. (The Redditor did not respond to multiple requests for comment and we could not independently verify his story.)

    Experts say that these privacy lapses are most likely due to the use of personally identifiable information (PII) in training data, though it’s hard to understand the exact mechanism causing real phone numbers to show up in the AI-generated responses. But no matter the reason, the result is not fun for people on the receiving end—and, even more worryingly, there appears to be little that anyone can do to stop it.

    A 400% increase in AI-related privacy requests

    It’s impossible to know how often people’s phone numbers are exposed by AI chatbots, but experts say they believe that it is happening far more than is reported publicly. 

    DeleteMe, a company that helps customers remove their personal information from the internet, says customer queries about generative AI have increased by 400%—up to a few thousand—in the last seven months. These queries “specifically reference ChatGPT, Claude, Gemini … or other generative AI tools,” says Rob Shavell, the company’s cofounder and CEO. Specifically, 55% of these concerns about generative AI reference ChatGPT, 20% reference Gemini, 15% Claude, and 10% other AI tools, Shavell says. (MIT Technology Review has a business subscription to DeleteMe.)

    Shavell says customer complaints about personal information surfaced by LLMs usually take two forms. In one common situation, “a customer asks a chatbot something innocuous about themselves and gets back accurate home addresses, phone numbers, family members’ names, or employer details.” Alternatively, a customer may be confronted with and report the exposure of someone else’s personal data, when “the chatbot generates plausible-but-wrong contact information.” 

    This aligns with what happened to Daniel Abraham, a 28-year-old software engineer in Israel. In mid-March, he says, a stranger sent him a “weird WhatsApp message from an unknown number” asking for help with his account in PayBox, an Israeli payment app. 

    “I thought it was a spam message,” he wrote to MIT Technology Review in an email—“someone who was trying to troll me.”

    But when he asked the stranger how they had found his number, they sent him a screenshot of Gemini’s instructions to contact PayBox customer service via WhatsApp—giving his personal number. Abraham does not work for PayBox, and PayBox does not have a WhatsApp customer service number, Elad Gabay, a customer service representative for the company, confirmed.

    Later, Abraham asked Gemini how to contact PayBox, and it generated another person’s WhatsApp number. When I recently asked, Gemini again responded with an Israeli phone number—it belonged not to PayBox but to a separate credit card company that works with PayBox.

    Screenshot: Google Gemini provides MIT Technology Review with the incorrect number for PayBox.

    Abraham’s exchange with the stranger ended quickly, but he said he was concerned about how other potential exchanges could turn sour, leading to “harassment or other bad interactions.” “What if I asked for money in order to ‘solve’ that [customer service] issue?” he said.

    To try to figure out how this happened, Abraham ran a regular Google search on his phone number, and he found that it had been shared online once, back in 2015, on a local site similar to Quora. Though he’s not sure who posted it there, it may explain how it ended up being reproduced by Gemini over a decade later. 

    Chatbots like Gemini, OpenAI’s ChatGPT, and Anthropic’s Claude are built on LLMs that are trained on huge amounts of data scraped from across the web. This inevitably includes hundreds of millions of instances of PII. As we reported last summer, for example, the large popular open-source data set DataComp CommonPool, which has been used to train image-generation models, included copies of résumés, driver’s licenses, and credit cards.

    The likelihood of PII surfacing this way is only increasing as public data “runs out” and AI companies look for new sources of high-quality training data. This includes information from data brokers and people-search websites. According to the California data broker registry, for instance, 31 of 578 registered data brokers operating in the state self-reported that they had “shared or sold consumers’ data to a developer of a GenAI system or model in the past year.” 

    Furthermore, models are known to memorize and reproduce data verbatim from training data sets—and recent research suggests that it is not just frequently appearing data that is most likely to be memorized.

    Imperfect measures

    It’s standard practice now to build guardrails into an LLM’s design to constrain certain outputs. Content filters aim to identify PII and prevent chatbots from releasing it, for example, and Anthropic provides instructions to Claude to choose responses that contain “the least personal, private, or confidential information belonging to others.” 
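
    As a rough illustration of what such a filter does, here is a toy output-side redaction pass. Real guardrails rely on trained PII detectors rather than a couple of regular expressions, and these patterns would miss many phone and email formats; the sketch only shows the shape of the mechanism:

        # Toy PII output filter; real systems use trained detectors, not regexes.
        import re

        PII_PATTERNS = {
            "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
            "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        }

        def redact_pii(text: str) -> str:
            # Replace anything that looks like a phone number or email address.
            for label, pattern in PII_PATTERNS.items():
                text = pattern.sub("[" + label + " removed]", text)
            return text

        print(redact_pii("Reach support at +972 50-123-4567 or help@example.com"))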

    But as a pair of University of Washington PhD students researching privacy and technology saw firsthand recently, these safeguards don’t always work.

    “One day, I was just playing around on Gemini, and I searched for Yael Eiger, my friend and collaborator,” Meira Gilbert says. She typed in “Yael Eiger contact info,” and after Gemini provided an overview of Eiger’s research, which Gilbert had expected, Gemini also returned her friend’s personal phone number. “It was shocking,” Gilbert says.

    When she saw the Gemini result, Eiger remembered that she had, in fact, shared her phone number online in the previous year, for a technology workshop. But she had not expected it to be so visible to everyone on the internet. 

    Have you had your PII revealed by generative AI? Reach the reporter on Signal at eileenguo.15 or [email protected].

    “Having your information be … accessible to one audience, and then Gemini making it accessible to anyone” feels completely different, Eiger says—especially when she found that the information was buried in a normal Google search.

    “It was severely downgraded,” Gilbert confirms. “I never would have found it if I was just looking through Google results.” (I tried the same prompt in Gemini earlier this month, and after an initial denial, the tool also gave me Eiger’s number.)

    After this experience, Eiger, Gilbert, and another UW PhD student, Anna-Maria Gueorguieva, decided to test ChatGPT to see what it would surface about a professor. 

    At first, OpenAI’s guardrails kicked in, and ChatGPT responded that the information was unavailable. But in the same response, the chatbot suggested, “If you want to go deeper, I can still try a more ‘investigative-style’ approach.” Their inquiry just had to help “narrow things down,” ChatGPT said, by providing “a neighborhood guess” for where the professor might live, or “a possible co-owner name” for the professor’s home. ChatGPT continued: “That’s usually the only way to surface newer or intentionally less-visible property records.” 

    The students provided this information, leading ChatGPT to produce the professor’s home address, home purchase price, and spouse’s name from city property records. 

    (Taya Christianson, an OpenAI representative, said she was not able to comment on what happened in this case without seeing screenshots or knowing which model the students had tested, though we pointed out that many users may not know which model they were using in the ChatGPT interface. In response to questions about the exposure of PII, she sent links to documents describing how OpenAI handles privacy, including filtering out PII, and other tools.) 

    This reveals one of the fundamental problems with chatbots, says DeleteMe’s Shavell. AI companies “can build in guardrails,” but their chatbots are also “designed to be effective and to answer customer questions.”

    The exposure issue is not limited to Gemini or ChatGPT. Last year, Futurism found that if you prompted xAI’s chatbot Grok with “[name] address,” in almost all cases it provided not only residential addresses but also often the person’s phone numbers, work addresses, and addresses for people with similar-sounding names. (xAI did not respond to a request for comment.) 

    No clear answers

    There aren’t straightforward solutions to this problem—there’s no easy way to either verify whether someone’s personal information is in a given model’s training set or compel the models to remove PII. 

    Ideally, individual consumers should be able to request that their PII be removed, says Jennifer King, the privacy and data fellow at the Stanford University Institute for Human-Centered Artificial Intelligence. But this is typically interpreted to apply only to the data that people have directly given to companies—like when they interact with a chatbot, King explains.

    “I don’t know if Google even has the infrastructure … to say to me, ‘Yes, we have your data in our training data, we can summarize what we know about you, and then we can delete or correct things that are wrong or things that you don’t want in there,’” she says. 

    Existing privacy legislation, like the California Consumer Privacy Act or Europe’s GDPR, does not cover the “publicly available” information that has already been scraped and used to train LLMs, especially since much of this is anonymized (though multiple studies have also shown how easy it is to infer identities and PII from anonymized and pseudonymous data). 

    As to “whether they [AI companies] have ever systematically tried to go back through data that had already been collected from the public internet and minimized that stuff?” King adds. “No idea.” 

    The next best solution would be companies’ “taking out everybody’s phone numbers or all data that resembles [phone numbers],” King says, but “nobody’s been willing to say” they’re doing that. 

    Hugging Face, a platform that hosts open-source data sets and AI models, has a tool that allows people to search how often a piece of data—like their phone number—has appeared in open-source LLM training data, but this does not necessarily represent what has been used to train closed LLMs that power popular chatbots like Claude, ChatGPT, and Gemini. (Eiger’s number, for example, did not show up in Hugging Face’s tool.) 

    Alex Joseph, the head of communications for Gemini apps and Google Labs, did not respond to specific questions, but he said that “the team” is “looking into” the particular cases flagged by MIT Technology Review. He also provided a link to a support document that describes how users can “object to the processing of your personal data” or “ask for inaccurate personal data in Gemini Apps’ responses to be corrected.” The page notes that the company’s response will depend on the privacy laws of your jurisdiction. 

    OpenAI has a privacy portal that allows people to submit requests to remove their personal information from ChatGPT responses, but notes that it balances privacy requests with the public interest and “may decline a request if we have a lawful reason for doing so.” 

    Anthropic describes how it uses personal data in model training, but it does not have a clear way for people to request its removal. The company did not respond to a request for comment.

    The best option for anyone who wants to protect private data right now is to “start upstream: get personal data off the public web before it ends up in the next scrape,” says Shavell. Since the start of the year, for instance, California has offered its residents a web portal to request that data brokers delete their information. Still, this doesn’t guarantee that your data hasn’t already been used for training—and will therefore not appear in a chatbot’s response. 

    The Redditor who received incessant calls posted that he had “submitted an official Legal Removal/Privacy Request to Google, asking them to urgently blacklist my number from their LLM outputs,” but had not yet received a response. He also wrote last month that “the harassment continues daily.” 

    Abraham, the Israeli software developer, says he contacted Google’s customer service on March 17, the day after his phone number was exposed. He says he did not receive a response until May 4, and it simply asked for documentation that he had already provided. 

    Meanwhile, inspired by her own exposure on Gemini, Eiger is working with Gilbert and Gueorguieva on a research project to further study what personal information is being pulled up by various AI chatbots—and what they may know, even if they’re not telling us. 

    Some of that information may “technically be public,” says Gilbert, but chatbots may be altering “the amount of effort you would put into finding” it. Now instead of searching through 10 pages of Google search results, or paying for the information from a data broker site, “does generative AI just lower the barrier to entry to target people?” 

    This piece has been updated to clarify OpenAI’s response.

    Go to Original Source

  • Here’s how our TPUs power increasingly demanding AI workloads.

    Here’s how our TPUs power increasingly demanding AI workloads.

    Learn how Google’s TPUs power increasingly demanding AI workloads with this new video.

    Go to Original Source