About a month ago, Goldman Sachs released a 31-page report titled "Gen AI: Too Much Spend, Too Little Benefit?"
I very humbly implore you to read it. Caustic yet professional, this report is easily one of the most damning research reports I have read in a while[1].
Now, if you think like me, you would be asking a simple question - why would an investment bank trash an entire industry? We’ll come back to this later.
To be honest, the AI industry has not helped itself. I would like to refer you to a WSJ interview from about 5 months ago with OpenAI CTO Mira Murati. The interviewer asked her a few thoughtful and straightforward questions, and Murati flailed. Asked what data was used to train Sora, OpenAI's app for generating video with AI, Murati said that OpenAI used publicly available data. When the interviewer asked if this included videos from YouTube, Murati's expression resembled that of a deer caught in the headlamps of an oncoming freight train. She stammered a bit before saying she "actually wasn't sure about that". When pushed a third time and asked about videos from Facebook or Instagram, Murati shook her head and said that if videos were "publicly available...to use, there might be the data, I'm not sure, I'm not confident about it".
The CTO is either actively obfuscating the conversation (perhaps at the request of PR/Legal) or is genuinely unaware of the details - either way, not a good look.
A. Artificial or just plain Limited?
In March 2024, The Information published a story about Amazon and Google "tamping down generative AI expectations"[2], with these companies pouring big crocks of water over their salespeople's excitement[3] about the AI capabilities they're selling. A tech executive is quoted in the article saying that customers are grappling with existential questions like "is AI providing value?" and "how do I evaluate how AI is doing?". A Gartner analyst told AWS sales staff that the AI industry was "at the peak of the hype cycle around Large Language Models and other generative AI." Ouch!!
Most of the article jibed with my own observations - that AI is not really delivering revenue. To quote:
“software companies that have touted generative AI as a boon to enterprises are still waiting for revenue to emerge,"
The article goes on to cite KPMG's purchase of 47,000 subscriptions to Microsoft's Copilot AI "at a significant discount on Copilot's $30 per seat per month sticker price." More interestingly, KPMG bought these subscriptions not because of any revenue expansion it could point to, but so that its people could "be familiar with any AI-related questions its customers might have." My God - KPMG really cares for its people!!!
Elsewhere in AI land -
Salesforce CFO Amy Weaver said on the company's most recent earnings call that Salesforce was "not factoring in material contribution" from its numerous AI products in its Financial Year 2025 guidance.
Adobe's share price has stagnated even as analysts continue to wonder what actual revenue its AI integrations might bring.
ServiceNow Chief Financial Officer Gina Mastantuono was quoted as saying that "from a revenue contribution perspective, AI is not going to be huge".
Moral of the story - this emperor might be lacking clothes.
B. Goldman’s Motivations
Back to the Goldman report.
The document covers AI's productivity benefits (which GS says are most likely limited), AI's returns (likely lower than anticipated) and AI's externalities (high power demand - enough to make utilities spend ~40-45% more over the medium term to keep up with hyperscalers like Google and Microsoft).
Now, again, if you think like me, you would be asking a simple question - why would an investment bank trash an entire industry? Either no one gave any business to GS in the recent AI M&A boom or, more likely, it is just a profitable thing to do. See, i-banks are really not worried about the feelings of the Sam Altmans of the world. They will do anything as long as doing so is profitable. Goldman in particular will hype anything as long as it makes a buck - Wall Street is about the velocity of money, not what happens to the money and the investors.
If you find me cynical - hold on. Here is an article from May, 2024:
For Goldman to turn on AI this suddenly suggests that it is extremely anxious about the future of gen AI. Now, remember how I said wall street is about the velocity of money? One key takeaway from this GS performance is that the longer AI takes to make money, the more money it will have to make.
C. Views - Daron Acemoglu (MIT)
Now, let's dive a little deeper into this report - Goldman spoke to some seriously knowledgeable people.
The report includes an interview with economist Daron Acemoglu of MIT (page 4), an Institute Professor who published a paper back in May called "The Simple Macroeconomics of AI" which argued that "GDP growth from generative AI will likely prove much more limited than many forecasters expect". The intervening time has only made Acemoglu more pessimistic with him declaring that "truly transformative changes won't happen quickly and few – if any – will likely occur within the next 10 years", and that gen AI's ability to boost global productivity is low because "many of the tasks that humans currently perform...are multi-faceted and require real-world interaction, which AI won't be able to materially improve anytime soon".
What makes this interview and the paper so remarkable is how thoroughly and aggressively they attack every single bullet point of AI hype. Acemoglu tears into the hypothesis that AI models will get more powerful as we throw more data and GPU capacity at them. He asks an interesting question - what does it mean to "double AI's capabilities"? How does that actually make e.g. customer service reps or sales people better at their jobs?
The key question to ask here is this - is more better? Gen AI generates outputs based on text-based inputs and requests. It doesn't matter if you and I feed it the exact same request - the answer is always generated de novo. You can try it out yourself. Meaning that there is no actual "knowledge" or "intelligence" involved anywhere. As a result, it's easy to see how gen AI can get better at what it already does, but nearly impossible to see how it leads any further than where we're already at.
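To see why the same request never guarantees the same answer, here is a toy sketch of how an LLM picks its next token: the model assigns probabilities to candidates and then *samples* from that distribution instead of always taking the top choice. The vocabulary and probabilities below are invented for illustration, not taken from any real model.

```python
import random

# Toy next-token sampler. A real LLM produces a probability
# distribution over its whole vocabulary at each step; here we fake
# one with three made-up tokens to show the sampling behaviour.
def sample_next_token(probs, rng):
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    # Sampling (not argmax) is why identical prompts diverge.
    return rng.choices(tokens, weights=weights, k=1)[0]

probs = {"cat": 0.5, "dog": 0.3, "ferret": 0.2}

# Two "users" sending the exact same prompt, with different random
# states - the generated sequences come out different.
rng_a, rng_b = random.Random(1), random.Random(2)
run_a = [sample_next_token(probs, rng_a) for _ in range(5)]
run_b = [sample_next_token(probs, rng_b) for _ in range(5)]
print(run_a)
print(run_b)
```

Same inputs, same weights, different outputs - the divergence comes purely from the random state, not from any "knowledge" of the request.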
And then there is the paucity of training data. This is something that doesn't get enough attention, but it's sufficiently dire that it has the potential to halt (or dramatically slow) AI development in the near future. One paper presented at the Computer Vision and Pattern Recognition conference found that each additional step in improving model performance becomes exponentially more expensive to take. This implies a steep financial cost - not just in obtaining the data, but also in the compute required to process it - with Anthropic CEO Dario Amodei saying that the AI models currently in development will cost as much as $1bn to train, and that within three years we may see models that cost "ten or a hundred billion" dollars - anywhere between the GDP of Mauritania (ranked ~140) and that of Ecuador (ranked ~60). I mean - why feed people? AI, yo!
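The "exponentially more expensive" claim is worth making concrete. If model loss follows a power law in compute, L = k·C^(-α), then the compute multiplier needed to cut loss by a given factor is that factor raised to 1/α - which explodes for small exponents. The exponent below is an assumed, illustrative value, not a measured fit from the paper:

```python
# If loss scales as L = k * C**(-alpha), then cutting loss by factor f
# requires multiplying compute by f**(1/alpha).
def compute_multiplier(loss_reduction_factor, alpha):
    # Solve (C2/C1)**(-alpha) = 1/loss_reduction_factor for C2/C1.
    return loss_reduction_factor ** (1.0 / alpha)

# With an assumed alpha of 0.05, merely halving the loss needs
# roughly a millionfold more compute.
mult = compute_multiplier(2.0, 0.05)
print(f"{mult:,.0f}x more compute to halve the loss")
```

Under these assumptions, each successive halving of the error costs about a million times more compute than the last - which is the shape of the curve behind the billion-to-hundred-billion-dollar training estimates.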
While Acemoglu has some positive things to say — for example, that AI models could be trained to help scientists conceive of and test new materials (which happened last year) — his general verdict is quite harsh: that using gen AI and "too much automation too soon could create bottlenecks and other problems for firms that no longer have the flexibility and trouble-shooting capabilities that human capital provides". Or, to put it more simply: if you are one of those bosses who doesn't have a clue, you are going to fuck things up when you sacrifice your people at the altar of AI.
D. Views - Jim Covello (GS’s top semiconductors analyst)
Covello isn't a name you'll likely have heard[4]. He has consistently been named the top semiconductor analyst for years, successfully catching the downturn in fundamentals at multiple major chip firms far before others did.
Covello thinks that the gen AI bubble is full of shit.
Covello believes that the combined expenditure of all parts of the gen AI boom — data centers, utilities and applications — will cost a trillion dollars in the next several years alone, and asks one very simple question: "what trillion dollar problem will AI solve?" He notes that "replacing low-wage jobs with tremendously costly technology is basically the polar opposite of the prior technology transitions [he's] witnessed in the last thirty years".
Covello goes on to bust open a whole bunch of AI “myths” - some of which are:
Comparing AI to the early days of the internet - "even in its infancy, the internet was a low-cost technology solution that enabled e-commerce to replace costly incumbent solutions", and that "AI technology is exceptionally expensive, and to justify those costs, the technology must be able to solve complex problems, which it isn't designed to do".
Tech starts off expensive and gets cheaper over time - "the tech world is too complacent in the assumption that AI costs will decline substantially over time". He specifically notes that the only reason Moore's law delivered ever cheaper and faster chips to consumers was that competitors like AMD forced Intel (and other companies) to compete — today, Nvidia has a near-stranglehold on the GPUs required for gen AI.
AI driven competitive advantages - Tech companies now have to engage in the AI arms race given the hype (which will continue the trend of massive spending). He believes that there are "low odds of AI-related revenue expansion", in part because he doesn't believe that gen AI will make workers smarter, just more capable of finding better information faster. The most damning assertion is that any advantages that gen AI gives you can be "arbitraged away" because the tech can be used everywhere, and thus you can't, as a company, raise prices.
Covello concludes with one important and brutal note - the more time that passes without significant AI applications, the more challenging "the AI story will become". He also adds his own prediction — "investor enthusiasm may begin to fade if important use cases don't start to become more apparent in the next 12-18 months".
I think he's being optimistic.
E. Views - Joseph Briggs (Senior Global Economist at GS, AI Hype Fiend)
The report also includes a palate cleanser in the form of Joseph Briggs.
Joseph Briggs argues that gen AI will "likely lead to significant economic upside" based almost entirely on the premise that AI will replace workers in some jobs and then allow them to get jobs in other fields. Briggs also argues that "the full automation of AI exposed tasks that are likely to occur over a longer horizon could generate significant cost savings", which assumes that gen AI (or AI itself) will actually replace these tasks.
I think Mr. Briggs is smoking something really trippy. Unlike every other interview in the report, Briggs continually mixes up AI and gen AI, and at one point suggests that "recent generative AI advances" are "foreshadowing the emergence of a superintelligence". He is saying that a transformer model that probabilistically generates the next part of a sentence or a picture will somehow gain sentience. I call BS.
Francois Chollet — an AI researcher at Google — recently argued that LLMs can't lead to AGI, explaining (in painful detail) that models like GPT are simply not capable of the kind of reasoning and theorizing that makes a human brain work. Chollet also notes that even models specifically built to complete the tasks of the Abstraction & Reasoning Corpus (a benchmark test for AI skills and true "intelligence") are only doing so because they've been fed millions of datapoints of people solving the test, which is kind of like measuring somebody's IQ based on them studying really hard to complete an IQ test.
How is that a measure of anything?
Tasks like taking someone's order and relaying it to the kitchen at a fast food restaurant might seem elementary to most people, but they aren't for an AI model that generates answers without really understanding the meaning of any of the words. Last year, Wendy's announced that it would integrate its generative "FreshAI" ordering system into some restaurants, and a few weeks ago it was revealed that the system requires human intervention on 14% of orders. On Reddit, one user noted that Wendy's AI regularly required three attempts to understand them, and would cut them off if they weren't speaking fast enough.
White Castle, which implemented a similar system in partnership with Samsung and SoundHound, fared little better, with 10% of orders requiring human intervention. Last month, McDonald’s discontinued its own AI ordering system — which it built with IBM and deployed to more than 100 restaurants — likely because it just wasn’t very good.
If nothing else, this illustrates the disconnect between those building AI systems and the jobs they wish to eliminate - how much (or, rather, how little) they understand those jobs. A little humility goes a long way.
But here's the thing - Wendy's and McDonald's still need humans to make the food. If only ChatGPT could also make the burgers…
Ah well, more power to Monsieur Briggs.
F. Views - Brian Janous (Microsoft VP of Energy)
One theme brought up repeatedly is the idea that America's power grid is literally not ready for gen AI. Talking to Brian Janous, the report details numerous nightmarish problems that the growth of gen AI is causing in the power grid, such as:
Hyperscalers like Microsoft, Amazon and Google will have increased their power demands from a few hundred megawatts in the early 2010s to a few gigawatts by 2030 - enough to power multiple American cities.
The centralization of data center operations for multiple big tech companies in Northern Virginia may potentially require a doubling of grid capacity over the next decade.
Utilities have not experienced a period of load growth — as in a significant increase in power draw — in nearly 20 years, which is a problem because power infrastructure is slow to build and involves onerous permitting and bureaucratic measures to make sure it's done properly.
The total capacity of power projects waiting to connect to the grid grew 30% in the last year and wait times are 40-70 months.
Expanding the grid is "no easy or quick task".
To sum up - on top of gen AI not having any killer apps, not meaningfully increasing productivity or GDP, not generating meaningful revenue, and not creating new jobs or massively changing existing industries, it also requires the US to totally rebuild its power grid - which, Janous regretfully adds, the US has kind of forgotten how to do.
Perhaps Sam Altman's energy breakthrough could be these AI companies being made to pay for new power infrastructure. Till then, pls stay away from India. Here’s looking at you, Ola/Krutrim/whatever new BS comes up.
Concluding
Every time I speak to people working in AI, we end up having deep and often very pugilistic conversations. One of the enduring defences of the AI hype is that somehow OpenAI and Anthropic and their like are working on sexy, secret technology that will give rectal paralysis to all haters and change the world.
One - I am not an AI hater. I just call it out for what it is. Two - Bullshit.
That's my answer to all of this. There is no magic trick. There is no secret thing that Sam Altman is going to reveal to us in a few months or years that makes me eat crow, no magical tool that Microsoft or Google pops out that makes all of this worth it.
Gen AI cannot do much more than it is currently doing, other than doing more of it faster with some new inputs. It isn't getting much more efficient. David Cahn (Sequoia hype man) gleefully mentioned in a recent blog that Nvidia's B100 will "have 2.5x better performance for only 25% more cost". Again - BS - because gen AI isn't going to gain sentience, intelligence or consciousness just because it's able to run faster.
Gen AI is not going to become AGI, nor will it become the kind of artificial intelligence you've seen in science fiction. Ultra-smart assistants like Jarvis from Iron Man would require a form of consciousness that no current technology has - and may never have: the ability to both process and understand information flawlessly and to make decisions based on experience, which, if I haven't been clear enough, are all entirely distinct things.
Gen AI at best processes information when it trains on data, but at no point does it "learn" or "understand", because everything it does is based on ingesting training data and generating answers from a mathematical sense of probability rather than any appreciation or comprehension of the material itself. LLMs are an entirely different piece of technology from the "artificial intelligence" that the AI bubble is hyping, and it's disgraceful that the AI industry has taken so much money and attention with such a flagrant, offensive lie.
If you want to find the probability of default in a loan portfolio, perhaps gen AI can give you pointers. If you expect it to tell you which customers are going to default, it's not going to do that.
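The distinction is worth spelling out: predicting defaults is a job for a statistical model with defined inputs and outputs, not a text generator. Here is a minimal sketch of such a model - a logistic regression fit by gradient descent on toy, synthetic data (the two features and all the numbers are invented for illustration):

```python
import math

# Logistic regression from scratch: learns default probability from
# borrower features. Unlike an LLM, it maps defined inputs to a
# calibrated probability rather than generating plausible text.
def fit_logistic(X, y, lr=0.1, epochs=2000):
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            err = p - yi                      # gradient of log-loss
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict_pd(w, b, x):
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Toy portfolio: [debt_to_income, missed_payments] -> defaulted (1/0).
X = [[0.1, 0], [0.2, 0], [0.8, 3], [0.9, 4], [0.3, 1], [0.7, 2]]
y = [0, 0, 1, 1, 0, 1]
w, b = fit_logistic(X, y)
print(round(predict_pd(w, b, [0.85, 3]), 2))  # a high-risk borrower
```

The point isn't that this toy model is good - it's that this class of model answers "who will default?" by construction, which is categorically different from what a next-token generator does.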
Industries aren't going to change because of gen AI, because gen AI can't actually do many jobs, and it's mediocre at the few things it's capable of doing. While it's a useful efficiency tool, said efficiency is based on a technology that is extremely expensive. At some point, AI companies like Anthropic and OpenAI will have to raise prices, and from there collapse under the weight of a technology that has no path to profitability.
Closer to home, there's going to be a moment that spooks a major VC firm into pushing one of its startups to sell, or the sudden, unexpected yet very obvious collapse of a major player. For OpenAI and Anthropic, there really is no path to profitability — only one that includes burning further billions of dollars in the hope that they discover something, anything, that might be truly innovative or indicative of the future, rather than further iterations of gen AI, which at best is an extremely expensive new way to process data.
It's obvious. It's well-documented. Gen AI costs far too much, isn't getting cheaper, uses too much power, and doesn't do enough to justify its existence. There are no killer apps, and no killer apps on the horizon. And there are no answers.
I don't know why more people aren't saying this as loudly as they can. I understand that big tech desperately needs this to be the next hypergrowth market as they haven't got any others, but pursuing this particular pork barrel project will hurt entire economies.
It's all a disgraceful waste.
Housekeeping:
As always, I look forward to hearing from you. If you liked this post, pls feel free to share this or subscribe to this newsletter using the links below. I try to write a 1000-2000 word essay once every 4 weeks or so.
[1] Yes, that includes Hindenburg's reports.
[2] Unfortunately, paywalled.
[3] Just goes to show that good salespeople can sell anything.
[4] Unless you are very interested in semiconductors.