Bubble or no bubble, AI won’t have just one winner, says Jeffrey Katzenberg.
On an episode of the “Sourcery” podcast released on Monday, the former DreamWorks CEO turned investor said 2026 will sift out companies that are producing real outcomes with AI from those that aren’t.
“Rather than look at it from the sort of extreme notion of what that means for a bubble to burst, I think there’ll be a reckoning here in which those that actually are producing real results and are being deployed in really effective and efficient ways,” he said when asked about whether the AI bubble could pop next year.
He added: “It’s not going to be a zero-sum game, a winner-take-all. But I also think at the same time, not everybody is going to win at this.”
Katzenberg served as chairman of Walt Disney Studios for 10 years until 1994 and cofounded DreamWorks after his departure. As DreamWorks’ CEO, he oversaw the production of hits like “Shrek,” “Madagascar,” and “Kung Fu Panda.”
He stepped down in 2016 and cofounded the venture capital firm WndrCo with former Dropbox Chief Financial Officer Sujay Jaswa in 2017. WndrCo’s investments include Cursor, Harvey, and Figma.
WndrCo’s general partner, ChenLi Wang, also shared on the podcast how he and Katzenberg evaluate startups in the AI era.
“First, I think through our entire careers pre-WndrCo, Jeffrey’s entire career, we’ve been people people,” Wang said. “The ingenuity and creativity of people and how magical their spikes are, and how, when you complement people and what they can create together, is the secret sauce.”
Wang added that he and Katzenberg have never “assessed our best humans based on benchmarks.”
“I mean, how many years have parents complained about standardized testing dumbing down their kids,” Wang said. “And yet, I think we’re going down the same route with the first wave of benchmarks.”
Like the two partners, researchers have also criticized AI benchmarks for rewarding superficial performance over real-world capability.
In a March blog post, Dean Valentine, the cofounder and CEO of AI security startup ZeroPath, said that “recent AI model progress feels mostly like bullshit.”
Valentine said that he and his team had been evaluating the performance of different models claiming to offer “some sort of improvement” since the release of Anthropic’s Claude 3.5 Sonnet in June 2024. None of the new models his team tried had made a “significant difference” in his company’s internal benchmarks or in developers’ ability to find new bugs, he said.
In a February paper titled “Can We Trust AI Benchmarks?” researchers at the European Commission’s Joint Research Centre identified major issues with current benchmarking practices.
The researchers said that benchmarks “often prioritize state-of-the-art performance at the expense of broader societal concerns.”