It truly brings me no joy to admit this, but now that it’s the end of 2025, I can see that Google’s AI Overviews, for all the times I goofed on them for being clumsy and bad, are having the last laugh.

This realization is especially humbling because, back in 2024, I totally dunked on Google when its AI told me to use glue to help cheese stick to pizza. So, in a moment of internet infamy, I did just that: I made my glue pizza, and even ate a piece.

Now … things are different.

How Google AI got better

The other day, my editor and I were discussing the state of AI Overviews. We thought they were pretty abysmal at the start of the year, but we now admit to each other that, a lot of the time these days, we use them instead of traditional search.

Of course, the fact that people (such as myself) are willing to use the AI Overview answer in Google instead of clicking on a link and sending traffic to a website that perhaps employs humans to make that content (such as myself) is not necessarily great.

Allow me to reminisce.

Google AI Overviews launched in the spring of 2024, and immediately, people noticed they often, uh, sucked. The answers they gave were full of absurd inaccuracies.

One example that went viral on social media was its answer for how to stop the cheese from sliding off your pizza — Google AI suggested adding glue to the sauce, which it seemed to have lifted from a joke on Reddit.

As a journalist with a commitment to seeking the truth, I actually made a pizza with glue in the sauce and ate it (I’m a trained professional idiot; don’t try this at home).

This year, people noticed that Google AI Overviews were still full of problems. One particularly amusing issue was what we’ll call the “You can’t lick a badger twice” problem: give the AI a nonsense phrase, and it would accept it as a real idiom and try to explain it.

To test this further, I made up a few fake idioms of my own, like “you can’t fit a duck in a pencil” and “the road is full of salsa,” and indeed, AI Overviews gave explanations for what those phrases meant. Here’s what I wrote at the time, back in April:

A Google spokeswoman told me, basically, that its AI systems are trying their best to give you what you want — but that when people purposely try to play games, sometimes the AI can’t exactly keep up.
“When people do nonsensical or ‘false premise’ searches, our systems will try to find the most relevant results based on the limited web content available,” spokeswoman Meghann Farnsworth said.
“This is true of Search overall — and in some cases, AI Overviews will also trigger in an effort to provide helpful context.”

But things are getting better. When I tried a fake idiom just the other day (“you can’t tell a yak not to dance”), it gave a reasonably accurate answer: the phrase was “not a common, established idiom, but rather a playful or poetic expression,” and it suggested potential interpretations. Fair enough.

Google AI Overviews are getting less laughably bad, and just, well … useful.

I’ve gotten more used to seeing them, and in the process I’ve gotten better at predicting whether the query I’m posing is the kind of thing an AI Overview can answer. And it’s working more often than it isn’t.

Help us all.


