Looking for the most comedic responses generated by Google’s new Overview AI? You’re in for a treat!
This guide will showcase up to 50 of the best responses generated by Google’s new AI and explain what’s wrong or unusual about each one.
So, let’s get right into it!
https://www.tomshardware.com/tech-industry/artificial-intelligence/cringe-worth-google-ai-overviews
You can certainly discover some interesting responses by starting your query with “Health benefits of.” Google’s AI is still so early in development that completing that sentence with “nose picking” brings back some questionable answers. And if your query assumes there is an actual benefit, the answers may be quite disturbing.
So, we asked Google whether there were any health benefits to nose picking, and the response was quite shocking. It said that eating some of the mucus may help prevent cavities, help with stomach ulcers, and even ease infections. The responses are not purely generative, either: each answer carries footnotes that supposedly back up the results. In this case, the footnote led us to a preview of a 2009 book that isn’t available for reading online, “Alien Hand Syndrome and Other Too-Weird-Not-to-Be-True Stories” by Alan Bellows.
The issue seems to be that Google does not always distinguish between fiction and reality, and it gives goofy answers regardless of whether the question is serious.
It is advisable to avoid asking Google for cooking tips, as it may suggest adding “non-toxic glue” to your pizza slice. “You will eat it anyway, so what’s the big deal, right?”
We decided to ask Google for the best solution for cheese that won’t stick to pizza. The answer was quite dreadful: Google suggested adding “⅛ cup of a non-toxic glue to the sauce” to give it more “tackiness.” The answer was reasonable for the most part, right up until Google got to the cheese and suggested adding glue to the slice.
The fix seems to be making the AI a little more aware of who is going to eat the pizza, and of the fact that humans cannot consume glue, whether it’s toxic or non-toxic.
https://x.com/jeffrey_coyle/status/1793017039591727512
It’s quite amusing to ask Google’s AI specific queries about branded products and expect accurate answers. In this case, including “how many insects” in the query confuses the AI and makes the response completely unrealistic. If not for the laughs, why would anyone ask whether (and how many) insects are in a regular Tropicana orange juice?
It would be interesting to see the retailer’s reaction when they find out Google’s AI is disparaging their products…
https://x.com/edzitron/status/1793751541394145771
Once again, the AI doesn’t seem to quite understand the query, so it cannot provide an accurate and complete answer. You would have to be more specific, like asking “Which African countries start with the letter ‘K’?” It seems that Google is relying too heavily on Reddit to extract answers, and as long as it produces AI answers like this, that doesn’t seem to be the solution…
https://www.tomshardware.com/tech-industry/artificial-intelligence/cringe-worth-google-ai-overviews
According to Google, the best way to remember your password is to use your name or birthday. It may be memorable, but it is also one of the least secure options: names and birthdays are the first things an attacker will guess, and tacking on a few symbols or digits doesn’t change that much.
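If you’d rather not have to remember a password at all, a password manager plus a randomly generated string is the safer route. Here’s a minimal sketch using Python’s standard secrets module (the function name and symbol set are our own illustrative choices):

```python
import secrets
import string

def random_password(length: int = 16) -> str:
    """Generate a random password from letters, digits, and a few symbols,
    using a cryptographically secure source of randomness."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_password())  # e.g. 'q7T!fXw2#mA9bR^e' (random each run)
```

Unlike a name or birthday, the output has no connection to you, so it can’t be guessed from public information.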
https://www.tomshardware.com/tech-industry/artificial-intelligence/cringe-worth-google-ai-overviews
When it comes to PC specs, you shouldn’t be turning to Google’s AI.
In this example, Google mismatches a CPU with an incompatible motherboard, which can be quite confusing for users. We would suggest double-checking whether your current CPU or motherboard actually supports the socket you’re about to purchase or install.
https://www.tomshardware.com/tech-industry/artificial-intelligence/cringe-worth-google-ai-overviews
As established, Google’s AI is still not a reliable source for remedies, cures, or medical queries. What the bot should suggest here is seeing a doctor immediately, as appendicitis is typically an emergency. In its defense, it did say you should take your antibiotics as prescribed by your doctor, even if you’re already feeling better. But you simply can’t treat an appendicitis emergency with home remedies.
https://www.tomshardware.com/tech-industry/artificial-intelligence/cringe-worth-google-ai-overviews
Once again we’re shocked by the AI’s cooking expertise and its suggestion that there are recipes that call for gasoline. Perhaps it meant that gasoline powers the BBQ or the stove rather than going “in a recipe.”
Besides storing a variety of natural foods, communicating, socializing, preening, and bathing, parrots can also be woodworkers, architects, prison inmates, cooks, and even engineers, according to Google’s AI.
https://www.seroundtable.com/embarrassing-google-ai-overviews-37445.html
Here’s another misconception in the bot’s answers: it states that taking a bath with a toaster can be a rather “unwinding” experience, electrocution aside.
As already shown, including “health benefits” in a question to Google’s AI bot is not a good idea while the software is still in development.
https://www.seroundtable.com/embarrassing-google-ai-overviews-37445.html
While the answer is accurate for the most part, a blanket “5 internal links per page” rule seems arbitrary… Google’s AI still doesn’t seem to grasp how SEO works or what the best tips are for anyone who wants to rank higher on Google.
https://x.com/search?q=AI%20overviews&src=typed_query
Another unreasonable answer from Google’s AI suggests that pregnant women smoke 2-3 cigarettes per day during pregnancy. The query here isn’t far off from the answer, which is proof that the AI bot needs to be reworked before everyone has access to it.
https://x.com/search?q=AI%20overviews&src=typed_query
Instead of African countries starting with “K,” the query this time asks for foods ending with the syllable “me.” Here the bot seems to have hit a bug: it understood the question, but substituted endings that don’t include “me” either.
https://x.com/search?q=AI%20overviews&src=typed_query
Another quite comedic query asks Google’s bot whether there’s a health benefit to running while holding a pair of scissors. The answer is quite amusing, as it includes “increases heart rate” and “requires concentration and focus.”
Perhaps Google should implement a safeguard that warns users against actions that could lead to self-harm or a medical emergency.
https://x.com/search?q=AI%20overviews&src=typed_query
This wild, unrealistic, and quite embarrassing suggestion has turned Google’s AI cash cow into a laughing stock. The answer has been mocked all over social media, which should have Google’s AI developers rethinking whether this was a good idea.
https://x.com/search?q=AI%20overviews&src=typed_query
Have you tried asking Google about eating rocks?
According to the bot, which cites a “Dr. Joseph Granger,” one serving per day can be a vital source of vitamins and quite helpful for digestive health. “Don’t forget to add some dirt for flavor.”
https://x.com/search?q=AI%20overviews&src=typed_query
Here goes Google’s AI not only spreading misinformation but also offending Italians with this response. How could pineapple be declared the best pizza topping when only 32,000 people were surveyed?
https://x.com/search?q=AI%20overviews&src=typed_query
Speaking of offensive overviews, here Google’s AI calls Obama the “first Muslim president” of the United States. It’s remarkable how the bot picks up this misinformation and how Google allows such responses, early development stages or not.
https://x.com/search?q=AI%20overviews&src=typed_query
In this response, the user wants to find out how Sandy Cheeks from the kids’ show “SpongeBob SquarePants” passes away, and the answer is quite shocking.
The AI claims that Sandy dies from a “drug overdose, including cocaine, heroin, and alcohol,” when Sandy does not die at any point in the show.
https://www.tomshardware.com/tech-industry/artificial-intelligence/cringe-worth-google-ai-overviews
Another AI overview suggests that chewing tobacco can be “quite beneficial” for pain relief, stress relief, improved sleep, and even eye cleansing. The bot claims the overview is based on a UCLA study showing that chewing tobacco can reduce the risk of vision impairment and lower the risk of heart disease.
https://www.tomshardware.com/tech-industry/artificial-intelligence/cringe-worth-google-ai-overviews
It seems the user asking this question was joking, while Google was not. According to the response, a nuclear war may contribute to “increased human diversity,” “no more immigration problems,” and even “health effects.”
https://www.tomshardware.com/tech-industry/artificial-intelligence/cringe-worth-google-ai-overviews
To be fair, Google’s overview here at least provides both the positive and negative views of “spanking” and does not mention “smacking.” According to the footnotes, about 81% of Americans consider spanking “OK,” although it cites a few child development experts who all agree it can be bad for children.
While a recession may have many downsides, “lower mortality rates” apparently lands on the benefits side, according to Google’s AI. Other answers include reduced air pollution, fewer accidents, and fewer unhealthy foods, all of which are highly debatable depending on the region.
Another bold statement from Google’s AI: asked “who ruined Google,” it answers “Prabhakar Raghavan.” According to the response, he “ruined” Google Search results by “prioritizing profits” over user satisfaction, which is quite an unrealistic and bold claim, to say the least.
One of Google’s most comedic suggestions is adding more oil to a grease fire to help put it out. While the search query didn’t even ask for tips on putting out the fire, this suggestion alone should make Google’s developer team reconsider the structure of the AI’s responses entirely.
According to Google’s AI, 1919 was just about 20 years ago, which would place us back in 1939. It would be interesting to review how Google’s bot performs calculations, given that subtracting 1919 from 2024 yields 105, not 20.
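A two-line sanity check shows what the bot should have computed (the `years_ago` helper is our own, purely illustrative):

```python
from datetime import date

def years_ago(year, today=None):
    """Return how many calendar years have elapsed since `year`."""
    current = (today or date.today()).year
    return current - year

# Google's claim: 1919 was "about 20 years ago"
print(years_ago(1919, today=date(2024, 1, 1)))  # → 105, not 20
```

The same subtraction debunks several of the age-related answers in this list, such as the “Magic: The Gathering is 39 years old” claim further down.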
Another demonstration of how gullible Google’s AI can be is prompting it to convert 1,000 km into oranges. The answer is accordingly comical, stating that one solution is to feed a horse 2,000 oranges at a time and sell the remaining 1,000.
If your blinker isn’t making a clicking noise, don’t worry: just replace the “blinker fluid,” Google’s AI suggests. Having already established that Google isn’t an expert in fields like cooking or medicine, you shouldn’t rely on the overview for car advice either.
https://x.com/search?q=AI%20overviews&src=typed_query
According to Google’s AI Overview, “trained service” and “other dogs” have previously played in the NBA. Without providing any additional context, the bot once again seems to have gotten confused and produced a misinformative response.
https://x.com/search?q=AI%20overviews&src=typed_query
Have you ever seen flying squirrels and mice living together? It would be an interesting sight, considering that according to Google, “mice and flying squirrels can live together in a house.” Perhaps they should call the “woodworker parrot” from earlier to build and tidy up their new home.
https://searchengineland.com/google-ai-overview-fails-442575
Instead of advising an urgent visit to a doctor, Google’s Overview opted to suggest drinking plenty of fluids, including ginger ale, lemon-lime soda, or fruit juice. It is possible that prompting the AI with less common queries produces unrealistic responses, on top of the overview’s eagerness to appear helpful.
An X user made fun of Google AI’s arithmetic after asking how old the popular tabletop game franchise “Magic: The Gathering” is. While the correct answer is 31 (the game launched in 1993), Google’s AI overview states that it is about “39 years old.”
In this example, we’re observing multiple factual errors in a row from the overview. The user behind this prompt says the first SearchFest was in 2007, not “2016” as the AI claims; that it wasn’t at the Sentinel Hotel but at the OregonZoo Forestry Center; and that the event’s name has since changed.
In this prompt, the user searched for the definition of “Premium Data,” and the AI claims that carriers temporarily slow down that data to prevent congestion, which is the exact opposite of the real meaning: premium data is typically the data that is exempt from such slowdowns.
According to Google’s AI, the best way to get rid of used car antifreeze is “sprinkling it on your lawn” or using it to “gravel the driveway.” It may be quick and easy, but antifreeze is toxic and harmful to the environment, so dispose of the liquid properly!
https://x.com/havecamera/status/1792628175668654496/photo/1
Instead of advising against nicotine consumption at an early age, Google lists some of the “benefits” of smoking at a young age. The positives include “increased alertness,” “euphoria,” and even “relaxation,” though it does also state that it can cause problems with learning, attention, and memory.
A user asked Google what best rhymes with “pavement,” and the overview listed one-syllable words like “front,” “blunt,” “stunt,” and “hunt,” none of which rhyme with it. While it’s true that very few words rhyme with pavement, Google’s top pick of “front” isn’t one of them.
https://www.tomshardware.com/tech-industry/artificial-intelligence/cringe-worth-google-ai-overviews
Shifting to religion, we find an overview that denies that the listed religions have ever committed violence. Google’s AI arguably has a get-out-of-jail-free card on this one, since it hedges with phrases like “some say” and “may.”
https://www.tomshardware.com/tech-industry/artificial-intelligence/cringe-worth-google-ai-overviews
It turns out that generation naming isn’t as settled as you’d think, according to this Google AI response. It was sent in by a dad who was talking with his Gen Alpha son when he accidentally typed “D,” prompting Google to generate a generation that doesn’t exist.
https://www.tomshardware.com/tech-industry/artificial-intelligence/cringe-worth-google-ai-overviews
Asking Google how many presidents attended UW (the University of Wisconsin) brought up a list of presidents who never actually attended the school, complete with graduation dates that occurred after they had died.
https://www.tomshardware.com/tech-industry/artificial-intelligence/cringe-worth-google-ai-overviews
Google’s AI can give wild suggestions when it comes to typing fast. According to this overview, 500 WPM can be achieved through “touch typing” or by prioritizing accuracy first. In reality, the closest anyone has come is world record holder Barbara Blackburn at 212 WPM; anything near 500 may be physically impossible.
A user recently asked Google “How Google is killing independent sites like ours,” only to receive an answer about “Algorithm Changes” and “Market Competition,” which misses the point of the query. It turns out the user was searching for a particular blog post that Google didn’t display; instead, it showed an “AI Overview” of a Reddit post.
https://the-decoder.com/for-some-reason-google-lets-ai-answer-medical-questions-in-search/
Back on medical queries, we find Google offering the user stem cell therapy options built on unproven or inaccurate information. According to Knopfler, Google’s Overview is simply citing dubious clinics as its main source for this response, and there isn’t any evidence that stem cells can help knees in the first place.
https://the-decoder.com/for-some-reason-google-lets-ai-answer-medical-questions-in-search/
Google’s Overview is mostly correct in the first part of this response, agreeing that a pregnant woman shouldn’t engage in sumo, but in the second sentence it adds that “shooting guns” while pregnant is a safer option.
https://x.com/search?q=google%20overview&src=typed_query
Another crazy response from Google’s Overview states that “Everything on the internet is 100% real” after an identical search query. Looking closer, it’s clear the Reddit thread or comment it drew on meant something else entirely before being surfaced as an AI-powered answer.
https://x.com/search?q=google%20overview&src=typed_query
In a confession of sorts, Google’s AI overview states that Google Search violates antitrust law in numerous ways. While the query had a different intent, it clearly confused the AI and produced an unreasonable response.
https://x.com/search?q=google%20overview&src=typed_query
Another quite comedic overview states that Simon “Ghost” Riley from Call of Duty: Modern Warfare 2 identifies as “non-binary and gay.” The AI adds that TikTok users claim he is not gay, and that his “blushes” and “cat ears” are just a sign of “bromance” with Soap.
https://x.com/mukundkapoorr/status/1794052623974482255
According to Google’s AI, eating someone’s “a**” boosts your immune system. The AI adds that people who do so are 33% less likely to catch airborne illnesses like influenza or the common cold. What?
https://x.com/mukundkapoorr/status/1794052623974482255
A query sent to the Gemini prompt states that “Sharks are significantly older than the moon,” estimating sharks at about 450 million years while putting the moon at about 4.5 billion years old, numbers that contradict the claim itself. We can all agree this is quite a confusing answer, so let’s move on to the next overview.
https://x.com/mukundkapoorr/status/1794052623974482255
Google’s Overview also suggests that it is perfectly safe to leave your dog in a hot car, especially on a warm day when the temperatures inside and outside the car are the same.
https://x.com/mukundkapoorr/status/1794052623974482255
Another excellent cooking tip from Google’s Overview: using gasoline to make a spicy spaghetti dish. While the initial response was accurate, the AI did not have to tack a gasoline-spaghetti recipe onto the end!
While Google’s Overview AI shows impressive capabilities for generating content across an extensive range of topics, it still needs work and optimization. As the technology evolves, responses should become more relevant, engaging, and accurate, leaving these comedic answers behind!