This past week, OpenAI released a new chatbot that generated poems, screenplays and essay answers, which were quickly posted on Twitter by the technorati. Though the underlying technology has existed for a few years, this was the first time OpenAI had brought its powerful language-generating system GPT-3 to the masses, prompting humans to try their hand at coming up with commands. My favorite: “Write a Biblical verse about how to remove a peanut butter sandwich from a VCR.” But there are plenty of practical uses for ChatGPT too. Programmers are using it to draft code or spot mistakes. Perhaps its biggest potential use, though, is as a replacement for Google’s search engine, by providing better answers than the ones that currently populate it.
Google works by crawling and ranking billions of web pages to determine the most relevant results. But what if a query needs just one answer? ChatGPT can offer a single, satisfying response drawn from its training data rather than from a live search. It has been trained on millions of websites not only to hold conversations in a human-like way, but also to answer questions with information, so long as that information was published online before late 2021.
I went through my own Google search history and put 18 of my past queries into ChatGPT, then refreshed each of them on Google to make a fair comparison. The result: ChatGPT’s answer was more useful than Google’s for 13 of the 18 queries.
The term “useful” is subjective. In this case, it means providing a clear and comprehensive answer. For example, when I asked which milk is best for making pumpkin pie ahead of Thanksgiving, ChatGPT’s answer was detailed (if slightly long-winded) and explained the advantages of condensed milk. Google mainly served up a list of recipes I’d have to click through separately, without giving an answer straight away.
That one feature makes ChatGPT a significant potential threat to Google down the line: a single, instantaneous response that requires no further scanning of other websites. In Silicon Valley speak, this is a “frictionless” experience, something of a holy grail when online consumers overwhelmingly favor services that are quick and easy to use.
Google has its own version of summarized answers for some queries, but they are often brief. Google also has a proprietary language model called LaMDA that is so advanced one of the company’s own engineers claimed it was sentient. So why doesn’t Google use it to answer queries directly?
The answer is that Google can’t afford to generate singular answers of its own, because anything that stops people from scanning search results and clicking on links would hurt Google’s transactional business model. According to data compiled by Bloomberg, most of Alphabet Inc.’s $257.6 billion in revenue in 2021 came from advertising, and much of that was generated by Google’s pay-per-click ads.
“It’s all designed with the purpose of ‘Let’s get you to click on a link,’” said Sridhar Ramaswamy, who oversaw Google’s ads and commerce business between 2013 and 2018. Ramaswamy predicts that generative search from technologies like ChatGPT will disrupt Google’s traditional search business “in a massive way.”
“It’s just a better experience,” he added. “The goal of Google search is to get you to click on links and ads, but most of the text on a page is filler.” Ramaswamy co-founded Neeva in 2019; the company is building a new type of search engine that uses machine learning to summarize and recommend web pages, with a rollout of the technology planned in the next few months.
ChatGPT doesn’t reveal where it gets its information, and even its creators may not be able to fully explain how it arrives at its answers. That points to one of its biggest weaknesses: sometimes, its answers are just wrong.
Stack Overflow, a question-and-answer site for programmers, has banned its users from sharing answers generated by ChatGPT because they are so frequently incorrect.
My own experience bore this out: one answer the system offered me was riddled with mistakes. For example, it stated that a fictional character’s parents had died when in fact they hadn’t.
One of ChatGPT’s more treacherous flaws is that its inaccuracies are hard to notice, especially since it sounds so confident. The system’s answers “typically look like they might be good,” according to Stack Overflow, and by OpenAI’s own admission, they often merely sound plausible. OpenAI originally trained its system to be more cautious, but the result was that it declined questions it actually knew the answer to. Tuned the other way, the result is something like a college frat student bluffing their way through an essay after not studying: fluent hogwash.
How big ChatGPT’s accuracy problem is remains unclear. One estimate doing the rounds on Twitter is that it’s inaccurate 2% to 5% of the time, but the real figure could be higher. That will make users wary of relying on ChatGPT for important information. Google has one other big advantage: much of its money comes from transactional and navigational searches, such as people simply typing “Facebook” or “YouTube” into the search box. Those kinds of queries made up a large share of Google’s top 100 searches in 2022. As long as ChatGPT doesn’t offer links to other sites through its interface, Google won’t feel too threatened by it encroaching on that turf. But both issues could evolve over time, if OpenAI trains its machine-learning model on more current parts of the web and builds link-sharing systems like WebGPT.
ChatGPT amassed one million users in about five days. That’s an incredible milestone, considering that social networks like Instagram took roughly 2.5 months to reach that many users. OpenAI isn’t talking about its future plans just yet, but if its new chatbot starts sharing links to other websites, particularly sites that sell things, it could present a real threat to Google.
ChatGPT was fine-tuned from a model in OpenAI’s GPT-3.5 series of large language models, which was trained on text and code published before the fourth quarter of 2021.
The opinion expressed in this article is solely that of the author.
Parmy Olson is a Bloomberg Opinion columnist covering technology. She was previously a reporter for the Wall Street Journal and Forbes, and her work has appeared in Foreign Policy, Wired and GQ. Her latest book is “We Are Anonymous: Inside the Hacker World of LulzSec, Anonymous, and the Global Cyber Insurgency.”
Related articles like this one can be found on bloomberg.com/opinion