In light of ChatGPT’s rising popularity, Google’s management has reportedly issued a ‘code red’

The New York Times reported Wednesday that Google’s management has issued a “code red” amid the launch of ChatGPT – a buzzy conversational-artificial-intelligence chatbot developed by OpenAI.

In an internal memo and an audio recording reviewed by The Times, Sundar Pichai, the CEO of Google and its parent company, Alphabet, directed several groups within the company to refocus their efforts on addressing the threat ChatGPT poses to Google's search-engine business.

In particular, Google's research and trust-and-safety teams, among other departments, have been told to switch gears to help develop and launch artificial-intelligence prototypes and products, according to The Times. Some employees have reportedly been assigned to develop AI products that generate art and graphics, similar to OpenAI's DALL-E, which is used by millions of people.

A Google spokesperson did not immediately respond to a request for comment.

As Google expands its AI-product portfolio, employees and experts are debating whether ChatGPT, made by OpenAI, the company led by the former Y Combinator president Sam Altman, could replace Google's search engine and hurt its ad-revenue model.

Sridhar Ramaswamy, who oversaw Google's ad team between 2013 and 2018, told Insider that ChatGPT could prevent users from clicking on Google links with ads, which generated $208 billion, or 81% of Alphabet's overall revenue, in 2021.

By collecting information from millions of websites, ChatGPT, which gained more than 1 million users within five days of its public launch in November, can generate a single answer to a question in a conversational, humanlike way. In addition to writing college essays, the chatbot has offered coding advice and even served as a therapist of sorts for users.

The bot has been criticized for its errors, however. AI experts told Insider that ChatGPT cannot fact-check what it says and cannot distinguish between verified facts and misinformation. It also makes up answers, a phenomenon AI researchers call "hallucination."

Bloomberg reported that the bot can also generate racist and sexist responses.

The Times reported that Google has been hesitant to release its AI chatbot LaMDA to the public because of its high margin of error and potential for toxicity. According to a recent CNBC report, Google executives were concerned about the "reputational risk" of releasing it broadly in its current state.

Zoubin Ghahramani, who leads Google's AI lab, Google Brain, told The Times before ChatGPT's release that chatbots are not yet reliable enough for daily use.

According to The Times, Google may instead focus on gradually improving its search engine rather than replacing it outright.

We might get an early look at Google's AI products in May at Google I/O, the company's annual developer conference.
