Don’t blindly trust what AI tells you, Google boss tells BBC


Faisal Islam, economics editor,

Rachel Clune, business reporter, and

Liv McMahon, technology reporter


The boss of Google’s parent company Alphabet has told the BBC that people should not “blindly trust” everything AI tools tell them.

In an exclusive interview, Chief Executive Sundar Pichai said AI models are “prone to errors” and urged people to use them with other tools.

Mr Pichai said this highlighted the importance of having a rich information ecosystem rather than relying solely on AI technology.

“That’s why people use Google Search as well, and we also have other products that are more capable of providing accurate information.”

However, some experts say that big tech companies like Google should not invite users to fact-check the output of their tools, but should instead focus on making their systems more reliable.

While AI tools were helpful “if you want to write something creatively”, Mr Pichai said people “need to learn to use these tools based on what they are good at, and not blindly trust what they say”.

“We are proud of the work we have done to provide as accurate information as possible, but with current state-of-the-art AI technology there is the potential for some errors to occur,” he told the BBC.

The company displays disclaimers on its AI tools to let users know they can make mistakes.

But this has not protected it from criticism and concerns over the errors made by its products.

Google’s rollout of AI Overviews, which summarise its search results, faced criticism and mockery over some erratic, inaccurate responses.

The tendency of generative AI products such as chatbots to produce misleading or inaccurate information is a cause for concern among experts.

“We know that these systems create answers, and they create answers to please us – and that’s a problem,” Gina Neff, professor of responsible AI at Queen Mary University of London, told BBC Radio 4’s Today programme.

“If I’m asking ‘What movie should I watch next?’ that’s OK, but if I’m asking really sensitive questions about my health, about mental well-being, about science, about the news, that’s quite different,” she said.

Prof Neff urged Google to take more responsibility for its AI products and their accuracy, rather than placing that burden on consumers.

“The company is now asking to mark their own exam papers while they are burning the school,” she said.

‘A new phase’

The tech world is awaiting the launch of the latest version of Google’s consumer AI model, Gemini 3.0, as Gemini begins to take market share back from ChatGPT.

In May this year, Google began introducing a new “AI mode” to its Search, integrating its Gemini chatbot, which aims to give users the experience of talking to an expert.

At the time, Mr Pichai said Gemini’s integration with Search signaled a “new phase of AI platform transformation”.

The move is also part of the tech giant’s bid to remain competitive against AI services like ChatGPT, which have threatened Google’s online search dominance.

The concerns echo BBC research from earlier this year, which found that AI chatbots produced inaccurate summaries of news stories.

OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini and Perplexity AI were all given material from the BBC website and asked questions about it, and the research found “significant inaccuracies” in the AI answers.

In his interview with the BBC, Mr Pichai said there was some tension between how fast the technology is developing and the mitigations needed to prevent potentially harmful effects.

For Alphabet, Mr Pichai said, managing that tension meant being “courageous and responsible at the same time”.

“That’s why we’re moving so quickly right now. I think our consumers are demanding it,” he said.

Mr Pichai said the tech giant had also increased its investment in AI safety in proportion to its investment in AI.

“For example, we are open-sourcing technology that will allow you to detect whether an image was created by AI,” he said.

Asked about recently resurfaced years-old comments from OpenAI’s founders, including tech billionaire Elon Musk, warning that the now Google-owned DeepMind could create an AI “dictatorship”, Mr Pichai said: “No one company should have technology as powerful as AI.”

But he said there are many companies in the AI ecosystem today.

“If there was only one company that was building AI technology and everyone else had to use it, I would be concerned too, but we’re a long way from that scenario right now,” he said.
