Episode 1  |  36 Min  |  February 05

Why AI hallucinates and why it matters with Ankur Taly, scientist at Google


Engaging topics at a glance

  • 00:00:20
    Introduction
  • 00:10:36
    Why do models make mistakes, and why are they called AI hallucinations?
  • 00:13:31
    How does a model know which relationships are meaningful and which are not?
  • 00:16:12
    Things enterprise leaders should keep in mind while deploying LLMs
  • 00:18:14
    How does grounding address these AI hallucinations?
  • 00:21:53
    How much is grounding going to solve the hallucination problem?
  • 00:24:47
    Does hallucinatory capability drive innovation?

Join us in this episode featuring Ankur Taly, Staff Research Scientist, Google, as we explore the concept of grounding of LLMs!

Machines are supposed to work without mistakes, just as a calculator does arithmetic correctly. In the world of artificial intelligence, however, errors, often called ‘AI hallucinations,’ are common, which raises questions about these mistakes and the systems behind them. For businesses that use AI in their work, especially when dealing with customers, making sure AI works without errors is critical.

The grounding requirement is not only that you should not have any made-up stuff, but that everything you output should be grounded in some knowledge source, and the knowledge source is something that I control.

– Ankur Taly

Understanding how AI makes decisions, and being transparent about its processes, is essential. Business leaders need to be able to monitor and explain how AI reaches its decisions; this will be crucial for adopting AI in their companies in the future.

To fight AI hallucinations, grounding is key. Grounding means making sure AI answers are based on verifiable facts: supplying AI systems with accurate, reliable information and requiring them to give answers that can be traced back to a source. Grounding stops AI from making things up or giving wrong information.
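The idea can be illustrated with a toy sketch, assuming a retrieval-style setup: answers may come only from a controlled knowledge source, and anything the system cannot support is refused. The knowledge base, the keyword-overlap retriever, and all names here are invented for illustration, not a real API or how production grounding systems work.

```python
from typing import Optional

# Hypothetical controlled knowledge source (the "something that I control").
KNOWLEDGE_BASE = {
    "refund policy": "Refunds are issued within 14 days of purchase.",
    "support hours": "Support is available 9am-5pm, Monday to Friday.",
}

def retrieve(question: str) -> Optional[str]:
    """Return the first knowledge-base entry whose topic appears in the question."""
    q = question.lower()
    for topic, fact in KNOWLEDGE_BASE.items():
        if topic in q:
            return fact
    return None

def grounded_answer(question: str) -> str:
    fact = retrieve(question)
    if fact is None:
        # Refusing is preferable to hallucinating an unsupported answer.
        return "I don't have a source for that."
    # In a real system, an LLM would be prompted to answer *only* from `fact`.
    return f"{fact} (source: knowledge base)"

print(grounded_answer("What is your refund policy?"))
print(grounded_answer("Who won the 1998 World Cup?"))
```

The design choice worth noting is the refusal path: a grounded system trades coverage for verifiability, answering fewer questions but making every answer traceable to the controlled source.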
When businesses use LLMs (large language models) in their work, they should keep a few things in mind. First, they need to train AI on good data, because bad data can lead to wrong or unfair results. It is also important to have rules about how AI is used in the company, to avoid harm or misuse of AI-generated information.

While you can use this in a very creative way, next-word prediction is also ultimately to blame for hallucinations, because what it is doing is basically looking at what it recently said and then trying to predict what will likely come right after.

– Ankur Taly
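The mechanism in the quote can be sketched with a toy next-word predictor. This bigram model is far simpler than a real LLM (the corpus and names are invented), but it shows the same behavior: each word is chosen because it is *likely* to follow the previous one, with no check on whether the resulting sentence is true.

```python
from collections import Counter, defaultdict

# Toy corpus; a real LLM learns from vastly more text, but the principle
# of predicting a likely next word is the same.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Most frequent word seen after `word` in the corpus."""
    return successors[word].most_common(1)[0][0]

# Generate text by repeatedly predicting the next word.
word = "the"
sentence = [word]
for _ in range(4):
    word = predict_next(word)
    sentence.append(word)
print(" ".join(sentence))  # → the cat sat on the
```

The output is fluent because every step is statistically plausible, yet nothing in the procedure verifies facts, which is exactly why fluent-but-false continuations (hallucinations) emerge.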

Businesses also need to monitor AI outputs to catch and fix mistakes or wrong information. Having people review and filter AI's work keeps it accurate and consistent. It is also important to educate employees and users about what AI can and cannot do, to avoid misunderstandings or misuse.


Even though AI hallucinations can be a problem, they can also have an upside. They can push people to think creatively and find new solutions to tough problems, and AI's imaginative output can be entertaining, offering new types of art and media. AI hallucinations can even aid learning by prompting people to examine and discuss how these systems work.

Production Team
Arvind Ravishunkar, Ankit Pandey, Rinat Sergeev, Chandan Jha, Nikhil Sood, Dipika Prasad

Latest podcasts

Episode 2  |  39 Min  |  February 05

Develop AI strategy for your organization with Dr. Kavita Ganesan


Engaging topics at a glance

  • 00:12:19
    Key messages in the book: The business case for AI
  • 00:12:58
    What should enterprise leaders look into when implementing AI
  • 00:15: 25
    What problems can be solved with AI?
  • 00:16:13
    Importance of data in AI
  • 00:19:30
    Things to consider when going with AI in production
  • 00:20:48
    What makes a problem AI suitable?
  • 00:24:35
    Success rate of AI projects
  • 00:25:37
    What causes failure of AI projects?
  • 00:28:14
    What is preventing AI success?
  • 00:30:20
    Data integration problem

Join Dr. Kavita Ganesan in “Develop AI strategy for your organization,” where she discusses what to consider when implementing AI.

Many programmes, specifically AI-based programmes, start with the right intentions but often fail when they go into production. To explore why this happens and how it can be solved, we had an insightful discussion with our guest in this episode.

Most of the AI initiatives today fail to make it into production because people are not solving the right problems with AI, and there is a lack of understanding of what AI is at the leadership level.

The perception that Gen AI can solve every problem is inaccurate, and understanding this is crucial for enterprise leaders. Many other AI techniques can solve business problems, so it is important to have a general understanding of what AI is and what types of problems it can solve, as implementing AI is not only cost-intensive but also comes with many risks.

Contrary to what many people assume after the emergence of Gen AI, data collection remains an integral part of AI initiatives, since models must still be fine-tuned for company-specific problems.

When deciding where to apply AI, it is advisable to use it for intricate problems that break down into numerous narrow prediction tasks. In such cases, a large number of data points must be evaluated to make a decision, which would be challenging for human minds to process.

It's important for companies to take a strategic approach when implementing AI. Instead of chasing the latest trends (like applying Gen AI to every problem), companies should identify the problems in their business whose solutions would have the biggest impact.


Top trending insights

Trailer  |  01 Min  |  February 05

Unpacked with Arvind Ravishunkar


In this series, Unpacked, I explore and unpack the most important concepts that business leaders need to know about emerging technologies. I connect with reputed scholars, industry experts, and leaders through conversations, and I also do short five-minute episodes on key concepts. I am your host, Arvind Ravishunkar, and this is Season 1: Generative AI.

