Episode 1  |  36 Min  |  February 05

Why AI hallucinates and why it matters with Ankur Taly, scientist at Google


Engaging topics at a glance

  • 00:00:20
    Introduction
  • 00:10:36
    Why do models make mistakes, and why are they called AI hallucinations?
  • 00:13:31
    How does a model know which relationships are meaningful and which are not?
  • 00:16:12
    Things enterprise leaders should keep in mind while deploying LLMs
  • 00:18:14
    How does grounding address these AI hallucinations?
  • 00:21:53
    How much is grounding going to solve the hallucination problem?
  • 00:24:47
    Does hallucinatory capability drive innovation?

Join us in this episode featuring Ankur Taly, Staff Research Scientist at Google, as we explore the concept of grounding LLMs!

Machines are supposed to work without mistakes, just as a calculator does math correctly. But in the world of artificial intelligence, errors, often called ‘AI hallucinations,’ are common. This raises questions about why these errors happen and how the models behind them work. For businesses that use AI in their work, especially when dealing with customers, making sure AI works without errors is very important.

The grounding requirement is not only that you should not have any made-up stuff, but that everything you output should be grounded in some knowledge source, and the knowledge source is something that I control.

– Ankur Taly

Understanding how AI makes decisions and being clear about its processes is very important. Business leaders need to be able to watch and explain how AI makes decisions. This will be crucial for using AI in their companies in the future.

To fight AI hallucinations, grounding is important. Grounding means making sure AI answers are based on real facts: the system draws on correct, reliable information and gives answers that can be traced back to a source and verified. Grounding stops AI from making things up or giving wrong information.
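
To make this concrete, grounding is often implemented by retrieving passages from a knowledge source you control and constraining the model to answer only from them. Below is a minimal, hypothetical sketch of that pattern; the knowledge source, the naive overlap-based retrieval, and the generate stub are illustrative stand-ins, not the specific method discussed in the episode.

```python
# Minimal grounding sketch: retrieve from a controlled knowledge source,
# then constrain the model to answer only from the retrieved passages.

KNOWLEDGE_SOURCE = [
    "Our return policy allows refunds within 30 days of purchase.",
    "Support is available Monday through Friday, 9am to 5pm.",
]

def generate(prompt):
    # Hypothetical stand-in for whatever LLM API you use.
    raise NotImplementedError("swap in your LLM client here")

def retrieve(query, docs, k=2):
    # Rank documents by naive word overlap with the query (illustrative only).
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(q & set(d.lower().split())))[:k]

def grounded_answer(query):
    passages = retrieve(query, KNOWLEDGE_SOURCE)
    prompt = (
        "Answer using ONLY the numbered sources below. If they do not "
        "contain the answer, say you don't know.\n\n"
        + "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
        + f"\n\nQuestion: {query}"
    )
    return generate(prompt)
```
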
When businesses use large language models (LLMs) in their work, they should keep a few things in mind. First, they need to train and tune AI on good data, because bad data can lead to wrong or unfair results. It is also important to have rules about how AI is used in the company, to avoid causing harm or misusing AI-generated information.

While you can use this in a very creative way, this next-word prediction is also ultimately to blame for hallucinations, because what the model is doing is looking at what it recently said and then trying to predict what will likely come right after.

– Ankur Taly
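
To illustrate the mechanism the quote describes, here is a toy next-word predictor built from bigram counts. Real LLMs are neural networks over subword tokens, not frequency tables, but the loop is the same in spirit: the model extends whatever it just said with a statistically likely continuation, with no check against ground truth.

```python
import random
from collections import Counter, defaultdict

# Toy corpus; the "." tokens just mark sentence ends.
corpus = (
    "the cat sat on the mat . the cat ate the fish . "
    "the dog sat on the rug ."
).split()

# Count which word follows which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start, length=8):
    # Repeatedly sample a likely next word given only the previous word.
    words = [start]
    for _ in range(length):
        followers = bigrams.get(words[-1])
        if not followers:
            break
        nxt = random.choices(list(followers), weights=list(followers.values()))[0]
        words.append(nxt)
    return " ".join(words)

# Fluent but unverified: it can produce "the cat sat on the rug",
# a sentence that never appears in the corpus.
print(generate("the"))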

Businesses also need to keep an eye on AI’s results to fix mistakes or wrong information. Having people check and filter AI’s work ensures that it’s correct and consistent. It’s also important to teach employees and users about what AI can and can’t do to avoid misunderstandings or misuse.


Even though AI hallucinations can be a problem, they can also have some positives. They can make people think creatively and find new solutions to tough problems. AI’s imaginative ideas can be fun, offering new types of art and media. Plus, AI hallucinations can help with learning by making people think and talk about interesting topics.

Production Team
Arvind Ravishunkar, Ankit Pandey, Chandan Jha

Latest podcasts

Episode 4  |  53 Min  |  February 05

Performance and choice of LLMs with Nick Brady, Microsoft


Engaging topics at a glance

  • 00:12:23
    Introduction
  • 00:14:20
    Current use cases being deployed for GenAI
  • 00:19:10
    Performance of LLM models
  • 00:36:15
    Domain Specific LLMs vs General Intelligence LLMs
  • 00:38:37
    How to choose the right LLM?
  • 00:41:27
    Open Source vs Closed Source
  • 00:44:50
    Cost of LLM
  • 00:46:10
    Conclusion

"Exploring what should organization considering when choosing to adopt LLMs" with guest Nick Brady, Senior Program Manager at Microsoft Azure Open AI Service

AI has been at the forefront of transformation for more than a decade now. Still, OpenAI's launch of ChatGPT in November 2022, which took off at a scale even OpenAI did not expect, will be noted as a historic moment in the history of technological innovation. Most people don't fully appreciate the magnitude of the shift we're in: we can now directly express to a machine a problem we need solved. Equipping these technologies with the right reasoning engines and the right connectivity could bring the biggest technology leapfrog, not just for enterprises but in everyday life.

This leapfrog raises a few questions for enterprises looking to adopt GenAI as part of their strategy and operations, such as:

Which use cases are best suited for adopting these models?

While most customers first look at how generative AI could reduce business costs in their organizations, the true value comes from using it to maximize productivity, which downstream leads to employee and customer satisfaction. Any place where there is language, whether programming or natural language, is a good use case for generative AI, and that is probably the most profound shift. So, if you have language, a document, or big data whose content you are trying to synthesize and understand, generative AI models can do this ad nauseam without any delay.

The most common metric used to describe LLMs is the number of parameters; GPT-3, for example, has 175 billion parameters. But what does this mean?

Parameter size refers to the number of values the model can change independently as it learns from data; it stores what it learns as model weights in a vast associative array of memory. What is perhaps more important for these models, because it speaks more to their capability, is their vocabulary size.
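
As a rough illustration, a transformer's parameter count can be approximated from its depth, hidden size, and vocabulary size. The architecture figures below are the ones reported in the GPT-3 paper (an assumption on our part, not something cited in the episode):

```python
# Back-of-the-envelope parameter count for a GPT-3-like transformer.
# The 12 * d_model^2 per-layer term approximates attention (4 * d^2)
# plus the feed-forward block (8 * d^2).
n_layers = 96       # transformer layers (GPT-3 paper)
d_model = 12288     # hidden size (GPT-3 paper)
vocab_size = 50257  # BPE vocabulary size

per_layer = 12 * d_model ** 2        # ~1.8 billion per layer
embeddings = vocab_size * d_model    # ~0.6 billion, grows with vocabulary
total = n_layers * per_layer + embeddings

print(f"{total / 1e9:.1f}B parameters")  # ~174.6B, close to the quoted 175B
```

The embedding term also shows why vocabulary size matters: every token in the vocabulary carries its own d_model-sized vector, and a richer vocabulary lets the model represent text with fewer tokens.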

How does one decide and evaluate which would be the best-suited model for the selected use cases?

The best practice is to start with the most powerful and advanced language model, such as GPT-4, to test whether your use case is even possible. Once the use case is confirmed to be possible, trickle down to simpler models to evaluate their efficacy and efficiency. If a simpler model can get perhaps 90% of the way there with just a little prompt engineering, then you can optimize for cost.

Organizations have to define what quality means to them. It could be the quality of the model's output and responses, or its performance in terms of latency, where output quality may matter less than how quickly the system can respond to the user.

The key for leaders is to pay close attention to potential use cases, test them with the best model, and then optimize to balance cost, efficacy, and efficiency.
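
As a hypothetical sketch of that practice, the loop below establishes a quality baseline with the strongest model, then "trickles down" to the cheapest model whose quality stays within a tolerance of that baseline; run_model, score, and the model names are placeholders, not a real API.

```python
# Hypothetical model-selection sketch: establish a quality baseline with the
# best model, then keep the cheapest model that gets ~90% of the way there.

MODELS = ["small", "medium", "large"]  # placeholders, ordered cheapest -> best

def run_model(model, prompt):
    # Placeholder for your actual LLM client call.
    raise NotImplementedError

def avg_score(model, eval_set, score):
    # Mean quality over (prompt, reference) pairs, per your chosen metric.
    return sum(score(run_model(model, p), ref) for p, ref in eval_set) / len(eval_set)

def pick_model(eval_set, score, tolerance=0.9):
    baseline = avg_score(MODELS[-1], eval_set, score)  # best model first
    for model in MODELS:                               # then cheapest upward
        if avg_score(model, eval_set, score) >= tolerance * baseline:
            return model
    return MODELS[-1]
```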

Production Team
Arvind Ravishunkar, Ankit Pandey, Chandan Jha

Top trending insights

Episode 11  |  55 Min  |  February 05

How to deploy AI sustainably with Dr. Eng Lim Goh, SVP at HPE


Engaging topics at a glance

  • 00:16:45
    Why is sustainable AI important?
  • 00:20:48
    Apart from power, what else matters for sustainable AI?
  • 00:26:12
    What about e-waste and recycling?
  • 00:29:15
    Why is AI so power hungry?
  • 00:32:19
    What model should business leaders adopt for AI deployment?
  • 00:36:47
    More on energy use by AI
  • 00:39:56
    Choosing the right hardware for AI
  • 00:45:08
    Organizational effort for sustainable AI
  • 00:48:56
    Considerations when deploying AI

Explore the vital link between AI and sustainability as we discuss strategies for eco-conscious AI deployment with Dr. Eng Lim Goh, Senior Vice President of Data & AI at Hewlett Packard Enterprise.

In this episode of Unpacked, Arvind introduces Dr. Eng Lim Goh, SVP of Data and AI at Hewlett Packard Enterprise, to discuss the topic of sustainable AI. They agree on the importance of being conscious of the planet's well-being while charting business growth and profitability.

Dr. Goh explains the need for corporations to consider sustainability due to their net-zero goals, the conscious younger generation of employees, and the economic implications of power consumption in supercomputing. He shares his experience of working on the International Space Station and how he realized the importance of a sustainable approach to technology. 

He suggests that businesses should consider long-term goals while investing in AI and related technologies, adding that it is important to measure the impact of such efforts quantitatively. He also talks about the importance of collaboration between businesses, governments, and academia in achieving sustainable progress. The conversation then moves to energy consumption in AI: Dr. Goh explains how the power consumption of large models has been a challenge in the supercomputing industry, suggests that businesses adopt more efficient hardware and software to reduce energy consumption, and notes the importance of using renewable energy sources to power data centers.

The conversation concludes with Dr. Goh’s vision for the future of AI and sustainability. Dr. Goh emphasizes the need for businesses to consider the long-term impact of their actions and to invest in sustainable technologies. He believes that AI can play a crucial role in achieving sustainability goals and that it is important for businesses to collaborate and share knowledge to achieve sustainable progress. 

Overall, the conversation highlights the need for businesses to consider sustainability while investing in AI and related technologies. It emphasizes the importance of transparency, collaboration, and measurement in achieving sustainable progress.

Production Team
Arvind Ravishunkar, Ankit Pandey, Chandan Jha

Co-create for collective wisdom

This is your invitation to become an integral part of our Think Tank community. Co-create with us to bring diverse perspectives and enrich our pool of collective wisdom. Your insights could be the spark that ignites transformative conversations.

Learn More