Trailer  |  01 Min  |  February 05

Unpacked with Arvind Ravishunkar

In this series, Unpacked, I explore and unpack the most important concepts that business leaders need to know about emerging technologies. I connect with reputed scholars, industry experts, and leaders through conversations, and I also do short five-minute episodes on key concepts. I am your host, Arvind Ravishunkar, and this is Season 1: Generative AI.

Latest podcasts

Episode 1  |  36 Min  |  February 05

Why AI hallucinates and why it matters with Ankur Taly, scientist at Google

Engaging topics at a glance

  • 00:00:20
    Introduction
  • 00:10:36
Why do models make mistakes, and why are they called AI hallucinations?
  • 00:13:31
How does a model know which relationships are meaningful and which are not?
  • 00:16:12
    Things enterprise leaders should keep in mind while deploying LLMs
  • 00:18:14
    How does grounding address these AI hallucinations?
  • 00:21:53
    How much is grounding going to solve the hallucination problem?
  • 00:24:47
    Does hallucinatory capability drive innovation?

Join us in this episode featuring Ankur Taly, Staff Research Scientist at Google, as we explore the concept of grounding LLMs.

Machines are supposed to work without mistakes, the way a calculator does arithmetic correctly. But in the world of artificial intelligence, errors, often called 'AI hallucinations,' are common, which raises questions about these mistakes and the systems behind them. For businesses that use AI in their work, especially when dealing with customers, making sure AI works without errors is very important.

Understanding how AI makes decisions, and being transparent about how it reaches them, is just as important. Business leaders need to be able to monitor and explain those decisions; this will be crucial for using AI in their companies in the future.

To fight AI hallucinations, grounding is essential. Grounding means making sure AI answers are anchored in real, verifiable facts: it involves training and prompting AI systems with correct, reliable information and requiring answers that can be traced back to that information. Grounding stops AI from making things up or giving wrong information.
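
The episode does not walk through an implementation, but the idea behind grounding can be sketched in a few lines of Python. Everything below is a hypothetical stand-in: the tiny knowledge base, the keyword-overlap retriever, and the prompt format illustrate the general pattern (retrieval-augmented generation), not Google's actual grounding pipeline.

```python
# A minimal grounding sketch: retrieve trusted sources, then constrain
# the model to answer only from them. The knowledge base and the
# retriever below are hypothetical stand-ins for illustration.

KNOWLEDGE_BASE = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support hours are 9am to 5pm EST, Monday through Friday.",
]

def retrieve(question: str, k: int = 1) -> list:
    """Rank documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def grounded_prompt(question: str) -> str:
    """Build a prompt that ties the model's answer to verified sources."""
    sources = "\n".join(retrieve(question))
    return (
        "Answer using ONLY the sources below. If the answer is not "
        "in the sources, say you do not know.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )

print(grounded_prompt("What is the refund policy?"))
```

The point of the sketch is the contract in the prompt: the model is asked to answer from supplied, verifiable sources rather than from free recall, which is exactly the behavior grounding is meant to enforce.
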
When businesses use LLMs (large language models) in their work, a few things deserve attention. First, they need good data to train the AI, because bad data can lead to wrong or unfair results. It is also important to set rules for how AI is used in the company, to avoid causing harm or misusing AI-generated information.

Businesses also need to monitor AI's output so they can catch and correct mistakes. Human review and filtering of AI's work keeps it accurate and consistent, and educating employees and users about what AI can and cannot do helps prevent misunderstandings and misuse.

Even though AI hallucinations can be a problem, they also have an upside. They can spark creative thinking and suggest unexpected solutions to tough problems. AI's imaginative output can power new kinds of art and media, and hallucinations can even aid learning by prompting people to question and discuss interesting topics.

Production Team
Arvind Ravishunkar, Ankit Pandey, Chandan Jha

Top trending insights

Episode 5  |  57 Min  |  February 05

Exploring reinforcement learning with MIT Professor Vivek Farias

Engaging topics at a glance

  • 00:16:50
    What is Reinforcement Learning
  • 00:20:10
    Reinforcement Learning for LLMs
  • 00:24:00
    How do you reward your model?
  • 00:33:00
Revealed preferences vs. just a few individuals doing that
  • 00:36:00
AI models training AI in the future?
  • 00:40:18
    Methodologies other than Reinforcement Learning
  • 00:43:10
Considerations in the Reinforcement Learning with Human Feedback (RLHF) phases
  • 00:48:10
    About Cimulate

In “Exploring Reinforcement Learning,” guest Vivek Farias, Professor at MIT, discusses the role reinforcement learning plays in the world of artificial intelligence.

Learning systems among humans date back almost 5,000 years, and they are what allowed us to progress as a society. Being able to teach others what we know and to share knowledge has been a foundational pillar of our evolution and civilization. Interestingly, these learning systems are not unique to humans: orcas, dolphins, and other higher-order intelligent animals spend time training and teaching their young. In the last 50 to 60 years, we have been teaching not just humans but machines how to learn, and artificial intelligence has benefited from our understanding of these learning systems.

The guest began by highlighting the importance of acknowledging uncertainty and of balancing exploiting what is known against exploring to learn more about the environment. This trade-off is known as the "multi-armed bandit problem" and is considered fundamental in reinforcement learning, where the goal is to optimize actions in an environment.
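
To make the trade-off concrete, here is a toy epsilon-greedy bandit in Python. The payout probabilities and the 10% exploration rate are invented for illustration; the episode discusses the problem conceptually, not this particular algorithm.

```python
import random

# Toy multi-armed bandit: three slot-machine arms with unknown payout
# probabilities. Epsilon-greedy balances exploiting the best-known arm
# against occasionally exploring the others. All numbers are made up.

TRUE_PAYOUTS = [0.3, 0.5, 0.7]       # hidden from the agent
EPSILON = 0.1                        # fraction of pulls spent exploring

counts = [0] * len(TRUE_PAYOUTS)     # pulls per arm
values = [0.0] * len(TRUE_PAYOUTS)   # running mean reward per arm

for _ in range(10_000):
    if random.random() < EPSILON:
        arm = random.randrange(len(TRUE_PAYOUTS))   # explore
    else:
        arm = values.index(max(values))             # exploit
    reward = 1.0 if random.random() < TRUE_PAYOUTS[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

print("Estimated payouts:", [round(v, 2) for v in values])
```

With enough pulls, the estimates converge toward the hidden payout rates and most pulls land on the best arm: the agent has learned by acting in its environment, which is the essence of reinforcement learning.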

For Large Language Models (LLMs) specifically, reinforcement learning has played a central role in building general-purpose chatbots, because a model trained on raw data alone may not give you the refined output you expect from it.

When discussing rewards and losses in the reinforcement learning phase, it came out that the way we structure rewards and penalties for AI models greatly influences their reliability, how they interact with the public, and their accountability.
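
One common way to structure that reward signal in RLHF pipelines is to score pairs of responses and penalize the model when the human-preferred response does not score clearly higher. The sketch below is a hypothetical illustration of that pairwise (Bradley-Terry style) loss; reward_score() stands in for a learned reward model and is not a system discussed in the episode.

```python
import math

# Sketch of an RLHF-style pairwise reward loss: a reward model scores two
# candidate responses, and the loss is low only when the human-preferred
# response scores clearly higher. reward_score() is a placeholder.

def reward_score(response: str) -> float:
    """Hypothetical scorer; in practice this is a trained neural network."""
    return float(len(response.split()))  # toy heuristic: longer = better

def preference_loss(preferred: str, rejected: str) -> float:
    """Negative log-probability that the preferred response wins."""
    margin = reward_score(preferred) - reward_score(rejected)
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The larger the scoring margin in favor of the preferred answer,
# the smaller the loss and the weaker the pressure to adjust the model.
print(preference_loss("a detailed, well-grounded answer", "short reply"))
```

How this loss is designed, what the scorer rewards, what it penalizes, and whose preferences it encodes, is precisely the lever the guest describes: it shapes the model's reliability and its accountability to the public.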

Overall, deploying AI involves a balance: backend deployment offers some predictability, while front-end deployment is uncertain. Successful businesses must experiment and capitalize on both.

Production Team
Arvind Ravishunkar, Ankit Pandey, Chandan Jha

Co-create for collective wisdom

This is your invitation to become an integral part of our Think Tank community. Co-create with us to bring diverse perspectives and enrich our pool of collective wisdom. Your insights could be the spark that ignites transformative conversations.

Learn More