Episode 10  |  61 Min  |  April 04

What you should know about LLMs with Anupam Datta, Co-founder, TruEra, and ex-CMU


Engaging topics at a glance

  • 00:09:15
    Introduction
  • 00:13:40
    What is a Large Language Model (LLM)?
  • 00:18:40
    Is LLM a form of intelligence?
  • 00:20:25
    Comparing how LLMs learn with how humans learn
  • 00:22:50
    How do LLMs differ from one another?
  • 00:27:56
    What to consider when choosing LLMs?
  • 00:44:05
    Can LLMs retrieve past human knowledge?
  • 00:51:45
    How can companies harness the power of statistical models?
  • 00:53:05
    Key things to keep in mind when integrating LLMs into the business
  • 00:56:10
    Conclusion

Join us in this episode featuring Anupam Datta, Co-founder and Chief Scientist, TruEra, as we dive into the evolution of LLMs and what they hold for the future!

The world of generative AI has taken us by storm. As enterprise leaders, understanding the technology behind generative AI will give you a competitive advantage as you plan for your companies and businesses. To help you do this, we unpack the technology that powers AI today, large language models (LLMs), which represent a paradigm shift in the field of artificial intelligence.

LLMs can craft meaningful responses across many domains. Their performance has notably improved recently thanks to the substantial increase in model size and data volume.

Is it like human? No, far from it. Generative AI is still a machine that is learning statistical patterns and generalizing from there remarkably well, but these are machines.

– Anupam Datta

With the growing acceptance of this technology, numerous companies are unveiling their own large language models (LLMs). It is important to recognize that choosing the largest or highest-performing LLM is not always the most suitable approach; instead, you may prefer LLMs that excel at the specific tasks relevant to your application. As an enterprise leader, it is crucial to build this understanding into your company's strategy so you can identify the right LLMs to match and adapt for your applications. Striking a balance between LLM selection, cost, and latency is a pivotal concern for enterprises. Equally essential is thorough validation and assessment of generated outputs, serving as a safeguard before consequential decisions are made. Hence, reliability testing at this juncture is paramount.
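The quality/cost/latency trade-off described above can be sketched as a simple weighted score. This is a minimal illustration only: the model names, prices, and latency figures below are hypothetical placeholders, not benchmarks of any real LLM, and the weights would need tuning to a specific application.

```python
# Hypothetical sketch: ranking candidate LLMs by task quality, cost, and latency.
# All names and numbers are illustrative placeholders, not real measurements.

candidates = {
    # name: (task_quality 0-1, cost_per_1k_tokens_usd, p95_latency_s)
    "large-general-model": (0.92, 0.060, 4.0),
    "mid-size-model":      (0.88, 0.010, 1.5),
    "small-tuned-model":   (0.85, 0.002, 0.4),
}

def score(quality, cost, latency, w_quality=0.6, w_cost=0.25, w_latency=0.15):
    """Higher is better: reward quality, penalize cost and latency.

    Cost and latency are normalized against the most expensive/slowest candidate
    so all three terms are on a comparable 0-1 scale.
    """
    return w_quality * quality - w_cost * (cost / 0.060) - w_latency * (latency / 4.0)

ranked = sorted(candidates, key=lambda m: score(*candidates[m]), reverse=True)
print(ranked)  # under these illustrative numbers, the smaller models rank highest
```

With these placeholder figures, the largest model actually ranks last once cost and latency are weighed in, which is exactly the point: "biggest" and "best for your application" are different questions.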

Furthermore, enterprises need to consider a few other key aspects of this evolving landscape as they build with LLMs. Starting with a well-defined business use case that offers real value is crucial. As LLMs move from development to production, it is important to establish thorough evaluation and observability throughout their lifecycle. Education across the organization is vital to implementing LLMs effectively: companies should train their workforce to adapt to this changing technology stack. Fostering a community around responsible AI development and evaluation contributes to a better understanding and management of LLMs. With these steps, enterprises can navigate the complexities of LLMs and harness their potential for positive impact.
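The evaluation-and-observability step above can be made concrete with a small harness that logs every prompt/response pair alongside automated checks. This is a toy sketch under assumed requirements; the check names and thresholds are hypothetical stand-ins for a fuller evaluation suite.

```python
# Minimal sketch of LLM observability: log each interaction with toy quality checks.
# The checks below are illustrative placeholders, not a production evaluation suite.
import time

def evaluate_response(response):
    """Run toy checks standing in for real evaluations (groundedness, toxicity, etc.)."""
    return {
        "non_empty": len(response.strip()) > 0,
        "within_length_budget": len(response) <= 2000,   # assumed app-specific limit
    }

def observe(prompt, response, latency_s, log):
    """Record the interaction and return True only if every check passed."""
    record = {
        "timestamp": time.time(),
        "prompt": prompt,
        "response": response,
        "latency_s": latency_s,
        "checks": evaluate_response(response),
    }
    log.append(record)
    return all(record["checks"].values())

log = []
ok = observe("Summarize our Q3 results.",
             "Revenue grew 12% quarter over quarter.", 0.8, log)
print(ok, len(log))
```

In practice the log would feed dashboards and alerts, so regressions surface as models, prompts, or traffic change over the LLM's lifecycle.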

Production Team
Arvind Ravishunkar, Ankit Pandey, Chandan Jha

Latest podcasts

Trailer  |  01 Min  |  April 04

Unpacked with Arvind Ravishunkar



In this series, Unpacked, I explore and unpack the most important concepts that business leaders need to know about emerging technologies. I connect with reputed scholars, industry experts, and leaders through conversations, and I also do short 5-minute episodes on key concepts. I am your host, Arvind Ravishunkar, and this is Season 1: Generative AI.

Top trending insights

Episode 5  |  57 Min  |  April 04

Exploring reinforcement learning with MIT Professor Vivek Farias



Engaging topics at a glance

  • 00:16:50
    What is Reinforcement Learning
  • 00:20:10
    Reinforcement Learning for LLMs
  • 00:24:00
    How do you reward your model?
  • 00:33:00
    Revealed preferences vs. the choices of a few individuals
  • 00:36:00
    AI models training AI in the future?
  • 00:40:18
    Methodologies other than Reinforcement Learning
  • 00:43:10
    Considerations during the Reinforcement Learning from Human Feedback (RLHF) phase
  • 00:48:10
    About Cimulate

In “Exploring Reinforcement Learning,” guest Vivek Farias, Professor at MIT, discusses the role reinforcement learning plays in the world of artificial intelligence.

Human learning systems date back almost 5,000 years, and they are what allowed us to progress as a society. Being able to teach other people what we know and to share knowledge has been a foundational pillar of our evolution and civilization. Interestingly, these learning systems are not unique to humans: animals have them too. Orcas, dolphins, and other higher-order intelligent animals spend time training and teaching their young. In the last 50 to 60 years, we have not just been teaching humans how to learn; we have been teaching machines how to learn, and the field of artificial intelligence has benefited from our understanding of these learning systems.

The guest began by highlighting the importance of acknowledging uncertainty and of balancing exploiting what is known against exploring to learn more about the environment. This problem is referred to as the "multi-armed bandit problem" and is considered fundamental in reinforcement learning, where the goal is to optimize actions in an environment.
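The explore/exploit balance in a multi-armed bandit can be sketched with the classic epsilon-greedy strategy: with a small probability, try a random arm (explore); otherwise, pull the arm with the best reward estimate so far (exploit). The arm payout probabilities below are arbitrary illustrative values.

```python
# Epsilon-greedy multi-armed bandit sketch: arm payouts are illustrative values.
import random

def epsilon_greedy_bandit(true_means, steps=10000, epsilon=0.1, seed=0):
    """Play a Bernoulli bandit: explore with prob. epsilon, else exploit best estimate."""
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms        # pulls per arm
    estimates = [0.0] * n_arms   # running mean reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)              # explore: random arm
        else:
            arm = estimates.index(max(estimates))    # exploit: best estimate so far
        reward = 1.0 if rng.random() < true_means[arm] else 0.0  # Bernoulli payout
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # update mean
    return counts, estimates

# Three arms paying off 20%, 50%, and 80% of the time.
counts, estimates = epsilon_greedy_bandit([0.2, 0.5, 0.8])
print(counts)  # the 0.8 arm ends up pulled far more often than the others
```

After enough pulls, the estimates converge toward the true payout rates and the best arm dominates, which is exactly the exploit-while-still-exploring behavior the guest describes.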

Looking specifically at Large Language Models (LLMs), reinforcement learning has played a central role in building general-purpose chatbots based on LLMs, because a model trained on raw data alone might not give you the refined output you expect from it.

When discussing rewards and losses in the reinforcement learning phase, it emerged that the way we structure rewards and penalties for AI models greatly influences their reliability, how they interact with the public, and their accountability.

Overall, deploying AI involves a balance: backend deployment offers some predictability, while front-end deployment is uncertain. Successful businesses must experiment with and capitalize on both.

Production Team
Arvind Ravishunkar, Ankit Pandey, Chandan Jha

Co-create for collective wisdom

This is your invitation to become an integral part of our Think Tank community. Co-create with us to bring diverse perspectives and enrich our pool of collective wisdom. Your insights could be the spark that ignites transformative conversations.

Learn More