Episode 6  |  61 Min  |  March 20

Develop GenAI Strategy for your organization with AI Scientist, Omid Bakhshandeh

Engaging topics at a glance

  • 00:14:45
    Key factors to consider while formulating LLM strategy
  • 00:17:15
    What is a Foundational Model?
  • 00:20:50
    Should companies train their own model or leverage existing models?
  • 00:26:00
    Considerations when leveraging an existing LLM as a foundational model
  • 00:29:30
    Open-source vs. API-based
  • 00:39:50
    Time to Market
  • 00:47:07
    Challenges when building your own LLM
  • 00:52:00
    Hybrid model: a middle path
  • 00:54:20
    Conclusion

In “Developing GenAI Strategy,” guest Omid Bakhshandeh, an AI scientist with a PhD in artificial intelligence, discusses how organizations can approach the adoption of GenAI.

Whether you are the company’s CEO or leading a business unit, if you’re asking yourself, “Should I develop an AI strategy?”, you are asking the wrong question. We already know that without an AI strategy, your odds of success over the next couple of years diminish. The right questions are: what is my AI strategy, and how fast can I deploy it? Large language models sit at the heart of every company’s AI strategy. In a previous episode with Professor Anum Datta, we unpacked what LLMs are. In this episode, we take that conversation to the next level and discuss the key things you need to know about LLMs to develop your company’s AI strategy.

Looking at the current landscape of Large Language Models (LLMs), these models capture vast amounts of knowledge and serve as repositories that have given rise to foundational models. With this concept, there is no need to train an LLM from the ground up. Instead, existing LLMs available in the market, which have already encapsulated this knowledge, can be harnessed and integrated into applications. In most cases, this is the beneficial strategy for companies: the trade-off of forgoing established LLMs is the risk of delaying your time to market.

Building a cloud from scratch? Unlikely. Just as you leverage cloud providers’ tools, AI benefits from tools that sit on established structures.

– Omid Bakhshandeh
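
As a simple illustration of this "leverage, don't rebuild" strategy, here is a minimal sketch that calls a hosted foundational model through its API instead of training anything in-house. It assumes the OpenAI Python client is installed and an API key is set in the environment; the model name and the helper function are illustrative placeholders, not a recommendation from the episode.

```python
# Minimal sketch: integrate an existing hosted LLM via its API rather than
# training a model from scratch.
# Assumes the OpenAI Python client is installed and OPENAI_API_KEY is set;
# the model name below is only an illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize_for_executives(report_text: str) -> str:
    """Ask a pre-trained, hosted LLM to summarize an internal report."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # an existing foundational model, not one we trained
        messages=[
            {"role": "system", "content": "You summarize reports for busy executives."},
            {"role": "user", "content": report_text},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(summarize_for_executives("Q3 revenue grew 12% while cloud costs rose 20%..."))
```

The point of the sketch is time to market: the knowledge already encapsulated in the model is available immediately, and the engineering effort shifts to integration rather than training.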

By contrast, some companies that hold significant volumes of unique, customized data may contemplate developing proprietary foundational models and domain-specific LLMs. This strategic move lets them integrate such models deeply into their respective industries and opens avenues for potential monetization.

The key for leaders is to pay close attention to the potential use cases, the data, and the support systems available when building their AI strategy.

Production Team
Arvind Ravishunkar, Ankit Pandey, Rinat Sergeev, Chandan Jha, Nikhil Sood, Dipika Prasad

Latest podcasts

Episode 7  |  45 Min  |  March 20

How AI will impact your business with Harvard Professor, Shikhar Ghosh

Engaging topics at a glance

  • 00:10:30
    Introduction
  • 00:13:35
    Why is AI so disruptive?
  • 00:16:30
    How can businesses and governments accept this new reality?
  • 00:19:20
    How should enterprise leaders approach the AI transformation?
  • 00:21:40
    New business models shaped with AI
  • 00:27:15
    Emotions, decisions, and algorithms
  • 00:34:35
    Are we ready yet?

Join us in this episode featuring Shikhar Ghosh, Professor, Harvard Business School, as we explore how AI can fundamentally impact business and society!

In the ever-evolving landscape of technology, artificial intelligence stands as a true disruptor, poised to reshape not only our businesses but also the very fabric of society. In a captivating podcast discussion with Shikhar Ghosh, Harvard Business School professor, we delve deep into the riveting world of AI, exploring why its impact is so seismic, how enterprise leaders should navigate this new frontier, the question of human relevance in the age of AI, and whether we are truly prepared for this transformative journey.

We will uncover the essence of AI's disruptive power and provide compelling insights into the scale of transformation that AI can herald.

Be prepared to be guided through the stormy seas of AI's influence on business. Our expert highlights the critical importance of a well-defined AI approach. Enterprise leaders must be agile and proactive, recognizing that AI is not merely a tool but a transformational force. We will discuss how to approach AI with an open mindset, viewing it as a catalyst for innovation rather than just a threat.

We will also see why leaders should maximize the upside of AI. The discussion underscores the value of human-machine collaboration, emphasizing that AI augments human capabilities rather than replacing them entirely. It's a matter of harnessing AI's analytical prowess to inform decision-making and to free people up for more creative and strategic pursuits.

One of the most intriguing segments of the podcast explores the question that lingers in the minds of many: Will humans remain relevant in the age of AI? The question is discussed with nuance, offering lessons business leaders can draw on to be proactive in embracing AI wisely and effectively.

In a world teetering on the precipice of AI-driven transformation, this podcast offers a compelling exploration of why AI is the disruptive force of our era. It presents an alluring narrative that transcends technical jargon, making the topic accessible and engaging for both the tech-savvy and those new to the AI landscape. As we listen to Professor Ghosh’s captivating insights, we are left with a resounding question: Will we embrace AI as a catalyst for positive change, or will we be swept aside by its inexorable tide of disruption? The answer may very well determine the fate of businesses and society as we know it. To find out more, tune in to the full podcast and embark on a journey into the future of AI, business, and our shared human experience.

Production Team
Arvind Ravishunkar, Ankit Pandey, Rinat Sergeev, Chandan Jha, Nikhil Sood, Dipika Prasad

Top trending insights

Episode 1  |  36 Min  |  March 20

Why AI hallucinates and why it matters with Ankur Taly, scientist at Google

Engaging topics at a glance

  • 00:00:20
    Introduction
  • 00:10:36
    Why do models make mistakes, and why are they called AI hallucinations?
  • 00:13:31
    How does a model know which relationships are meaningful and which are not?
  • 00:16:12
    Things enterprise leaders should keep in mind while deploying LLMs
  • 00:18:14
    How does grounding address these AI hallucinations?
  • 00:21:53
    How much is grounding going to solve the hallucination problem?
  • 00:24:47
    Does hallucinatory capability drive innovation?

Join us in this episode featuring Ankur Taly, Staff Research Scientist, Google, as we explore the concept of grounding in LLMs!

Machines are supposed to work without mistakes, just like a calculator does math correctly. But in the world of artificial intelligence, errors, often called 'AI hallucinations,' are common. This makes us wonder about these mistakes and the computer programs behind them. For businesses that use AI in their work, especially when dealing with customers, making sure AI works without errors is very important.

Understanding how AI makes decisions, and being transparent about its processes, is very important. Business leaders need to be able to monitor and explain those decisions; this will be crucial for using AI in their companies in the future.

To fight AI hallucinations, grounding is important. Grounding means making sure AI answers are based on real facts: AI systems are taught using correct, reliable information and made to give answers that can be verified. Grounding stops AI from making things up or giving wrong information.
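
As one hedged illustration of what grounding can look like in practice, the sketch below builds a retrieval-augmented prompt: relevant facts are pulled from a trusted knowledge base and the model is instructed to answer only from them. The knowledge base, the naive keyword retriever, and the function names are hypothetical placeholders for illustration, not a specific product's API or the approach described in the episode.

```python
# Minimal sketch of grounding via retrieval: answers are constrained to facts
# pulled from a trusted knowledge base instead of whatever the model "remembers".
# The tiny keyword retriever and the knowledge base below are hypothetical
# stand-ins for a real vector search and a real document store.
KNOWLEDGE_BASE = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available Monday to Friday, 9am to 5pm EST.",
    "The enterprise plan includes a dedicated account manager.",
]


def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def grounded_prompt(question: str) -> str:
    """Build a prompt that instructs the LLM to answer only from retrieved facts."""
    facts = "\n".join(f"- {doc}" for doc in retrieve(question))
    return (
        "Answer using ONLY the facts below. "
        "If the facts do not contain the answer, say you don't know.\n"
        f"Facts:\n{facts}\n\nQuestion: {question}"
    )


# The resulting prompt would then be sent to whichever LLM the business uses.
print(grounded_prompt("What is the refund policy?"))
```

Because every answer must be traceable to a retrieved fact, made-up or unverifiable claims are much easier to catch.
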
When businesses use LLMs (large language models) in their work, they should think about some important things. First, they need to use good data to teach AI because bad data can lead to wrong or unfair results. It's also important to have rules about how AI is used in the company to avoid causing harm or misusing AI information.

Businesses also need to keep an eye on AI's results to fix mistakes or wrong information. Having people check and filter AI's work ensures that it's correct and consistent. It's also important to teach employees and users about what AI can and can't do to avoid misunderstandings or misuse.


Even though AI hallucinations can be a problem, they can also have some positives. They can make people think creatively and find new solutions to tough problems. AI's imaginative ideas can be fun, offering new types of art and media. Plus, AI hallucinations can help with learning by making people think and talk about interesting topics.

Production Team
Arvind Ravishunkar, Ankit Pandey, Rinat Sergeev, Chandan Jha, Nikhil Sood, Dipika Prasad

Co-create for collective wisdom

This is your invitation to become an integral part of our Think Tank community. Co-create with us to bring diverse perspectives and enrich our pool of collective wisdom. Your insights could be the spark that ignites transformative conversations.
