Episode 6  |  61 Min  |  March 20

Develop GenAI Strategy for your organization with AI Scientist, Omid Bakhshandeh


Engaging topics at a glance

  • 00:14:45
    Key factors to consider while formulating LLM strategy
  • 00:17:15
    What is a Foundational Model?
  • 00:20:50
    Should companies train their own model or leverage existing models?
  • 00:26:00
    Considerations when leveraging existing LLM model as a foundational model
  • 00:29:30
    Open-source vs API based
  • 00:39:50
    Time to Market
  • 00:47:07
    Challenges when building own LLM
  • 00:52:00
    Hybrid Model, a mid-way
  • 00:54:20
    Conclusion

“Developing GenAI Strategy” with guest Omid Bakhshandeh, an AI Scientist with a PhD in Artificial Intelligence, explores how organizations can approach the adoption of GenAI.

Whether you are the company’s CEO or the leader of a business unit, if you are asking yourself, “Should I develop an AI strategy?”, that is the wrong question. Today we know that if you don’t have an AI strategy, your odds of success over the next couple of years diminish. So the right questions are: what is my AI strategy, and how fast can I deploy it? Large language models sit at the heart of every company’s AI strategy. In a previous episode with Professor Anum Datta, we unpacked what LLMs are. In this episode, we took that conversation to the next level and discussed the key things you need to know about LLMs to develop your company’s AI strategy.

Looking at the current landscape of Large Language Models (LLMs), these models capture vast amounts of knowledge and serve as repositories of it, which has given rise to foundational models. With this approach, there is no need to train an LLM from the ground up. Instead, existing LLMs available in the market, which have already encapsulated that knowledge, can be harnessed and seamlessly integrated into applications. In most cases, companies benefit from following this strategy. The inherent trade-off is that forgoing established LLMs risks delaying your time to market.

Building a cloud from scratch? Unlikely. Just as you leverage cloud providers’ tools, AI benefits from tools that sit on established structures.

– Omid Bakhshandeh

Conversely, some companies that possess significant volumes of unique, customized data may consider developing proprietary foundational models and domain-specific LLMs. This strategic manoeuvre lets them integrate such models into their respective industries and opens avenues for potential monetization.

The key for leaders is to pay close attention to the potential use cases, data, and the support system available when building the AI strategy.

Production Team
Arvind Ravishunkar, Ankit Pandey, Chandan Jha

Latest podcasts

Episode 8  |  51 Min  |  March 20

Are LLMs the Answer to Everything with Prof. Mausam, IIT Delhi


Engaging topics at a glance

  • 00:32:28
    Introduction
  • 00:38:00
    Intended use of LLMs
  • 00:41:30
Performance of smaller models trained for specific tasks vs LLMs
  • 00:45:00
    How LLMs fare when dealing with mathematical and reasoning problems
  • 00:52:40
How are small models able to perform better than LLMs?
  • 00:55:45
    Future of LLMs and Traditional AI Models

Uncovering whether LLMs are one part of the answer or the entire answer to your problem with our guest, Prof. Mausam, a distinguished figure in Computer Science at IIT Delhi with over two decades of experience in Artificial Intelligence.

In this episode, we discussed how LLMs aren't the answer to every AI-based problem. Whether you are trying to automate your factories, bring in predictive maintenance, or do smarter planning, in all these automation tasks LLMs are one part of the answer, not the entire answer. The breakthroughs of the last couple of years in neural networks and language models alone aren't sufficient to get us to the world we dream of, a world of AI-based automation and everything it will do for us. The potential is there, but an X factor is still missing.

The guest began by discussing a common misconception about large language models (LLMs) and their intended use. Initially designed for basic language tasks such as summarizing text, recalling information, and answering basic to moderately complex questions, LLMs have proved far more capable than originally conceived.

He also noted that despite various attempts to improve LLMs, these enhanced models still didn't match the performance of standalone models trained for specific tasks.

The conversation then shifted to the limitations of LLMs in handling complex industry applications such as supply chain management. The guest highlighted that these tasks involve vast numerical considerations, vendor identification, determining object quantities, cost analysis, and optimization, all of which are beyond the capabilities of LLMs.

Discussing reasoning capabilities further, and how LLMs fare on mathematical problems, it emerged that as the complexity of such problems goes up, the performance of these models goes down.

He suggested it's better to use these models to write code that solves mathematical problems, rather than asking them to solve such problems directly.

In closing, the guest shared his perspective on the future of LLMs and traditional AI methods: in his view, a combination of the two will best help us solve our problems.


Top trending insights

Episode 2  |  39 Min  |  March 20

Develop AI strategy for your organization with Dr. Kavita Ganesan


Engaging topics at a glance

  • 00:12:19
    Key messages in the book: The business case for AI
  • 00:12:58
    What should enterprise leaders look into when implementing AI
  • 00:15:25
    What problems can be solved with AI?
  • 00:16:13
    Importance of data in AI
  • 00:19:30
    Things to consider when going with AI in production
  • 00:20:48
    What makes a problem AI suitable?
  • 00:24:35
    Success rate of AI projects
  • 00:25:37
    What causes failure of AI projects?
  • 00:28:14
    What is preventing AI success?
  • 00:30:20
    Data integration problem

“Develop AI strategy for your organization” with Dr. Kavita Ganesan, who discusses what to consider when implementing AI.

Many programmes, specifically AI-based programmes, start with the right intentions but often fail when they go into production. To explore why this happens and how it can be solved, we had an insightful discussion with our guest in this episode.

Most of the AI initiatives today fail to make it into production because people are not solving the right problems with AI, and there is a lack of understanding of what AI is at the leadership level.

The perception that Gen AI can solve every problem is inaccurate, and understanding this is crucial for enterprise leaders. Many other AI techniques can solve business problems, and it's important to have a general understanding of what AI is and what types of problems it can solve, because implementing AI is not only cost-intensive but also carries many risks.

Even after the emergence of Gen AI, and contrary to what many people think today, data collection remains an integral part of AI initiatives, needed to fine-tune models for company-specific problems.

When deciding where to apply AI, it is advisable to use it for intricate problems that require numerous narrow prediction tasks. In such cases, a large number of data points must be evaluated to make decisions, which would be challenging for human minds to process.

It's important for companies to take a strategic approach to implementing AI. Instead of just chasing the latest trends (like applying Gen AI to every problem), companies should identify the problems in their business whose solutions would deliver the greatest impact.


Co-create for collective wisdom

This is your invitation to become an integral part of our Think Tank community. Co-create with us to bring diverse perspectives and enrich our pool of collective wisdom. Your insights could be the spark that ignites transformative conversations.
