Engaging topics at a glance
Uncovering whether LLMs are one part of the answer or the entire answer to your problem with our guest, Prof. Mausam, a distinguished figure in Computer Science at IIT Delhi with over two decades of experience in Artificial Intelligence.
In this episode, we discussed that LLMs aren't the answer to every AI-based problem. If you are trying to automate your factories, bring in predictive maintenance, or do smarter planning, LLMs are one part of the answer, not the entire answer. The breakthroughs in neural networks and language models over the last couple of years are not, on their own, enough to get us to the world we dream of: a world of AI-based automation and everything it could do for us. The potential is there, but an X factor is still missing.
The guest began by discussing a common misconception about large language models (LLMs) and their intended use. Initially designed for basic language tasks, such as summarizing text, recalling information, and answering basic to moderately complex questions, LLMs have turned out to be far more capable than originally conceived.
They are not large maintenance models, they are not predictive maintenance models, they are not large doctors; they are not large anything else but language.
– Mausam
He also noted that despite various attempts to improve LLMs for specialized tasks, these enhanced models did not match the performance of standalone models trained for those tasks.
The conversation then shifted to the limitations of LLMs in complex industry applications such as supply chain management. The guest highlighted that these tasks involve vast numerical considerations, including vendor identification, quantity determination, cost analysis, and optimization, all of which lie beyond the capabilities of LLMs alone.
Discussing reasoning capabilities further, it emerged that as the complexity of a mathematical problem goes up, the performance of these models goes down. He suggested it is better to ask these models to write code that solves a mathematical problem than to ask them to solve it directly.
If you ask it to do reasoning, it doesn’t do a good job. But if you ask it to write code to do reasoning, it does a better job.
– Mausam
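The point above can be illustrated with a small sketch. Instead of asking an LLM to do the arithmetic itself and trusting its free-form reasoning, one asks it to emit a short program whose result can be executed and verified. The problem and function name below are illustrative, not taken from the episode:

```python
# Illustrative example: the kind of short program an LLM can be asked
# to write instead of performing the arithmetic in its own reasoning.
# Hypothetical problem: what is the sum of the first 1,000 odd numbers?

def sum_first_odd(n: int) -> int:
    """Sum the first n odd numbers (1, 3, 5, ...) by direct enumeration."""
    return sum(2 * k + 1 for k in range(n))

result = sum_first_odd(1000)
print(result)  # prints 1000000; the sum of the first n odd numbers is n**2
```

Executed code gives an exact, checkable answer, whereas the model's step-by-step reasoning tends to drift as problem complexity grows, which matches the guest's observation.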
In closing, the guest shared his perspective on the future of LLMs and traditional methods: in his view, the two will work best in combination, each applied where it is strongest.
Production Team
Arvind Ravishunkar, Ankit Pandey, Chandan Jha
“Developing GenAI Strategy” with guest Omid Bakhshandeh, an AI Scientist with a PhD in Artificial Intelligence, discusses how organizations can begin adopting GenAI.
Whether you are the company's CEO or leading a business unit, if you are asking yourself, "Should I develop an AI strategy?", that is the wrong question. Today we know that without an AI strategy, your odds of success over the next couple of years diminish. The right questions are: what is my AI strategy, and how fast can I deploy it? Large language models sit at the heart of every company's AI strategy. In a previous episode with Professor Anum Datta, we unpacked what LLMs are. In this episode, that conversation was taken to the next level, and we discussed the key things you need to know about LLMs to develop your company's AI strategy.
Looking at the current landscape of Large Language Models (LLMs): because they capture vast amounts of knowledge, LLMs serve as knowledge repositories and have given rise to foundational models. With this concept, there is no need to train an LLM from the ground up. Instead, existing LLMs available in the market, which have already encapsulated that knowledge, can be harnessed and integrated into applications. In most cases it is beneficial for companies to follow this strategy; the trade-off of forgoing established LLMs is a delay in reaching the market.
On the other hand, companies that possess significant volumes of unique, customized data may consider developing proprietary foundational models and specific LLMs. This strategic move lets them integrate such models into their respective industries and opens avenues for monetization.
The key for leaders is to pay close attention to the potential use cases, data, and the support system available when building the AI strategy.
“Develop AI strategy for your organization” with Dr. Kavita Ganesan, where she discusses things to consider when implementing AI.
Many programmes, specifically AI-based programmes, start with the right intentions but often fail when they go into production. To understand why this happens and how it can be solved, we had an insightful discussion with our guest in this episode.
Most of the AI initiatives today fail to make it into production because people are not solving the right problems with AI, and there is a lack of understanding of what AI is at the leadership level.
The perception that Gen AI can solve every problem is inaccurate, and understanding this is crucial for enterprise leaders. Many other AI techniques can solve business problems, so it is important to have a general understanding of what AI is and what types of problems it can solve, since implementing AI is not only cost-intensive but also comes with many risks.
Contrary to what many people think after the emergence of Gen AI, data collection remains an integral part of AI initiatives, needed to fine-tune models for company-specific problems.
When deciding on the application of AI, it is advisable to use it for intricate issues that require numerous narrow prediction tasks. In such cases, a large amount of data points needs to be evaluated for making decisions, which could be challenging for human minds to process.
It's important for companies to have a strategic approach while implementing AI. Instead of just focusing on the latest trends (like implementing Gen AI for all the problems), companies should identify the problems that need to be solved in their business in order to have a huge business impact.
This is your invitation to become an integral part of our Think Tank community. Co-create with us to bring diverse perspectives and enrich our pool of collective wisdom. Your insights could be the spark that ignites transformative conversations.