Episode 4  |  53 Min  |  March 13

Performance and choice of LLMs with Nick Brady, Microsoft


Engaging topics at a glance

  • 00:14:20
    Current use cases being deployed for GenAI
  • 00:19:10
    Performance of LLM models
  • 00:36:15
    Domain Specific LLMs vs General Intelligence LLMs
  • 00:38:37
    How to choose the right LLM?
  • 00:41:27
    Open Source vs Closed Source
  • 00:44:50
    Cost of LLM

"Exploring what organizations should consider when choosing to adopt LLMs" with guest Nick Brady, Senior Program Manager at Microsoft Azure OpenAI Service

AI has been at the forefront of transformation for more than a decade. Still, OpenAI's launch of ChatGPT in November 2022 will be noted as a historic moment in the history of technological innovation – one whose scale even OpenAI did not expect. Most people don't fully appreciate the magnitude of the shift we're in: we can now directly express to a machine a problem we need solved. Equipping these technologies with the right reasoning engines and the right connectivity could bring the biggest technology leapfrog not just for enterprises but for everyday life.

The onset of this leapfrog does raise a few questions for enterprises looking to adopt GenAI as part of their strategy, operations and way ahead.

If we equip these technologies with the right reasoning engines and the right connectivity to this work and the things that we do every single day, this could be the single most democratising force of technology the world has ever seen, not just in enterprise but even in our personal lives.

– Nick Brady

What use cases are best suited to adopt the models?

While most customers look first at how this could reduce business costs in their organizations, the true value comes from using it to maximize business value and productivity, which downstream can lead to employee and customer satisfaction. Any place where there's language – programming or natural language – is a good use case for generative AI, and that is probably the most profound shift. So, if you have language, if you have a document, if you have big data where you're trying to synthesize and understand what that content is, generative AI models can do this ad nauseam, without delay.

The interesting thing about parameters is that the count is actually not directly correlated with how powerful the model might be. The parameter size refers, essentially, to the number of values that the model can change independently as it learns from data.

– Nick Brady

The most common metric used to describe LLMs is the number of parameters; GPT-3, for instance, has 175 billion parameters. But what does this mean?

Parameter size refers, essentially, to the number of values that the model can change independently as it learns from data, storing everything it learns in a vast associative array of memory as its model weights. What's perhaps more important for these models, and speaks more to their capability, is their vocabulary size.
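To make "number of values the model can change independently" concrete, here is a minimal sketch in Python that counts the learnable values (weights and biases) of a small feed-forward network. The layer sizes are made up for illustration; they do not correspond to any real model.

```python
# Count the learnable parameters of a small feed-forward network.
# Layer sizes are illustrative, not those of any real model.
layer_sizes = [512, 1024, 1024, 512]  # input -> hidden -> hidden -> output

def count_parameters(sizes):
    total = 0
    for fan_in, fan_out in zip(sizes, sizes[1:]):
        total += fan_in * fan_out  # one weight per input-output connection
        total += fan_out           # one bias per output unit
    return total

print(count_parameters(layer_sizes))  # every counted value is one "parameter"
```

Scaled up, the same bookkeeping is what yields figures like GPT-3's 175 billion: each weight is one independently adjustable value.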

How does one decide and evaluate which would be the best-suited model for the selected use cases?

The best practice is to start with the most powerful and advanced language model available, such as GPT-4, to test whether your use case is even possible. Once feasibility is confirmed, trickle down to simpler models to gauge their efficacy and efficiency. If a simpler model can get you 90% of the way there with just a little prompt engineering, you can optimize for cost.
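One way to operationalise this practice is a simple selection harness: score the use case with every candidate, treat the strongest model's score as the ceiling, and accept the cheapest model within a chosen fraction of it. The sketch below is hypothetical – the model names, quality scores, and costs are placeholders, not real benchmark or API figures.

```python
# Hypothetical model-selection sketch: establish what the strongest model
# can do, then pick the cheapest model within ~90% of its quality.
# Quality scores and per-unit costs are illustrative placeholders.
candidates = [
    {"name": "large-model",  "quality": 0.92, "cost": 30.0},
    {"name": "medium-model", "quality": 0.86, "cost": 3.0},
    {"name": "small-model",  "quality": 0.71, "cost": 0.5},
]

def choose_model(models, quality_floor=0.9):
    best = max(m["quality"] for m in models)
    if best == 0:
        return None  # the use case is not feasible with any candidate
    # Keep models within quality_floor of the best, then take the cheapest.
    viable = [m for m in models if m["quality"] >= quality_floor * best]
    return min(viable, key=lambda m: m["cost"])

print(choose_model(candidates)["name"])  # -> "medium-model"
```

With these placeholder numbers the medium model clears 90% of the large model's quality at a tenth of the cost, which is exactly the trade the trickle-down practice is after.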

But many LLMs are English-only, and that's a real problem, especially for multinational enterprises and organisations that have diverse employees and diverse customers who speak many different languages.

– Nick Brady

Organizations have to define what quality means to them. It could be the quality of the model's output, the substance of its response, or its performance in terms of latency, where output quality may matter less than how quickly we can respond to the user.

The key for leaders is to pay close attention to the potential use cases, test them with the best model and then optimize the model to balance the cost, efficacy and efficiency factors.

Production Team
Arvind Ravishunkar, Ankit Pandey, Chandan Jha

Latest podcasts

Episode 3  |  48 Min  |  March 13

Leading the AI transformation of your company with Prof. Gregory LaBlanc



Engaging topics at a glance

  • 00:13:40
    What is transformation? What constitutes it?
  • 00:15:29
    Have you seen unpredictable organizational behavior before?
  • 00:16:30
    Learnings that enterprise leaders should pay attention to
  • 00:17:30
    How do organizations overcome fear to adapt?
  • 00:18:55
    Do you foresee AI running parts of companies?
  • 00:21:28
    Is data accessibility a key challenge for AI?
  • 00:23:29
    Are algorithms or data the true competitive edge?
  • 00:25:17
    Will companies without data become irrelevant?
  • 00:30:28
    What is your vision for the future of work?
  • 00:36:53
    Will AI drive higher-order thinking?

"AI Transformation – the new paradigm" with UC Berkeley Professor and AI Startup Expert, Greg La Blanc. Get ready to dive into the future of AI!

For some people, transformation is exciting and challenging. Curiosity and excitement about learning drew Greg into the field of strategy and transformation and all the other topics he has taught throughout his career.

Every time you learn something, you are displacing or changing some previous notion of how the world works. For some people, this is disturbing; for others, it is a thrill. How you approach transformation is the beginning of how you deal with it, and curiosity is such a powerful human trait.

Some people emphasize what they call long-term trends, while others are more inclined to say everything is new. Similarly, with the digital and AI transformations taking place, you can say everything is new, everything has to change, this is something we've never seen before; or you can say this is not that different from the sorts of things that have happened to us in the past.

As humans, we are in the entropy-reduction business. We are trying to create order, to make sense of our world, to put in place practices we can automate. We are trying to create routines and subroutines, and indeed, this is how efficiency happens: you start to recognize patterns, and you start to engage in repetitive action.

The problem is that circumstances and the environment change. So the routines you've established need to change at some point, and that requires work. There are a couple of different ways to respond. One is to say: okay, the world has changed, so we have to change the way we do things. The other is to say: let's try to change the world so that we don't have to change – which often means trying to shape the behavior of your customers or employees, or using regulation or market power to hold off the onslaught of change.

The third way is to say, let's change. 

Too much flexibility means nothing ever gels; too little means you get stuck. So you need to figure out the optimal amount of flexibility, and then figure out a way to routinize change. That sounds paradoxical: it means creating systems designed intentionally to respond to a changing environment. If you can routinize change, you can routinize curiosity. If you can create a standard operating procedure for discovery, then in some ways you can have your cake and eat it too. That's what all really good, dynamic businesses are trying to do.

Every time there's a new discovery in the world of artificial intelligence, people say: now's the time, this is AI. Back in 2015, with neural nets, everyone said: yes, AI, finally. Each of these punctuated discoveries is the continuation of a series of discoveries that have been happening in the world of artificial intelligence for the last couple of decades.

The technology diffuses rapidly. What doesn't diffuse as rapidly are managerial techniques and organizational and architectural innovations. That's also why older companies have a tough time adapting: they resist change and the kinds of transformations they would need to undertake to enable new technologies.

There is the immune system of the organization, and there is the immune system of all the individuals within it. The natural propensity of many people is to fight new ideas when they encounter them as individuals. Combine that into a big organization and you can often have an organization where every individual is open to new ideas, but the organization is not, because it has its own logic.

Fear plays a role, but it's not the complete story. It's not always that they're afraid; they may feel fairly confident that they can keep this at bay. This is why leadership is so critical: you need carrots and sticks, but you also need your vision and your messaging.

Even before generative AI, the forms of machine learning that have been easiest to adopt are the more primitive ones that perform relatively narrow tasks. Suppose you are in HR doing hiring, and someone comes up with a product that helps you process more applications more quickly: you can see how that will save you money. If you are in marketing and someone says, I have a great tool that will help you figure out who you should be targeting, you will think: I am a revenue center, and I've just boosted my revenue. All of those specific applications are relatively unproblematic.

Setting aside AI for a second, look at the automotive industry – a company like Ford or GM with tier-one, tier-two, tier-three suppliers, and so on. If there is an innovation in the steering column, the tier-one supplier that makes the steering column will figure it out and start selling it. The challenge comes when you want to connect those things.

The current supply chain architecture makes that very difficult, because you need to adjust the design elements of the brake to coordinate with the design elements of the steering column, and when everything is set up this way, that becomes tough. Whereas Tesla, with a much more integrated production and design process, finds it far easier to make those kinds of shifts. The reason the car companies are struggling is that they have tried to incorporate these new technological innovations into a pre-existing business, supply chain, and value chain architecture that was optimized for the internal combustion engine. That is why someone like Tesla can simply leapfrog them.

Your competitive advantage is always going to come from the data. It is never going to come from your analytics tools. 

If I have access to unique data, I can take cutting-edge algorithms, train them on that data, and gain a competitive edge.

There will be some companies that can live without a solid data strategy, but for the vast majority of companies, if you do not have a data strategy, you're toast.

There are two major takeaways. The first is that in this transformation, your organizational structure is critically important: how you organize your company so that data is democratised. The second is having high-quality, unique data. It is not just the quality of the data; the uniqueness of the data is what will differentiate you going forward, at least in the next couple of years.

Striking a balance between flexibility and order is also going to be an important skill for all leaders. Our education systems have to teach flexibility, adaptability, how to learn, and how to learn fast.

With artificial intelligence in all of our jobs, we have to develop higher-order thinking skills.

Production Team
Arvind Ravishunkar, Ankit Pandey, Chandan Jha

Top trending insights

Episode 2  |  39 Min  |  March 13

Develop AI strategy for your organization with Dr. Kavita Ganesan



Engaging topics at a glance

  • 00:12:19
    Key messages in the book: The business case for AI
  • 00:12:58
    What should enterprise leaders look into when implementing AI
  • 00:15:25
    What problems can be solved with AI?
  • 00:16:13
    Importance of data in AI
  • 00:19:30
    Things to consider when going with AI in production
  • 00:20:48
    What makes a problem AI suitable?
  • 00:24:35
    Success rate of AI projects
  • 00:25:37
    What causes failure of AI projects?
  • 00:28:14
    What is preventing AI success?
  • 00:30:20
    Data integration problem

“Develop AI strategy for your organization” with Dr. Kavita Ganesan, where she discusses things to consider when implementing AI.

Many programmes, specifically AI-based programmes, start with the right intentions but often fail when they go into production. And, to explore this topic, we had an insightful discussion with our guest in this episode to understand why this happens and how it can be solved.

Most of the AI initiatives today fail to make it into production because people are not solving the right problems with AI, and there is a lack of understanding of what AI is at the leadership level.

The perception that Gen AI can solve every problem is inaccurate, and understanding this is crucial for enterprise leaders. Many other AI techniques can solve business problems, and it's important to have a general understanding of what AI is and what types of problems it can solve, as implementing AI is not only cost-intensive but also carries many risks.

Even after the emergence of Gen AI, and contrary to what many people think today, data collection is still an integral part of AI initiatives, needed to fine-tune models for company-specific problems.

When deciding on the application of AI, it is advisable to use it for intricate issues that require numerous narrow prediction tasks. In such cases, a large amount of data points needs to be evaluated for making decisions, which could be challenging for human minds to process.

It's important for companies to take a strategic approach to implementing AI. Instead of focusing only on the latest trends (like applying Gen AI to every problem), companies should identify the problems in their business whose solution would have a huge business impact.

Production Team
Arvind Ravishunkar, Ankit Pandey, Chandan Jha
