Episode 13  |  57 Min  |  May 27

Responsible AI practices for business leaders with Dr. Rachel Adams, CEO, Global Center on AI Governance


Engaging topics at a glance

  • 00:01:50
    About the topic: responsible AI
  • 00:12:20
    About the guest speaker – Dr. Rachel Adams
  • 00:14:20
    What is responsible AI
  • 00:16:00
    Why responsible AI
  • 00:19:55
    AI autonomy and policy
  • 00:22:10
    Beyond theoretical understanding of AI
  • 00:24:15
    How should business leaders approach responsible AI
  • 00:29:20
    Thoughts on AI regulation
  • 00:31:00
    Responsible AI framework for business leaders
  • 00:33:10
    Index for responsible AI
  • 00:36:20
    Staying educated on responsible AI and AI policy
  • 00:39:40
    EU AI regulations
  • 00:41:45
    Other emerging AI regulations
  • 00:45:10
    AI policy development on a specific area
  • 00:48:30
    Concluding thoughts on responsible AI

Gain valuable perspectives on integrating responsible AI into business strategies from Dr. Rachel Adams, CEO and founder of the Global Center on AI Governance.

In this insightful podcast, we embark on a journey to understand the intricate landscape of responsible AI practices, especially tailored for business leaders. Dr. Rachel Adams, one of the top global voices on responsible AI and the founder and CEO of the Global Center on AI Governance, serves as our guide through this complex terrain.

The conversation begins by framing the discussion around the concept of “thinking with care” when it comes to AI development and deployment. Dr. Adams emphasizes the importance of inclusivity and diversity in AI development, particularly in addressing the unique needs of different regions and communities worldwide. She stresses the significance of aligning technological advancements with the real needs of people, advocating for a user-centric approach driven by community engagement and feedback.

Both developers and deployers of AI need to be responsive to the negative impacts AI is having on the ground.

– Dr. Rachel Adams

As the dialogue progresses, the focus shifts towards the role of business leaders in navigating the multifaceted dimensions of responsible AI. Dr. Adams elucidates the critical considerations that business leaders must keep in mind during both the development and deployment phases of AI initiatives. From addressing inherent biases in AI models to safeguarding user privacy and data protection, she outlines a comprehensive framework for ethical AI governance within organizations.

Moreover, Dr. Adams sheds light on emerging policy developments in the field of AI regulation, highlighting the European AI Act as a pioneering effort in this space. She underscores the need for nuanced, sector-specific regulations tailored to the diverse contexts and challenges faced by different industries and regions.

Throughout the conversation, Dr. Adams emphasizes the importance of collaboration and cross-disciplinary dialogue in advancing responsible AI practices. She underscores the need for closer collaboration between technologists, policymakers, and communities to navigate the evolving landscape of AI governance effectively.

Thinking with care means getting as many people thinking as possible. So creating kind of diverse groups of people that are thinking with care allows that thinking to be bigger.

– Dr. Rachel Adams

As the podcast draws to a close, we reflect on the fundamental principles of responsible AI adoption, emphasizing the imperative of “thinking with care” in every aspect of AI development and deployment. Dr. Adams reiterates the need for a collective effort to ensure that AI technologies are developed and deployed in a manner that prioritizes human values, equity, and societal well-being.

In summary, this podcast provides invaluable insights into the complex challenges and opportunities presented by AI technology for business leaders. Through engaging dialogue and expert analysis, Dr. Rachel Adams offers a roadmap for ethical AI adoption, empowering business leaders to navigate the ethical complexities of AI with confidence and integrity.

Production Team
Arvind Ravishunkar, Ankit Pandey, Chandan Jha

Latest podcasts

Episode 4  |  53 Min  |  May 27

Performance and choice of LLMs with Nick Brady, Microsoft


Engaging topics at a glance

  • 00:12:23
    Introduction
  • 00:14:20
    Current use cases being deployed for GenAI
  • 00:19:10
    Performance of LLM models
  • 00:36:15
    Domain Specific LLMs vs General Intelligence LLMs
  • 00:38:37
    How to choose the right LLM?
  • 00:41:27
    Open Source vs Closed Source
  • 00:44:50
    Cost of LLM
  • 00:46:10
    Conclusion

"Exploring what should organization considering when choosing to adopt LLMs" with guest Nick Brady, Senior Program Manager at Microsoft Azure Open AI Service

AI has been at the forefront of transformation for more than a decade, yet OpenAI's launch of ChatGPT in November 2022 will be remembered as a historic moment in technological innovation, one whose scale even OpenAI did not expect. Most people don't fully appreciate the magnitude of the shift we are in: we can now directly express to a machine a problem we need solved. Equipping these technologies with the right reasoning engines and the right connectivity could deliver the biggest technology leapfrog yet, not just for enterprises but in everyday life.

This leapfrog raises a few questions for enterprises looking to adopt GenAI as part of their strategy, operations, and way forward, such as:

Which use cases are best suited to these models?

While most customers look first at how generative AI could reduce business costs, the true value comes from using it to maximize productivity and business value, which downstream can lead to greater employee and customer satisfaction. Any place where there is language, whether programming or natural language, is a good use case for generative AI, and that is probably the most profound shift. If you have language, a document, or a large body of data whose content you are trying to synthesize and understand, generative AI models can do this tirelessly and without delay.
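To make that document-synthesis use case concrete, below is a minimal sketch assuming the OpenAI Python SDK (v1 or later) and an OPENAI_API_KEY in the environment; the model name, prompt, and input file are illustrative only, not a recommendation.

# Minimal sketch: using a general-purpose LLM to synthesize a document.
# Assumes the OpenAI Python SDK (>= 1.0); model, prompt, and file are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize(document: str, model: str = "gpt-4o-mini") -> str:
    # Ask the model to condense the document into a few bullet points.
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "Summarize documents into concise bullet points."},
            {"role": "user", "content": document},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("quarterly_report.txt") as f:  # hypothetical input document
        print(summarize(f.read()))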

The most common metric used to describe LLMs is the number of parameters; GPT-3, for example, has 175 billion parameters. But what does this actually mean?

Parameter size is essentially the number of values the model can adjust independently as it learns from data; all of that information is stored in a vast associative array of memory as the model's weights. What is perhaps more important for these models, because it speaks more to their capability, is their vocabulary size.
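As a rough illustration of these two measures, the sketch below inspects the parameter count and vocabulary size of an openly available model using the Hugging Face transformers library; GPT-2 stands in here only because GPT-3's weights are not public.

# Minimal sketch: inspecting parameter count and vocabulary size of an open model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

print(f"Parameters: {model.num_parameters():,}")     # roughly 124 million for GPT-2
print(f"Vocabulary size: {tokenizer.vocab_size:,}")  # 50,257 tokens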

How does one decide and evaluate which model is best suited for the selected use cases?

The best practice is to start with the most powerful and advanced language model available, such as GPT-4, to test whether your use case is even possible. Once feasibility is confirmed, trickle down to simpler models to gauge their efficacy and efficiency. If a simpler model can get roughly 90% of the way there with a little prompt engineering, you can optimize for cost.
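A minimal sketch of that "start big, then trickle down" loop might look like the following, again assuming the OpenAI Python SDK; the model names, prompt, and the simple equality check standing in for a real evaluation are all illustrative.

# Minimal sketch: prove the use case with a powerful model, then test a cheaper one.
from openai import OpenAI

client = OpenAI()
PROMPT = "Classify this support ticket as billing, technical, or other: 'I was charged twice.'"

def ask(model: str, prompt: str) -> str:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()

reference = ask("gpt-4o", PROMPT)        # 1. confirm the use case is possible at all
candidate = ask("gpt-4o-mini", PROMPT)   # 2. trickle down to a cheaper model

# Placeholder quality check; in practice, compare against a labelled evaluation set.
print("Reference:", reference)
print("Candidate:", candidate)
print("Cheaper model acceptable:", candidate.lower() == reference.lower())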

Organizations also have to define what quality means to them. It could be the quality of the model's output and its responses, or performance in terms of latency, where the quality of the output may matter less than how quickly the model can respond to the user.

The key for leaders is to pay close attention to potential use cases, test them with the best model, and then optimize to balance cost, efficacy, and efficiency.

Production Team
Arvind Ravishunkar, Ankit Pandey, Chandan Jha

Top trending insights

Episode 6  |  61 Min  |  May 27

Develop GenAI Strategy for your organization with AI Scientist, Omid Bakhshandeh


Engaging topics at a glance

  • 00:14:45
    Key factors to consider while formulating LLM strategy
  • 00:17:15
    What is a Foundational Model?
  • 00:20:50
    Should companies train their own model or leverage existing models?
  • 00:26:00
    Considerations when leveraging existing LLM model as a foundational model
  • 00:29:30
    Open-source vs API based
  • 00:39:50
    Time to Market
  • 00:47:07
    Challenges when building own LLM
  • 00:52:00
    Hybrid Model, a mid-way
  • 00:54:20
    Conclusion

“Developing GenAI Strategy” with guest Omid Bakhshandeh, an AI scientist with a PhD in Artificial Intelligence, discusses how organizations can approach the adoption of GenAI.

Whether you are the company's CEO or lead a business unit, if you are asking yourself whether you should develop an AI strategy, you are asking the wrong question: we already know that without an AI strategy, your odds of succeeding over the next couple of years diminish. The right questions are: what is my AI strategy, and how fast can I deploy it? Large language models sit at the heart of every company's AI strategy. In a previous episode with Professor Anum Datta, we unpacked what LLMs are; in this episode, we take that conversation to the next level and discuss the key things you need to know about LLMs to develop your company's AI strategy.

Looking at the current landscape of Large Language Models (LLMs), these models capture vast amounts of knowledge and serve as repositories of knowledge that have given rise to foundational models. With this approach, there is no need to train an LLM from the ground up. Instead, existing LLMs on the market, which have already encapsulated that knowledge, can be harnessed and seamlessly integrated into applications. In most cases it is beneficial for companies to follow this strategy; the trade-off of forgoing established LLMs and building your own is the risk of a delay in reaching the market.
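As a simple illustration of harnessing an existing model rather than training one from the ground up, the sketch below drops a pre-trained model into an application with no training step, using the Hugging Face transformers library; the model and prompt are illustrative, with the small open GPT-2 standing in for a foundational model.

# Minimal sketch: leveraging an existing pre-trained model instead of training from scratch.
from transformers import pipeline

# The pre-trained model already encapsulates broad language knowledge,
# so it can be integrated into an application directly.
generator = pipeline("text-generation", model="gpt2")  # small open model as a stand-in

print(generator("Generative AI can help our support team by", max_new_tokens=30))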

Conversely, companies that possess significant volumes of unique, customized data may consider developing proprietary foundational models and purpose-specific LLMs. This strategic manoeuvre facilitates the integration of such models into their respective industries and opens avenues for potential monetization.

The key for leaders is to pay close attention to the potential use cases, data, and the support system available when building the AI strategy.

Production Team
Arvind Ravishunkar, Ankit Pandey, Chandan Jha

Co-create for collective wisdom

This is your invitation to become an integral part of our Think Tank community. Co-create with us to bring diverse perspectives and enrich our pool of collective wisdom. Your insights could be the spark that ignites transformative conversations.
