Episode 14  |  58 Min  |  May 31

Reasoning and evolution of GenAI with Dr. Srinivas Padmanabhuni, testAIng.com


Engaging topics at a glance

  • 00:13:00
    Introduction
  • 00:15:00
    Why Artificial General Intelligence (AGI) is still very far away
  • 00:16:30
    Are AI founders hyping up AI
  • 00:18:00
    Types of reasoning
  • 00:20:00
    Why current AI based on LLMs performs so well
  • 00:22:05
    What is Abductive reasoning
  • 00:25:10
    Does current AI only do inductive reasoning
  • 00:25:30
    Will the current architecture suffice in spawning abductive reasoning
  • 00:27:15
    Does current AI summarize documents in a way that a human does
  • 00:30:30
    Does statistics play a role in human reasoning
  • 00:33:35
    About emotional reasoning
  • 00:36:10
    Biggest risks of current AI models
  • 00:36:50
    Applications of current AI and use-case for inductive reasoning
  • 00:41:00
    Creativity and reasoning
  • 00:46:35
    Concluding thoughts

This podcast offers valuable insights into the limitations of current AI reasoning, including common sense reasoning, and debunks the hype around AGI, giving viewers takeaways they can apply in their own fields.

In this podcast, Arvind interviews Dr. Srinivas Padmanabhuni, who holds a PhD in common sense reasoning from the University of Alberta and is the founder of the AI startup testAIng.com.

Dr. Srinivas explains that old Aristotelian logic is simply not enough to capture human-level reasoning. The traditional logic taught in schools is binary, zero-or-one, which is just one part of our reasoning. Human beings reason far more broadly than that, and the way AI reasons today is a very narrow, myopic form of reasoning.

They (LLMs) are very poor at inventing new knowledge unless it is actually coming from somewhere, some large corpus existing somewhere.

– Dr. Srinivas Padmanabhuni

Dr. Srinivas then shares his thoughts on AGI (Artificial General Intelligence). He thinks it is premature to talk about AGI and that much of the current discourse is marketing: AI cannot invent anything new unless it comes from some existing large corpus. Unless you have an agent with common sense, one that can reason about its own reasoning, you cannot seriously talk about AGI.

The podcast also discusses the limitations of AI and what we can expect from it in the future. Dr. Srinivas explains that AI does not invent anything new; it processes information and makes predictions from existing data. He suggests AI is best used for tasks involving large-scale data processing, such as financial analysis, medical diagnosis, and weather forecasting.

Dr. Srinivas also talks about the importance of common sense reasoning and how it is an essential part of human-level reasoning, which is missing in AI today. He explains that common sense reasoning is necessary for tasks such as understanding jokes, sarcasm, and metaphors.

Technically it is nothing but the biggest statistical engine out there; hardly any brain, hardly any logic, hardly any intelligence, so it is the dumbest of the dumbest application.

– Dr. Srinivas Padmanabhuni

Towards the end of the podcast, Arvind and Dr. Srinivas discuss the future of AI and its potential applications. They suggest that AI has the potential to revolutionize industries such as healthcare, retail, and transportation. However, they also caution that the development of AI should be done in an ethical and responsible manner, with consideration for the potential risks and consequences.

Overall, the podcast is informative for anyone who wants to learn about the field of AI. It gives a clear picture of AI's current limitations, what to expect from it in the future, and why common sense reasoning remains the missing ingredient in human-level AI. Lastly, it emphasizes the need for ethical and responsible development of AI, with consideration for the potential risks and consequences.

Production Team
Arvind Ravishunkar, Ankit Pandey, Chandan Jha

Latest podcasts

Episode 9  |  56 Min  |  May 31

Building prototypes and pilots using generative AI with Mark Donavon, Nestlé Purina



Engaging topics at a glance

  • 00:11:20
    Introduction
  • 00:16:30
    How does the market mindset help in conceptualizing ideas?
  • 00:19:00
    Consumer research, design, and prototype for AI-based products
  • 00:22:40
    Data sources and models used in early product development
  • 00:25:35
    When to feed data into the AI model?
  • 00:28:32
    When to take the prototype to production?
  • 00:37:35
    ML models used during prototyping
  • 00:40:46
    Generative AI in your products
  • 00:43:05
    Testing early models
  • 00:45:25
    Grounding models
  • 00:47:20
    Key insights

Join us in this episode, where our guest Mark Donavon, Head of Digital Strategy and Ecosystem Development at Nestlé Purina PetCare, shares his real-life experiences and insights on what it takes to understand and build prototypes and pilots using AI.

This podcast gives insights into how a pet care organization harnesses the power of AI and IoT technologies to enhance pet welfare. The discussion centers on innovative problem-solving and the considerable potential for AI applications in the pet care domain.

The podcast opens by highlighting the importance of allowing technology to be driven by problems and needs rather than dictating solutions. The emphasis is on understanding specific user groups and comprehending the challenges faced by pet owners. Instead of beginning with existing technology and searching for problems to solve, their approach revolves around understanding the needs of end users and subsequently exploring how technology can address these issues. This user-centric approach is a cornerstone of their organization, reinforcing their commitment to developing products tailored to pet owners' requirements.

The conversation then pivots to the process of understanding user needs. The organization conducts consumer research, with variations across regional divisions. Each division maintains its own consumer insight team working closely with external agency partners to gather research data. Their digital team collaborates with these divisions, allowing them to access consumer insights that might not be uncovered through traditional research methods. This highlights the adaptability of their company and the synergistic relationship between divisions.

The podcast proceeds to discuss the practical application of AI and IoT technologies. An example is presented: a smart litter box equipped with IoT capabilities that utilizes AI to provide valuable insights. The aim is to detect early signs of kidney disease in cats, a common yet often undiagnosed ailment. The organization saw an opportunity to intervene earlier by identifying changes in a cat's bathroom behavior that correlate with an increased risk of the disease. This innovative device provides pet owners and veterinarians with early warning indicators, effectively transforming the approach to cat health.

The speaker underscores how the smart litter box is revolutionizing pet care. Traditional practices often involve diagnosing the disease at advanced stages, making it challenging for veterinarians to do more than manage symptoms. However, this device alerts pet owners to subtle behavioral changes, enabling early intervention and potentially life-saving treatments.

The journey toward developing this ground-breaking device is then explored. It began with a low-fidelity prototype, using a simple mechanical device to record data when a cat entered the litter box. This provided initial insights into behavioral patterns. Subsequently, more sensors and technologies were integrated, resulting in the current iteration of the smart litter box. The speaker stresses the importance of combining various sensors to collect comprehensive data for diagnosing specific behaviors and patterns in cats, thus facilitating early detection of health issues.

The podcast also delves into AI models, which are employed to gain a deeper understanding of pet behavior. Early prototypes collected data on behavioral patterns but could not interpret the cat's actions within the litter box. To address this limitation, machine learning models were incorporated. These models were trained to distinguish between various behaviors, such as urination, defecation, and digging. This enhanced the system's ability to provide meaningful insights, enabling the early detection of potential health issues by interpreting the pet's actions within the litter box.
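The behavior-distinguishing step described above can be sketched in miniature. The features (visit duration, a motion score), the training samples, and the nearest-centroid approach below are all illustrative assumptions for the sake of the sketch; the episode does not disclose Purina's actual features or model.

```python
import math

# Each litter-box visit is summarized as (duration_sec, motion_score):
# hypothetical sensor features standing in for the real data stream.
TRAINING = {
    "urination":  [(45, 0.2), (60, 0.3), (50, 0.25)],
    "defecation": [(90, 0.4), (120, 0.5), (100, 0.45)],
    "digging":    [(20, 0.9), (15, 0.8), (25, 0.85)],
}

def centroid(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

CENTROIDS = {label: centroid(pts) for label, pts in TRAINING.items()}

def classify(duration_sec, motion_score):
    """Nearest-centroid classification of a single litter-box visit."""
    def dist(c):
        # Rescale duration so both features contribute comparably.
        return math.hypot((duration_sec - c[0]) / 100.0, motion_score - c[1])
    return min(CENTROIDS, key=lambda label: dist(CENTROIDS[label]))

print(classify(55, 0.28))  # a short, low-motion visit
print(classify(18, 0.88))  # a brief, vigorous visit
```

A production system would learn from thousands of labeled visits rather than hand-picked centroids, but the shape of the problem, mapping sensor summaries to behavior labels so that changes over time can flag health risks, is the same.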

In response, a point is made regarding the flexibility and adaptability of AI models. It's crucial to allow machine learning models to evolve and adapt since pets may exhibit diverse behaviors. This flexibility aligns with the organization's commitment to accumulating extensive data and generating high-quality training data to enhance their systems.

The discussion then touches upon the challenges of introducing innovative technologies within an established company. The speaker describes the initial hurdles they faced when convincing management to invest in these new technological directions. Skepticism and questions about the impact on pet food sales were common concerns. Yet, by presenting real-world data, success stories, and tangible outcomes, they were able to build a compelling case and garner support for their projects over time.

Production Team
Arvind Ravishunkar, Ankit Pandey, Chandan Jha

Top trending insights

Episode 6  |  61 Min  |  May 31

Develop GenAI Strategy for your organization with AI Scientist, Omid Bakhshandeh



Engaging topics at a glance

  • 00:14:45
    Key factors to consider while formulating LLM strategy
  • 00:17:15
    What is a Foundational Model?
  • 00:20:50
    Should companies train their own model or leverage existing models?
  • 00:26:00
    Considerations when leveraging existing LLM model as a foundational model
  • 00:29:30
    Open-source vs API based
  • 00:39:50
    Time to Market
  • 00:47:07
    Challenges when building own LLM
  • 00:52:00
    Hybrid Model, a mid-way
  • 00:54:20
    Conclusion

“Developing GenAI Strategy,” with guest Omid Bakhshandeh, an AI scientist with a PhD in Artificial Intelligence, discusses how organizations can approach the adoption of GenAI.

Whether you are the company's CEO or leading a business unit, if you're asking yourself, "Should I develop an AI strategy?", that's the wrong question. Today we know that if you don't have an AI strategy, your odds of success over the next couple of years diminish. The right questions are: what is my AI strategy, and how fast can I deploy it? Large language models sit at the heart of every company's AI strategy. In a previous episode with Professor Anum Datta, we unpacked what LLMs are. In this episode, that conversation is taken to the next level, covering the key things you need to know about LLMs to develop your company's AI strategy.

Looking at the current landscape of Large Language Models (LLMs): these models capture vast amounts of knowledge and serve as knowledge repositories, which has given rise to foundational models. With this concept, there is no need to train an LLM from the ground up. Instead, existing LLMs available in the market, which have already encapsulated this knowledge, can be harnessed and integrated into applications. In most cases it is beneficial for companies to follow this strategy; the trade-off is that forgoing established LLMs risks delaying time to market.
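In practice, "harnessing an existing LLM" often reduces to assembling a prompt from company-specific context and calling a hosted model's API. A minimal sketch of that pattern is below; the function names are our own, and the model client is a stub, since real provider APIs vary.

```python
def build_prompt(task, context, question):
    # Combine domain context with the user question so a general-purpose
    # foundational model can answer company-specific queries with no retraining.
    return (
        f"Task: {task}\n"
        f"Context:\n{context}\n"
        f"Question: {question}\n"
        "Answer concisely."
    )

def ask_llm(prompt, client):
    # `client` is any callable wrapping a hosted model's completion endpoint;
    # switching providers means swapping this single dependency.
    return client(prompt)

# Stub client for demonstration; a real one would call a provider's API.
def echo_client(prompt):
    return f"[model output for {len(prompt)} chars of prompt]"

prompt = build_prompt("support", "Return policy: 30 days.", "Can I return after 40 days?")
print(ask_llm(prompt, echo_client))
```

Isolating the client behind one seam is what keeps the "leverage existing models" strategy flexible: the application logic stays the same whether the model is an open-source deployment or a commercial API.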

By contrast, companies that possess significant volumes of unique, customized data may consider developing proprietary foundational models and domain-specific LLMs. This strategic move lets them embed such models into their industries and opens potential monetization avenues.

The key for leaders is to pay close attention to the potential use cases, data, and the support system available when building the AI strategy.

Production Team
Arvind Ravishunkar, Ankit Pandey, Chandan Jha

Co-create for collective wisdom

This is your invitation to become an integral part of our Think Tank community. Co-create with us to bring diverse perspectives and enrich our pool of collective wisdom. Your insights could be the spark that ignites transformative conversations.
