Episode 9  |  56 Min  |  April 01

Building prototypes and pilots using generative AI with Mark Donavon, Nestlé Purina


Engaging topics at a glance

  • 00:11:20
    Introduction
  • 00:16:30
    How does the market mindset help in conceptualizing ideas?
  • 00:19:00
    Consumer research, design, and prototype for AI-based products
  • 00:22:40
    Data sources and models used in early product development
  • 00:25:35
    When to feed data into an AI model?
  • 00:28:32
    When to take the prototype to production?
  • 00:37:35
    ML models used during prototyping
  • 00:40:46
    Generative AI in your products
  • 00:43:05
    Testing early models
  • 00:45:25
    Grounding models
  • 00:47:20
    Key insights

Join us in this episode as our guest, Mark Donavon, Head of Digital Strategy and Ecosystem Development at Nestlé Purina PetCare, shares his real-life experiences and insights into what it takes to understand and build prototypes and pilots using AI.

This podcast gives insights into how a pet care organization harnesses the power of AI and IoT technologies to enhance pet welfare. The discussion centers on innovative problem-solving and the considerable potential for AI applications in the pet care domain.

The podcast opens by highlighting the importance of allowing technology to be driven by problems and needs rather than dictating solutions. The emphasis is on understanding specific user groups and comprehending the challenges faced by pet owners. Instead of beginning with existing technology and searching for problems to solve, their approach revolves around understanding the needs of end users and subsequently exploring how technology can address these issues. This user-centric approach is a cornerstone of their organization, reinforcing their commitment to developing products tailored to pet owners’ requirements.

We saw an opportunity really to intervene much earlier in helping to understand that there’s a change in the cat’s bathroom behaviour that can correlate to an increased risk of early onset kidney disease, renal disease, et cetera. So now, there’s time to engage with the vets.

– Mark Donavon

The conversation then pivots to the process of understanding user needs. The organization conducts consumer research, with variations across regional divisions. Each division maintains its own consumer insight team working closely with external agency partners to gather research data. Their digital team collaborates with these divisions, allowing them to access consumer insights that might not be uncovered through traditional research methods. This highlights the adaptability of their company and the synergistic relationship between divisions.

The podcast proceeds to discuss the practical application of AI and IoT technologies. An example is presented: a smart litter box equipped with IoT capabilities that utilizes AI to provide valuable insights. The aim is to detect early signs of kidney disease in cats, a common yet often undiagnosed ailment. The organization saw an opportunity to intervene earlier by identifying changes in a cat’s bathroom behavior that correlate with an increased risk of the disease. This innovative device provides pet owners and veterinarians with early warning indicators, effectively transforming the approach to cat health.

The speaker underscores how the smart litter box is revolutionizing pet care. Traditional practices often involve diagnosing the disease at advanced stages, making it challenging for veterinarians to do more than manage symptoms. However, this device alerts pet owners to subtle behavioral changes, enabling early intervention and potentially life-saving treatments.
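
To make that early-warning idea concrete, here is a minimal sketch, not Purina's actual system, that flags days on which a cat's daily litter box visit count drifts well outside its recent baseline. The choice of daily visit counts as the signal, the 14-day window, and the z-score threshold are all illustrative assumptions.

```python
from statistics import mean, stdev

def flag_behavior_change(daily_visits, baseline_days=14, z_threshold=2.0):
    """Return (day_index, z_score) pairs where the visit count deviates
    strongly from the trailing baseline window."""
    flagged = []
    for day in range(baseline_days, len(daily_visits)):
        window = daily_visits[day - baseline_days:day]
        mu, sigma = mean(window), stdev(window)
        if sigma == 0:
            continue  # flat baseline; skip to avoid division by zero
        z = (daily_visits[day] - mu) / sigma
        if abs(z) >= z_threshold:
            flagged.append((day, round(z, 2)))
    return flagged

# A cat whose visits jump from roughly 3-4 per day to 8-9 per day near the
# end of the series, the kind of shift the episode links to kidney risk.
visits = [3, 4, 3, 3, 4, 3, 3, 4, 3, 3, 4, 3, 3, 4, 3, 4, 3, 8, 9, 8]
print(flag_behavior_change(visits))  # flags the final three days
```

As the episode describes, a real device fuses data from several sensors rather than relying on a single count; this sketch only shows the baseline-versus-deviation logic behind an early warning.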

I was blown away because right from something I was wearing on my wrist, it just described all of the movements I was doing with high precision. It was counting the reps and it was telling me how good my form was.

– Mark Donavon

The journey toward developing this ground-breaking device is then explored. It began with a low-fidelity prototype, using a simple mechanical device to record data when a cat entered the litter box. This provided initial insights into behavioral patterns. Subsequently, more sensors and technologies were integrated, resulting in the current iteration of the smart litter box. The speaker stresses the importance of combining various sensors to collect comprehensive data for diagnosing specific behaviors and patterns in cats, thus facilitating early detection of health issues.

The podcast also delves into AI models, which are employed to gain a deeper understanding of pet behavior. Early prototypes collected data on behavioral patterns but could not interpret the cat’s actions within the litter box. To address this limitation, machine learning models were incorporated. These models were trained to distinguish between various behaviors, such as urination, defecation, and digging. This enhanced the system’s ability to provide meaningful insights, enabling the early detection of potential health issues by interpreting the pet’s actions within the litter box.
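
As a rough illustration of that classification step, the sketch below trains a small model to separate urination, defecation, and digging events using a few summary features per visit. The feature names (visit duration, weight-sensor variance, motion events), the synthetic data, and the random forest are assumptions made for illustration; they are not the sensors or models Purina actually uses.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
LABELS = ["urination", "defecation", "digging"]

def synthetic_visit(label):
    """Fabricate one visit's features: [duration_s, weight_var, motion_events]."""
    if label == "urination":
        return [rng.normal(45, 10), rng.normal(0.2, 0.05), rng.poisson(3)]
    if label == "defecation":
        return [rng.normal(90, 20), rng.normal(0.4, 0.1), rng.poisson(6)]
    return [rng.normal(25, 8), rng.normal(0.8, 0.2), rng.poisson(15)]  # digging

# Build a labelled dataset of simulated visits.
X, y = [], []
for label in LABELS:
    for _ in range(200):
        X.append(synthetic_visit(label))
        y.append(label)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", round(clf.score(X_test, y_test), 3))
```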

In response, a point is made regarding the flexibility and adaptability of AI models. It’s crucial to allow machine learning models to evolve and adapt since pets may exhibit diverse behaviors. This flexibility aligns with the organization’s commitment to accumulating extensive data and generating high-quality training data to enhance their systems.

The discussion then touches upon the challenges of introducing innovative technologies within an established company. The speaker describes the initial hurdles they faced when convincing management to invest in these new technological directions. Skepticism and questions about the impact on pet food sales were common concerns. Yet, by presenting real-world data, success stories, and tangible outcomes, they were able to build a compelling case and garner support for their projects over time.

Production Team
Arvind Ravishunkar, Ankit Pandey, Chandan Jha

Latest podcasts

Episode 14  |  58 Min  |  April 01

Reasoning and evolution of GenAI with Dr. Srinivas Padmanabhuni, testAIng.com



Engaging topics at a glance

  • 00:13:00
    Introduction
  • 00:15:00
    Why Artificial General Intelligence (AGI) is still far off
  • 00:16:30
    Are AI founders hyping up AI
  • 00:18:00
    Types of reasoning
  • 00:20:00
    Why current AI based on LLMs performs so well
  • 00:22:05
    What is abductive reasoning
  • 00:25:10
    Does current AI only do inductive reasoning
  • 00:25:30
    Will the current architecture suffice in spawning abductive reasoning
  • 00:27:15
    Does current AI summarize documents in a way that a human does
  • 00:30:30
    Does statistics play a role in human reasoning
  • 00:33:35
    About emotional reasoning
  • 00:36:10
    Biggest risks of current AI models
  • 00:36:50
    Applications of current AI and use-case for inductive reasoning
  • 00:41:00
    Creativity and reasoning
  • 00:46:35
    Concluding thoughts

This podcast offers valuable insights into the limitations of current AI reasoning, including common sense reasoning, and debunks the hype around AGI, giving listeners perspective they can apply in their own fields.

In this podcast, Arvind interviews Dr. Srinivas Padmanabhuni, who holds a PhD in common sense reasoning from the University of Alberta and is a founder of the AI startup testAIng.com.

Dr. Srinivas explains that classical Aristotelian logic is not enough to capture human-level reasoning. The traditional logic taught in schools is binary, zero-or-one reasoning, which is only one part of how humans reason. Human beings reason far more broadly than that, and the way AI reasons today is narrow and myopic by comparison.

Dr. Srinivas then shares his thoughts on AGI (Artificial General Intelligence). He considers it premature to talk about AGI and sees much of the current discussion as marketing. In his view, AI cannot invent anything genuinely new; whatever it produces is drawn from some large existing corpus. Until we have an agent with common sense that can reason about its own reasoning, he argues, talk of AGI is misplaced.

The podcast also discusses the limitations of AI and what we can expect from it in the future. Dr. Srinivas explains that AI is not capable of inventing anything new, but is only capable of processing information and making predictions based on existing data. He suggests that AI is best used for tasks that involve large amounts of data processing, such as financial analysis, medical diagnosis, and weather forecasting.

Dr. Srinivas also talks about the importance of common sense reasoning and how it is an essential part of human-level reasoning, which is missing in AI today. He explains that common sense reasoning is necessary for tasks such as understanding jokes, sarcasm, and metaphors.

Towards the end of the podcast, Arvind and Dr. Srinivas discuss the future of AI and its potential applications. They suggest that AI has the potential to revolutionize industries such as healthcare, retail, and transportation. However, they also caution that the development of AI should be done in an ethical and responsible manner, with consideration for the potential risks and consequences.

Overall, the podcast is informative for anyone who wants to learn about the field of AI. It gives a clear picture of the limitations of today's AI and what we can reasonably expect from it, underscores that common sense reasoning is an essential part of human-level reasoning still missing from AI, and stresses the need for ethical, responsible development that weighs the potential risks and consequences.

Production Team
Arvind Ravishunkar, Ankit Pandey, Chandan Jha

Top trending insights

Episode 8  |  51 Min  |  April 01

Are LLMs the answer to everything with Prof. Mausam, IIT Delhi



Engaging topics at a glance

  • 00:32:28
    Introduction
  • 00:38:00
    Intended use of LLMs
  • 00:41:30
    Performance of smaller models trained for specific tasks vs. LLMs
  • 00:45:00
    How LLMs fare when dealing with mathematical and reasoning problems
  • 00:52:40
    How are small models able to perform better than LLMs?
  • 00:55:45
    Future of LLMs and Traditional AI Models

Uncovering whether LLMs are one part of the answer or the entire answer to your problem with our guest, Prof. Mausam, a distinguished figure in Computer Science at IIT Delhi with over two decades of experience in Artificial Intelligence.

In this episode, we discussed why LLMs aren't the answer to every AI-based problem. Whether you are trying to automate your factories, bring in predictive maintenance, or plan more intelligently, LLMs are one part of the answer, not the entire answer. The breakthroughs in neural networks and language models over the last couple of years are not, on their own, enough to reach the world of AI-based automation we dream of. The potential is there, but an X factor is still missing.

The guest started by discussing a common misconception about large language models (LLMs) and their intended use. Initially designed for basic language tasks, such as summarizing text, recalling information, and answering basic to moderately complex questions, LLMs have turned out to be far more capable than originally conceived.

He also noted that despite various attempts to improve LLMs for specific tasks, these enhanced models still did not match the performance of standalone models trained for those tasks.

The conversation then shifted to the limitations of LLMs in handling complex industry applications such as supply chain management. The guest highlighted that these tasks involve heavy numerical work, including identifying vendors, determining order quantities, analyzing costs, and optimizing across all of them, and that this sits beyond what LLMs alone can do.
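
To show the flavor of work the guest is pointing at, here is a minimal sketch of a toy vendor-selection problem solved as a linear program with SciPy, the kind of exact numerical optimization an LLM is not built to perform. The vendors, prices, capacities, and demand figure are invented for illustration; real supply chain problems are far larger.

```python
from scipy.optimize import linprog

# Minimize cost = 4 * qty_vendor_a + 5 * qty_vendor_b
costs = [4.0, 5.0]

# Demand constraint: qty_vendor_a + qty_vendor_b >= 120,
# expressed as <= for linprog by negating both sides.
A_ub = [[-1.0, -1.0]]
b_ub = [-120.0]

# Per-vendor capacity limits.
bounds = [(0, 80), (0, 100)]

result = linprog(costs, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("order quantities:", result.x)   # buy 80 from A, 40 from B
print("total cost:", result.fun)       # 520.0
```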

Discussing reasoning further, in particular how these models fare on mathematical problems, it emerged that as the complexity of such problems goes up, the performance of the models goes down.

He suggested it is better to use these models to write code that solves a mathematical problem than to ask them to solve the problem directly.
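
A minimal sketch of that pattern follows: the model is asked to emit Python rather than a final number, and the generated code is executed to produce the answer. The ask_llm helper is hypothetical and hard-coded here so the example runs end to end; a real setup would call an actual model API and sandbox the execution.

```python
def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in: in practice this would call your model provider.
    # The response is hard-coded so the sketch is self-contained and runnable.
    return "result = sum(i * i for i in range(1, 101))"

question = "What is the sum of the squares of the integers from 1 to 100?"
code = ask_llm(f"Write Python that stores the answer in `result`.\n{question}")

namespace = {}
exec(code, namespace)        # run untrusted code only inside a real sandbox
print(namespace["result"])   # 338350, computed by the generated code
```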

In the end, the guest shared his perspective on the future of LLMs and traditional methods: in his view, using each where it fits best will help us solve our problems most effectively.

Production Team
Arvind Ravishunkar, Ankit Pandey, Chandan Jha

Co-create for collective wisdom

This is your invitation to become an integral part of our Think Tank community. Co-create with us to bring diverse perspectives and enrich our pool of collective wisdom. Your insights could be the spark that ignites transformative conversations.
