Episode 9  |  56 Min  |  April 01

Building prototypes and pilots using generative AI with Mark Donavon, Nestlé Purina

Engaging topics at a glance

  • 00:11:20
    Introduction
  • 00:16:30
    How does the market mindset help in conceptualizing ideas?
  • 00:19:00
    Consumer research, design, and prototype for AI-based products
  • 00:22:40
    Data sources and models used in early product development
  • 00:25:35
When to feed data into the AI model?
  • 00:28:32
    When to take the prototype to production?
  • 00:37:35
    ML models used during prototyping
  • 00:40:46
    Generative AI in your products
  • 00:43:05
    Testing early models
  • 00:45:25
    Grounding models
  • 00:47:20
    Key insights

Join us in this episode as our guest Mark Donavon, Head of Digital Strategy and Ecosystem Development at Nestlé Purina PetCare, shares his real-life experiences and insights into what it takes to build prototypes and pilots using AI.

This podcast gives insights into how a pet care organization harnesses the power of AI and IoT technologies to enhance pet welfare. The discussion centers on innovative problem-solving and the considerable potential for AI applications in the pet care domain.

The podcast opens by highlighting the importance of allowing technology to be driven by problems and needs rather than dictating solutions. The emphasis is on understanding specific user groups and comprehending the challenges faced by pet owners. Instead of beginning with existing technology and searching for problems to solve, their approach revolves around understanding the needs of end users and subsequently exploring how technology can address these issues. This user-centric approach is a cornerstone of their organization, reinforcing their commitment to developing products tailored to pet owners’ requirements.

We saw an opportunity really to intervene much earlier in helping to understand that there’s a change in the cat’s bathroom behaviour that can correlate to an increased risk of early onset kidney disease, renal disease, et cetera. So now, there’s time to engage with the vets.

– Mark Donavon

The conversation then pivots to the process of understanding user needs. The organization conducts consumer research, with variations across regional divisions. Each division maintains its own consumer insight team working closely with external agency partners to gather research data. Their digital team collaborates with these divisions, allowing them to access consumer insights that might not be uncovered through traditional research methods. This highlights the adaptability of their company and the synergistic relationship between divisions.

The podcast proceeds to discuss the practical application of AI and IoT technologies. An example is presented: a smart litter box equipped with IoT capabilities that utilizes AI to provide valuable insights. The aim is to detect early signs of kidney disease in cats, a common yet often undiagnosed ailment. The organization saw an opportunity to intervene earlier by identifying changes in a cat’s bathroom behavior that correlate with an increased risk of the disease. This innovative device provides pet owners and veterinarians with early warning indicators, effectively transforming the approach to cat health.

The speaker underscores how the smart litter box is revolutionizing pet care. Traditional practices often involve diagnosing the disease at advanced stages, making it challenging for veterinarians to do more than manage symptoms. However, this device alerts pet owners to subtle behavioral changes, enabling early intervention and potentially life-saving treatments.

I was blown away because right from something I was wearing on my wrist, it just described all of the movements I was doing with high precision. It was counting the reps and it was telling me how good my form was.

– Mark Donavon

The journey toward developing this ground-breaking device is then explored. It began with a low-fidelity prototype, using a simple mechanical device to record data when a cat entered the litter box. This provided initial insights into behavioral patterns. Subsequently, more sensors and technologies were integrated, resulting in the current iteration of the smart litter box. The speaker stresses the importance of combining various sensors to collect comprehensive data for diagnosing specific behaviors and patterns in cats, thus facilitating early detection of health issues.

The podcast also delves into AI models, which are employed to gain a deeper understanding of pet behavior. Early prototypes collected data on behavioral patterns but could not interpret the cat’s actions within the litter box. To address this limitation, machine learning models were incorporated. These models were trained to distinguish between various behaviors, such as urination, defecation, and digging. This enhanced the system’s ability to provide meaningful insights, enabling the early detection of potential health issues by interpreting the pet’s actions within the litter box.
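To make this concrete, here is a minimal sketch of the kind of behavior classifier the prototype evolved toward. Everything in it is illustrative rather than Purina's actual implementation: the sensor features (visit duration, load-cell weight, motion events), the synthetic training data, and the choice of a random-forest model are all assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def simulate(label, duration_mu, motion_mu, n=200):
    """Generate synthetic per-visit feature windows for one behavior."""
    features = np.column_stack([
        rng.normal(duration_mu, 10, n),  # visit duration (seconds)
        rng.normal(4100, 50, n),         # mean load-cell reading (grams)
        rng.normal(motion_mu, 2, n),     # motion events detected per visit
    ])
    return features, [label] * n

# Hypothetical behavior profiles; real labels would come from observed visits.
Xu, yu = simulate("urination", duration_mu=45, motion_mu=3)
Xd, yd = simulate("defecation", duration_mu=120, motion_mu=6)
Xg, yg = simulate("digging", duration_mu=30, motion_mu=12)

X, y = np.vstack([Xu, Xd, Xg]), np.array(yu + yd + yg)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```

In a deployed system, the windowed sensor readings would be labeled from observed visits, and the trained model would flag shifts in an individual cat's behavior over time.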

In response, a point is made regarding the flexibility and adaptability of AI models. It’s crucial to allow machine learning models to evolve and adapt since pets may exhibit diverse behaviors. This flexibility aligns with the organization’s commitment to accumulating extensive data and generating high-quality training data to enhance their systems.

The discussion then touches upon the challenges of introducing innovative technologies within an established company. The speaker describes the initial hurdles they faced when convincing management to invest in these new technological directions. Skepticism and questions about the impact on pet food sales were common concerns. Yet, by presenting real-world data, success stories, and tangible outcomes, they were able to build a compelling case and garner support for their projects over time.

Production Team
Arvind Ravishunkar, Ankit Pandey, Rinat Sergeev, Chandan Jha, Nikhil Sood, Dipika Prasad

Latest podcasts

Episode 1  |  36 Min  |  April 01

Why AI hallucinates and why it matters with Ankur Taly, scientist at Google

Engaging topics at a glance

  • 00:00:20
    Introduction
  • 00:10:36
Why do models make mistakes, and why are they called AI hallucinations?
  • 00:13:31
How does a model know which relationships are meaningful and which are not?
  • 00:16:12
    Things enterprise leaders should keep in mind while deploying LLMs
  • 00:18:14
    How does grounding address these AI hallucinations?
  • 00:21:53
    How much is grounding going to solve the hallucination problem?
  • 00:24:47
    Does hallucinatory capability drive innovation?

Join us in this episode featuring Ankur Taly, Staff Research Scientist at Google, as we explore the concept of grounding LLMs.

Machines are supposed to work without mistakes, just as a calculator does arithmetic correctly. In the world of artificial intelligence, however, errors, often called 'AI hallucinations,' are common, which raises questions about where these mistakes come from and how the models behind them work. For businesses that use AI, especially in customer-facing settings, making sure it works without errors is critical.

Understanding how AI makes decisions, and being transparent about that process, is just as important. Business leaders need to be able to monitor and explain how their AI reaches its decisions; that ability will be crucial to deploying AI in their companies in the future.

To combat AI hallucinations, grounding is essential. Grounding means making sure AI answers are based on real facts. This involves teaching AI systems using accurate, reliable information and making them give answers that can be verified against it. Grounding stops AI from making things up or giving wrong information.
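
In practice, grounding is often implemented by retrieving trusted passages and instructing the model to answer only from them. The sketch below is a toy under stated assumptions: the two-sentence corpus, the TF-IDF retriever, and the commented-out `call_llm` stand-in are all hypothetical, not any specific product's API.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A tiny trusted corpus standing in for an enterprise knowledge base.
corpus = [
    "Refunds are issued within 14 days of purchase.",
    "Support is available Monday through Friday, 9am to 5pm.",
]

def retrieve(question: str, k: int = 1) -> list:
    """Return the k corpus passages most similar to the question."""
    vec = TfidfVectorizer().fit(corpus + [question])
    scores = cosine_similarity(vec.transform([question]), vec.transform(corpus))[0]
    return [corpus[i] for i in scores.argsort()[::-1][:k]]

def grounded_prompt(question: str) -> str:
    """Constrain the model to retrieved facts so its answers can be verified."""
    facts = "\n".join(retrieve(question))
    return ("Answer using ONLY the facts below. If they are insufficient, "
            f"say you don't know.\n\nFacts:\n{facts}\n\nQuestion: {question}")

# answer = call_llm(grounded_prompt("How long do refunds take?"))  # hypothetical LLM call
print(grounded_prompt("How long do refunds take?"))
```
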
When businesses use LLMs (large language models) in their work, they should keep a few things in mind. First, they need to train AI on good data, because bad data can lead to wrong or unfair results. It is also important to have rules about how AI is used in the company, to avoid causing harm or misusing AI-generated information.

Businesses also need to keep an eye on AI's results to fix mistakes or wrong information. Having people check and filter AI's work ensures that it's correct and consistent. It's also important to teach employees and users about what AI can and can't do to avoid misunderstandings or misuse.
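
One common way to operationalize that human review, sketched below under assumptions (the confidence field, the 0.8 threshold, and the queue are all invented for illustration), is to release only confident, sourced outputs and route everything else to a reviewer.

```python
from dataclasses import dataclass, field

@dataclass
class ModelOutput:
    text: str
    confidence: float                 # e.g., from a verifier or grounding check
    sources: list = field(default_factory=list)

review_queue = []                     # outputs awaiting a human decision

def dispatch(output: ModelOutput, threshold: float = 0.8) -> str:
    """Release confident, sourced answers; queue the rest for human review."""
    if output.confidence < threshold or not output.sources:
        review_queue.append(output)
        return "queued for human review"
    return output.text

print(dispatch(ModelOutput("Refunds take 14 days.", 0.92, ["policy-doc"])))
print(dispatch(ModelOutput("Refunds take 3 days.", 0.41)))  # unsourced, low confidence
```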


Even though AI hallucinations can be a problem, they can also have some positives. They can make people think creatively and find new solutions to tough problems. AI's imaginative ideas can be fun, offering new types of art and media. Plus, AI hallucinations can help with learning by making people think and talk about interesting topics.

Production Team
Arvind Ravishunkar, Ankit Pandey, Rinat Sergeev, Chandan Jha, Nikhil Sood, Dipika Prasad

Top trending insights

Episode 4  |  53 Min  |  April 01

Performance and choice of LLMs with Nick Brady, Microsoft

Engaging topics at a glance

  • 00:12:23
    Introduction
  • 00:14:20
    Current use cases being deployed for GenAI
  • 00:19:10
    Performance of LLM models
  • 00:36:15
    Domain Specific LLMs vs General Intelligence LLMs
  • 00:38:37
    How to choose the right LLM?
  • 00:41:27
    Open Source vs Closed Source
  • 00:44:50
    Cost of LLM
  • 00:46:10
    Conclusion

"Exploring what should organization considering when choosing to adopt LLMs" with guest Nick Brady, Senior Program Manager at Microsoft Azure Open AI Service

AI has been at the forefront of transformation for more than a decade, but OpenAI's launch of ChatGPT in November 2022 will be remembered as a historic moment in technological innovation, one whose scale even OpenAI did not expect. Most people don't fully appreciate the magnitude of the shift we're in: we can now directly express to a machine a problem we need solved. Equipping these technologies with the right reasoning engines and the right connectivity could deliver the biggest technological leapfrog yet, not just for enterprises but for everyday life.

This leap raises a few questions for enterprises looking to adopt GenAI as part of their strategy and operations, such as:

Which use cases are best suited to these models?

While most customers first look at how generative AI could reduce costs in their organizations, the real value comes from using it to maximize business value and productivity, which downstream can lead to employee and customer satisfaction. Any place where there's language, whether programming or natural language, is a good use case for generative AI, and that is probably the most profound shift. If you have language, a document, or big data whose content you're trying to synthesize and understand, generative AI models can do this ad nauseam and without delay.

The most common metric used to describe LLMs is the number of parameters; GPT-3, for example, has 175 billion of them. But what does this mean?

Parameter size refers to the number of values the model can adjust independently as it learns from data; everything it learns is stored in a vast associative array of memory, the model weights. What is perhaps more important for these models, because it speaks more directly to their capability, is their vocabulary size.
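
As a back-of-the-envelope illustration of what "parameters" counts, using GPT-3's publicly reported dimensions (hidden size 12,288, 96 layers):

```python
# Every learnable weight is a parameter. One dense 12,288 x 12,288 projection:
hidden = 12288
weights = hidden * hidden        # 150,994,944 weights in a single matrix
biases = hidden                  # plus 12,288 bias terms
print(f"{weights + biases:,} parameters in one projection")

# Each of GPT-3's 96 transformer layers holds roughly a dozen matrices of this
# size across its attention and feed-forward blocks: 12 * 151M * 96 is about
# 174B, which is how the total reaches the reported 175 billion.
```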

How does one decide and evaluate which would be the best-suited model for the selected use cases?

The best practice is to start with the most powerful and advanced language model available, such as GPT-4, to test whether your use case is even feasible. Once feasibility is confirmed, trickle down to simpler models to gauge their efficacy and efficiency. If a simpler model can get 90% of the way there with a little prompt engineering, you can optimize for cost.

Organizations have to define what quality means to them. It could be the quality of the model's output, the correctness of its responses, or its performance in terms of latency, where output quality may matter less than how quickly the system responds to the user.

The key for leaders is to pay close attention to the potential use cases, test them with the best model and then optimize the model to balance the cost, efficacy and efficiency factors.
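
The loop below sketches that trickle-down workflow. Every specific in it is hypothetical: the model names, per-token prices, canned quality scores, and the 0.90 bar all stand in for your own candidates and eval results.

```python
# Candidate models ordered cheapest-first; names and prices are placeholders.
CANDIDATES = [
    {"name": "small-model",    "usd_per_1k_tokens": 0.0005},
    {"name": "medium-model",   "usd_per_1k_tokens": 0.0030},
    {"name": "frontier-model", "usd_per_1k_tokens": 0.0300},
]

# Canned scores standing in for real eval-set results.
SCORES = {"small-model": 0.78, "medium-model": 0.91, "frontier-model": 0.95}

def evaluate(model_name: str) -> float:
    """In practice: run a labeled eval set through the model's API and score it."""
    return SCORES[model_name]

def cheapest_adequate(threshold: float = 0.90):
    # First confirm feasibility with the strongest model available.
    if evaluate(CANDIDATES[-1]["name"]) < threshold:
        return None  # the use case may not be feasible with current models
    # Then trickle down: pick the cheapest model that still clears the bar.
    for model in CANDIDATES:
        if evaluate(model["name"]) >= threshold:
            return model

print(cheapest_adequate())  # -> medium-model: clears the bar at a tenth of the cost
```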

Production Team
Arvind Ravishunkar, Ankit Pandey, Rinat Sergeev, Chandan Jha, Nikhil Sood, Dipika Prasad

Co-create for collective wisdom

This is your invitation to become an integral part of our Think Tank community. Co-create with us to bring diverse perspectives and enrich our pool of collective wisdom. Your insights could be the spark that ignites transformative conversations.
