Episode 15  |  33 Min  |  June 25

Build remarkable AI products with Seth Godin, Bestselling author & Marketing expert


Engaging topics at a glance

  • 00:00:15
    Introduction
  • 00:12:30
    How advertising today differs from marketing
  • 00:15:50
    Create remarkable products
  • 00:17:30
    When to start advertising
  • 00:20:10
    Role of AI in creating remarkable products
  • 00:25:50
    AI as a companion to create remarkable products
  • 00:30:00
    Concluding thoughts

Join us as we explore the impact of AI on product innovation and marketing with Seth Godin, Bestselling author.

In this episode of Unpacked, marketing guru Seth Godin delves into the profound changes and opportunities AI brings to marketing and product development. The conversation emphasizes the necessity to adapt and harness AI’s potential to build remarkable products.

I think AI is the biggest change in our world since electricity. And I think it has been under-understood and we’re not even coming close to understanding what it’s going to do.

– Seth Godin

Seth begins by discussing the evolution of marketing, highlighting the transition from traditional methods to more innovative approaches driven by technology. He underscores that the essence of marketing remains the same: understanding and meeting customer needs. However, the tools and strategies have significantly evolved, with AI playing a pivotal role.

AI’s impact on marketing is multifaceted. Seth explains that AI enables more precise targeting and personalization, allowing marketers to deliver tailored experiences to consumers. This not only enhances customer satisfaction but also drives engagement and loyalty. He stresses that marketers must embrace AI to stay competitive, as it offers insights and capabilities that were previously unattainable.

Building on this, Seth explores how AI can aid in creating remarkable products. He asserts that AI can streamline product development processes, from ideation to execution. By analyzing vast amounts of data, AI can identify trends, predict consumer preferences, and optimize designs. This accelerates innovation and helps companies bring products to market faster and more efficiently.

If you are not using AI as that sort of intern slash clerk slash assistant slash brainstorming partner, you are wasting a lot of your time.

– Seth Godin

A key theme throughout the podcast is the importance of being remarkable. Seth emphasizes that in a world saturated with options, standing out is crucial. Remarkable products are those that elicit conversations and inspire loyalty. AI can help identify what makes a product remarkable by analyzing consumer feedback and market trends, enabling companies to refine and enhance their offerings continuously.

Seth also touches on the ethical considerations of using AI in marketing. He advocates for transparency and responsible use of AI, ensuring that consumer trust is maintained. Marketers should be mindful of privacy concerns and strive to use AI in ways that benefit consumers without compromising their rights.

The conversation then shifts to practical advice for marketers looking to leverage AI. Seth suggests starting with small, manageable projects to gain familiarity with AI tools and build internal expertise. He also recommends collaborating with AI specialists and investing in training to bridge any knowledge gaps.

And now that we have the chance to have a trillion dollar workforce of AI assistants who will work for us for free, how are we gonna use it? What are we gonna use it for? What would make it worth it?

– Seth Godin

Seth concludes with a call to action for marketers. He urges them to embrace AI not just as a tool but as a transformative force that can elevate their marketing efforts and product development. By being proactive and innovative, marketers can create remarkable products that resonate with consumers and drive business success.

In summary, the podcast with Seth Godin provides valuable insights into the intersection of AI and marketing. It highlights the potential of AI to revolutionize product development and underscores the importance of creating remarkable products in today’s competitive landscape. Seth’s practical advice and call to action inspire marketers to embrace AI and leverage its capabilities to stay ahead of the curve.

Latest podcasts

Episode 12  |  56 Min  |  June 25

Uncovering GenAI tools and infrastructure with Rajat Monga, Co-Founder, TensorFlow



Engaging topics at a glance

  • 00:16:10
    Google Brain and TensorFlow
  • 00:19:10
    TensorFlow for AI world
  • 00:21:55
    Tooling and infrastructure needs of previous AI models vs. GenAI
  • 00:25:40
    Trade-offs: open-source libraries and frameworks vs. a private company's framework
  • 00:31:30
    Present quality of tool and Infrastructure available for GenAI
  • 00:32:55
    How do I build a GenAI team?
  • 00:37:50
    Vendors and the Infrastructure available today to take GenAI into production
  • 00:41:51
    How to differentiate in the GenAI world?

Join us in exploring the evolving space of GenAI tools and infrastructure, featuring Rajat Monga, Co-Founder of TensorFlow and Google Brain.

A good power tool can make the difference of easily six to seven hours of work when you're doing woodworking. And the world of AI is no different. The types of tools and infrastructure that are developed to help you as enterprise leaders build artificial intelligence products and features are very, very important. In this episode, with our guest's help, we will unpack the infrastructure that surrounds you and the tooling that will help you as enterprise leaders build AI products and services.

We will look at how tooling and infrastructure needs are changing in the world of AI with the increasing adoption of GenAI. One key change that emerged from the talk is that things have now evolved to the point where we no longer need to train models from scratch. Foundation models are already available that know our world to some level, reducing the burden of training a model on tons of data. However, models today have become so large that they sometimes don't fit on a single machine.

As connecting your database with these models is very important, we also discussed the trade-offs between open-source and private libraries. Should companies manage data on their own or outsource it? When you are not training your own model, the easiest and fastest route is to use an API; and if you want your data on-prem, it will usually cost you more. In the end, it boils down to what is core to you, and often not every part of the infrastructure is core to a company. So, if data infrastructure is not your core strength, you are better off outsourcing it.

This episode also uncovered the current tools and infrastructure available for GenAI. The tools and infrastructure currently available for large-scale deployments are going through a rapid evolution; they are not very hard to rebuild or replace, and new companies keep emerging in the GenAI tooling and infrastructure space.

When looking at the talent and skills needed for GenAI implementation in your organization, it is important to have technically sound people with domain expertise in the organization's particular area.

When it comes to differentiation in the market, domain knowledge in your area, relationships with customers, distribution channels, execution, and so on play an even bigger role today. However, in this data-driven world, having proprietary data and knowing how to leverage it can be an added advantage. To find out more, tune in to the full podcast.

Production Team
Arvind Ravishunkar, Ankit Pandey, Chandan Jha

Top trending insights

Episode 4  |  53 Min  |  June 25

Performance and choice of LLMs with Nick Brady, Microsoft



Engaging topics at a glance

  • 00:12:23
    Introduction
  • 00:14:20
    Current use cases being deployed for GenAI
  • 00:19:10
    Performance of LLM models
  • 00:36:15
    Domain Specific LLMs vs General Intelligence LLMs
  • 00:38:37
    How to choose the right LLM?
  • 00:41:27
    Open Source vs Closed Source
  • 00:44:50
    Cost of LLM
  • 00:46:10
    Conclusion

"Exploring what should organization considering when choosing to adopt LLMs" with guest Nick Brady, Senior Program Manager at Microsoft Azure Open AI Service

AI has been at the forefront of transformation for more than a decade now. Still, OpenAI's launch of ChatGPT in November 2022 will be noted as a historic moment – one whose scale even OpenAI did not expect – in the history of technological innovation. Most people don't realize or fully appreciate the magnitude of the shift we're in. Now, we're able to directly express to a machine a problem that we need to have solved; equipping these technologies with the right reasoning engines and the right connectivity could bring the biggest technology leapfrog, not just for enterprises but in everyday life.

The onset of leapfrog does bring out a few questions for enterprises looking to adopt GenAI as a part of their strategy, operations and way ahead, like:

What use cases are best suited to adopt the models?

While most customers are looking for how this could reduce business costs in their organizations, the true value is when it is used to maximize business value and productivity, which downstream can lead to employee satisfaction and customer satisfaction. Any place where there's language – programming or natural language – is a good use case for generative AI, and that probably would be the most profound shift. So, if you have language, if you have a document, if you have big data that you're trying to synthesize and understand, generative AI models can do this ad nauseam without any delay.

The most common metric used across the world to describe LLMs is the number of parameters; GPT-3, for example, has 175 billion parameters. But what does this mean?

Parameter size refers essentially to the number of values that the model can change independently as it learns from data, storing all of its information in a vast associative array of memory as its model weights. What's perhaps more important for these models, and speaks more to their capability, is their vocabulary size.
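To make "number of independently learnable values" concrete, here is a minimal sketch that counts the parameters of a toy fully connected network. The layer sizes are illustrative, not taken from any real model; the same weights-plus-biases arithmetic, repeated over many layer types, is what produces figures like GPT-3's 175 billion.

```python
# A model's parameter count is the number of independently learnable values.
# For one dense (fully connected) layer, that is the weight matrix (in * out)
# plus one bias per output unit (out).
def dense_params(n_in: int, n_out: int) -> int:
    return n_in * n_out + n_out

# A toy 3-layer MLP: 512 -> 1024 -> 1024 -> 256 (sizes chosen for illustration)
layers = [(512, 1024), (1024, 1024), (1024, 256)]
total = sum(dense_params(n_in, n_out) for n_in, n_out in layers)
print(f"{total:,} learnable parameters")  # 1,837,312 learnable parameters
```

Even this tiny network has nearly two million parameters, which gives a sense of how quickly the count grows with width and depth.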

How does one decide and evaluate which would be the best-suited model for the selected use cases?

The best practice really is to start with the most powerful and advanced language model available, such as GPT-4, to test whether your use case is even possible. Once the use case is confirmed, trickle down to simpler models to gauge their efficacy and efficiency. If a simpler model can get you roughly 90% of the way there with just a little prompt engineering, then you can optimize for cost.
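That trickle-down evaluation can be sketched as a simple loop: score each candidate model against your use case and keep the cheapest one that clears a quality bar. The model names, per-token costs, and the `run_eval` callback below are illustrative assumptions standing in for a real evaluation harness, not any vendor's API.

```python
# Pick the cheapest model that still clears the quality bar.
# `models` is a list of (name, cost_per_1k_tokens); `run_eval` returns the
# fraction of evaluation cases the named model handles acceptably.
def choose_model(models, run_eval, quality_bar=0.90):
    best = None  # (name, cost, score) of cheapest passing model so far
    for name, cost in models:
        score = run_eval(name)
        if score >= quality_bar and (best is None or cost < best[1]):
            best = (name, cost, score)
    return best

# Example with a stub evaluator and made-up costs:
scores = {"large-model": 0.97, "medium-model": 0.92, "small-model": 0.78}
pick = choose_model(
    [("large-model", 0.06), ("medium-model", 0.002), ("small-model", 0.0004)],
    run_eval=lambda name: scores[name],
)
print(pick)  # ('medium-model', 0.002, 0.92)
```

Here the mid-sized model clears the 90% bar at a fraction of the cost, so it wins; the smallest model is cheaper still but fails the quality check.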

Organizations have to define what quality means to them. It could be the quality of the model's output, the accuracy of its responses, or performance in terms of latency, where the quality of the output may not be as important as how quickly the system can respond to the user.

The key for leaders is to pay close attention to the potential use cases, test them with the best model, and then optimize the choice of model to balance cost, efficacy, and efficiency.

Production Team
Arvind Ravishunkar, Ankit Pandey, Chandan Jha

Co-create for collective wisdom

This is your invitation to become an integral part of our Think Tank community. Co-create with us to bring diverse perspectives and enrich our pool of collective wisdom. Your insights could be the spark that ignites transformative conversations.

Learn More