Engaging topics at a glance
Explore the vital link between AI and sustainability as we discuss strategies for eco-conscious AI deployment with Dr. Eng Lim Goh, Senior Vice President of Data & AI at Hewlett Packard Enterprise.
In this episode of Unpacked, Arvind introduces Dr. Eng Lim Goh, SVP of Data and AI at Hewlett Packard Enterprise, to discuss the topic of sustainable AI. They agree on the importance of being conscious of the planet's well-being while charting business growth and profitability.
Dr. Goh explains why corporations need to consider sustainability: their net-zero goals, a more environmentally conscious younger generation of employees, and the economic implications of power consumption in supercomputing. He shares his experience working on the International Space Station project and how it made him realize the importance of a sustainable approach to technology.
So for e-waste, what’s better than, what’s more sustainable than e-waste is to recycle. But in fact, what’s more sustainable than recycle is reuse.
– Dr. Eng Lim Goh
Similarly, he suggests that businesses should set long-term goals when investing in AI and related technologies, adding that it is important to measure the impact of such efforts quantitatively. He also stresses the importance of collaboration between businesses, governments, and academia in achieving sustainable progress. The conversation then turns to energy consumption in AI, and Dr. Goh explains how the power consumption of large models has long been a challenge in the supercomputing industry. He suggests that businesses can reduce energy consumption by adopting more efficient hardware and software, and describes how they can approach this. He also notes the importance of powering data centers with renewable energy sources.
The conversation concludes with Dr. Goh’s vision for the future of AI and sustainability. Dr. Goh emphasizes the need for businesses to consider the long-term impact of their actions and to invest in sustainable technologies. He believes that AI can play a crucial role in achieving sustainability goals and that it is important for businesses to collaborate and share knowledge to achieve sustainable progress.
I shouldn’t say pick the right size model. Pick the right model.
– Dr. Eng Lim Goh
Overall, the conversation highlights the need for businesses to consider sustainability while investing in AI and related technologies. It emphasizes the importance of transparency, collaboration, and measurement in achieving sustainable progress.
Production Team
Arvind Ravishunkar, Ankit Pandey, Chandan Jha
Engaging topics at a glance
Join us in exploring the evolving space of GenAI tools and infrastructure, featuring Rajat Monga, Co-Founder of TensorFlow and Google Brain.
A good power tool can easily save six to seven hours of work when you're doing woodworking. The world of AI is no different. The tools and infrastructure being developed to help you, as enterprise leaders, build artificial intelligence products and features are very important. In this episode, with our guest's help, we will unpack the infrastructure that surrounds you and the tooling that will help you build AI products and services.
We will look at how tooling and infrastructure needs are changing with the increasing adoption of GenAI. One key change that emerged from the talk is that we no longer need to train models from scratch. Foundation models are already available that know our world to some degree, reducing the burden of training a model with vast amounts of data. However, models today have grown so large that they sometimes don't fit on a single machine.
Because connecting your data to these models is so important, we also discussed the trade-offs between open-source and proprietary libraries. Should companies manage data on their own or outsource it? If you are not training your own model, the easiest and fastest path is to use an API; keeping your data on-premises will usually cost more. In the end, it comes down to what is core to your business, and often not every part of the infrastructure is. If managing data is not your core strength, it may be better to outsource it.
This episode also surveyed the current tools and infrastructure available for GenAI. The tools and infrastructure for large-scale deployments are going through a rapid evolution: they are not very hard to rebuild or replace, and new companies are emerging in the GenAI tooling and infrastructure space.
When assessing the talent and skills needed for GenAI implementation, it is important to have technically sound people with domain expertise in your organization's particular area.
For differentiation in the market, domain knowledge in your area, relationships with customers, distribution channels, and execution play an even bigger role today. In this data-driven world, however, having proprietary data and knowing how to leverage it can be an added advantage. To find out more, tune in to the full podcast.
Production Team
Arvind Ravishunkar, Ankit Pandey, Chandan Jha
Engaging topics at a glance
Join us in this episode featuring Anupam Datta, Co-founder and Chief Scientist, TruEra, as we dive into the evolution of LLMs and what they hold for the future!
The world of generative AI has taken us by storm. As enterprise leaders, understanding the technology behind generative AI will give you a competitive advantage as you plan for your companies and businesses. To help you do this, we will unpack large language models (LLMs), the technology that powers AI today and represents a paradigm shift in the field of artificial intelligence.
LLMs can craft meaningful responses across many domains. Their performance has notably improved recently thanks to the substantial increase in model size and data volume.
With the increasing acceptance of this technology, numerous companies are unveiling their own large language models. It's important to recognize that choosing the largest or highest-performing LLM isn't always the best approach; instead, you may prefer an LLM that excels at the specific tasks relevant to your application. As an enterprise leader, it's crucial to build this understanding into your company's strategy so you can identify the right LLMs to match and adapt for your applications. Striking a balance between LLM selection, cost, and latency is a pivotal concern for enterprises. Equally essential is the thorough validation and assessment of generative outputs as a safeguard before consequential decisions are made, which makes reliability testing paramount at this juncture.
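The selection trade-off described above can be sketched as a filter-then-minimize rule: screen out candidates that miss your accuracy or latency floors, then pick the cheapest of what remains. The model names and figures below are invented purely for illustration, not measurements of any real LLM:

```python
# Hypothetical illustration of balancing LLM selection, cost, and latency.
# All model names and numbers are invented for this sketch.
candidates = [
    {"name": "large-general", "task_accuracy": 0.92, "cost_per_1k_tokens": 0.0300, "latency_ms": 900},
    {"name": "mid-general",   "task_accuracy": 0.88, "cost_per_1k_tokens": 0.0020, "latency_ms": 300},
    {"name": "small-tuned",   "task_accuracy": 0.90, "cost_per_1k_tokens": 0.0004, "latency_ms": 120},
]

def pick_model(models, min_accuracy, max_latency_ms):
    """Return the cheapest model meeting the accuracy and latency floors, or None."""
    viable = [m for m in models
              if m["task_accuracy"] >= min_accuracy
              and m["latency_ms"] <= max_latency_ms]
    if not viable:
        return None
    return min(viable, key=lambda m: m["cost_per_1k_tokens"])

best = pick_model(candidates, min_accuracy=0.89, max_latency_ms=500)
```

Here the smaller task-tuned model wins: the largest model is too slow for the latency budget, the mid-sized one misses the accuracy floor, and the tuned model meets both at the lowest cost, echoing the point that the biggest model is not always the right one.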
Furthermore, enterprises need to consider a few other key aspects of this evolving landscape as they build with LLMs. Starting with a well-defined business use case that offers real value is crucial. As LLMs move from development to production, it's important to establish thorough evaluation and observability throughout their lifecycle. Education across the organization is vital to implementing LLMs effectively, so companies should train their workforce to adapt to this changing technology stack. Fostering a community around responsible AI development and evaluation can also contribute to a better understanding and management of LLMs. With these steps, enterprises can navigate the complexities of LLMs and harness their potential for positive impact.
Production Team
Arvind Ravishunkar, Ankit Pandey, Chandan Jha
This is your invitation to become an integral part of our Think Tank community. Co-create with us to bring diverse perspectives and enrich our pool of collective wisdom. Your insights could be the spark that ignites transformative conversations.