Episode 11  |  55 Min  |  April 21

How to deploy AI sustainably with Dr. Eng Lim Goh, SVP at HPE


Engaging topics at a glance

  • 00:16:45
    Why is sustainable AI important?
  • 00:20:48
    Apart from power, what else matters for sustainable AI?
  • 00:26:12
    What about e-waste and recycling?
  • 00:29:15
    Why is AI so power hungry?
  • 00:32:19
    What model should business leaders adopt for AI deployment?
  • 00:36:47
    More on energy use by AI
  • 00:39:56
    Choosing the right hardware for AI
  • 00:45:08
    Organizational effort for sustainable AI
  • 00:48:56
    Considerations when deploying AI

Explore the vital link between AI and sustainability as we discuss strategies for eco-conscious AI deployment with Dr. Eng Lim Goh, Senior Vice President of Data & AI at Hewlett Packard Enterprise.

In this episode of Unpacked, Arvind introduces Dr. Eng Lim Goh, SVP of Data and AI at Hewlett Packard Enterprise, to discuss sustainable AI. They agree on the importance of being conscious of the planet’s well-being while charting business growth and profitability.

Dr. Goh explains the need for corporations to consider sustainability due to their net-zero goals, the conscious younger generation of employees, and the economic implications of power consumption in supercomputing. He shares his experience working on technology for the International Space Station and how that work made him realize the importance of a sustainable approach to technology.

So for e-waste, what’s more sustainable than e-waste is to recycle. But in fact, what’s more sustainable than recycling is reuse.

– Dr. Eng Lim Goh

Similarly, he suggests that businesses consider long-term goals when investing in AI and related technologies, adding that it is important to measure the impact of such efforts quantitatively. He also stresses collaboration between businesses, governments, and academia to achieve sustainable progress. The conversation then moves to energy consumption in AI, and Dr. Goh explains how the power consumption of large models has long been a challenge in the supercomputing industry. He suggests that businesses reduce energy consumption by choosing more efficient hardware and software, and outlines how they can approach this. He also highlights the importance of powering data centers with renewable energy sources.
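To make the point about measuring impact quantitatively concrete, here is a minimal back-of-envelope sketch of how a team might estimate the annual energy use and emissions of an AI deployment. All figures (accelerator count, power draw, utilization, PUE, grid carbon intensity) are illustrative assumptions, not values from the episode; substitute measured numbers from your own environment.

# Back-of-envelope sketch for quantifying the energy and carbon footprint
# of an AI deployment. Every figure below is an assumed placeholder.
gpu_count = 8                 # accelerators in the deployment (assumed)
avg_power_per_gpu_kw = 0.4    # average draw per accelerator, kW (assumed)
utilization = 0.6             # fraction of time under load (assumed)
hours_per_year = 24 * 365
pue = 1.3                     # data-center power usage effectiveness (assumed)
grid_kgco2_per_kwh = 0.4      # grid carbon intensity, kgCO2e/kWh (assumed)

it_energy_kwh = gpu_count * avg_power_per_gpu_kw * utilization * hours_per_year
facility_energy_kwh = it_energy_kwh * pue          # include cooling and overhead
emissions_tonnes = facility_energy_kwh * grid_kgco2_per_kwh / 1000

print(f"IT energy:       {it_energy_kwh:,.0f} kWh/year")
print(f"Facility energy: {facility_energy_kwh:,.0f} kWh/year")
print(f"Emissions:       {emissions_tonnes:,.1f} tCO2e/year")

Re-running the same arithmetic with more efficient hardware or a cleaner grid makes the comparison Dr. Goh calls for a quantitative one rather than a qualitative claim.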

The conversation concludes with Dr. Goh’s vision for the future of AI and sustainability. Dr. Goh emphasizes the need for businesses to consider the long-term impact of their actions and to invest in sustainable technologies. He believes that AI can play a crucial role in achieving sustainability goals and that it is important for businesses to collaborate and share knowledge to achieve sustainable progress. 

I shouldn’t say pick the right size model. Pick the right model.

– Dr. Eng Lim Goh

Overall, the conversation highlights the need for businesses to consider sustainability while investing in AI and related technologies. It emphasizes the importance of transparency, collaboration, and measurement in achieving sustainable progress.

Production Team
Arvind Ravishunkar, Ankit Pandey, Chandan Jha

Latest podcasts

Episode 14  |  58 Min  |  April 21

Reasoning and evolution of GenAI with Dr. Srinivas Padmanabhuni, testAIng.com


Engaging topics at a glance

  • 00:13:00
    Introduction
  • 00:15:00
    Why Artificial General Intelligence (AGI) is still very far
  • 00:16:30
    Are AI founders hyping up AI
  • 00:18:00
    Types of reasoning
  • 00:20:00
Why current AI based on LLMs performs so well
  • 00:22:05
What is abductive reasoning
  • 00:25:10
    Does current AI only do inductive reasoning
  • 00:25:30
    Will the current architecture suffice in spawning abductive reasoning
  • 00:27:15
    Does current AI summarize documents in a way that a human does
  • 00:30:30
    Does statistics play a role in human reasoning
  • 00:33:35
    About emotional reasoning
  • 00:36:10
    Biggest risks of current AI models
  • 00:36:50
    Applications of current AI and use-case for inductive reasoning
  • 00:41:00
    Creativity and reasoning
  • 00:46:35
    Concluding thoughts

This podcast offers valuable insights into the limitations of current AI reasoning, including common-sense reasoning, and debunks the hype around AGI, giving listeners takeaways they can apply in their own fields.

In this podcast, Arvind interviews Dr. Srinivas Padmanabhuni, who holds a PhD in common-sense reasoning from the University of Alberta and is a founder of the AI startup testAIng.com.

Dr. Srinivas explains that classical Aristotelian logic is simply not enough to capture human-level reasoning. The traditional logic taught in schools is binary, zero-or-one, which is only one part of how we reason. Human beings reason far more broadly than that, and the way AI reasons today is a very narrow, myopic form of reasoning.

Dr. Srinivas then shares his thoughts on AGI (Artificial General Intelligence). He thinks it is premature to talk about AGI and that much of the current talk is marketing. He says that AI is not able to invent anything new unless it draws on some large existing corpus. Unless you have an agent capable of common-sense-level reasoning, he argues, you cannot seriously talk about AGI.

The podcast also discusses the limitations of AI and what we can expect from it in the future. Dr. Srinivas explains that AI is not capable of inventing anything new, but is only capable of processing information and making predictions based on existing data. He suggests that AI is best used for tasks that involve large amounts of data processing, such as financial analysis, medical diagnosis, and weather forecasting.

Dr. Srinivas also talks about the importance of common sense reasoning and how it is an essential part of human-level reasoning, which is missing in AI today. He explains that common sense reasoning is necessary for tasks such as understanding jokes, sarcasm, and metaphors.

Towards the end of the podcast, Arvind and Dr. Srinivas discuss the future of AI and its potential applications. They suggest that AI has the potential to revolutionize industries such as healthcare, retail, and transportation. However, they also caution that the development of AI should be done in an ethical and responsible manner, with consideration for the potential risks and consequences.

Overall, the podcast is informative for anyone who wants to learn about AI. It gives a good understanding of the limitations of current systems and of what we can expect from them in the future, underscores why common-sense reasoning remains the missing piece of human-level reasoning in today’s AI, and emphasizes the need for ethical and responsible development with consideration for the potential risks and consequences.

Production Team
Arvind Ravishunkar, Ankit Pandey, Chandan Jha

Top trending insights

Episode 5  |  57 Min  |  April 21

Exploring reinforcement learning with MIT Professor Vivek Farias


Engaging topics at a glance

  • 00:16:50
    What is Reinforcement Learning
  • 00:20:10
    Reinforcement Learning for LLMs
  • 00:24:00
    How do you reward your model?
  • 00:33:00
Revealed preferences vs. just a few individuals doing that
  • 00:36:00
    AI model training AI in the future?
  • 00:40:18
    Methodologies other than Reinforcement Learning
  • 00:43:10
Considerations during the Reinforcement Learning from Human Feedback (RLHF) phases
  • 00:48:10
    About Cimulate

In “Exploring Reinforcement Learning,” guest Vivek Farias, Professor at MIT, discusses the role reinforcement learning plays in today’s world of artificial intelligence.

Human learning systems date back almost 5,000 years, and they are what allowed us to progress as a society. Being able to teach other people what we know and to share knowledge has been a foundational pillar of our evolution and civilization. Interestingly, these learning systems are not unique to humans: orcas, dolphins, and other higher-order intelligent animals also spend time training and teaching their young. In the last 50 to 60 years, we have been teaching not just humans but also machines how to learn, and the field of artificial intelligence has benefited from our understanding of these learning systems.

The guest began by highlighting the importance of acknowledging uncertainty and of balancing exploiting what is already known against exploring to learn more about the environment. This trade-off is referred to as the “multi-armed bandit problem” and is considered fundamental in reinforcement learning, where the goal is to optimize actions in an environment.
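As a concrete illustration of that explore-exploit trade-off, here is a minimal epsilon-greedy bandit sketch. It is not code from the episode; the payout probabilities and exploration rate are assumed purely for illustration.

import random

# Each "arm" pays out with an unknown probability; the agent balances
# exploiting the best-known arm against exploring the others.
true_payout_probs = [0.2, 0.5, 0.7]        # unknown to the agent (assumed)
counts = [0] * len(true_payout_probs)       # pulls per arm
values = [0.0] * len(true_payout_probs)     # running mean reward per arm
epsilon = 0.1                               # exploration rate (assumed)

for step in range(10_000):
    if random.random() < epsilon:
        arm = random.randrange(len(true_payout_probs))            # explore
    else:
        arm = max(range(len(values)), key=lambda a: values[a])    # exploit
    reward = 1.0 if random.random() < true_payout_probs[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]           # incremental mean

print("estimated arm values:", [round(v, 3) for v in values])
print("pulls per arm:", counts)

After enough pulls, the estimates converge toward the true payout probabilities and most pulls go to the best arm, which is exactly the behaviour the guest describes as fundamental to reinforcement learning.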

Looking specifically at Large Language Models (LLMs), reinforcement learning has played a central role in building general-purpose chatbots based on LLMs, because a model trained on data alone might not give you the refined output you expect from it.

In discussing rewards and losses in the reinforcement learning phase, it emerged that the way we structure rewards and penalties for AI models greatly influences their reliability, how they interact with the public, and how accountable they are.
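A hypothetical sketch of that idea: a composite reward that scores task quality but subtracts penalties for unsafe or unreliable behaviour. The function name and weights below are assumptions for illustration, not anything prescribed in the episode.

def composite_reward(quality_score, unsafe, hallucinated,
                     unsafe_penalty=2.0, hallucination_penalty=1.0):
    """Toy reward: task quality minus penalties for undesirable behaviour."""
    reward = quality_score
    if unsafe:
        reward -= unsafe_penalty          # heavy penalty for unsafe output
    if hallucinated:
        reward -= hallucination_penalty   # penalty for unreliable output
    return reward

# Example: a fluent but unsafe answer scores worse than a modest safe one.
print(composite_reward(quality_score=0.9, unsafe=True, hallucinated=False))   # -1.1
print(composite_reward(quality_score=0.6, unsafe=False, hallucinated=False))  # 0.6

How those penalty weights are chosen is precisely the design decision that shapes the model's reliability and accountability.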

Overall, deploying AI involves a balance: back-end deployment offers some level of predictability, while front-end deployment is more uncertain. Successful businesses must experiment and capitalize on both.

Production Team
Arvind Ravishunkar, Ankit Pandey, Chandan Jha

Co-create for collective wisdom

This is your invitation to become an integral part of our Think Tank community. Co-create with us to bring diverse perspectives and enrich our pool of collective wisdom. Your insights could be the spark that ignites transformative conversations.
