
Lack of transparency can impact the adoption of AI models. Credit: Laurence Dutton/E+/Getty Images
The growing rush to harness artificial intelligence (AI) to accelerate and scale efforts to solve common challenges highlights the need to closely examine the technology's environmental impact, as well as ethical concerns around transparency and fairness.
The launch of the AI Innovation Grand Challenge at the 2023 UN Climate Summit was an important step in promoting AI for climate action in developing countries. It is tied to the Sustainable Development Goals, the global plan to end hunger and poverty, protect the environment and provide health care for all by 2030.
Let’s assume that generative AI is used daily by billions of people around the world. In this case, the total annual carbon footprint could reach approximately 47 million tonnes of carbon dioxide, contributing to a 0.12% increase in global carbon dioxide emissions.
According to our analysis, a generative AI chatbot application supporting 50 call-centre employees, each handling four customers per hour, can generate approximately 2,000 tonnes of carbon dioxide per year. Water consumption from widespread adoption of generative AI, with half the world's population sending 24 queries per person per day, could match the annual water intake of more than 328 million adults.
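The 0.12% figure above is simple arithmetic that readers can check themselves. A minimal sketch, assuming a global annual emissions total of roughly 39 billion tonnes of CO2 (an approximate recent figure, not stated in the article):

```python
# Back-of-envelope check of the generative-AI emissions share.
# Assumptions (illustrative only):
#   - ~47 million tonnes CO2/year attributed to generative AI (article figure)
#   - ~39 billion tonnes CO2/year global emissions (approximate, assumed here)

AI_FOOTPRINT_TONNES = 47e6       # article's estimated annual AI footprint
GLOBAL_EMISSIONS_TONNES = 39e9   # assumed global annual CO2 emissions

share_percent = AI_FOOTPRINT_TONNES / GLOBAL_EMISSIONS_TONNES * 100
print(f"Generative AI share of global CO2 emissions: {share_percent:.2f}%")
```

With these assumptions the share comes out at about 0.12%, consistent with the estimate above; the result scales directly with whichever global emissions total one assumes.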
Data centers serving AI workloads host large-scale computing infrastructure, particularly networks of graphics processing units. This infrastructure generates a lot of heat when serving AI workloads, which must be removed from the server room to avoid overheating and keep the machines within their operating temperature range. Two types of cooling systems are generally used: cooling towers and outside-air cooling. Both require water.
The lack of clear information about the decision-making processes of AI models makes it difficult to understand biases, which can lead to unfair results. Transparent models are essential for meeting ethical standards and ensuring accountability for errors. Lack of transparency can impact the adoption of these models in industry, academia and other sectors.
A recent example is a lawsuit brought by the New York Times against OpenAI and Microsoft, the creators of ChatGPT and other AI tools, for copyright infringement. The lawsuit claims that AI models, including ChatGPT, were trained using millions of New York Times articles, raising concerns about their unauthorized use, potential competition and impact on journalism.
Setting up standards and frameworks to make AI sustainable is essential. Frameworks such as the Montreal Declaration for Responsible AI and the Organisation for Economic Co-operation and Development's AI Principles are widely accepted and adopted by governments, organizations and industry in the pursuit of sustainable AI. The AI Alliance, launched in December 2023, also advocates using AI sustainably.
Solutions to mitigate AI’s carbon footprint and ethical concerns
Effective AI models can be developed without the need for extensive data. Prioritizing targeted, domain-specific AI models over constant scale increases aligns with sustainability by optimizing resources and efficiently addressing specific use cases. This approach minimizes environmental impact and promotes responsible development.
Techniques such as prompt engineering, prompt tuning and model fine-tuning can optimize hardware use, reducing the carbon footprint of adapting foundation models (a form of generative AI) to specific tasks.
Techniques that make models more efficient to deploy on resource-constrained devices or systems (quantization, distillation and client-side caching), together with investment in specialized hardware (e.g., in-memory computing, analog computing), improve the performance of AI models and contribute to overall sustainability.
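Of the techniques listed above, quantization is the simplest to illustrate: model weights stored as 32-bit floats are mapped to small integers, shrinking memory use and energy per inference. A minimal sketch of symmetric int8 post-training quantization in plain Python, with arbitrary example weights; real deployments use toolchains such as PyTorch or TensorFlow Lite, which also quantize activations:

```python
# Symmetric int8 quantization: map floats to integers in [-127, 127]
# using a single scale factor, then reconstruct approximate floats.

def quantize_int8(weights):
    """Return integer codes and the scale that maps them back to floats."""
    scale = max(abs(w) for w in weights) / 127.0  # largest weight maps to 127
    return [round(w / scale) for w in weights], scale

def dequantize(codes, scale):
    return [c * scale for c in codes]

weights = [0.5, -1.2, 0.03, 0.9]          # illustrative values only
codes, scale = quantize_int8(weights)
restored = dequantize(codes, scale)
error = max(abs(w - r) for w, r in zip(weights, restored))
print(f"scale={scale:.5f}, max reconstruction error={error:.5f}")
```

The reconstruction error is bounded by half the scale factor, which is why int8 weights often preserve model accuracy while using a quarter of the memory of 32-bit floats.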
Moving AI operations to energy-efficient data centers within cloud computing helps reduce environmental impact. This involves shifting the computational workload to data centers with greener practices, mitigating the overall carbon footprint associated with running AI in the cloud.
To assess the transparency of generative AI, a multidisciplinary team from Stanford, MIT and Princeton designed a scoring system called the Foundation Model Transparency Index. The system evaluates 100 aspects of transparency, from how a company builds a foundation model to how it operates and how it is used downstream.
The challenges are real, but the potential of AI as a transformative agent in sustainability is just as great.
(Indervir Singh Banipal (indervir.singh.banipal@ibm.com) and Sourav Mazumder (smazumder@us.ibm.com) are with IBM.)