The views expressed in this article are personal and do not represent the official positions of our member organizations or the consensus view of the GSF or the Green AI Committee.
The European Union's recently published EU AI Act is a milestone in AI governance: it is the world's first binding AI regulation to address the negative impacts of AI. A few member representatives on the recently formed Green AI Committee share their thoughts on the Act's publication and what it could mean for greening AI:
Sanjay Podder, Managing Director at Accenture, Chairperson of the Green Software Foundation, and Co-Chair of the Green AI Committee
The Act positions the EU as a global leader in sustainable AI governance. Article 40 of the EU AI Act establishes a framework for developing harmonized standards aimed at improving AI systems' resource performance, with a particular focus on energy efficiency. The Act outlines a process whereby the European Commission will request standardization bodies to develop standards that address the energy consumption of both high-risk AI systems and general-purpose models throughout their lifecycles. Article 40 also emphasizes the importance of global collaboration in AI standardization. By mandating that participants consider existing international standards and strive to strengthen global cooperation, the EU seeks to create a unified framework for AI development that promotes interoperability and trust. Recognizing AI as a global phenomenon that requires international cooperation for effective regulation has the potential to drive significant improvements in AI’s environmental footprint.
One of the most promising aspects of the EU AI Act is its emphasis on AI regulatory sandboxes. These controlled environments could facilitate the development and testing of AI systems with a specific focus on biodiversity protection, environmental protection, and climate change mitigation. Such sandboxes encourage innovation and can help policymakers identify best practices for developing energy-efficient AI solutions.
Interdisciplinary cooperation, another cornerstone of the EU AI Act, is crucial for addressing the complex challenges posed by AI. By bringing together AI developers, environmental experts, and social scientists, the Act recognizes the interconnectedness of technological advancement, environmental sustainability, and societal well-being. The EU AI Act gains further significance in the context of the EU’s ongoing review of eco-design minimum efficiency requirements for servers and computers. The EU is also finalizing rules to monitor the energy performance of data centers, including their energy and water footprints. This broader legislative landscape demonstrates the EU’s commitment to tackling energy consumption across the entire digital ecosystem.
The EU AI Act missed an opportunity to fully tackle AI's environmental impacts. While Article 40 can potentially reduce AI's resource consumption, it falls short of the ambitious goals initially outlined by the Parliament. The focus on standards rather than mandatory requirements, and the absence of clear enforcement mechanisms, raise concerns about the effectiveness of this approach.
The scope of the Act is somewhat restricted. Its primary focus on energy consumption overshadows other critical environmental concerns such as water usage, resource extraction, and electronic waste. The Act also does not fully consider AI's potential to advance sustainability goals: AI can be a force for good in minimizing carbon emissions across the rest of the economy, creating a net positive impact on the environment and society.
To ensure that AI becomes a truly sustainable technology, a more comprehensive approach is necessary, which can include:
Developing standardized metrics to create a common framework for measuring AI’s overall environmental footprint.
Leveraging public procurement to drive demand for sustainable AI solutions.
Deploying AI-powered tools for environmental management to address environmental challenges.
Considering the potential social and environmental consequences of AI-driven climate solutions.
By delving deeper into these areas, we can gain a more comprehensive understanding of how to build a sustainable AI future. The Act’s success will depend on how detailed and rigorous the standards are and how quickly the industry adopts and implements them.
Thomas Lewis, Principal Developer Advocate in Microsoft's Developer Relations for Spatial Computing + Green Software Engineering and Co-Chair of the Green AI Committee
The EU AI Act is a step forward amid a fast-paced, unprecedented technological wave that is still in its infancy and whose future is unknown. As always at such a stage, it is a challenge to collectively develop an approach that addresses the issues being discussed today while anticipating the challenges of tomorrow.
The EU AI Act, as mentioned in items (4) and (5) on Page 2, does attempt to strike the balance needed. There are industries and work streams where AI will create innovation, rapidly accelerate solutions, and help us solve worldwide challenges in sustainability and beyond. That has to be balanced against the opportunities for AI to be used in ways that could cause harm.
Item (6) is a good design principle: “As a prerequisite, AI should be a human-centric technology. It should serve as a tool for people, with the ultimate aim of increasing human well-being.” That is a good principle by which people and organizations should operate. Hopefully the EU AI Act will help enable this principle, establish a litmus test for its appropriate use, and make sure that people and organizations are abiding by it.
André Racz, Chief Technology Officer at Avanade Brazil and Co-Chair of the Green AI Committee
The EU AI Act is a step in the right direction but falls short in addressing AI’s environmental impact. While it highlights both the benefits and harms of AI, it lacks specific requirements for reducing AI's environmental footprint, unlike its detailed treatment of transparency, governance, and security. This makes it unlikely to drive significant change in this area.
The Act is centered around mitigating risks, but environmental impacts are often known trade-offs, not risks. Addressing these challenges will require efforts beyond risk management, such as improving energy efficiency and optimizing AI models—topics that the Act does not tackle.
At Avanade, we believe that environmental responsibility should be integral to any AI strategy. That’s why we’re partnering with the Green Software Foundation to help set standards for green AI practices, as this is an emerging area without a clear roadmap. We are actively exploring ways to reduce AI’s environmental impact, including using eco-friendly software, hosting in sustainable data centers, and running more efficient AI models.
Moreover, the EU AI Act’s disclosure demands, such as reporting the energy consumption of general-purpose AI (Article 51), put larger models (such as OpenAI's GPT-4) under scrutiny. For closed models this could be a problem, as some disclosure requirements (such as energy consumption in Annex XI, Section 1) require revealing information previously kept private to protect trade and IP secrets.
This could deter providers, with companies like Apple and Meta citing regulatory unpredictability as reasons for withholding services from the EU. There's a real concern that the Act might push key AI providers away from the European market.
Yusuke Kobayashi, Manager, Green Innovation Office, NTT DATA
At NTT DATA, we see this Act as the first instance of AI governance being concretely implemented in society through a risk-based approach. It is a framework that not only regulates but also ensures accountability while advancing future business and technological development, and it is expected to prevent incidents arising from the premature use of technology. Vendors will be judged not only on their ability to develop component technologies but also on their comprehensive ability to implement them in society. However, there is a concern that the Act might lead to excessive self-regulation by businesses, so we look forward to the detailed conditions for applicability and for extraterritorial application.
Nisha Menon, Technical Expert - Data & AI Architecture at Siemens
The EU AI Act must not slow the pace of innovation, particularly in view of international competition. Industry therefore sees a continuous dialog – to which we are happy to contribute – as essential during its implementation to make the best possible use of AI’s potential.
Siemens is a global pioneer in Industrial AI and has for many years worked continuously on practical guidelines and practices such as explainable decisions, human intervention mechanisms, and robust risk mitigation strategies. Our Trustworthy Industrial-grade AI focuses on optimizing products, production, and supply chains; it is designed to make processes faster, more sustainable, and more efficient, and to safeguard critical IT/OT infrastructures. In this regard, we have also released our internal set of Generative AI guardrails, reflecting our commitment to deliver systems that are safe, risk-averse, and reliable.
On another note, it is highly important to regulate neither AI as such nor the underlying mathematics. In addition, “high-risk AI systems” will need further detailed implementing measures and guidelines for their risk classification, to avoid entire sectors being classified as high-risk.
Jose Lopez, Head of Data Science and Artificial Intelligence at Globant
The publication of the EU AI Act is undeniably a milestone in the global landscape of AI regulation, arriving at a very challenging and uncertain time. Its principles and values for AI development align with ethical standards and human rights while also emphasizing environmental sustainability, and they deeply resonate with Globant’s AI Manifesto. Over the last decade, Globant has demonstrated how AI can drive sustainable solutions. As a member of the Green Software Foundation, we recognize the Act's significance in promoting a human-centric and trustworthy approach to AI.
Regarding business implications, the Act is only just coming into force, so its effects remain uncertain. It is crucial, however, to balance regulation with the need to foster innovation and support self-regulation within companies. Potential negative impacts include the stringent registration and compliance requirements, which may impose a high administrative burden on companies and potentially stifle innovation. Likewise, the joint liability clauses for developers in cases of misuse of AI technology place undue responsibility on those who might play only a minor role in the AI supply chain.
Regarding environmental concerns, the EU AI Act makes a commendable attempt by including provisions for energy consumption reporting (Article 40.2). We welcome these efforts to integrate environmental considerations into AI development, promoting sustainable practices throughout the AI lifecycle. The Act could have been a great opportunity to create incentives for reducing the software environmental footprint or fostering AI-powered environmental solutions. For a truly green AI future, it is crucial to have clear incentives and robust standards that not only minimize environmental impact but also encourage innovation in sustainable AI solutions.
Vincent Caldeira, APAC CTO for Red Hat representing IBM
The EU AI Act marks a significant regulatory step forward by setting standards that aim to integrate environmental sustainability into the lifecycle of AI systems. Articles like 4a(f), 12(2a), 28b(2)(d), and 29a(g) expand a regulatory framework that promotes environmentally sustainable practices in AI alongside goals of fundamental rights protection and sustainable development within the EU.
The Act presents a complex regulatory framework and has gaps that might limit how well it can reduce AI's environmental impact. In the final version, key environmental provisions were watered down and became mostly voluntary rather than mandatory. The Act also relies heavily on standardization bodies, which are often influenced by industry players.
The Act also offers exemptions for open source AI systems, which could free them from some of the strict controls placed on high-risk categories. While the intent is good, the tech industry has not yet agreed on a clear definition of open source AI. Without that consensus, these exemptions could fail to protect consumers, or could limit innovation by allowing systems that deviate from open source principles.
Federica Sarro, Professor of Software Engineering at University College London
The EU AI Act is a necessary step towards regulating a rapidly evolving field with profound societal implications. It represents both a pioneering framework for ethical AI and a potential catalyst for more sustainable growth.
The Act primarily focuses on ensuring that AI systems are trustworthy, ethical, and respectful of fundamental rights. While environmental sustainability is not explicitly emphasized as a core objective, it is indirectly addressed through risk assessments, alignment with broader EU sustainability goals, and the promotion of responsible AI practices. The Act mainly encourages, rather than mandates, environmental sustainability in AI; it promotes collaboration between regulators, companies, and civil society to set standards for AI development. Stakeholders such as the Green Software Foundation can therefore play a significant part in advocating for the incorporation of environmental sustainability into these standards, ensuring that green principles become an integral part of AI compliance frameworks.
The Act positions Europe as a global leader in regulating AI, setting a precedent for other regions and countries. This proactive approach could influence international standards, and in the long term the AI Act could lead to a more sustainable and ethical AI landscape. By fostering trust and setting high standards, it could drive quality over quantity, resulting in a more mature and stable AI industry that supports both innovation and public welfare. On the other hand, the AI boom in Europe might experience a short-term slowdown due to compliance challenges and the need to adapt to the new regulations, and some AI research and development may move outside Europe, where regulations are less stringent. However, companies around the world may still need to align with these standards if they wish to operate in European markets, potentially driving global harmonization. Moreover, even as some areas of AI see slower growth, there might be a surge in innovation focused on automated regulatory technology and ethical AI solutions.
In this respect, software engineers will play a pivotal role in shaping future AI-powered software systems by embedding compliance, ethical, and sustainability considerations into every stage of the development process. The group I lead at University College London has been at the forefront of research into engineering responsible AI-powered software systems, and we aim to translate our research into action to help companies adhere to the AI Act.
Moving Forward
The EU AI Act is widely seen as a major step forward in AI governance, a legislative milestone that positions Europe as a leader in creating sustainable and human-centric AI solutions. However, concerns remain about the lack of strong enforcement mechanisms.
The need for standardized methodologies to measure AI's environmental footprint remains. With the Green AI Committee and our Standards and Policy Working Groups, we are ready to support this journey, using our strengths in developing standards and tooling to reach a future where AI is truly green.
To get involved in the Green AI Committee and shape standards and policy in green software, join us.
This article is licensed under Creative Commons (CC BY 4.0)