Jeff is the Institute's general manager. Throughout his career, he’s gathered significant leadership experience in governance, risk, compliance, transformation, and product development at Goldman Sachs, USAA, and PwC.
Leveraging his product leadership and team-building expertise, he aims to drive member value that helps organizations successfully operationalize responsible AI practices.
Can you tell us about the Responsible AI Institute?
Founded in 2016, the Responsible AI (RAI) Institute is a global and member-driven non-profit dedicated to enabling the successful implementation of responsible AI in organizations. We accelerate and simplify responsible AI adoption by providing our members with AI conformity assessments, benchmarks, and certifications closely aligned with global standards and emerging regulations.
RAI Institute members include leading companies such as Amazon Web Services, Boston Consulting Group, KPMG, ATB Financial, and many others dedicated to bringing responsible AI to all industry sectors.
First and foremost, responsible AI is about unlocking AI’s potential without negatively impacting people or the planet — ensuring that it is safe and trusted and that it upholds human rights and societal values.
How does environmental sustainability fit into your organization?
Manoj Saxena, founder and executive chairman of RAI Institute, shared, "Ecological conservation is a key part of social responsibility and aligns with our mission at the Responsible AI Institute. Our Sustainable AI Consortium ensures AI systems are responsible and environmentally sustainable. By bringing together leaders from academia and industry, we offer guidebooks, free tools, and education to guide sustainable AI implementation. This approach is vital for ethical use and environmental stewardship of AI technologies."
As research ramps up around AI's potential harms and benefits for the environment, the RAI Institute has proactively established parameters and frameworks for implementing AI sustainably through our newly formed Sustainable AI Consortium.
We strive to provide key metrics and resources for organizations looking to implement AI while considering its environmental impact. The consortium will help expand our mission through educational content and thought leadership focused on the environmental sustainability of AI.
Why did you join GSF?
RAI Institute recognizes the growing importance of environmental sustainability in AI development. In our latest e-publication, "AI's Impact on Our Sustainable Future: A Guiding Framework for Responsible AI Integration Into ESG Paradigms," we emphasize how pressing this issue has become in today's landscape.
Joining the GSF aligns with our comprehensive approach to AI governance, including integrating sustainability initiatives with AI development. Furthermore, it underpins our commitment to promoting more energy-efficient AI models.
When it comes to reducing the environmental impacts of AI, where does the Institute see the greatest area for improvement and the most immediate need for more sustainable practices?
The significant impact AI can have on ESG goals makes sustainability central to determining AI's potential risks and benefits.
There isn't one greatest area for improvement — AI's implementation to support ESG goals requires a comprehensive approach. AI governance should be aligned with sustainability initiatives. Neither ESG initiatives nor responsible AI should be an afterthought from leadership.
To address the intersection of RAI's mission and sustainability, the Institute has identified nine metrics, including GHG, energy, and water consumption, that could be influenced (both positively and negatively) by AI. This structure enables analysis of AI's influence on key ESG domains, offering a nuanced view of both opportunities and risks.
How is responsible AI helping organizations navigate the changing regulatory landscape?
A crucial step towards responsible AI is ensuring that AI systems are built right from the start, thereby avoiding the early accumulation of technical debt and misuse. Introducing the "AI by Design" approach helps ensure AI systems align with current regulations and makes them easier to fine-tune as the landscape evolves.
To successfully introduce and operationalize AI-based solutions, organizations need a solid internal strategy to protect business and build trust with their customers. We identified industry best practices, which include:
Investing resources to build a robust and mature AI framework.
Adopting AI standards to provide common objectives, guidelines, and specifications for organizations, policymakers, and researchers.
Staying informed and complying with new and emerging laws and regulations regarding AI.
Publishing Responsible AI principles and frameworks, which demonstrates a strong commitment to AI responsibility to consumers and investors. Additionally, robust frameworks help establish internal AI governance policies and practices from the ground up that align with evolving global and local policies and recommendations.
Finally, utilizing external expertise and resources, such as support from the RAI Institute, offers access to customized generative AI policies, guidelines, and best practices.
Current AI regulations, including but not limited to the EU AI Act, Canada's proposed AIDA, and California's AI legislation, help protect human rights and privacy, ensure transparency and accountability, promote fairness and non-discrimination, foster innovation and economic growth, and enhance safety and reliability.
How do you hope to contribute to and benefit from the GSF?
We aim to support the GSF’s mission by providing expertise to help integrate AI-specific considerations into the standards and best practices for environmentally friendly software development.
By joining GSF, we are confident that we can expand our reach, extend our existing focus on sustainable AI practices, and further reduce the environmental impact of AI systems. Looking ahead, we envision this partnership as a route to enhancing our ability to guide organizations in developing environmentally sustainable AI solutions.
This article is licensed under Creative Commons (CC BY 4.0).