Google says it’s currently falling short on its climate-friendly goals, and artificial intelligence is the reason.
The tech giant announced this week in its 2024 Environmental Report that its greenhouse gas emissions have increased by nearly 50% since 2019, largely due to increased demand on the data centers and supply chains used to build and power its AI technology.
This is a major setback for Google’s goal of reaching net-zero emissions by 2030. In the report, officials acknowledged that “the goal is subject to significant uncertainty because the future environmental impacts of AI are complex and difficult to predict.”
It’s hard to imagine how things will improve by then. There’s still no standardized way to quantify the total emissions caused by AI use, but we know that the data centers powering it (which often rely on fossil fuels) are responsible for nearly 4% of global greenhouse gas emissions, more than the entire aviation industry. And the rare earth metals needed to make AI chips are being mined at a pace the world’s reserves likely cannot sustain, even as demand keeps growing.
But having pumped billions of dollars into this emerging technology, Google is highly unlikely to back away from this particular investment, especially considering that the global AI market is experiencing tremendous growth and is projected to be worth $826.7 billion by 2030 (the same year Google committed to reaching net-zero emissions).
What is it all for?
Meanwhile, the strain AI places on the planet is expected to grow by 160% over the same period. As the pressure on the Earth’s resources mounts, one question demands an answer: What exactly do we gain through such collective environmental sacrifice?
To be fair, there are beneficial applications of AI technology: Educators can use it to create lesson plans that better fit their students’ needs; image-description tools and other assistive features can help people with disabilities navigate the world more easily; and health care providers can use it to scan patients for signs of disease more efficiently.
But while these are notable and beneficial use cases, they do not negate the harm caused by AI technology. The problem starts with who is behind the technology. To no one’s surprise, it is primarily white men who are designing the products of the future. (The committees assembled to oversee AI implementation in our largest companies also tend to be, shall we say, depressingly homogenous.) Women are finding opportunities in this emerging field, but according to a 2021 Deloitte report, they make up just 22% of the overall AI workforce.
Indeed, the AI industry itself has seen the same sexism and exclusion of female innovators as other sectors of tech. “No one took us seriously,” AI company founder Davar Ardalan told The Story Exchange, recalling being rejected by 350 investors. “It was incredibly humiliating.”
AI developers’ implicit biases manifest in a variety of ways on the consumer-facing side. In a March 2024 study, researchers at Capitol Technology University, a private university outside Washington, DC, found numerous examples of how AI harms women. These include threats to job security, as AI candidate search tools filter out more female job seekers and AI technology automates management and retail jobs that women are more likely to fill. And when it comes to healthcare, AI scans and assessment tools operate on biased data that minimizes the importance of women’s symptoms.
AI-generated images are also rife with sexism, often making assumptions about the jobs women can hold and, in most cases, skewing us toward younger, Eurocentric standards of beauty regardless of whether users prompt for such details. Even worse are fake pornographic images of women, created without their consent. So far, there is no remedy for such violations.
It’s also important to note that, beyond the biases within the industry and the technology itself, the climate impacts driven by AI may also disproportionately harm women.
Finding a way out
Beyond awareness, what we need is regulation and legislation. Of course, with our democratic processes facing unprecedented challenges and oversight agencies reeling from a U.S. Supreme Court decision that severely undermines their authority, it’s a tough time to look to either as a solution. Yes, that includes the agencies overseeing the tech sector.
But there are several bills pending in state legislatures that offer a path forward. For example, bills have been introduced in California, Vermont, New York, and Oklahoma that would hold AI developers accountable for ensuring that their AI doesn’t produce discriminatory outcomes for users. At the federal level, a bill has been proposed that, if passed, would require studies of the environmental impacts of AI and mandate action plans based on those studies.
Regulation isn’t the only answer: there’s also a need for more women involved in AI development, insiders say. As tech founder Ardalan points out, “Women have a lot of great ideas to bring solutions that can really help make AI more empathetic, more human-centric, connect people, and create really interesting experiences.”
This means more than avoiding soulless Toys R Us ads or the (seemingly inherent) goofiness of AI-generated hands. We are degrading a planet that experts warn is becoming “uninhabitable,” while building technologies that further drain the resources we need and that discriminate against women in both their development and their implementation.
We need change and answers, and they won’t come from ChatGPT.