Author: Benjamin Shindel, PhD Candidate in MSE at Northwestern University
Contact: [email protected]

While the proceedings at the UN Climate Change Conference will undoubtedly focus on avoiding the global catastrophic risks that climate change threatens, another topic will compete for attention in 2023. Over the last few years, the risks associated with the development of artificial intelligence have risen to the forefront, reaching a fever pitch with the release of software from tech industry leaders suggesting that humanity is on the cusp of developing “weakly general artificial intelligence”: an AI that can rival the average human in its capabilities. OpenAI, Google, Meta, and others have developed AI capable of writing, locomotion, logical reasoning, and the interpretation of visual and auditory stimuli. Simultaneously, researchers around the world have made substantial progress in applying narrow AI tools to specific scientific problems.

While AI offers tremendous promise in accelerating humanity’s timelines for solving grand challenges, including climate change, it also poses an existential risk to humanity. There is ongoing debate over the shape, likelihood, and severity of this risk, but many of the world’s top AI scientists and thinkers have signed statements endorsing the need for action to study and avoid the risk of extinction from a superintelligent AI. The recent leadership crisis/coup at OpenAI, the current unquestioned vanguard of AI development, serves as a particularly shocking example of the rift within the AI world between pushing AI capabilities forward and ensuring the safety of humanity. This existential risk is hard to describe in a short blog post, and harder still to convince the reader of its seriousness, since it can sound like science fiction, but I’ll try here:
While the effects of anthropogenic climate change are massive and have already begun, it is unlikely that they pose a true existential risk to the survival of humanity. It can be challenging to balance attention between a ~100% proposition of damaged ecosystems, enormous infrastructure costs, food insecurity, climate refugee crises, and more, all developing over decades, against a ??% proposition of a world-destroying machine intelligence. There are parallels to the rise of nuclear technology, where atomic fission offered a potentially unlimited source of clean energy alongside the growing threat of mutually assured destruction pursued by the parties of the Cold War.

At COP28, I expect that people will focus on the more pleasant or pedantic aspects of artificial intelligence. There will be discussions of the benefits of AI for scientific research in the fields of inquiry that can benefit climate and clean energy technology. AI has already proven invaluable in the search for more efficient materials for energy generation and storage, in finding catalysts to synthesize clean fuels, and even in the genetic engineering of more resilient crops. There will also be discussions of the risks of AI in spreading misinformation about the climate, or perhaps of its benefits in combating that misinformation. Unfortunately, these discussions will likely miss the crux of the debate.

The growing power of AI will be immense, and if we’re lucky, we’re just beginning to scratch the surface of the utopian benefits it can provide for the world. It’s easy to imagine a world where the efficiency, automation, and optimization brought on by tools that augment our species’ intelligence lead to rapid solutions for the major climate challenges of today. However, if we’re unlucky, the risks of AI could outweigh these benefits, perhaps dramatically so.