Italy's G7 2024 Presidency, AI Safety and the Debate on Its Future
Joanna Davies
July 2, 2024
G7 leaders met in Apulia on June 13-15, 2024, hosted by Italy, whose G7 presidency lasts until December 31, 2024. The digital economy and artificial intelligence (AI) were high on the summit's agenda. Italian prime minister Giorgia Meloni invited the Pope to participate in the G7's session on AI, signalling the importance of the issue and building on the Holy See's Rome Call for AI Ethics of 2020. In January this year the Pope called for putting "systems of artificial intelligence at the service of a fully human connection." In the lead-up to the Apulia Summit, Meloni stated her expectation that the Pope's presence would contribute to an ethical framework for AI.
From 1975 to 2023, the G7 dedicated just 2% of its communiqués to digitalization (including information and communications technology at its earlier summits). According to the G7 Research Group, it made 229 collectively agreed commitments on the subject; of these, 17 have been assessed for compliance, averaging 85%. G7 leaders first addressed AI itself in a major way at their Charlevoix Summit in 2018, making 24 commitments on the issue. They have since increased their attention to AI, releasing the G7 Leaders' Statement on the Hiroshima AI Process in October 2023, with their digital ministers following up in December with a comprehensive policy framework for it. Under Italy's presidency, in April 2024, the digital ministers reaffirmed this framework.
How can the G7 effectively address the major challenges of AI governance, not least AI safety? What can Italy contribute as host? And what can G7 leaders do to improve their governance of AI beyond the 2024 presidency?
In March 2023, an open letter signed by influential figures such as Elon Musk and Steve Wozniak called for a six-month moratorium on the development of AI systems more powerful than GPT-4. Eliezer Yudkowsky, a researcher and co-founder of the Machine Intelligence Research Institute in Berkeley, went further in an essay published in TIME magazine, arguing that a six-month pause was not enough. These pleas came amid an intense AI arms race, and OpenAI's earlier restructuring from a pure non-profit into a "capped-profit" company had already intensified discussions about AI governance.
Yudkowsky outlined several concerns about artificial general intelligence (AGI). First, he highlighted the challenge of predicting the future accurately, emphasizing that successful futurism depends on the predictability of certain events. Second, he pointed out the complexity of defining and achieving "goals" in AI. Goals are not merely preferences but include fundamental conditions such as consent, autonomy and the communal interests of others. This complexity introduces a layer of unpredictability.
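Yudkowsky's point about goals can be made concrete with a small, purely hypothetical sketch: an objective that scores only a single benefit can prefer an action that a goal also encoding conditions such as consent and autonomy would reject. The names, weights and numbers below are illustrative assumptions, not anything drawn from his argument.

```python
# Illustrative sketch (hypothetical): why "goals" are more than a bare preference.
# A naive objective omits side conditions such as consent; encoding them as
# explicit penalties changes which action wins.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    engagement: float          # raw benefit the naive objective maximizes
    consent_violation: float   # proxy cost: 0 = fully consensual
    autonomy_loss: float       # proxy cost: 0 = preserves user autonomy

def naive_score(a: Action) -> float:
    # "Goal" as a bare preference: maximize engagement only.
    return a.engagement

def constrained_score(a: Action, penalty: float = 10.0) -> float:
    # "Goal" with fundamental conditions encoded: the same benefit,
    # heavily penalized for violating consent or autonomy.
    return a.engagement - penalty * (a.consent_violation + a.autonomy_loss)

actions = [
    Action("dark-pattern nudge", engagement=9.0, consent_violation=0.6, autonomy_loss=0.4),
    Action("transparent recommendation", engagement=6.0, consent_violation=0.0, autonomy_loss=0.0),
]

print(max(actions, key=naive_score).name)        # -> dark-pattern nudge
print(max(actions, key=constrained_score).name)  # -> transparent recommendation
```

The design point is that conditions such as consent are not extra preferences to be traded off casually but constraints that must be built into the objective itself, and getting that encoding right is part of what makes goals unpredictable.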
The intentionality-goal issue leads to a broader question about AGI: its efficiency compared to human cognition. Yudkowsky discussed "epistemic efficiency," meaning that humans cannot systematically predict the biases in an AI's estimates (just as one cannot systematically outguess stock market prices), and "instrumental efficiency," meaning that humans cannot perceive better strategies than the AI's own relative to its goals (just as an amateur cannot improve on Stockfish's chess moves). Efficiency sits on a gradient: machine learning engineers build "neural nets" (loosely modelled on the brain), with different pathways across different layers, and train them through supervised and reinforcement learning and, in some cases, unsupervised learning from self-collected data. The "smartness" of AI often portrayed in science fiction does not necessarily equate to making effective predictions and strategies. These distinctions underscore the intricacies of defining and achieving decision-making criteria for machine learners, whose "understanding" rests on language games and related actions in reality. So what are the criteria for "being good at making decisions" for a machine learner?
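To make the training mechanics concrete, here is a minimal sketch of gradient descent on a tiny neural net, a toy illustration under our own assumptions rather than a description of any frontier system; the task, shapes and learning rate are all hypothetical choices.

```python
# Minimal sketch: a tiny two-layer neural net fitted by gradient descent
# on a toy supervised task (learning XOR). Illustrative only.

import numpy as np

rng = np.random.default_rng(0)

# Toy supervised data: y = XOR of two bits.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

# Parameters of a 2-4-1 network: "different pathways across different layers."
W1 = rng.normal(0, 1, (2, 4))
b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1))
b2 = np.zeros(1)

lr = 0.5
for step in range(5000):
    # Forward pass through the layers.
    h = np.tanh(X @ W1 + b1)                   # hidden activations
    pred = 1 / (1 + np.exp(-(h @ W2 + b2)))    # sigmoid output
    # Mean squared error against the supervised labels.
    loss = np.mean((pred - y) ** 2)
    # Backward pass: hand-derived gradients of the loss.
    d_pred = 2 * (pred - y) / len(X) * pred * (1 - pred)
    dW2 = h.T @ d_pred
    db2 = d_pred.sum(axis=0)
    d_h = d_pred @ W2.T * (1 - h ** 2)
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0)
    # Gradient descent: nudge every weight against its gradient.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(np.round(pred.ravel(), 2))  # should approach [0, 1, 1, 0]
```

The point of the sketch is the loop itself: the system improves by repeatedly nudging its weights (billions of them, in frontier systems) against the gradient of an error signal, which is why its internal "goals" are discovered through training rather than written down directly.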
The potential threats posed by AGI are significant. During an April 2024 colloquium on techno-philosophy at New York University, Yudkowsky advocated international cooperation to halt the AI arms race, suggesting that a minimum of three G20 members would need to agree to pause AGI development to manage the risks effectively. The United Kingdom took a step in this direction by hosting the first AI Safety Summit, at Bletchley Park in November 2023, where world leaders and major tech companies reached an initial agreement on AI's future. Following this inaugural meeting, South Korea co-hosted a smaller follow-up, the AI Seoul Summit, in May 2024. It aimed at creating guardrails for a rapidly advancing technology that raises concerns ranging from algorithmic bias that skews search results to potential existential threats to humanity. In March 2024, the United Nations General Assembly approved its first resolution on AI, lending support to an international effort to ensure the powerful new technology benefits all countries, respects human rights and is "safe, secure and trustworthy."
A study by the International Monetary Fund sparked further debate on AI's labour market impact, highlighting significant differences in AI access and readiness between advanced and developing economies. Routine jobs are at higher risk of being replaced by AI, while high-value jobs could benefit from increased productivity.
Since the G7's Hiroshima Summit in May 2023, and especially since its G7 presidency began on January 1, 2024, Italy has taken several key AI safety initiatives.
Meloni's government, described as the "most right-wing" in Italy since World War II, has taken a conservative yet proactive stance on AI safety and tech governance. In April 2024, it approved a draft law that would punish the harmful distribution of AI-generated or manipulated content with up to five years in prison. Italy has also led efforts to create a global framework to protect the workforce, including hosting a symposium at the Italian embassy in Washington DC titled "AI and Human Capital," focusing on AI's impact on the labour market.
Meloni's cautious approach reflects the concerns of Italian academics about the societal and philosophical implications of technological advancement.
Unlike Yudkowsky, Scott Aaronson, a computer scientist at the University of Texas at Austin who has worked on safety at OpenAI, is less concerned about AI's potential dangers. He believes AI will transform civilization through tools and services that cannot plot to annihilate humanity, much as Windows 11 or the Google search bar cannot. However, Aaronson acknowledges a high probability of existential catastrophe in the coming century, due not only to AI but also, notably, to climate change and nuclear war. He expects AI to be intricately woven into all aspects of human civilization, affecting political processes and societal functions, as evidenced by AI's influence on the 2016 US election through Facebook's recommendation algorithm.
In conclusion, the debate on AI safety continues to evolve, with varying perspectives on its risks and benefits. Yudkowsky calls for precautionary measures and international cooperation, while others such as Aaronson view AI's integration into society as inevitable and potentially beneficial. The global community, led by initiatives from countries including Italy, is responding to these challenges through strategic investments, ethical considerations and regulatory frameworks.
To address these challenges further, G7 leaders – under Italy's 2024 presidency and looking ahead to the 2025 summit that Canada will host in Kananaskis – should take the following actions.