AI's rapid evolution is driving breakthroughs across sectors such as research, healthcare, agriculture, and environmental sustainability. It enhances productivity and democratises access to knowledge. Yet this powerful tool also poses risks that must be properly addressed.
A growing number of stakeholders insist that AI must remain under human control and subject to the rule of law and ethical guidelines. But what would standards for responsible AI use look like, and how can we collectively ensure they are implemented?
The AI race is also a geopolitical race to harness the potential and benefits of this technology. Who will emerge victorious, and what are the prospects for implementing global norms and regulations?
Calls for responsibility and state regulation
During their visit to NUPI, Brad Smith, President of Microsoft; his Microsoft colleague and co-author Carol Ann Browne; and Nicolai Tangen, CEO of Norges Bank Investment Management (NBIM), discussed these pressing questions.
Brad Smith spearheads Microsoft's work on critical issues at the intersection of technology and society. He recently offered a five-point blueprint for the public governance of AI:
- Implement and build upon new government-led AI safety frameworks.
- Require safety brakes for AI systems that control critical infrastructure.
- Develop a broader legal and regulatory framework based on the technology architecture for AI.
- Promote transparency and ensure academic and public access to AI.
- Pursue new public-private partnerships to use AI as an effective tool to address the inevitable societal challenges that come with new technology.
Smith and his Chief of Staff and Executive Communications lead, Carol Ann Browne, have co-authored the bestseller Tools and Weapons: The Promise and the Peril of the Digital Age, in which they urge the tech sector to assume more responsibility and call on governments to move faster in addressing the challenges created by new technologies.
Nicolai Tangen, who manages the world's largest sovereign wealth fund, echoes the call for state-level AI regulation. The Norwegian wealth fund will also establish guidelines on the 'ethical' use of artificial intelligence for the companies it invests in. NBIM itself aims to increase its work efficiency by 10 per cent within the next year solely by adopting AI tools.
The future of AI in geopolitics
The AI roundtable was a collaboration between Microsoft and NUPI, facilitated by Kristine Beitland, Corporate, External & Legal Affairs Director at Microsoft Norway; NUPI Director Ulf Sverdrup; Niels Nagelhus Schia, who heads NUPI's Research Centre on New Technology; and Åsmund Weltzien, Head of Communications at NUPI. During the discussion, the attendees delved into key issues such as:
- The impact of increasing AI adoption on the geopolitical landscape, and the role tech giants like Microsoft can play in guiding these outcomes.
- Strategies to promote transparency and accountability in AI development, especially regarding collaborations with governments, to foster trust and manage geopolitical risks.
- How governments and technology companies can work together to create global norms and regulations that promote responsible and ethical AI use while mitigating geopolitical apprehensions.
Following the roundtable, Microsoft President Brad Smith, NUPI Director Ulf Sverdrup, and NBIM CEO Nicolai Tangen continued their conversation in a podcast, scheduled for release later this summer. Stay tuned!
(Speaking of podcasts, both Microsoft’s Kristine Beitland and NUPI’s Niels Nagelhus Schia contributed to the podcast episode “Den digitale slagmarken” – The Digital Battlefield. The episode reveals how Russia's full-scale invasion of Ukraine began not with gunshots, but with cyberattacks – and Microsoft was the first to detect them. This episode is in Norwegian.)