Podcasts

Episode 1: Not so peaceful technology? What risks may civilian AI pose to peace and security?

In this first episode, project leads Charles Ovink (UNODA) and Vincent Boulanin (SIPRI) kick off the series by looking at some key concepts, exploring the ways in which the misuse of civilian AI can pose risks to international peace and security, and introducing the EU-funded Promoting Responsible Innovation in Artificial Intelligence for Peace and Security programme.



Episode 2. Emily Bender: The risks of large language models and generative AI

In this episode, Charles Ovink (UNODA) and Vincent Boulanin (SIPRI) are joined by Professor Emily Bender, an American linguist who co-authored one of the most cited articles on the risks posed by large language models (LLMs). They unpack the relationship between LLMs and the current hype around generative AI. They also talk about what LLMs can and cannot do and how she sees the technology evolving. More importantly, they discuss the risks she associates with our increasing reliance on LLM-based AI tools and what the AI community and regulators could do about those risks.


Episode 3. Eleonore Pauwels: The dark side of AI for medicine, pharmacology and bioengineering

In this episode, our guest is Eleonore Pauwels, a Senior Fellow with the Global Center on Cooperative Security, who works on the security and governance implications of the convergence of AI with other dual-use technologies, notably the biotechnologies used in medicine, pharmacology and bioengineering, and in our scientific understanding of biological processes.

We wanted to explore with Eleonore why the convergence of AI and biotechnology has recently been generating so much excitement, as well as concern, in the scientific and policy communities.

We talk first about the promise AI holds in areas such as medicine, pharmacology and bioengineering. Then we dive into the question of whether, and if so how, these advances could be misused for bioterrorism, bioweapons and biocrime. Finally, we talk about what is, or should be, done about these risks.


Resources

Brockmann, K., Bauer, S. and Boulanin, V., Bio Plus X: Arms Control and the Convergence of Biology and Emerging Technologies (SIPRI: 2019)

Carter, S. et al., The Convergence of Artificial Intelligence and the Life Sciences (NTI: 2023)

Sandbrink, J., Artificial intelligence and biological misuse: Differentiating risks of language models and biological design tools (arXiv: 2023)

Urbina, F., Lentzos, F., Invernizzi, C. et al., Dual use of artificial intelligence-powered drug discovery, Nature Machine Intelligence, vol. 4 (2022), pp. 189–191

Mouton, C., Lucas, C. and Guest, E., The Operational Risks of AI in Large-Scale Biological Attacks (RAND: 2023)

Soice, E. et al., Can large language models democratize access to dual-use biotechnology? (arXiv: 2023)


Episode 4. Boston Dynamics: How to deal with possible misuse of general-purpose robots?


In this episode, our guest is Brendan Schulman, Vice President of Policy and Government Relations at Boston Dynamics, one of the world’s most famous robotics companies. Boston Dynamics is notably known for its “agile legged robots” like Spot and Atlas, which can autonomously navigate uneven terrain, climb stairs or open doors. The company is also known for having taken a position in the debate on the weaponization of general-purpose robots: in 2022, it signed, together with five other robotics companies, a pledge “not to weaponize their general purpose robots” or the software that makes them function.

We wanted to explore with Brendan how Boston Dynamics approaches the risk that its robots could be misused by malicious actors for harmful purposes, and possibly be turned into autonomous weapon systems. We discuss what companies like Boston Dynamics can do at their level to prevent or mitigate that risk, whether there are lessons that other organizations could learn from what Boston Dynamics has done or is doing, and whether Boston Dynamics sees a need for governmental regulation and, if so, what type of measures companies would deem useful at the national and international levels.

Resources

Pledge: https://bostondynamics.com/news/general-purpose-robots-should-not-be-weaponized


Funding Statement

This programme was made possible by the generous support of the European Union.