Podcasts

Episode 1: Not so peaceful technology? What risks may civilian AI pose to peace and security?

In this first episode, project leads Charles Ovink (UNODA) and Vincent Boulanin (SIPRI) kick off the series by looking at some key concepts, exploring the ways misuse of civilian AI can pose risks to international peace and security, and introducing the EU-funded Promoting Responsible Innovation in Artificial Intelligence for Peace and Security programme.

Responsible A.I. for peace podcast.

Episode 2: Emily Bender on the risks of large language models and generative AI

In this episode, Charles Ovink (UNODA) and Vincent Boulanin (SIPRI) are joined by Professor Emily Bender, an American linguist who co-authored one of the most cited articles on the risks posed by large language models. They unpack the relationship between large language models (LLMs) and the current hype around generative AI. They also talk about what LLMs can and cannot do, and how she sees the technology evolving. More importantly, they discuss the risks she associates with our increasing reliance on LLM-based AI tools and what the AI community – and regulators – could do about those risks.

Episode 3: Eleonore Pauwels on the dark side of AI for medicine, pharmacology and bioengineering

In this episode, we are joined by Eleonore Pauwels, a Senior Fellow with the Global Center on Cooperative Security, who works on the security and governance implications of the convergence of AI with other dual-use technologies, notably the biotechnologies used in medicine, pharmacology and bioengineering, and in our scientific understanding of biological processes.

We wanted to explore with Eleonore why the convergence of AI and biotechnology has recently been generating so much excitement, as well as concern, in the scientific and policy communities.

We talk first about the promise AI holds in different areas such as medicine, pharmacology and bioengineering. Then we dive into the question of whether – and if so, how – these advances could be misused for bioterrorism, bioweapons and bio-crime. Finally, we talk about what is, or should be, done about these risks.

Resources

Brockman, K., Bauer, S. and Boulanin, V., Bio Plus X: Arms Control and the Convergence of Biology and Emerging Technologies (SIPRI: 2019)

Carter et al., The Convergence of Artificial Intelligence and the Life Sciences (NTI: 2023)

Sandbrink, J., 'Artificial intelligence and biological misuse: Differentiating risks of language models and biological design tools' (arXiv: 2023)

Urbina, F., Lentzos, F., Invernizzi, C. et al., 'Dual use of artificial intelligence-powered drug discovery', Nature Machine Intelligence 4, 189–191 (2022)

Mouton, C., Lucas, C. and Guest, E., The Operational Risks of AI in Large-Scale Biological Attacks (RAND: 2023)

Soice, E. et al., 'Can large language models democratize access to dual-use biotechnology?' (arXiv: 2023)

Funding Statement

This programme was made possible by the generous support of the European Union.