Artificial Intelligence (AI) is converging with an extraordinary array of other technologies, from biotech and genomics, to neurotechnology, robotics, cybertechnology and manufacturing systems. Increasingly, these technologies are decentralized, beyond State control, and available to a wide range of actors around the world. While these trends may unlock enormous potential for humankind, the convergence of AI and new technologies also poses unprecedented risks to global security. In particular, they create challenges for the multilateral system and the United Nations, which operate at the inter-State level.
With its potential to enable real-time, cost-effective and efficient responses to a variety of human development and security-related issues, AI has a significant role to play in managing complex risks to vulnerable populations, averting outbreaks of crisis, and building resilience against shocks within our societies. While AI, in convergence with other technologies, cuts across all pillars of the work of the United Nations, its role will be crucial in the prevention agenda.
Yet as a dual-use technology, AI is as likely to disempower populations as it is to empower them. The ability of AI to intrude upon—and potentially control—private human behaviour has direct implications for the prevention and human rights agenda of the United Nations. New forms of social and biological control could, in fact, require a reimagining of the framework currently in place to monitor and implement the Universal Declaration of Human Rights. Evolving security risks will certainly require the multilateral system to anticipate and better understand the rapidly emerging field of AI convergence.
Therefore, within the prevention agenda and its intersection with new technologies, it is necessary to use inclusive foresight and normative guidance, based on a renewed interest in the Charter of the United Nations, to shape the creation, deployment and governance of AI systems.
The Promises of AI Convergence
We have entered an era of technological convergence that seeks to merge our physical, digital and biological lives. Computer scientists are developing deep learning algorithms that can recognize patterns within massive amounts of data with superhuman efficiency and, increasingly, without supervision. At the same time, geneticists and neuroscientists are deciphering data related to our genomes and brain functioning, learning about human health, well-being and cognition.
The result? Functional capabilities for averting crises that were previously unimaginable are now real, upgrading efforts from precision medicine and food security to conflict prevention. For example, deep learning algorithms are diagnosing retinopathy1 in patients living in rural India, where there is a shortage of ophthalmologists. The same algorithms can identify biomarkers of malignancy among large swaths of genomics data from human populations to design blood tests for various cancers.2 Companies like Zipline3 are using AI technology in autonomous drones to deliver critical medical supplies, such as vaccines, to rural hospitals in Africa.
AI could also become a powerful tool for the international development efforts of the United Nations. In collaboration with the Organization and other global partners, the World Bank is building a Famine Action Mechanism,4 which relies on deep learning systems developed by Microsoft, Google and Amazon, to detect when food crises are about to turn into famines. The same tool allows agile financing to be connected directly with sources of food insecurity. UN Global Pulse is spearheading an initiative5 that uses machine learning to monitor the effects of extremist violence on hateful speech online.
The combined optimization of biometrics, genomics, and behavioural data is giving rise to “affective computing”, algorithms that can successfully analyse us, nudge us and communicate with us. This form of emotional analysis will improve human-machine interactions in applications that could empower underserved populations, from precision medicine6 to targeted education.7
Enhanced by affective computing, AI systems will also watch, record and evaluate us: we will pass from the predictive reach of one algorithm to the next. To enter this world of AI convergence is to step into a web of pervasive and precise surveillance.
Precision Surveillance and Social Control
Equipped with facial recognition, algorithms will build an ever more refined cognitive understanding, not only of our biometric features, but also of our human emotions and behaviours. This new form of intrusive computing in our personal lives has significant implications for self-determination and privacy, particularly children’s privacy.
For example, the My Friend Cayla smart doll8 sends voice and emotional data of the children who play with it to the cloud, which led to a United States Federal Trade Commission complaint9 and a ban on the doll in Germany. In the United States, emotional analysis is already being used in the courtroom to detect remorse10 in deposition videos. It could soon be part of job interviews11 to assess candidates’ responses and their fitness for a job.
Facebook is perfecting an AI friend that recognizes suicidal thoughts12 through conversation and emotional analysis. Start-ups Neuralink and Kernel are working on brain-computer interfaces13 that will read people’s mental processes and influence the brain mechanisms powering their decisions. One company already relies on wireless sensors to analyse workers’ brain waves14 and monitor their emotional health.
The tech giant Alibaba is deploying millions of cameras equipped with facial recognition across a number of cities.15 Government-sponsored databases of faces, genomes, financial and personal information are being created to connect to credit ratings, jobs and the loyalty rankings of citizens, as well as classifications of DNA samples to find related family members. Recently, 5,000 school students16 had their photos and saliva samples collected, without informed consent, to feed a database of faces and genomes. Furthermore, one facial recognition software company, Cloud Walk,17 is developing AI technology that tracks individuals’ movements and behaviour to assess their chances of committing a crime.
The ability of AI to nudge and control private human behaviour, and thereby affect self-determination, could increasingly limit the capacity of the United Nations to monitor and protect against human rights violations. That capacity is further limited when the private sector almost exclusively owns the required data and is better equipped with the know-how to understand and design algorithms.
Degradation of Truth and Trust
The powerful dual-use implications of AI are also becoming difficult to anticipate, contain and mitigate. Relying on behavioural and sentiment analysis, AI enables targeted propaganda to spread more efficiently and at a wider scale within the social media ecosystem.
Take Deepfake18 as an example. Sophisticated AI programs can now manipulate sounds, images and videos, creating impersonations that are often impossible to distinguish from the original. Deep learning algorithms can, with surprising accuracy, read human lips, synthesize speech, and to some extent simulate facial expressions.
Once released outside the lab, such simulations could easily be misused with worrisome implications (indeed, this is already happening at a low level). On the eve of an election, Deepfake videos could falsely portray public officials as being involved in money laundering, or public panic could be sown by videos warning of non-existent epidemics or cyberattacks. These forged incidents could potentially lead to international escalation.
The capacity of a range of actors to influence public opinion with misleading simulations could have powerful long-term implications for the role of the United Nations in maintaining peace and security. By eroding the sense of trust and truth between citizens and the State—and indeed among States—truly fake news could be deeply corrosive to our global intelligence and governance system.
Cyber-Colonization and New Security Risks
The ability of AI-driven technologies to influence large populations could lead to the very real prospect of a cyber race. Powerful nations and large technology platforms could enter into open competition for our collective data as fuel to generate economic, medical and security supremacy. Forms of “cyber-colonization” are increasingly likely, as powerful States are able to harness AI and biotech to understand and potentially control other countries’ populations and ecosystems.
Such forms of bio-intelligence confer a strategic advantage in a nation’s security arsenal and will drive developments at the forefront of medical countermeasures and military research.
Yet, the moral responsibility that comes with using AI systems in our evolving security environment does not belong only to States anymore. It is increasingly distributed among developers, users and hackers. This form of “atomized responsibility” leads to a sweeping set of interrelated challenges for the multilateral system.
What Role for the Multilateral System?
Politically, legally and ethically, our societies are not properly prepared for the deployment of AI and converging technologies. The United Nations was established many decades before this technological revolution. Is the Organization currently well placed to develop the kind of responsible governance that will channel AI’s potential away from existing and emerging risks and towards our collective safety, security and well-being?
The resurgence of nationalist agendas across the world points to a dwindling capacity of the multilateral system to play a meaningful role in the global governance of AI. Major corporations may see little value in bringing multilateral approaches to bear on what they consider lucrative and proprietary technologies. Powerful Member States may prefer to crystallize their own competitive advantages and rules when it comes to cybertechnologies. They may resist United Nations involvement in the global governance of AI, particularly as it relates to military applications.
But there are some innovative ways in which the United Nations can help build the kind of collaborative, transparent networks that may begin to treat our “trust-deficit disorder”. First, the United Nations should strengthen its engagement with the large technology platforms driving AI innovation and offer a forum for truly meaningful cooperation between them, along with State actors and civil society. For AI cooperation, the United Nations will need to be a bridge between the interests of nations that are tech-leaders and those that are tech-takers.
In this brokering function, an array of entities within the United Nations system could play a role that is sorely needed at the international level: 1) providing technological foresight that is inclusive of diverse countries’ challenges; 2) negotiating adequate normative frameworks; and 3) developing standards for monitoring, coordination and oversight.
Inclusive foresight and normative monitoring and coordination will be particularly crucial in the promotion and protection of human rights. Given the powerful, sometimes corrosive, implications that AI may have for self-determination, privacy and other individual freedoms, United Nations entities will need to collaborate to monitor and guarantee coherence across multiple normative efforts spurred by national, regional and private actors.
Finally, achieving the United Nations prevention agenda will require providing sharp and inclusive horizon scanning to anticipate the nature and scope of emerging security risks that will threaten not only nations, but also individuals and vulnerable populations. Such foresight will become increasingly critical as AI converges with other technologies that are beyond State control and more accessible to a wider range of actors around the world.
Spurred on by a mandate given to the United Nations University (UNU) in the Secretary-General’s Strategy on New Technologies, the Centre for Policy Research at UNU has created an “AI and Global Governance” platform as an inclusive space for researchers and policy actors, as well as corporate and thought leaders, to anticipate and explore the global policy challenges raised by AI. From global submissions by leaders in the field, the platform aims to foster unique cross-disciplinary insights to inform existing debates from the lens of multilateralism, coupled with lessons learned from work done on the ground. These insights will support United Nations Member States, multilateral funds, agencies, programmes and other stakeholders as they consider both their own and their collective roles in shaping the governance of AI.
Perhaps the most important challenge for the United Nations in this context is one of relevance, of re-establishing a sense of trust in the multilateral system. If the above analysis tells us anything, it is that AI-driven technologies are an issue for every individual and every State. Without collective, collaborative forms of governance, there is a real risk that they could undermine global stability.
1. Daniel Shu Wei Ting and others, "Development and validation of a deep learning system for diabetic retinopathy and related eye diseases using retinal images from multiethnic populations with diabetes", The Journal of the American Medical Association, vol. 318, No. 22 (12 December 2017), pp. 2211-2223. Available at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5820739/.
2. Claire Asher, "Artificial Intelligence to boost liquid biopsies", The Scientist, 26 June 2018. Available at https://www.the-scientist.com/news-opinion/artificial-intelligence-to-boost-liquid-biopsies-64380.
3. For more information, see Zipline International website at http://www.flyzipline.com/.
4. World Bank, Famine Action Mechanism (FAM). Available at http://www.worldbank.org/en/programs/famine-early-action-mechanism (accessed on 4 December 2018).
5. Alexandra Olteanu and others, "The effect of extremist violence on hateful speech online", Association for the Advancement of Artificial Intelligence, 16 April 2018. Available at https://www.unglobalpulse.org/sites/default/files/The%20effects%20of%20extremist%20violence%20on%20hateful%20speech%20online.pdf.
6. Massachusetts Institute of Technology Media Laboratory, "Predicting students' wellbeing from physiology, phone, mobility, and behavioral data". Available at https://www.media.mit.edu/projects/predicting-students-wellbeing-from-physiology-phone-mobility-and-behavioral-data/overview/ (accessed on 4 December 2018).
7. Massachusetts Institute of Technology Media Laboratory, "Affective learning companion: Exploring the role of emotion in propelling the SMET learning process". Available at https://affect.media.mit.edu/projectpages/lc/nsf1.html (accessed on 4 December 2018).
8. For more information about the smart doll Cayla, see the toy’s website at https://www.myfriendcayla.com/.
9. Federal Trade Commission, "Complaint and request for investigation, injunction, and other relief", 6 December 2016. Available at https://epic.org/privacy/kids/EPIC-IPR-FTC-Genesis-Complaint.pdf.
10. To learn more about legal video deposition management platform MediaRebel, see Affectiva’s website at https://www.affectiva.com/success-story/mediarebel/.
11. Minda Zetlin, "AI is now analyzing candidates' facial expressions during video job interviews", Inc., 28 February 2018. Available at https://www.inc.com/minda-zetlin/ai-is-now-analyzing-candidates-facial-expressions-during-video-job-interviews.html.
12. Jordan Novet, "Facebook is using A.I. to help predict when users may be suicidal", CNBC, 21 February 2018. Available at https://www.cnbc.com/2018/02/21/how-facebook-uses-ai-for-suicide-prevention.html.
13. Rafael Yuste and others, "Four ethical priorities for neurotechnologies and AI", Nature, vol. 551 No.7679 (8 November 2017). Available at https://www.nature.com/news/four-ethical-priorities-for-neurotechnologies-and-ai-1.22960.
14. Stephen Chen, "‘Forget the Facebook leak’: China is mining data directly from workers’ brains on an industrial scale", South China Morning Post, 29 April 2018. Available at https://www.scmp.com/news/china/society/article/2143899/forget-facebook-leak-china-mining-data-directly-workers-brains.
15. Hua Xiansheng, "City brain and comprehensive urban cognition", Alibaba Cloud, blog, 30 September 2017. Available at https://www.alibabacloud.com/blog/interview-with-idst-deputy-managing-director-hua-xiansheng-city-brain--comprehensive-urban-cognition_221544.
16. Wenxin Fan, Natasha Khan and Liza Lin, "China snares innocent and guilty alike to build world’s biggest DNA database", The Wall Street Journal, 26 December 2017. Available at https://www.wsj.com/articles/china-snares-innocent-and-guilty-alike-to-build-worlds-biggest-dna-database-1514310353.
17. For more information about CloudWalk Technology, see the company’s website at http://www.cloudwalk.cn/.
18. Hilke Schellman, "Deepfake videos are getting real and that’s a problem", The Wall Street Journal, 15 October 2018. Available at https://www.wsj.com/articles/deepfake-videos-are-ruining-lives-is-democracy-next-1539595787.