CTC 20th Anniversary | An Interview with Jennifer Bramlette on Information and Communications Technologies


2021 marks the 20th anniversary of the adoption of Security Council resolution 1373 (2001) and the establishment of the Counter-Terrorism Committee. As part of the year of commemoration, CTED experts reflect on their work.

 

Ms. Jennifer Bramlette is a Legal Officer and the CTED Coordinator for Information and Communications Technologies (ICT). Ms. Bramlette has worked at CTED since 2007. This interview has been edited for brevity.

 

What motivated you to come to the United Nations and to CTED?

Ms. Bramlette: I joined CTED in 2007. At the time, I was living in the United Kingdom and working for the U.S. Government. I had finished a Master's Degree in International Peace and Security and had started a PhD in International Law, focused on counter-terrorism. And I'd been working in the counter-terrorism sphere since 1997. So, as a counter-terrorism analyst in intelligence and then in intelligence management, I was very well versed in what terrorism was and how to counter it from the military and national viewpoints. At the time, there wasn't that much counter-terrorism legislation on the books. I was very interested in moving into the policy and legal aspects of counter-terrorism. And so I applied for a post online and was fortunate enough to be selected.

 

What does work on ICT involve in the context of countering terrorism and violent extremism?

Ms. Bramlette: When we think of information and communications technologies, we think about video games; we think cell phones, the Internet, and social media. And, just as we all use these tools every single day (whether to communicate with our friends, arrange dinner dates, or find information about a topic we're interested in), they're the very same tools that a person with terrorist intent, or a terrorist organization, is using. Any organization can use the Internet as a tool for spreading information, for organizing, for messaging, and for giving instructions. So if, for example, a terrorist organization is putting out propaganda, looking to inform people about how to travel to a conflict zone, or passing coded messages about how to build a bomb, that's all out there and publicly available. And so counter-terrorism also has to work in that space and try to limit terrorist recruitment, limit the spreading of terrorist propaganda, and limit the spread of very specific instructions about how to illegally cross borders, send money to a terrorist group, or build devices that are illegal and harmful. That all needs to be taken into consideration.

 

Why did the Security Council include ICT in some of its resolutions? And how have ICT requirements evolved over time?

Ms. Bramlette: Technology has become part of our everyday lives. We all walk around with cell phones and we all connect on social media to stay in touch with friends and family. And we’re using a technological platform right now to hold a virtual meeting. Because technology is so prevalent, it has to be understood, and the cyber domain must be included in counter-terrorism. And this isn't new. In fact, Security Council resolution 1373 (2001) referred to ICT and the abuse of communications technologies. So it's been on the table for 20 years. And there’s been an organic development over time, as technology has been infused into popular culture. In its resolution 1373 (2001), the Council was talking about the need to exchange operational information on the use of communications technologies. So it was more focused on Member States and how they interacted with each other. This of course was in the wake of the “9/11” attacks carried out in the United States. And it was more about how Member States would exchange that operational information so that they could not only figure out what happened but also prevent such attacks from ever happening again. And to bring those perpetrators (the financiers, the operators, the logistics supporters, and the recruiters) to justice, Member States needed to work together. But there it was: the use of ICT, in writing, in a binding Chapter VII Council resolution. And Security Council resolution 1624 (2005) specifically noted that Member States needed to prevent terrorists from exploiting sophisticated technology, communications, and resources to incite support for criminal acts. So we were looking for the first time at how terrorist actors were actually using technology and communications to disseminate messages and build support for what they were trying to do, as well as at how Member States could work to prevent that. And then, in 2010, Council resolution 1963 looked at the use of new ICT and, in particular, the Internet.
This was the first time that the Internet had appeared in a Security Council resolution. And the resolution mentions it specifically, in terms of its use for the purpose of recruitment and incitement, as well as the financing, planning, and preparation of terrorist acts. 

Those three resolutions were actually the building blocks for what we have now. The focus on operational exchange of information mentioned in resolution 1373 (2001) was reflected in subsequent resolutions (in 2014, 2016, 2017, and 2019). From resolution 1624 (2005) onwards, you see the growth and expansion into resolutions that really deal with counter-narratives and how to prevent abuse of the Internet for recruitment and radicalization purposes. And resolution 1963 (2010) (which laid out how the Internet could be abused) was the foundation for several subsequent resolutions. So you can really see, when you look at all these resolutions, how they gradually bring together a number of components, including law enforcement and border control; the use of new technologies such as facial recognition, smart gates, e-readable passports, and fingerprint technologies; and how biometric data is used and exchanged. That's a direct result of work begun 20 years ago. And now an immense body of work is being carried out (not only by the Security Council and its subsidiary bodies, but also by other United Nations entities, Member States, and civil society organizations) on countering terrorist narratives and radicalization and incitement to violence through online terrorist propaganda.

I’ve already talked about the three core resolutions and how they served as a foundation for expansion. And an important thing to look at in this context is the evolution in the understanding of what ICT are. We went from the use of communications technologies, to exploiting sophisticated technology, to specifically citing the Internet, in 2010. Then, in 2014, resolution 2178 specifically cited social media and made reference to resources, including audio and video. So, as the understanding of ICT grew, one of the other things that started to come in was the understanding that there was an evolving nexus between terrorism and ICT; that the two were not separate. And we also understood that, as ICT became more complex and usable and integrated into our lives, it was also becoming a viable platform for terrorist messaging and for terrorist operations and communications. 

One of the other things mentioned in resolution 2178 (2014) was the use of ICT to facilitate the travel of foreign terrorist fighters. And that was one of the things that really pushed ICT into the forefront, both politically and in terms of the operational capacity of Member States. At the time, a lot of logistical information was being put out online. That included fundraising so that terrorists could travel, the sharing of information on the best travel routes and how to circumvent law enforcement, instructions on how to engage in broken travel patterns, and tips for FTFs, disseminated through messaging and social media services. All this was coming out through ICT. And the Security Council made a very specific note of that and responded proactively to provide tools to encourage Member States to continue to engage on this front. Several Council resolutions, as well as the 2015 Madrid Guiding Principles and their 2018 Addendum, refer to the transmission of terrorist content and the need for mutual legal assistance and the gathering of digital data and evidence. A number of resolutions focus on digital evidence (or “e-evidence”). And the importance of this (again, in follow-up to the FTF phenomenon) is that, since much of the material to encourage recruitment, encourage travel, and facilitate travel was disseminated online, the evidence required to bring terrorists to justice was now electronic and digital evidence. And this was a whole new frontier in criminal justice. How do you get the data out of the Internet (and, already at that point, out of the cloud)? And what laws are in place on the sharing of data? And what about privacy? And what about Internet Service Providers who don't have the policies or capacity to share that data? And what rights do law enforcement officers have to get that data? And so there's this whole new dimension of the challenge, which must be resolved through legal means and with respect for human rights and fundamental freedoms.
And of course it must all be done in close association with the tech industry. And this of course represented a whole new challenge. Member States are used to working with one another and through international and regional organizations but they're not always terribly well equipped to work with the private sector (and especially technological corporations) on counter-terrorism. So there was this whole other new area: how to work together and how to build public-private partnerships so that, as technology continues to evolve and expand, it can be done in such a way that there is legal access to data, cooperation with law enforcement, and respect for human rights and fundamental freedoms. It’s a balancing act that involves a complicated web of relationship-building, capacity-building, trust-building, and sustained dialogue.

And ICT is also mentioned in the Council’s most recent resolutions on terrorism, such as 2462 (2019), which deals mostly with countering the financing of terrorism and specifically states that new payment methods and fundraising methods such as crowdfunding, cryptocurrencies, and virtual currencies can be abused for terrorist-financing purposes. So, ICT is layered into pretty much everything that CTED is working on. And when we start looking at emerging trends and new developments, we have to keep an eye on where technology is taking us because, where there is a viable platform; where there is viable technology, it can be misused. So, the Security Council is staying on top of all that and also looking forward.

 


 

What progress has been achieved? Do you feel like the work CTED has done on ICT and counter-terrorism has made an impact?

Ms. Bramlette: Yes, I do. It really started back in 2010 to 2013, when CTED became the first UN organization to devote time, attention, and resources to this issue. CTED established relationships, first with Microsoft, and then with Telefonica, Twitter, Google, and Facebook. My predecessor made a series of visits to Silicon Valley and Redmond to meet with senior leaders in those companies. And Microsoft was the first to agree to adopt the UN Sanctions List as a benchmark for content moderation. This was important because, if we are going to decide that something is terrorist content and therefore can't be on the Internet, what's the baseline for that decision? And so that set a standard. And the other big players also agreed to adopt it as the baseline. Taking on the Security Council's Sanctions List as that baseline was a massive development because it not only set the stage for cooperation between the United Nations and the tech industry, but basically established a common set of global rules and guidelines for companies on counter-terrorism.

And one of the other things that happened at the time was that CTED began working with a foundation called ICT4Peace. In cooperation with ICT4Peace and the big industry players, CTED organized a special meeting of the CTC on ICT and respect for human rights. And from that followed a number of truly inspired cooperative initiatives, such as a group called Tech Against Terrorism, which is a not-for-profit company that works with smaller platforms (which of course don't have the same resources that the big players have) to help them understand counter-terrorism in the cybersphere and help them with regulatory policies. Tech Against Terrorism is fully active today as a standalone organization that’s making a huge impact on small content service providers and working directly with Member States, training, teaching, and just really bringing clarity and understanding to and about the tech industry. Another outcome of those original relationships was the creation of the Global Internet Forum to Counter Terrorism. And the work that the GIFCT is doing now was launched by the tech industry itself. So, a partnership between the “Big Four”, other service providers, and a number of Member States has evolved, and CTED is on the advisory board. The work being done by the GIFCT is essential for education, for training, for policy, for legislation, for ensuring implementation of good standards. It’s immensely impactful. 

The final area I'll really draw attention to is digital evidence, which I mentioned earlier. Through cooperation between CTED, UNODC, and the International Association of Prosecutors over a number of years, we've now put together two editions of a practical guide on the handling of electronic evidence across borders. We can say that the second edition, which just came out this summer, is something of a bestseller on UNODC's website. It's a practical guide for practitioners on working with content service providers to request digital evidence in a legal way, using a standardized format and process. That greatly helps small content service providers, again, who don't have the big legal, policy, and data-retrieval departments. So by putting it all into a standardized format, it's really facilitated the ability of law enforcement to make requests for digital data and made it easy for content service providers to provide that data. And we've already seen a big uptake in legal requests for digital data and in the voluntary provision of that data based on requests, without having to go to court or, you know, take five years to get something. And this has made a huge impact on the ability to bring terrorists to justice.

 

What challenges remain?

Ms. Bramlette: There are still a number of challenges. First, I would say that there’s a difference in pace. The tech industry is on a massive trajectory that’s fuelled by public interest, fuelled by science, and fuelled by profit. They’re private companies and they’re profit driven. And then you have Member States, which may be more conservative towards advances in technology and of course have their respective national concerns. We also have the opening of new, largely unregulated frontiers, which spark differences of opinion as to how they’re developed and used. For example, artificial intelligence itself is neutral, but how can we ensure that the people who are programming the AI are not subconsciously biased and that the programming behind it is comprehensive, thorough, holistic, and respectful of human rights and fundamental freedoms? We're also looking at the impact of applications such as the use of AI in content moderation. AI is being used to identify terrorist propaganda, misinformation, and disinformation that can be marked for moderation or de-platforming. But AI algorithms have also accidentally promoted or amplified extremist content. So it can be used both ways. And there's not yet a great deal of regulation in this sphere.

Another issue is privacy and private data. Here, there are so many blurred lines (especially when, for example, people are willing to embrace new technology). Even to open our cell phones, we can use either our thumbprints or facial recognition. So people are willingly giving out their data. This blurs the lines with respect to the definition of private data. And so there are new frontiers regarding agreement, consensus, regulation, legislation, transparency, oversight of legislation and regulation, and the ways in which tech is being used. For example, surveillance has proved extremely useful in some Member States. So again, it's always about weighing and balancing. And the litmus test really is: where do the uses of new technologies fit in with, enhance, or infringe on the inalienable and non-negotiable standards set out in international human rights law and the fundamental freedoms? There is a fundamental right to privacy, a fundamental right to freedom of expression, and a fundamental right to freedom of religion. And these fundamental rights cannot be questioned. So how can we find that balance, in content moderation, between preventing the abuse of ICT for terrorist and extremist purposes and ensuring that the cyber sphere remains a safe space? And new communication spaces (for example, Facebook’s “metaverse”, which is now being beta tested, and Twitter's new live audio chat feature “Spaces”) continue to be created. They need to be addressed by the private sector, in cooperation with Member States, with CTED and other bodies that are looking to assist in these areas, and with the Security Council, which is looking to see where it too may need to lean in and provide guidance to Member States on the issue of State and corporate responsibilities.