Davos, Switzerland

17 January 2024

Secretary-General's joint press encounter with members of the UN High-level Advisory Body on Artificial Intelligence

António Guterres, Secretary-General

The following is a near-verbatim transcript.

AMANDEEP SINGH GILL [Secretary-General's Envoy on Technology]: We have with us James Manyika, the Co-chair of the Advisory Body, Hiroaki Kitano [Chief Technology Officer of Sony Group Corporation], Ian Bremmer, [President and Founder of Eurasia Group], His Excellency Omar Sultan Al Olama, the Artificial Intelligence Minister [of the United Arab Emirates] and we might be joined pretty shortly by Marietje Schaake [International Policy Director at Stanford University Cyber Policy Center], who is a member as well and who is present here in Davos.  So, I would like to request the Secretary-General to say a few words to get us going.  

SECRETARY-GENERAL: I want to express my enormous gratitude to the members of the panel, first of all for having accepted to be part of this exercise.  I must tell you that I am very proud that we were able to assemble a panel of this extraordinary level.  We are talking about 39 distinguished men and women, or I would say women and men, because there is one more woman than men.

Fifty per cent from the north and 50 per cent from the south, representing governments, the private sector, academia and civil society in a truly inclusive manner, and with a level of expertise that is absolutely outstanding.  So I am extremely happy that they have accepted to cooperate with us and to assume the leadership of this project.

Now, the UN does this with a lot of humility.  Artificial intelligence is undoubtedly the most important potential contribution to global development.  And, at the same time, it is the most complex issue when we talk about risks, when we talk about the future.  So this is something that cannot be dealt with as business as usual.

I mean, the present methodologies and structures of government are, to a certain extent, ill-equipped, ill-prepared, to deal with this new reality.  And so, it is necessary to think about it with humility.  There are a number of initiatives: the G7 has an initiative, there was a British initiative, and there are initiatives from other areas.

I have one main concern and one area where I believe the UN can make a contribution.  I mean, this is something that will affect the whole of humankind.  And so, I think it's important to have a platform where this can be discussed in a way that is truly universal and inclusive.

This cannot be, I would say, an exercise for the club of rich countries, or worse, an exercise for two clubs fighting each other.  No.  In the context of the geographic divides that we are witnessing today, and using an Ian Bremmer expression, the G2 that extends to a G-Zero.  No, that's not what we want.

What’s really important is to have a universal and inclusive approach, and we are extremely concerned by the fact that developing countries until now are particularly ill-prepared in this domain.  And probably there is not sufficient interest from other parts of the world in making sure that this exercise becomes a fundamental tool that allows developing countries to catch up, instead of another instrument that increases the divide and increases inequality in the world.

Now, this group managed in two months to prepare this interim report.  I think this is a world record.  And this interim report will now be the source of a very meaningful discussion with the different governments, with the private sector, with civil society and with the different components of academia, and there will be a next report, which we hope will be a fundamental contribution to our Summit of the Future that will take place during the High-level Week in September, and in which we will discuss the agenda for peace, the global digital compact, some proposals for reform of the international financial system and other aspects related to global governance.

Now, the UN looks into this with humility.  I repeat the word humility.  We are not asking to be the leaders of a global governance system for AI.  We believe, and here I might be corrected by the distinguished members of the panel, that any system of governance needs to be networked and inclusive, with different levels, at the national level and the regional level, but with some aspects that might require a global perspective.  And in relation to that global perspective, there was not in this report any concrete, specific proposal, and I think this was the right thing to do, because we are opening a discussion.  But there were a number of functions that were put on the table very clearly.

I think one function has to do with some form of global, permanent horizon scanning.  To a certain extent, that's what the group has done, but this is something that needs to be sustained over time.  I think there is a major question of capacity-building for developing countries, in order for developing countries to be able to take advantage of the enormous opportunities generated by AI.

But a huge effort of capacity-building is necessary.  And on the other hand, in my participation in one of the meetings, I found a lot of interest among the members in the questions of interoperability and the idea that we shouldn't have a fragmentation of standards, and again this is an area where I believe an important contribution can be made.  Obviously, we saw a lot of discussion in the media and in other meetings that what we need is something similar to the IPCC, or something similar to the International Atomic Energy Agency, or to the International Civil Aviation Organization (ICAO), or whatever.

I don't think there is any formula of any other entity that corresponds exactly to what might be necessary in this context, but this is exactly what will be in the next stage of discussion: what is it that makes sense in order for these functions, and other relevant ones, to be seriously taken into account.  So, in this context, we thought it would be very interesting if I would shut up, first of all because I tend to talk too much, and to have a brief presentation to the distinguished members of the media by our Co-Chair and the other members of the panel, and then an open dialogue.  This is not Chatham House rules.  This is open source, so you can use it [laughter].  It's open source, so you can obviously use the results of this, but in a very informal way; it's not a press conference, it's a very informal discussion.

JAMES MANYIKA, Co-chair of the AI Advisory Body: Secretary-General, thank you, thank you very much. It has been, and it is, my distinct honour and privilege, quite honestly, to be involved in this work and to co-chair this extraordinary group of members of the Body.

Unfortunately, my Co-Chair Carme Artigas, the former Minister from Spain, was unable to come to Davos, otherwise we'd be doing this together, but she and I have been working very closely together. I'll make a few points of observation, first on the process. First of all, we had an intense deadline.

Thank you to the Secretary-General. We were aiming to have an interim report by the end of the year, so what you have in front of you is the result of a lot of meetings.  I think by one count we had 400 meetings, in the space of just over two months, amongst ourselves.

I learned a lot, to your point about humility, Secretary-General, because the body that's been assembled has extraordinary expertise. All of these are distinguished colleagues who spend their time in academia, civil society, government or the private sector, and who bring a lot of depth and expertise on the topic. Many of them have been working on this for a very long time.

I mean, that went a long way in allowing us to make the kind of progress that we did in such a short time. I should also say that we were enormous beneficiaries of the many discussions that have already happened, whether it's the regional dialogues in the EU and in Europe, in other forums, or what’s happening in the US, particularly at the White House. So there was a lot to build on and evaluate and debate, and I'd be understating it if I told you that we had vigorous debates. A group that diverse, from 33 countries, coming from different sectors; we had a very, very vigorous debate. But before I go to the actual highlights of the report, there are a few things worth pointing out on which the Body, as a whole, as a group, came to clear consensus.

And I'd like to state these affirmatively, because there are a few affirmative things that came from the Body before we get to the recommendations, or the draft recommendations.

The first was that I think AI does represent a tremendous opportunity to improve the world in many ways, for individuals, for the economy, for science, for society, and could contribute to the SDGs. So there's a sense that this is an enormous opportunity for humanity, first.

Second, I think there was a clear view from the Body that there are some enormous risks and challenges that we're going to need to grapple with.

And I think we talked about them in many, many different ways. In fact, one of the things we avoided as a Body was trying to list all the risks, although we did think about them in a few different categories. One of the phrases that stuck with me at least, and my colleagues of the Body can also comment, is that, just as much as there are clearly risks coming from the development of the technology, some of the performance issues, bias and all those things, which we talked about, there are also the risks of misapplication and misuse. But one phrase that was also added quite strongly is the idea of missed uses. So this idea of misapplication, misuses and missed uses was very, very clear.  So we spent a lot of time on those risks.

The third thing I'd like to highlight that the Body spent time on is recognizing all the governance gaps we have, and those came in a couple of flavours.  On the one hand, while of course a lot has happened, with companies in the private sector making voluntary commitments, and some regulatory initiatives in different countries and different regional bodies and all of that, there was a sense amongst the Body that that was not enough. There were still some important gaps. Another important gap that the Body quickly came to a view on was that this had not been inclusive in the ways that it should be, especially with respect to the global South.

So many of the things that were coming out as initiatives had not fully included and involved the global South. So there were some clear gaps that the group felt, and there was a sense that what would be useful is some kind of global framework, and we emphasize the term framework.

At least a way to think about this in some harmonized, coordinated fashion. So that's what led us to what you see in the report, which we tried to articulate in at least two ways. One was to try to see if we could establish a set of principles, kind of fundamental principles that were somewhat universal, that we could all agree on. You'll see five of them, but I want to call out a few of them because I think they're very important for the group. One was that we have to be thinking about this in the public interest, for the public benefit; we have to be thinking about this as a way to realize things that are important to everybody, and the SDGs are almost an expression of that, an expression of the ways in which, I think, we all want to improve the world.

And the SDGs are a good expression of that. And we also wanted to base all of this on things that we have all agreed on. Sure, there's more we can build on, but there are some very fundamental things, somewhat universal, that we've all agreed on.

Member States have agreed to fundamental human rights, to international law, so there is a set of things that we have all agreed on; at least let's build on those things. So we had a set of principles like that. You can see them spelled out in the interim report.

And then the second part that we tried to focus on is that we took a form-follows-function approach. To say, okay, what functions would we like to see, and which do we think are important to try to coordinate as part of a global framework, and not rush to say what form it should take, but what are those functions?

And you'll see several of them articulated. I think the Secretary-General mentioned several of them, but they range from: could we at least have some mechanism or function where we can get to scientific consensus, or a horizon-scanning approach, where we understand how the technology is developing, being used, being deployed, the risks and so forth, in a way that lets us keep up with it in some fashion.

Another function, for example, and I don’t want to enumerate all of them - my colleagues of the Body can jump in too - is things like interoperability. How do we avoid patchworks evolving everywhere, regionally, nationally and so forth? Is there some way to build interoperability? Another important one, particularly with regard to this idea of including everybody: how do you think about capacity-building for those who are not able to participate, what are some of the enablers, and what are the mechanisms that should be put in place to build capacity, so that everybody can participate and benefit? So we went through several of these, and there are seven of them. But with the idea that form follows function, we should say that even though we looked at analogies, the IPCC, the IEA and all these analogies, and there's a lot to learn from them, I think most members, the majority, felt that we can't just do a copy and paste.  AI presents some new questions, some new challenges. So while we can learn from those other models and mechanisms, we may want to rethink how they would work in this context, even if we build on them and learn from them.

And then, finally, where we are now. And I'll pause for a second so my co-members can also contribute and comment. Where we are in the process is that we've now reached a point where we've published a report, we've been getting feedback, and we're going to be in a period of consultation.

We've begun that with the Member States.  Some of them have already given us feedback and have commented on what they've read in the report, and more, I am sure, is still coming.  Then we're going to be continuing these consultations between now and the Summer, leading up to the upcoming Summit [of the Future] and so forth.

So that's what we've done. I should stop talking and invite my …

AMANDEEP SINGH GILL: I know some members have to leave quickly so we perhaps give the floor to …

IAN BREMMER, President and Founder of Eurasia Group: I’d like to make a very short comment, which is just that the Secretary-General has been talking about this issue since he became Secretary-General, recognizing that climate change and disruptive technologies and artificial intelligence are fundamental opportunities for the world to get things right for future generations, but must have global governance.

SECRETARY-GENERAL:  My first speech in 2017.  

IAN BREMMER: I remember, and it's taken a while, but the UN got there, and I'm very proud to be part of the panel, I know we all are. But the Secretary-General does not always put himself in front of the media.

He's not always first interested in communicating; he’s interested in getting it done. That's served him very well, but we need to give him some credit here because this does not happen without António getting out there. And I also want to say, you've heard way too much about AI at the WEF [World Economic Forum] this week.

Okay, we go down the promenade: it's at every storefront, okay? And there are a lot of people there giving you their two-minute elevator speech for how they're using AI to change the world.

Before we do that, we need to recognize that foundational principles which we already have enshrined in the UN Charter need to be applied to make sure that AI works for humanity. I am deeply concerned.

My biggest concern is not what is being fundamentally discussed right now in the American executive order and other places.  My fundamental concern is that AI is going to transform human beings in a few years.  As you just heard from Sam Altman [CEO of OpenAI] yesterday, you're going to have testing on individual data, which means that these AI bots are going to be aligned with us as people. And we will become hybrids; we will have these on all the time. We'll be engaging with them and they're going to change us much more than social media has changed us.

We can't do that unless we are basing this technology on how we want humanity to function. That's one thing that the world can agree on and has to agree on. And I know that, as James said, there's been a lot of debate and disagreement among all of the global members; there's been no disagreement on that.

The fundamental principles were the easiest thing to agree on because they already exist in the United Nations. So, I mean, I just want to applaud that and thank you so much.

AMANDEEP SINGH GILL: And Omar, please. Thank you again.

OMAR SULTAN AL OLAMA, Minister of State for Artificial Intelligence of the United Arab Emirates:  I just want to say that I have been the Minister of Artificial Intelligence since 2017.  The amount of impact and actual outcome that the Secretary-General and Amandeep and the team have been able to create over the last couple of months has been much bigger and better than anything we've seen in the past.  If AI stands for Artificial Intelligence, I think the UN stands for All Inclusive.  We have incredible views from around the world.

It's been a great learning experience. And to be very honest, it's interesting to see a body like the UN taking a very pragmatic approach.  One of the issues that we have had in the past was that many of the conversations were relevant only to certain companies or certain parts of the world.

These conversations here are extremely relevant, they are extremely practical, and we can feel that there's a roadmap that can be built, whether it's for single countries, like the UAE, or for blocs, or even for the world. So I'd like to thank the Secretary-General as well as Amandeep and the whole team for the incredible work.

And I’m really optimistic now that the future is going to be better because of this. That’s a starting point that’s going to create an inflection point for humanity. Thank you.

MODERATOR: Should we go to a few questions and then allow all of you to answer these questions?  I see everybody has a question.

Let me start with you just because you're close to me, Financial Times.

QUESTION: Hi, I'm Madu. I work for the Financial Times. I'm curious what these disagreements were about; what was everyone disagreeing on? Were there any patterns or trends, and did you resolve them? And what remains outstanding of your major disagreements?

JAMES MANYIKA: Oh, I'm happy to start, but let’s encourage others to jump in. So I'll tell you, one of the things where views clearly differed was this distinction between the global South and the global North.  As I indicated before, many in the global South tended to be optimistic about AI, but concerned, a lot, about not being involved. So it was that distinction.

I think there was also, at some point initially, a debate about whether we should focus on the issues of today or those of the future, but we quickly got very practical on that question, so I don't think that disagreement and debate lasted very long; some wanted to start by thinking about long-term risks, and others wanted to think about the issues in front of us right now. So there was a little bit of debate and discussion that way.  The other discussion that we had a lot was to what extent we should just apply what we've got, and very quickly go to "let's use this form, let's use that form", but I think we very quickly resolved that. Actually, one of the most thoughtful voices in that particular discussion was Marietje, and maybe I could ask you to jump in and comment, but that was one of the discussions we had: how quickly do you go to a particular form versus thinking about what we are trying to solve in the first place, before we rush to the forms.

MARIETJE SCHAAKE, International Policy Director at Stanford University Cyber Policy Center: I fully understand your question. And I am personally very struck by how harmonized the group has been in endorsing these very principled points.

I mean, this is a global representation when it comes to backgrounds professionally, geographically, generationally, you know, all kinds of diversity are reflected in the group and it's been a very positive, I would say, fun process to be a part of.  Well, maybe unexpectedly so.  I was positively surprised.

But what you see is that people are very driven by the things that they care about. And so it's more of a, you know, passionate discussion that we have going on about what we can all include. Because people feel this enormous opportunity, real leadership from the United Nations, but also a necessary role for a globally legitimate body like the UN to step up. And so people are very keen to say, well, but what about young people? What about women? What about, you know, states, and the risks to humans? Everybody is, you know, passionate about the points that they care about. But in light of that, and also in light of the limited time that we've had to work on this, I'm excited about this sketch that is before us. I see it as the sketch of a painting. The lines are incredibly important, but there will be more colour, there will be more detail, and indeed there will be more, you know, sense of direction, in terms of what kind of institutions might best be equipped to ultimately safeguard and govern what we believe needs to be governed.

But indeed, we had back and forth. Some people said we need to put a marker down now, otherwise nobody will care about the report. Indeed, I said, let's also be a little bit modest ourselves. We've had six weeks or so of endless meetings; it was probably mentioned that we have had endless meetings, really, between ourselves and with other people. But still, it would be, I think, perhaps a little bit too ambitious, in an inappropriate way, to say after six weeks, hey, we know how to govern AI globally, while the world has been breaking its head over this. So, we really care deeply also about the process we are in now.

This is one of opening the doors, asking more people what they believe is missing, what they believe should be deepened, and we've been getting great inputs. So this is also exciting for us, to feel whether this resonates with people, etc.

HIROAKI KITANO, Chief Technology Officer of Sony Group Corporation: If I look back at the process, I think it was more of a learning process, actually, rather than a disagreement. I mean, you know, I've been through many AI governance conferences, the AI safety conference in the UK, the OECD… But this UN body is the most diverse group of people, with real multi-stakeholders, when I look at the membership. Wow, this is what the UN means when it talks about multi-stakeholders; this is really a diverse group of people. And so, you know, when we talk about some specific aspects of AI, some people start to say, okay, well, how about this, and others say, oh, I haven't thought about this. So that is really a continuous process: we've been talking about one way to go, some different perspective comes in, and we've got to fix these issues, we're going to include these issues we haven't thought about. So this was really a learning process, I think, for all members, and the final report will actually reflect that diversity in the true sense of the multi-stakeholder.

And I think this is a value that the United Nations brings, and I'd like to thank the Secretary-General and Amandeep and all the members who made it possible.

MODERATOR: Let me now go to the other side of the room. Politico.  Don't worry.  I will get to all of you.

QUESTION: Suzanne Lynch from Politico. A question for the Secretary-General.  I’ve heard a lot about the need for global governance, but are you concerned about some actors?  I'm thinking of more autocratic regimes, countries like China, of course, and how they're going to approach this hugely open, at the moment, prospect of artificial intelligence.  Do you think they will play a constructive role in this?

SECRETARY-GENERAL: Well, it's very interesting, because I only had two moments of interaction with the group. And, as a matter of fact, if you look at the composition of the group, I would say that all the views that have been expressed by different entities, in different formats, were included in this group. You mentioned China. There were two very active Chinese members of the group, and I mean two very active, and I listened to them, especially to one of them; very committed. And Xi Jinping told me when we met that he considered that this is something very important and that the UN should be at the centre of this process.

So, I mean, there are lots of preconceived ideas about people, and my surprise, my positive surprise, came first from having put as a condition real diversity and no exclusion, and, on the contrary, welcoming all kinds of contributions; and second, from not having given any instructions that the UN would like to have this or that.

No. I shut up completely. No, completely. I was extremely surprised to see that indeed there is a much broader margin of consensus when the discussion is not politicized. And this discussion was not politicized. People were discussing based on their ideas and their opinions, not on the programme of this entity or that entity, or on interests. No. If the discussion is not politicized, I think there is a meaningful chance of a large consensus; not a total consensus, but a large consensus.  If the discussion becomes heavily politicized, then other factors undermine the consensus.  It's not that consensus is not possible; it is that other factors can undermine the consensus, and that is what I believe this group is able to avoid.

JAMES MANYIKA: If I could add one quick comment to what the Secretary-General just described: it is the case that, at the outset, the only instruction we had from the Secretary-General was to be independent, to be ambitious, to be bold.  We've tried to be bold, but I think he makes a particularly important point, as follows. I never felt that any of our discussions were politicized. The debates that we had often tended to reflect the vantage points and expertise that people brought to the table.

So there are people, for example, who have spent their entire career thinking about safety. People thought a lot about economic impacts, people thought about the technical development of the technology, people thought about trade, people thought a lot about workforce issues, people thought a lot about inclusion and diversity. So people brought those kinds of expert, topical views.  I never thought there was a politicized, political conversation in any of the discussions.

QUESTION: I'll be, I hope, a bit blunt, but hopefully not rude, just to be efficient in my question.  James, you said you had 400 meetings in two months, and now you're going into a consultation phase. So it's a pity, I suppose, to be hearing that, if that's the case, but does this risk becoming a talking shop? That was one thing I thought the consultation sounds like: the last six weeks of consultation were fairly intense, and now there's a consultation phase. I mean, that sounds like a lot of talking. But that's not my only question.

I'll just quickly get through it. And Secretary-General, on what you said.

Just some things didn't quite hang together in your opening remarks. You said that you're not asking to be leaders of a global governance system for AI, yet you said there must be a universal and inclusive approach, and I don't understand who's more universal and inclusive than the UN.  You said there shouldn't be a fragmentation of standards, but then you pointed out that there's the G7, the British, now the UN, so it sounds like the fragmentation is happening. You said that you offered no concrete, specific proposal, but we've heard how you also have tried to be bold.

So I suppose that sounded to me like a contradiction. Perhaps not. And my final question is about AI, and I don't want to be too tin-foil-hat about the whole thing, but I think it's not unrealistic to expect that the world's militaries probably have technology which is superior to what we commonly know.

And they are also your members, your Member States, and so is there a discussion about saying to the NSA, or whoever it is: hey, we've got a global South out there, we're here talking about the applications of this technology for the good of the world. If you've got something that would help the world, why can't we talk about it? Was that part of your discussion?

JAMES MANYIKA: Maybe I could begin with the first part of your question, about the meetings.  The meetings I described were mostly amongst ourselves, members of the Body, because we organized ourselves into working groups, by the way, just to be very practical. So there was a group of us working on opportunities and enablers, a group working on challenges and risks, and several sub-groups working on governance. It was in that context that we were having a lot of discussions amongst ourselves.

We did also consult, to some extent, though not extensively, other parts of the UN and others outside, but it was mostly amongst ourselves.

Going forward, the consultations are going to have a few anchor moments that I should describe.  The first was actually about a week ago, when we had a meeting with Member States who had actually read the report and gave us their feedback, their initial reactions and responses and thoughts and ideas. So that was very, very helpful. I think probably 18 countries actually spoke and talked about it.

We are going to have a few key meetings around the world. We've got one coming up in Geneva.
I think the [inaudible] is going to land in March, where the Body is going to meet with a whole bunch of stakeholders.  We're going to have another one in Africa, probably in East Africa, most likely Kenya; we are still working on exactly where that will be. And we will have another one in Asia; I think we're planning to do that, probably in Singapore.

So we are going to have these important moments of consultation, where we get feedback. Member States have also told us their preliminary views, having read the draft report.  Many of them are probably going to come back with additional views, and we are going to have to incorporate them.  So hopefully it's not.

AMANDEEP GILL: I think it's better to over-consult than under-consult with civil society and independent experts. There will be a consultative group of experts outside the Advisory Body that will accompany the Advisory Body on its future steps, and we'll continue working hard.

MARIETJE SCHAAKE: Maybe one more comment on your reference to the various other initiatives, which you called standards, but a lot of them are not.

They're voluntary, they're declarations of intent, and I think it's very clear in the report, and also in our approach, that we want to be complementary. But we do recognize that there's a unique role for the UN, with its global legitimacy, in this particular emphasis on correcting the wrong, if you want to think about it that way, of not having included people, their contexts, their lived experiences, their needs, from the global majority or global South. And if you look at a number of these initiatives, which may lead to more binding efforts down the road, they are still quite Western-centred, and it is deeply felt by everyone in the group that that needs to be balanced out.

SECRETARY-GENERAL: You asked some questions about the UN.  First of all, the UN has no power and no money, which is an enormous advantage; it means we do not have any specific agenda of protecting interests. We have only two things: one is the legitimacy of having all members represented, and the second is some convening capacity. And this group is the demonstration of that convening capacity.

We invited a number of highly qualified people who do not need the UN for anything, and they generously accepted to be part of this group and to produce this.  Now, what are the next steps? That is up to their suggestions and up to Member States’ decisions. The UN can be simply a platform where these things can be discussed, nothing more than that, or the UN can host different things; that's for Member States to decide. And it's for this group to present Member States with proposals for the things that they think will work best. So we do not have the objective of saying we want to have a UN Agency able to do this, this, this and that.  We are not that.

We are, with humility, looking into what comes from those that we trust, to propose to Member States and then accepting the decision of the Member States.

QUESTION: And did you discuss the military implications of AI, and the fact that your Member States may have technologies that could help the world, and how to get it?

MARIETJE SCHAAKE: Everything has been discussed.

MODERATOR: The session will have to wrap up in just a few minutes. I think you wanted to say something and then I will take …

HIROAKI KITANO: A very short comment on fragmentation. You mentioned there are many forums, like the Partnership on AI, the AI Safety Summit, the OECD.

Some of the members are also members of all of these. Actually, I'm a member of, for example, the Partnership on AI, the OECD, the Hiroshima Process, the WEF and others. We are very aware of what's been discussed out there, and my gut sense is that it's pretty much in convergence.

We don’t really have a diverging discussion going on at this moment, so I think what’s been reported here really reflects what’s been discussed in the community in a much larger sense, not just a study by the 40 members of us. Actually, this is our independent view, as independent expert individuals, rather than representing the organizations behind us. But at the same time, a number of the members are also members of other organizations, so we know.

JAMES MANYIKA: If I could just interject on this very quickly: if you go and look at the backgrounds of the people on the Body, they reflect the point that's just been made, because many of us, I mean, my co-chair Carme Artigas [Secretary of State for Digitalisation and Artificial Intelligence of Spain] was leading the negotiations in the EU, for example.

I am the Vice Chair of the national AI committee that advises the US President. So if you look at the membership of the Body, these are not separate views; they're drawn from all these other conversations going on around the world, I think. And I personally felt that this was the most inclusive conversation on this topic I have ever had, and I have spent a lot of time on this topic.  I’ve not been in a group as diverse as this, or as globally inclusive as this.

MODERATOR: Let me give the final question to AFP.

QUESTION: Thank you. Agence France-Presse, normally based in Brussels covering EU regulation. I wanted to pick up on what you said, Secretary-General: you said that Xi Jinping would like the UN to be at the heart of it. And I remember from Li’s remarks yesterday what he was saying about the red line and that the world should work together, but I'd like a bit more detail about your chats with Xi Jinping. Did he say how he would like the UN to be at the heart of it, and where he would like this process to end up?

That's a question for you, Secretary-General. And for the rest of the Panel: you've been at the WEF, where AI has been the topic of conversation aside from Trump. I want to know what people's concerns are when they talk to you. What are the real fears? And, jumping off from that, do you think that what you're doing is in harmony with the regulation that different countries are working on?

SECRETARY-GENERAL: About the Chinese, not only Xi Jinping: the discussion was a lot about the risks of having a fragmentation of AI, with different sets of regulations, different sets of standards, and even a competition in which AI becomes an element of geostrategic competition.

And what was said from the Chinese side is that they would like AI to be seen from a universal perspective and would like to have the UN at the centre of that discussion.  That was basically it, not more than that; we didn't enter into any negotiation or anything.

It was during a visit; we discussed several things, and he expressed this point of view, which, by the way, has been expressed publicly by the Chinese Government.

JAMES MANYIKA: To your question about the views here and elsewhere, I would say two things. First of all, I've been struck by the initial feedback we received from the Member States, which include many of the largest countries in the world, by the way: I think none of them want a patchwork. So in that sense they were very supportive of this process of trying to get to some sort of global framework. The same view has also been expressed by many of the actors, whether it's companies or researchers and academics.

So this idea of avoiding patchworks and wanting to get some collective framework based on things we all agree on seems to be consistent everywhere actually. So that's been quite gratifying at least for me.

MARIETJE SCHAAKE: You were referencing the multiple discussions and sort of hype maybe around AI here at the WEF this year.

Ian [Bremmer] also touched upon it: every storefront mentions AI in one way or another, and almost every panel mentions it. And I think there are two ways to look at it. On the one hand, some of these are still parallel discussions, because some of them are quite initial. So you have companies making specific products that are keen to, you know, share their hopes of what these products might mean.

They obviously want to sell them. And there are others who, from their vantage points, you know, civil society actors that want to protect human rights, or people who care about development, or people who care about the inclusion of youth, are extending sort of their home base of whatever core business they have, health care for example, and then looking at the issue through the AI lens.

Ultimately, what I believe our task is, and it is incredibly difficult, but we are hopeful that it can be done, is to make sure that these are not parallel discussions, but that they gel together somehow, and that is on the basis of core principles like respect for international law and respect for international human rights.

These various pathways can be taken, but never below a certain threshold of respect for, you know, those core principles that protect people wherever they might be in the world. And that's also what I think should join the various recommendations, inputs and outputs that we will give into sort of one core stem, if you think about it as a tree, which can then go in different directions. And we are now in listening mode, so when people come and say, hey, don't forget about young people, don't forget about health care, don’t forget about this, that or the other, we absorb it, take note and take it on.  We're not in some discussion mode or something like that.

HIROAKI KITANO: I've been in this field for quite a long time, actually 40 years. Look, I am a researcher; I spent four years in [inaudible] computing, and I'm also a biologist. So I have a somewhat broader perspective on what the science is all about and how science is practised.

What we're seeing here is a really fundamental change in the capability of what we call AI. There are a few theoretical reasons why this is a little bit different from the previous AI booms. We had an AI boom, an AI winter, an AI boom, an AI winter, and I am one of the survivors, you know. It started around 2012, when we had deep learning, and then about five years later we had the generative [inaudible] network, with a huge jump in capability. Now we have the transformer and diffusion models, which are yet another jump.

My prediction is that we're going to have a number of quantum-leap breakthroughs in the coming years, and people are aware that the technology is at a very early stage. You know, we often say this is yet another industrial revolution.

And where we are is the steam engine.  The internal combustion engine is yet to come, and that, that's my take, requires a series of improvements in powerful AI and in how we govern it, and we must improve AI safety. And, you know, what we're looking at is a [inaudible] number of top-level talents and funding from the private sector, the government sector, all kinds, pouring into this area. I haven't seen this kind of big investment in [inaudible] talent in any other field in the last four years. So that is what we are seeing at this moment, and it impacts not only productivity, which is what we're talking about here in Davos, this being the economic forum, so people are talking about productivity.

But in one session, for example, we had a meeting at the national academy of sciences, where they were talking about scientific discovery, cures for disease, new materials, and how we improve sustainability. So we are looking at more than the economic and industrial perspective of the revolution; there are other sides as well.

So the big issue, which we also portray here, is: can we use AI in a proper way to solve SDG issues? For the issues that we are facing, we need technology, not only AI; maybe biomedical and others as well. But AI can be one of the driving factors in breaking through to new approaches to tackling the SDGs.

MODERATOR: Do you have any concluding words or should we close right away? Maybe Amandeep or maybe James?

AMANDEEP GILL: Just to say thank you very much. My name is Amandeep Gill.  I am the Secretary-General’s Technology Envoy.  It's a pleasure to host the SG here today, and the Members of the Advisory Body.

Please stay in touch as we go through the next phase of our work and stand by for the final recommendations. Thank you.