It's a pleasure to welcome you to this morning's UN Geneva press conference on the Internet Governance Forum with stellar speakers.
Vinton Cerf, whom many people already know: Chair of the Internet Governance Forum Leadership Panel, Vice President and Chief Internet Evangelist at Google, and the father of the Internet.
And also to his left, Maria Ressa, Vice Chair of the Internet Governance Forum Leadership Panel, as well as the 2021 Nobel Peace Prize laureate, well known for her work and a regular visitor to Geneva, happily.
So we're going to discuss the panel's mandate and the scope of its activities since it was established, and look ahead to the Internet Governance Forum meeting in Japan later this year.
So for opening remarks, I'll hand over to Mr Cerf and then Ms Ressa. Over to you.
It's a pleasure to see you remotely and some of you here in the room.
So why are we holding this press conference?
Speaking for the Leadership Panel, we propose that the Internet Governance Forum should evolve to become more output oriented given its 18 years of experience.
If multi stakeholder work on Internet and digital ecosystem governance matters, we believe that the Internet Governance Forum is the best vehicle for considering the challenges and opportunities implied by the establishment of the Global Digital Compact.
So the key message today is the evolution of the Internet Governance Forum and the Leadership Panel's support for that evolution.
Let me give you a little bit of background.
The Internet was invented 50 years ago and went into operation 40 years ago in 1983.
The World Wide Web was announced in 1991 here at CERN in Geneva and became globally visible by 1995.
Tens of thousands of independent networks all cooperate to form the Internet.
The World Wide Web application rides on top of the global Internet as its most used application.
The Internet Governance Forum was established at the close of the World Summit on the Information Society in Tunis in 2005.
Its first meeting was in Athens in 2006.
It's a multi stakeholder body comprising civil society, the private sector, academia, the technical community and government representatives.
Its mission is to articulate the opportunities, challenges and potential paths forward for the best use of this global digital ecosystem.
150 national and regional multi stakeholder Internet Governance Forums have self-organised over the past decade or more to bring local attention to the challenges of the Internet and World Wide Web applications and to contribute to the annual IGF meetings.
The Leadership Panel of the Internet Governance Forum was chartered and appointed by the Secretary General of the United Nations with the goal of amplifying and spreading the messages of the IGF, supporting its operation, helping to shape its agenda and assisting it to respond to the changing demands of the emerging Internet and World Wide Web ecosystem.
The Global Digital Compact is being developed with the facilitation of Sweden and Rwanda and is assisted in this work by the Office of the UN Technology Envoy.
It seeks to create a global framework for the safe and secure use of digital technologies such as those we use daily with the Internet, the World Wide Web, our smartphones, laptops and pads.
Once in place, the implementation of the provisions of the Compact will require multi stakeholder collaboration to ensure that the Internet, the World Wide Web and other digital ecosystem components can meet the expectations of the Compact.
The Internet and the World Wide Web have delivered massive economic, educational and social benefits to their more than 5 billion users.
But the excitement of artificial intelligence, Internet of Things, cryptographic applications and social media, and the emergence of misinformation, disinformation and other harmful behaviours have led to renewed attention on the need for broad agreement on the responsible use of these powerful technologies.
The Leadership Panel believes that in addition to preserving all the constructive values of the Internet and the World Wide Web, we must also respond to the potential risks and harms that have arisen in recent years.
The Internet we want must address those concerns while fulfilling the aspirations of the Global Digital Compact.
I would like to invite my Vice Chair, Maria Ressa, to summarise the human rights concerns that have become apparent as the Internet expands to serve the world's population, and the role that the Leadership Panel and the Internet Governance Forum can play to address these concerns.
So, Maria, the floor is yours.
Thanks so much, Vint, and thank you so much for joining us today.
It's been an incredible 24 hours, a little bit more, where we've been focused on the Internet in so many ways.
But since I'm the journalist, I'm the bearer of bad news.
And you know, the social harms, well, the harms aren't just social, they're political.
I used to study counterterrorism, and with the kinds of marginalisation that we're seeing today, what used to radicalise people is now in our politics.
So offhand, just three points, right, which we've raised in the IGF.
The first is that it is by design.
I'm focused more on the apps on top of the Internet, which the father of the Internet helped create.
But on top of these, you have heard me say this over and over and over again: lies spread six times faster than facts.
That is an MIT study from 2018.
It is the beginning of the cascading failures that make it impossible to have facts.
Without facts, you can't have truth.
Without truth, you can't have trust.
Without that, we have no shared reality.
We cannot solve any problem, let alone the existential problems we're dealing with.
Those are some of the issues that we're starting to grapple with now.
And it is amazing to have an incredible group of people to do this, and to hopefully harness all of what the United Nations can do through the Internet Governance Forum.
I think just on the last part of that, what can we do right now?
These cascading failures are impacting your kids, are impacting your independence, right?
Because if you don't have integrity of facts, you cannot have integrity of elections.
So this goes throughout our entire society.
Finally, the last part, and you will have just seen the headlines of a prominent politician who resigned yesterday, a woman politician.
It is those who are marginalised, who are already marginalised in the physical world, who are further marginalised.
And I would point out that women in particular, whether they're journalists, researchers or politicians, are attacked significantly more.
In the Philippines, as early as 2017, women journalists were attacked at least 10 times more than men.
You now have another prominent female politician in the EU just resigning yesterday.
You do not want this to happen.
Women hold up half the sky, to quote another journalist.
So at this point we will be happy to respond to questions that the press may have.
I do want to emphasise how important it is for us to simultaneously preserve all of the demonstrated values of this online ecosystem while defending against the things that Maria so clearly points out.
This is not a trivial challenge, but it's one that the Leadership Panel and the Internet Governance Forum are focused upon and have been for quite a long time.
If you would like to ask some questions, I see a hand up online, but in the room I think first of all, Boris, could you introduce yourself?
Yes, Boris Engelson, local freelance journalist.
I basically have two questions.
Maybe you are too young to know, but before the advent of the Internet, all people who were not part of the noble hierarchy of information, mainstream publishers, mainstream media, were dreaming of something like the Internet.
And we saw that this device would create the universal library and push all knowledge and debates upwards.
How do you explain that we are so far away and in a way even one step backwards from the traditional library classification system?
And a question to Maria Ressa: I didn't know you were involved in the IGF until now.
I was surprised by the little involvement of journalists in the IGF and WSIS.
So how do you explain that, and what could be the role of media journalists, with all their flaws, given that they are, besides academia, the main content suppliers?
Why is their profile so low and what could be their role?
So these are provocative questions and I'm sure that was by intent and I appreciate those and I'm sure Maria does as well.
I was one of those optimists who hoped that the Internet would bring information to everyone at their fingertips that would create opportunities for education, to help people increase their skills, to improve their economic condition.
And I believe that the Internet does have the capacity to do that.
That's been demonstrated.
But at the same time, as Maria points out, information is information.
Misinformation is also information.
And the problem we have is that this powerful tool for distributing, discovering and using information has amplification potential for everything, including, you know, counterfactual things.
And as she points out, sometimes counterfactual things propagate faster.
So I don't regret that we have this fantastic and powerful tool, but I do believe responsible use is important and we can't rely only on people to decide to use this system responsibly.
We need to create a framework in which bad behaviours are held accountable.
And because the Internet is global in scope, the challenge is transnational.
It's going to require cooperation among countries to identify those parties who are behaving badly and should be held accountable for that.
And that's why we have this activity within the UN framework.
The Internet Governance Forum was created as part of the output of the World Summit on the Information Society.
We will need the help of the UN and its agencies and all the member countries to collaborate to create a safer and more secure environment in this online space.
So it's like many other powerful tools.
We have to learn how to use them carefully and properly.
Thanks Vint and thank you so much for that question.
I am new and I am learning from the people in the IGF, I mean the 10 people who were selected by the Secretary General. You've heard from his statement on the Global Digital Compact that this is existential.
In general, we journalists have been too caught up in our own battles for survival.
For one right, our business model is essentially dead.
The tech that empowered us early on, I would say up until 2014 when they became the gatekeepers, abdicated responsibility for the public sphere.
And I would also say that many democratic governments abdicated responsibility for the safety of the people in the public sphere.
And that has been ruled largely by profit, right?
The Nobel laureates, more than 300 Nobel laureates and civil society groups: we came up with a 10-point action plan.
Six of those 10 points were directed to the EU, what I lovingly call the fastest of the turtles, running to try to protect the public sphere, right?
And those 10 points come down to three buckets.
Stop surveillance for profit, stop coded bias.
And the third, journalism as an antidote to tyranny.
Democracy has rolled back for the last 15 years.
And in the last decade, journalists have taken the brunt of that.
We've been attacked, gaoled, killed in numerous countries around the world.
I could still go to gaol for being a journalist. When I was asked to join, I thought about it; I was like, am I the right person for this?
I think it's, we're looking for ways to take existing systems of governance, whether that is in a government or in an institution like the United Nations, and try to find ways to push it up to the speed at which technology moves and the speed at which our world is changing.
I like to think, you know, I have learned a lot from my colleagues in the IGF about the hardware part of it, the telecoms that provide it.
You know, I look at Myanmar, because I was there in 2008 when the government, well, Myanmar was just opening up, and only 2% of the population was online, on cell phones. And they were giving them out, and all of a sudden it escalated to 70%.
And that was partly what led to having Facebook on every cell phone and then having genocide happen.
The UN sent a team to Myanmar, as did Meta, right?
I think the goal here is to stop the impunity.
We need to have these rules.
Our societies and our politics are divided and polarised to the point that often times we cannot act.
I guess what we are going to try to do is to say that it is in the interest of every person here, whether you are in an autocratic country or in a democratic country, you do not want to be insidiously manipulated for profit.
So that's our goal, and I've got to say, the fellow members of the IGF are incredible.
I have learned a ton and I apologise if I speak too much.
I think it's not possible for journalists to speak too much.
I do want to emphasise some of the things that Maria has said.
The first observation I would make is that the amplification effects of these online technologies have both a positive and a negative side, and Maria is careful to show how and why we have to constrain some of the potential behaviours and some of the potential side effects.
What we need is not detailed regulation as much as it is principles that can apply over a long period of time.
The technology is changing very rapidly, and overly detailed regulations will frankly not keep up.
We met yesterday with a number of representatives from the European Union, and that point was made several times by several of the participating ambassadors.
So we want something that will work effectively over time.
The second thing that's very important is that we don't want to lose all of the positive features, but we absolutely need to build mechanisms into the system that provide for the kind of safety and security that Maria so carefully lays out.
So I hope that responds to your questions and we thank you for those and we now ask if there are any others.
I believe there are questions online, including from Jamey Keaten.
Yes, I can. Thank you very much.
My question kind of follows up on what's already been said, but I wanted to try to drill down specifically on the issue of artificial intelligence, if I might, Mr Cerf. I've, lo and behold, been searching the Internet to try to see what you may have said about this, to find out.
There are a lot of concerns, of course, about artificial intelligence, both in terms of the development of the Internet and its effect on journalism and its use.
So if I could just get you to sort of chime in.
I know obviously you're with Google and Bard, and we've seen that Bill Gates has chimed in on this issue, most recently in the MIT Technology Review a couple of days ago.
I just wanted to know your thoughts on how that is going to filter into the future of the Internet, and what you would say to people who are concerned about AI and its development.
And if that question could also go for Miss Ressa as well.
That's question number 102; lots of people have been asking this.
First of all, let me say that artificial intelligence is a very general term.
It was invented in the 1960s by John McCarthy.
The Advanced Research Projects Agency, which created the ARPANET, the predecessor to the Internet, was researching and supporting research in artificial intelligence way back then.
Now, 60 years later, we have discovered that some of the methods for achieving this artificial intelligence have advanced pretty dramatically.
I would like to use a different term though, if I might, and that's machine learning.
Because in fact that is the current mechanism that is producing such startling results with the so-called large language models.
It's terribly important for our journalists and the rest of the population to understand that not all of the value of machine learning is exhibited in these large language models, which seem to be so glib and so easy to interact with in human terms.
They are amazingly confident as they assert falsehoods.
And because they sound very human, it's easy to accept the discourse, incorrectly.
But let me give you some other examples of machine learning applications that are incredibly powerful.
At Google, as you must know, we run lots of data centres and they get hot, because the computers, you know, consume a lot of energy and radiate a lot of heat.
So you have to cool them all off with air flows and fans and sometimes even water cooling.
It takes a fair amount of power to cool the data centre.
So we used to, once a week, control and reconfigure the valve and pumping systems that cool the data centres, to try to reduce the amount of energy required to do that.
Someone got the idea that we could train a machine learning algorithm to optimise the cooling system and reduce the amount of power required to get the same result.
So after several weeks of training, we managed to get a machine learning algorithm to reduce the power requirement by 40%, and that's a significant reduction. A very powerful tool.
And it has nothing to do with misinformation or disinformation or anything else; it just has to do with optimising a particular function.
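To make that cooling example concrete, here is a minimal sketch of the optimise-a-function idea, with every number, setting and the toy "plant" invented for illustration; Google's real system trained a neural network on live sensor data, for which the nearest-neighbour lookup below merely stands in.

```python
def plant_power(fan_speed: float, pump_rate: float) -> float:
    """Hypothetical power draw (kW) of a data centre's cooling plant."""
    return 120 + 0.8 * (fan_speed - 55) ** 2 + 1.1 * (pump_rate - 40) ** 2

# 1. Collect historical operating samples, as a logged data set would.
history = [(f, p, plant_power(f, p))
           for f in range(20, 100, 5) for p in range(10, 80, 5)]

# 2. "Learn" the response surface -- nearest-neighbour lookup here stands
#    in for the trained model the real system used.
def predicted_power(f: float, p: float) -> float:
    nearest = min(history, key=lambda s: (s[0] - f) ** 2 + (s[1] - p) ** 2)
    return nearest[2]

# 3. Search the learned model for settings that draw less power than today's.
baseline = plant_power(80, 70)
best = min(((f, p) for f in range(20, 100) for p in range(10, 80)),
           key=lambda s: predicted_power(*s))
print(f"baseline {baseline:.0f} kW -> optimised {plant_power(*best):.0f} kW")
```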
The same kind of argument can be made from some work by a sister company called DeepMind, which, as some of you will know, announced that it had built a machine learning algorithm that figured out how the proteins produced by the DNA of the human body fold.
And they folded all 200 million possible proteins.
Why is that important?
Once you know how a protein folds up, you know what its shape is.
The shape is what allows that protein to interact with various parts of your biological system, which means we might be able to use that knowledge to find new cures for various and sundry diseases or to improve people's health.
These are only two out of literally thousands of possible examples, so let us not throw away the value of machine learning in the face of the peculiarities of large language models.
So let me give you one other example that shows you why the large language model is simultaneously intriguing and of concern.
I decided to take one of these chat bots and ask it to write an obituary for me.
And I know that sounds a little macabre, but I thought, well, the obituary format is probably well known to the large language models because there are lots of obituaries to be found in the World Wide Web.
And of course, these large language models are trained by consuming content on the World Wide Web and building models of discourse based on what they read there.
So I assume that it would know how to produce an obituary.
And I also made the assumption that because I've been involved in the Internet for literally 50 years, that there was probably some information about me also available.
So I asked it to write an obituary, and it completed a nicely formatted obituary.
We're sorry, Dr Cerf passed away, blah, blah, blah.
Then it talked about my career and then it talked about the family members that were left behind.
Well, it gave me credit for things other people have done and it gave other people credit for the stuff I did.
And when it got to the family members, it made up some family members that I don't have, at least I don't think I have.
So the question was, how could this possibly happen?
And just to give you a cartoon model of how this can happen, and why this is such a challenge for the machine learning and large language model builders: imagine for a moment that Maria Ressa and I are connected with the Leadership Panel.
So there are probably some web pages that have her name and my name and our bios.
But the machine learning system is consuming all this text, and it doesn't necessarily notice that these words came from Vint Cerf's bio and those other words came from Maria's, because we're both on the same web page.
Now remember, this is just a cartoon model, so don't rake me over the coals for lack of precision.
But the large language model could easily conflate facts about Maria with facts about me, because we're co-located on the same web page.
So it's easy from that simple model to understand how this conflation can happen and how factual material can still present counterfactual information.
So we have a job to do in the technical community, and that's to understand how and why these things happen, and how we discipline, so to speak, the large language models not to be confused by this kind of conflation.
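For readers who want the cartoon model in runnable form, here is a deliberately crude sketch over an invented one-page pair of "bios"; real large language models are vastly more sophisticated than raw co-occurrence counts, but the sketch shows how same-page text can blur whose fact is whose.

```python
from collections import Counter

page = ("Vint Cerf co-designed TCP/IP . "
        "Maria Ressa won the Nobel Peace Prize .").split()

# Count how often each pair of words co-occurs within a sliding window.
window, cooc = 12, Counter()
for i, w in enumerate(page):
    for v in page[i + 1:i + window]:
        cooc[w, v] += 1
        cooc[v, w] += 1

# Because both bios sit on one page, "Cerf" is exactly as associated with
# "Nobel" as with "TCP/IP" -- the counts cannot say whose fact is whose.
print(cooc["Cerf", "TCP/IP"], cooc["Cerf", "Nobel"])   # -> 1 1
```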
So my view right now is that we're at a rather early stage in understanding how to accomplish that objective.
For any of the Freudians in the group, my other little cartoon model of this is that the id and the ego have been created artificially, but we're missing the artificial superego to discipline the behaviours of the id and the ego.
So that's my somewhat non-technical response to what do we do with this.
It's still incredibly powerful stuff, so we don't want to lose its utility, but we clearly have to discipline it.
Maria, do you have anything you wanted to add to that?
So this is the reason we work well together because he gave you all the positives.
Now I'm going to give you all the negatives and we do this all the time.
You know, Vint talked about the id and the ego, and how it's lacking a superego.
My perspective is that it shouldn't be released publicly until it develops a superego.
But the problem, of course, is that with the large language models, these companies require us to actually make the product right.
When OpenAI came out, they had 100 million subscribers; of course, that's now been beaten by Threads.
But when that happened, we are the ones creating the next phase, helping these companies fine-tune it.
And if that is the case, there needs to be protection for us, because, you know, there was a man in Brussels who committed suicide because of what an AI fed him.
There are many other harms that are there now.
But the other thing is to think about businesses, right?
Since these are huge businesses, what happens to copyright when material is fed in?
There is now legislation being tested.
There are cases that have been filed in different jurisdictions.
Everything that is fed into that is then going to be reused to create more value for that company.
Will the ones who actually created it, who hold the copyright, get anything from it?
There are cases now on privacy issues.
Your data: would you get anything for that?
I mean, of course, generative AI is what we're talking about with the large language models.
But go back to what AI really is.
The first contact of humanity with AI is machine learning and that is in our digital platforms that connect us.
These are the social media platforms.
And again, they were optimised for profit, to our harm.
Our data essentially was picked up.
Everything that you put in, whether it's YouTube or Facebook or Twitter, if you're still there: machine learning comes in and takes everything you've posted and creates, they say, a model of you that knows you better than you know yourself, because it has all your relationships, it has everything.
Change the word model to clone, right?
So you're cloned, and then the companies come in and use AI to take all of our clones and make that the mother lode database that they can then use to micro-target.
Advertising in media in the old world is not the same as micro-targeting.
This matches your weakest moment to a message and feeds it to you, right?
This has turned us into Pavlov's dogs.
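As a purely illustrative sketch of the micro-targeting loop Ms Ressa describes, with every profile, message and weight invented, the logic reduces to: score each message against a user's inferred susceptibilities and serve the one that hits hardest.

```python
# Hypothetical behavioural profiles ("clones") inferred per user.
clones = {
    "user_1": {"fear": 0.9, "anger": 0.2, "hope": 0.1},
    "user_2": {"fear": 0.1, "anger": 0.8, "hope": 0.3},
}
# Hypothetical messages and the emotional register each plays to.
messages = {
    "msg_outrage": "anger",
    "msg_threat":  "fear",
    "msg_uplift":  "hope",
}

def pick_message(user: str) -> str:
    """Serve whichever message best matches the user's strongest trigger."""
    return max(messages, key=lambda m: clones[user][messages[m]])

for u in clones:
    print(u, "->", pick_message(u))   # user_1 gets fear, user_2 gets anger
```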
Those harms are documented.
They haven't been addressed.
Our governments, our institutions are slow to address them and those of us on the front lines are the ones paying the price for that.
It is generating tremendous profit, but we are paying a huge price.
So the last thing on large language models, garbage in is garbage out.
And what has gone into these large language models?
A lot of the unstructured data of social media, which prioritises the spread of lies over facts.
Again, that MIT 2018 study, right?
So if that's the case, what about fear, anger, hate?
And this goes back to, I'll end with this, how our biology has been hacked by the technology.
Another brilliant man actually said this.
He studied emergent behaviour in ants, right?
E.O. Wilson said that the greatest crisis we face is our palaeolithic emotions, our mediaeval institutions and our godlike technology.
So in the first instance, the first contact with AI, it was our fear, anger and hate.
That was how our biology was hacked.
In the second contact, with generative AI, I will say, you know, the way that the large tech companies have rolled it out, they've tried not to anthropomorphise it.
But what we are seeing increasingly is how our loneliness is weaponised.
This is moving through our biology, triggering our emotions and it is taking away agency from real people.
So these are the dangers.
We will find solutions because we cannot not, right?
But our window for finding those solutions, proposing them and taking a multi stakeholder approach to implement them, that window is closing.
So I feel compelled, because of our normal interactions here, to just remind everyone that not all of the interactions with the large language models have negative consequences.
They are quite empowering.
People who are trying to write software get help.
People who are trying to write documents and things like that get help.
And so we know that these can be potentially very, very advantageous.
The problem, of course, is disciplining them to avoid all of the concerns that Maria has so correctly raised.
So I hope that answers your question rather a lengthy response to a deep question.
Thank you very much, both of you.
I don't know if there are any others.
Maya Plentz, would you like to introduce yourself please?
Thank you for taking my question. Maya Plentz from The UN Brief; we cover the United Nations and its agencies through the prism of new and emerging technologies and their impact on multilateral issues.
How does Google plan to address the problem of machine learning conflating information that it scrapes from the web?
So first of all, this is not a press conference about Google.
And so I don't think it's appropriate to respond for Google.
I'm willing to take a question that is specific to the Internet Governance Forum and the leadership panel.
Maya, would you like to ask another question, about the IGF?
So how is the Internet Governance Forum addressing issues related to the fact that machine learning conflates information that it scrapes from the web, which creates many issues in terms of the veracity of information?
What is going to be the role of the Internet Governance Forum in that respect?
So thank you for rephrasing the question.
The Internet Governance Forum has been dealing with questions like this for some time.
Not necessarily specifically large language models, which are relatively new on the scene, but all kinds of other uses of the Internet and the World Wide Web have been part of the discussions in the IGF literally ever since it was started 18 years ago.
And so the IGF absolutely will be wrestling with these problems.
The multi stakeholder character of the Internet Governance Forum is what gives it its value and power.
So many different participants with different perspectives and different experiences allow us to see a much fuller picture of both the hazards and the benefits of these online technologies.
So I have no doubt that there will perhaps be six or seven specific discussions at the upcoming IGF in Kyoto on this topic and on related topics as well.
Bear in mind that large language models are not the only thing that we are concerned about in the World Wide Web and the Internet.
And so the Kyoto conference will be covering quite a large range of questions having to do with safety and security and privacy, economics, benefits and hazards in the online environment, and the expectations that we should all have of an Internet that is constructive and safe.
So we will have a rich discussion, including responding to some of the questions you've raised.
So thank you for asking that.
Thank you very much, Mr Cerf.
Sorry, I wanted to just follow up on one other thing that you mentioned, Dr Cerf, about accountability for misinformation and the Internet.
Having listened to what Ms Ressa just finished saying about insidious manipulation for profit, one might assume that this also implies, from her perspective, and I'd obviously like you to chime in on this as well, Ms Ressa, some responsibility of corporates to have some level of accountability.
But from your perspective, could you elaborate a little more on where you see that accountability needing to come from?
I mean, there are so many different players that take advantage of the Internet and the World Wide Web and various other tools; who should really be held accountable?
So this is absolutely topic A, the whole question of both accountability and agency.
Let me speak to both of those.
I believe that parties, and by this I mean individuals, organisations, companies and maybe even countries, are involved in the use of the Internet, and they should be held accountable for their behaviour.
Now in order to hold any party accountable, you have to figure out who they are.
And so in some sense, absolute anonymity may not be your friend in this framing.
And people will then say, but it's important to preserve anonymity for people who would otherwise be harmed if it was known who they are.
A typical example of this is whistle blowing.
And it's my belief and my hope that even in those circumstances, a party who feels the need to reveal and speak might speak through a channel which has validated that party's identity or authenticity, but has also committed to protecting the privacy of that source.
And knowing that Maria and many of you are journalists, I know that most journalists are determined to protect their sources for all the reasons that you can imagine.
So we have this important challenge, though: to hold parties accountable, we need to know who they are.
I mentioned also agency, and I want to emphasise how important it is to give agency to individuals, organisations and even countries to protect the interests and safety of their citizens and the stability of the state or the organisation.
And I think Maria could probably speak, and will want to speak, to the concerns that we have about the erosion of democratic societies in the presence of some of these complex new capabilities, and the potential for the use of these capabilities not to preserve the safety and security of the citizens, but the safety and security of the regime.
And so, once again, a very complex environment.
All the more reason that it should be addressed in the Internet Governance forum in the context of the UN.
Maria, I can tell you have something to say.
And thank you for the question.
Because that is actually at the crux of it.
The tech companies will actually say they hold a mirror up to us, right?
But that is not just wrong, it's a lie: it is the design of these platforms.
Responsibility and agency, accountability and agency, are what Vint talked about.
I'll talk about it from the fact that technology is the least regulated industry globally.
No drug company would be able to roll out a drug and give drug A to this part of the room and drug B to this part of the room and do a real time test on real people.
And then oh, you on this side of the room died.
This has happened in tech, right?
The UN has sent people to Myanmar, under Marzuki Darusman, who was a former commissioner on human rights in Indonesia and a former justice minister, right?
He came back with the same results Meta did, which is that the way the platforms were used and designed led to genocide there.
So has anyone been held accountable?
Let me talk about it from my own personal experience.
When you get 90 hate messages per hour, that is not normal.
Nor should a journalist, a researcher, or a politician be subjected to that.
But those are information operations, information warfare: free speech used to pound someone else, to silence and to change a public narrative.
Who gets away with this with impunity?
The tech companies and the actors, right?
We now have all of this documented.
I think, you know, you look at this Dutch minister, a top Dutch minister who would have led the opposition party.
She just resigned yesterday because of intimidation and threats.
Enabled by what? Information operations on these tech platforms.
No woman or LGBTQ+ person should have to deal with this.
We are protected from this in the real world.
Segregation is illegal in the real world, yet in the virtual world it's allowed for profit.
I think the last part is, if you think about algorithms as repetition: let's say you make an algorithm of the editorial decisions I make.
They can be good, they can be bad, but if you amplify the bad, it's repeated millions of times.
This is the reason why we need to do this in testing, right?
This is engineering, this is technology, and it should not be tested in the real world on real people.
Having said that, let me switch to what Vint would say, which is, yes, there are tremendous positives.
Rappler was founded on, you know, if Facebook had better search, I probably wouldn't have built a website.
And I built a website, and thank God I did, because, you know, at one point Facebook told all news organisations that video was getting more views.
Except a year or so later, we found out that the data was wrong and they knew it, and they continued to propagate it while news organisations fired editorial people and hired video people.
Who is held accountable for these?
These are changes in our public ecosystem.
I think accountability begins the same way that a news organisation is held legally accountable.
In the United States, Section 230 gives impunity to the tech companies, the American tech companies.
It was so interesting that with TikTok, it was much easier for American legislators to act, right?
This should not be the case.
We should be looking at the safety of people on these platforms.
Thank you very much, Maria.
I see another question in the room, a follow-up from Boris.
Yes, I fear my question does not have answers, unless the IGF plans a one-week summer school next year for us to discuss it at length.
When I asked you why we are so far from the dream of universal, transparent semantic content, you said, among other things, that information is information and even fake news is information.
But there is, it seems to me, a more difficult aspect to that with the net.
We passed overnight from a world, which you know better than me, of information scarcity to information overflow.
In information scarcity, the big issues were censorship and lack of means and visibility.
In the overflow, how not to be flooded by nearly 1 billion answers for each word of the index?
And at the other end, the opposite model would be to have a renewed Encyclopaedia Britannica led by UNESCO.
But that might be one-way thinking, academic thinking.
So is there an answer to that dilemma?
In the old world we had tables of contents and indexes.
In today's world, the table of contents is obsolete; only the index survives.
It may be a good or bad thing.
I don't want to speak ill of Google because, like everyone, I use it 1,000 times a day.
I curse it 1,000 times a day, but it's still the best by far.
That's why we need the summer school next year.
It's that when I go to a traditional library, not a large one, one of those where you see all the books, you are immediately aware not only of what you know and the documents which will teach you to know more, but also of the immensity of what you don't know.
And that was more or less the model of Paul Otlet, I think, 100 years back.
Wouldn't this be an element to bring back some semantics into artificial intelligence?
To always reposition the information we know within the entire world of current knowledge?
This reminds me of the graduate school final exam question: describe the universe in 25 words or less; give three examples.
But I will try to be brief.
First of all, provenance turns out to be a very important notion in the information space.
Where did the information come from?
And then of course, we hope that we can figure out which sources should be more relied upon than others.
In the early stages of the Google search engine, one of the indicators that a website was valuable is that lots of other websites pointed to it.
It turns out that's not a sufficient indicator, but it was remarkably powerful in dealing with a search response that had 10 million hits, because you can't present all 10 million hits on the page except in 0.001-point font.
And so you had to rank-order them somehow.
In the subsequent years, we and others who do search services have found ways of finding more indicators of the value or utility of information.
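The link-counting indicator Mr Cerf describes is essentially the idea behind PageRank; here is a minimal power-iteration sketch over an invented four-page link graph, just to show how inbound links from well-linked pages accumulate into a ranking.

```python
links = {                      # page -> pages it links to (hypothetical)
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
pages = list(links)
rank = {p: 1 / len(pages) for p in pages}
damping = 0.85

for _ in range(50):            # iterate until the ranks settle
    new = {p: (1 - damping) / len(pages) for p in pages}
    for p, outs in links.items():
        for q in outs:
            new[q] += damping * rank[p] / len(outs)
    rank = new

# "c" collects the most inbound weight, so it ranks first.
print(sorted(rank, key=rank.get, reverse=True))
```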
But in the world of the large language models, we don't have the same mechanisms yet.
But one thing I will say is that if we are successful in adapting these tools, they might have the ability to help us cope with the massive amount of information that's discovered, by summarisation and by, you know, making a shorter response.
But again, you don't want to lose factuality as a consequence of this kind of summarising.
But that's what these information tools have the potential to do, which is to help us cope with so much information by boiling things down and by pointing to the more valuable factual information.
One last point on this: I think we still rely on another practice called critical thinking.
And you must do this, as every journalist I know does: trying to figure out what should I say, and where does the truth lie?
So I think we need to train people to think critically about what they're seeing and hearing in these online environments.
I think it's: can our rational minds, you know, thinking fast and slow, right?
Can our rational minds cope with so much information coming at them?
We were used to a world that moved at the speed of human comprehension, and now we live in a world that moves at the speed of broadcast.
I mean, that's been used to describe different things, but it is at the speed of machines and we cannot absorb that.
A lot of my work has focused on information operations and information warfare.
And in that sense, where things go really bad is when the profit interests of the companies align with the interests of the people who are trying to insidiously manipulate us, the public, right?
I don't want to repeat myself; I've already said this.
And you know, if you're interested, I did the closing keynote for the Nobel Summit last May.
It was at the National Academy of Sciences.
I lay out the evidence and the data there, but I will end with this because we don't want you to walk out of here thinking the world is going to be horrendous to live in.
Because obviously This is why we are working in the Internet Governance Forum.
This is why we sit there and debate and it's been really quite fun.
What we found in the Philippines is that it requires a whole of society approach.
This is a significant difference from the old days, where people could lag, right?
But in the Philippines, we essentially did an influencer marketing campaign for facts: about 150 groups, a four-layer pyramid where news organisations worked together in a way we never did before.
We worked with civil society, with human rights groups, with the Church; the Philippines is Asia's largest Roman Catholic nation.
The third layer was academia and the fourth layer was law.
Because if you don't have integrity of facts, you cannot have rule of law, right?
And this is the part that actually I think gives us hope to forge ahead.
We found that in the mesh layer, the second distribution layer, the instruction was to share these boring fact checks, which never spread on social media, with emotion.
But they could not use anger or hate, right?
We found that inspiration spreads as fast as anger.
So, governments: lead us, inspire us, right?
Leaders, and I won't take a dig at the CEOs of social media right now and the kinds of things they're doing: inspire us, because inspiration moves as fast as anger and hate.
We are running out of time, because there's another press conference following this one in just 7 minutes or so.
Could it maybe be a fast one?
Yes, thank you very much.
I would like Dr Cerf and Maria Ressa to comment very quickly.
Do you see the IGF addressing the question of copyrighted material that's been scraped? How are you going to deal with that issue?
What's the response of the IGF?
I think, Maya, the response is this.
Actually, I don't know to what extent the IGF has addressed this question.
It's a relatively new issue.
I mean, copyright has been around for a very long time, but the large language models have raised a new kind of question about what a derivative work is and who should benefit from the production of these kinds of works with the tools that are now evident.
So my guess is that this will be something that will be discussed in Kyoto, but I don't have anything to report from the previous meetings or the ones that we've been undertaking here in Geneva.
We haven't yet gotten to it, but you know, my position on copyright is clear to me and we will debate it amongst ourselves.
I think, again, the strength of the IGF, Vint actually rolled this out, right?
There are 150 different groups, from the ground up, right?
But it works with big tech, with industry, with governments.
I would have loved to have worked with government; we joke in Rappler that before President Duterte, news organisations did.
So the question here is how we can build, bottom up and top down, a process that will help address these problems as quickly as the tech companies roll them out.
Thank you very much indeed, Maria.
Thank you very much, Vint Cerf, for your input, for your dialogue, for your insight.
We appreciate it very much indeed.
Good luck with Kyoto in October.
We're going to have to wrap this up.
Thank you for joining us today.