OHCHR presser: Technology and Human Rights 14 July 2021
Good afternoon, everyone.
Thank you for joining us for this press briefing on freedom of expression online.
Before we get started, I'd like to introduce the team within the UN Human Rights Office that works on these issues of digital technologies and human rights.
On the video, you'll see Scott Campbell, who is the lead for this team, and with me at the table here I have, on my left, Marcelo Daher, a human rights officer working on these issues, particularly within our civic space unit, so focusing on issues of human rights defenders and others.
Marcelo also used to work for the Special Rapporteur on freedom of expression, David Kaye.
And on my right I have Tim Engelhardt, who is our specialist within our rule of law section, focusing on digital issues, particularly the right to privacy and other knotty concerns that we might not get into as much today, but we're happy to do so.
So starting off then, we want to emphasise that we have the same rights online as offline.
But when you look at the online landscape, you see a digital world that is unwelcoming and frequently unsafe for people trying to exercise their rights, and you also see a host of government and company responses that risk making the situation worse.
Recent developments in countries including, for example, among many others, India, Nigeria, the UK, the US and Vietnam, have spotlighted these issues and will be influential in how our online space evolves.
These challenges are not new: offline speech, too, has posed and continues to pose these problems.
What is new is the reach and speed of this digital public square.
Discussions on how to address lawful but awful speech online tend to devolve into finger pointing between States and companies with political and economic interests often eclipsing public interests.
We have one overarching message we'd like to bring to this debate, and that is the critical importance of adopting human rights-based approaches to confronting these challenges.
It is of course, the only internationally agreed framework that allows us to do that effectively.
We need to sound a loud and persistent alarm, given the tendency for flawed regulations to be cloned and bad practices to flourish.
There are many examples of problematic legislation on online content.
To give you an idea, about 40 new laws relating to social media have been adopted worldwide in just the past two years, and another 30 are under consideration.
Virtually every country that has adopted laws related to online content has jeopardised human rights in doing so.
This happens both because governments respond to public pressure by rushing in with simple solutions for complex problems, and also because some governments see this legislation as a way to limit speech they dislike and even to silence civil society or other critics.
Let's take Vietnam's 2019 law on cybersecurity as an example.
It prohibited conduct that includes "distorting history, denying revolutionary achievements" and "providing false information, causing confusion among citizens, causing harm to socio-economic activities."
That legislation has been used to force the deletion of posts and, along with the Penal Code, to arrest and detain many of those voicing critical opinions.
Facebook initially challenged government orders to take down content, but reportedly has now agreed to restrict significantly more content, apparently as a condition for continuing to do business in Vietnam.
In June, Vietnam adopted an additional new social media code that prohibits posts that, for example, "affect the interests of the state."
Laws in Australia, Bangladesh, Singapore and many other locations include overbroad and ill-defined language of this sort, and the list keeps growing.
The United Kingdom in May tabled its draft Online Safety Bill, which has a worryingly overbroad standard that makes the removal of significant amounts of protected speech, i.e. speech that under international law should in fact be permitted, almost inevitable.
Now, in the wake of the abhorrent abuse of black English football players earlier this week, there are demands to get that legislation into place more quickly, as if the bill could somehow have protected the players from the racism they faced.
These laws in general suffer from many of the same problems; let me point to five: poor definitions of what constitutes unlawful or harmful content; outsourcing of regulatory functions to companies; overemphasis on content takedowns and the imposition of unrealistic timeframes; powers granted to state officials to remove content without judicial oversight; and over-reliance on artificial intelligence and algorithms.
There is a better way.
We can and should make the Internet a safer place, but it doesn't need to be at the expense of fundamental rights.
I'll now turn to my colleague Marcelo Daher.
We have outlined five actions that could make a big difference.
First, focus on process, not content. Look at how content is being amplified or restricted, and ensure actual people, not algorithms, review content decisions.
Second, ensure content-based restrictions are based on laws, are clear and narrowly tailored, and are necessary, proportionate and non-discriminatory.
Third, be transparent. Companies should be transparent about how they curate and moderate content and how they share information with others. States also should be transparent about their requests to take down content or to access user data.
Fourth, make sure users have effective opportunities to appeal against decisions they consider to be unfair, and make good remedies available for when actions by companies or states undermine their rights. Independent courts should have the final say over the lawfulness of content.
Fifth and final point: make sure civil society and experts are involved in designing and evaluating all regulations. Participation is essential.
The EU is currently considering what promises to be a landmark law in this space, its Digital Services Act, and the choices being made in that legislation could have ripple effects worldwide.
The draft has some promising elements: it is couched in human rights language and it has been developed through a participatory process. But some contradictory signals remain, including the risk that overbroad obligations to restrict content will be imposed on companies and that there will be limited judicial oversight.
We're also concerned about other approaches to regulating companies, including requirements for legal representation mixed with threats of criminal liability, data storage and access requirements, and taxation.
India has faced serious incidents of incitement to violence online, clearly a factor in recent attempts to regulate the online space.
In February, India unveiled its new Intermediary Guidelines and Digital Media Ethics Code. This new law introduced some useful obligations for companies relating to transparency and redress, but a number of provisions raised significant concerns, including those allowing authorities to request quick takedowns, obliging platforms to identify the originators of messages, and stipulating that companies must appoint local representatives whose potential liability could threaten their ability to protect users and even to operate.
The threat of limiting protected speech and privacy has already surfaced there, including in legal disputes with both Twitter and WhatsApp in the past month, which are now before the Indian courts.
A number of other countries have introduced, or are considering introducing, extensive requirements for platforms to operate, including
amendments adopted last year in Turkey, a ministerial regulation in Indonesia, and a recent proposal in the Mexican Senate, the impact of which varies significantly based on the political and legal context in which they are being enforced.
Of course, the ultimate tool to control online speech is the Internet shutdown, including the blocking of specific apps and the partial or complete shutdown of Internet access.
Access Now's #KeepItOn campaign has documented 155 shutdowns in some 30 countries during 2020, and 50 shutdowns already this year up to the end of May.
The High Commissioner has expressed specific concerns about shutdowns in Belarus.
In June, the Nigerian government announced the indefinite suspension of Twitter after the platform deleted a post from President Buhari's account that Twitter said violated its policy on abusive behaviour. Within hours, the country's major telecommunications companies blocked millions from accessing Twitter, and Nigerian authorities later threatened to prosecute Nigerians who violated the ban.
The ECOWAS Court of Justice has recently, reportedly, ordered Nigeria to refrain from such prosecutions pending the outcome of a case filed against the Twitter ban by Nigerian civil society organisations.
Restrictions do not always violate human rights, but any limits need to be necessary, proportionate and non-discriminatory.
To be proportionate, restrictions should be the least intrusive among the available options, and shutdowns far exceed any need. A shutdown doesn't just interfere with speech: people rely on the Internet for their jobs, their health and their education. The impact of shutdowns on elections, when an open and safe space for public debate and public protests is vital, is particularly serious.
In the face of these challenges, social media companies have become something of a punching bag for everything that goes wrong online.
They are harshly criticised for failing to take down harmful content, and often face equally severe abuse when they actually do. Much of this criticism is deserved: the companies open themselves up to such complaints through their ill-defined policies.
The Facebook Oversight Board has engendered a great deal of scepticism, but it has succeeded in one important way: its cases have shone a light on previously unknown or unexplained practices that have serious consequences for fundamental rights online. Nowhere was this more true than in its review of Facebook's decision to subject President Trump to an indefinite suspension of his account.
The board's decision led Facebook to admit that it had not applied its political figures exception to President Trump's account, to explain the cross-check process we had not known existed, and to provide further information on its practices.
Another huge gap relates to the opaque and insufficient avenues people have to challenge content decisions that affect them. For example, last May, when conflict began in Israel and Palestine, critics alleged that Palestinian voices were disproportionately affected by social media company actions. Facebook acknowledged problems arising in its automated moderation systems and described as an error Instagram's blocking of hashtags referring to the Al-Aqsa Mosque, because Al-Aqsa is also the name of an organisation sanctioned by the US government.
Back to Peggy. The UN Guiding Principles on Business and Human Rights stipulate that all companies have a responsibility to respect human rights.
For tech companies, this means that they should take steps to ensure that they are not contributing to or linked to human rights abuses committed by those who use their products and services.
The actions companies take should be proportionate to the severity of the risk.
Their options include a range of measures, not just takedowns, but flagging content, limiting amplification and attaching warning labels.
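To make the idea of graduated, risk-proportionate measures concrete, here is a minimal illustrative sketch in Python. Everything in it, the names, the thresholds, the single severity score, is a hypothetical simplification, not a description of any company's actual system.

```python
from enum import Enum

class Action(Enum):
    NO_ACTION = "leave content up"
    WARNING_LABEL = "attach a warning label"
    LIMIT_AMPLIFICATION = "exclude from algorithmic amplification"
    HUMAN_REVIEW = "queue for human review"
    TAKEDOWN = "remove content"

def proportionate_response(severity: float, classifier_confidence: float) -> Action:
    """Pick the least intrusive measure adequate to the assessed risk.

    Hypothetical thresholds for illustration only: the point is that
    takedown is the last resort, not the default response.
    """
    if severity < 0.2:
        return Action.NO_ACTION
    if severity < 0.5:
        return Action.WARNING_LABEL
    if severity < 0.8:
        return Action.LIMIT_AMPLIFICATION
    # Severe cases: act automatically only when the classifier is very
    # confident; otherwise defer the decision to a person.
    return Action.TAKEDOWN if classifier_confidence > 0.95 else Action.HUMAN_REVIEW

print(proportionate_response(severity=0.6, classifier_confidence=0.7).value)
```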
Companies need to do much more to be transparent and actively share information about their actions and company policies and processes.
Social media companies also need to grapple with how they address content moderation issues globally.
Context is essential to understanding the potential of speech to incite violence.
For example, while Facebook's experience in Myanmar led to greater investment in what it calls other "at-risk countries", social media companies' capacity to understand language, politics and society in many locations around the globe remains limited.
The UN Guiding Principles on Business and Human Rights also make clear that governments have a duty to protect against human rights abuses by companies.
Given the propensity for online harm, states need to put in place effective guardrails for company actions, including by requiring greater transparency and accountability.
But the dangers of states overstepping the mark are painfully clear.
As countless people currently detained for online posts that contain protected speech make clear, state regulation, if done precipitously, sloppily or with ill intent, can easily consolidate undemocratic and discriminatory approaches that limit free speech, suppress dissent and undermine a variety of other rights.
We face competing visions for our privacy, our expression, our lives, spurred on by competing economies and competing businesses. Companies and states alike have all agreed to respect human rights; let's start holding them to that.
That is the end of our introductory statement, and we are now open to questions.
I will be watching online for raised hands, so please let me know.
I just wanted to ask: you were talking about this relating to businesses and states, but how does this apply to bigger organisations like the UN, for instance, in terms of access to information? Because I think journalists sometimes have complaints that the system that operates does not always give them full and free access to the information they need, in press conferences and suchlike instances, when they cannot easily question and access the information that's needed.
Thanks, Peter.
We are looking very closely, and my colleague Scott Campbell may want to add on this, at the way that the UN implements human rights standards with regard to new technologies.
The Secretary-General in his Roadmap for Digital Cooperation specifically looked at how the UN itself does human rights due diligence on how it deploys new technologies with regard to access to information and content online.
Obviously the UN isn't itself a social media provider nor a state, and doesn't fall within the specific regulatory framework or the UN Guiding Principles on Business and Human Rights. But the idea that the UN needs to be careful in the way that it uses these spaces, that it needs to provide free access to relevant information and avoid mis- and disinformation, is obviously part and parcel of what the UN is committed to doing.
Scott, anything you want to add on that front?
Thanks, Peggy. Maybe just quickly: indeed, the Secretary-General has stressed, in a new strategy he put out last year, how to use data and how to leverage information so that we can be more effective at meeting the SDGs and promoting and protecting human rights around the world.
A challenge there, and a challenge in access to information, of course relates to issues of confidentiality and information that may be sensitive, and this is a very frequent concern in our work in the UN Human Rights Office. So, access to information, access to data: the Secretary-General obviously wants these to make us more effective as the UN, but at the same time there are those concerns about data protection and data privacy. And the Secretary-General, in parallel with his efforts to make sure that we're applying human rights due diligence in how we're using information and data, has also undertaken an effort to develop a policy specifically around data protection and data privacy, which is something my colleague Tim Engelhardt has been working on as well. Over.
Thanks, Scott.
I see Stephanie Nebehay has her hand raised as well. Stephanie I have met, but if people can make sure to introduce themselves when they ask their questions, please. Stephanie, over to you.
Thanks very much, Peggy. Stephanie Nebehay, Reuters. Two questions, if I may, because I think they're of general interest. One would be regarding former President Trump's lawsuits filed recently against Facebook and Google and, sorry, Twitter, alleging that they've tried to silence conservative viewpoints unlawfully. I realise this is under adjudication, but would you say that he has a case against such media companies, which have put in place, in the case of Twitter at least, an indefinite ban, or rather an indefinite suspension? And then perhaps I could come back with a second question after you address that, or would you rather have it first?
I'm fine with just taking that one first.
So thanks for the question. These are obviously really interesting issues. I'm not, in this instance, going to opine specifically on U.S. law, but there are two points I'd make in response to those lawsuits.
The first is that what we have seen in monitoring the space, and what I think has been fairly well documented, is that while there are many ways in which the companies' policies towards content moderation are subject to criticism or improvement, the idea that there is some sort of overall silencing of conservative voices that's disproportionate compared to other voices doesn't, I think, have a solid evidentiary base. What impact that will ultimately have on a legal suit is for the lawyers to say.
I would also mention, as you said, as it relates to the silencing of speech generally, the idea of either indefinite or permanent bans. Actually, it's interesting that Facebook did an indefinite suspension, and that got them into trouble; Twitter, on the other hand, imposed what is supposedly a permanent suspension or ban. What has happened, of course, in the Facebook case is that the Oversight Board said explicitly, first, that Facebook didn't follow its own rules, because they didn't have a provision for indefinite suspension. And then, looking very explicitly at the relevant human rights standards, referring, for example, to the International Covenant on Civil and Political Rights and the commentary thereto, and to the Rabat Plan of Action, the Board did a very detailed analysis of human rights law, which we were pleased to see. What it found is that the original decision to suspend was supportable, but that there needed to be greater clarity and that, again, the response has to be proportionate, as we've said in our introductory remarks. So one of the questions there, which I think will continue to arise, is whether a permanent or indefinite suspension can ever be determined to be fully proportionate. That's a question we'll see coming up again. Back to you, Stephanie, for your second question.
Thank you. My second question: you alluded to the racist epithets and behaviour that followed the Euro final at Wembley in the UK last Sunday, and to your concerns about that sort of behaviour on and off the pitch. Do you have anything to say? I gather the UK is considering legislation; can you expand a little bit on any concerns you have about that, please?
Stephanie, I'm glad you asked about the situation in the UK, because it's a good current example of where these issues have arisen.
I think first we need to be clear that you can't wave a magic wand and make racism on the Internet disappear.
A lot of attention has been and can and should be paid to how to make online space less threatening and damaging for people of African descent and for women and for many others.
But the reality is that what we see online is mirroring what people face day-to-day offline.
The High Commissioner just earlier this week released a groundbreaking report that looks at systemic racism.
It's important to note that that report arose in the context of George Floyd's murder and the concern over racism within law enforcement. But part of what the report makes clear is that you can't address systemic racism by only looking within law enforcement; you have to also look within society, look at the legacies that racism is built on, and address those issues.
So she's called for a transformative approach that looks across all of the ways in which racism is occurring. And that's of course what we'd want to see in the context of the racism experienced in sport in the UK and elsewhere.
At the same time, I think we also have to be clear about what we can expect companies to do.
Clearly there are ways to improve and they should be looking at them.
But the idea that they could uniformly and easily tackle it isn't so clear.
To block racist speech through algorithms, at least for some of the types of speech that we've seen in this case, you would need a really sweeping approach that would undoubtedly block more speech than we'd want to see blocked. And it's really important to note that some studies have shown that, in fact, when we use automated moderation tools like that, they end up censoring more heavily the exact people we want to protect: common hate speech detectors have been found to be 1.5 times more likely to flag content by black people as inappropriate than content posted by white people. So we need to be careful what we ask for in terms of solving this problem, and we need to address both the online environment and racism offline as well.
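As an editorial illustration of how researchers quantify the disparity just cited, the following Python sketch computes per-group flagging rates from a moderation log and their ratio. The data and field names are invented; the study referred to here worked from annotated corpora of real posts.

```python
from collections import defaultdict

# Invented example log of (author_group, was_flagged) pairs; a real audit
# would draw these records from an annotated corpus, not synthetic data.
records = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def flag_rates(records):
    """Share of posts flagged as inappropriate, per author group."""
    flagged, total = defaultdict(int), defaultdict(int)
    for group, was_flagged in records:
        total[group] += 1
        flagged[group] += was_flagged  # booleans count as 0/1
    return {group: flagged[group] / total[group] for group in total}

rates = flag_rates(records)
# A ratio well above 1.0 (like the 1.5x figure cited above) means the
# detector flags one group's posts disproportionately often.
print(rates, rates["group_a"] / rates["group_b"])
```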
So I'd stop there, but maybe we should say a few more words on the Online Safety Bill that the UK has introduced. Marcelo?
I think with the Online Safety Bill we have the same kind of concerns we had with previous legislation adopted in some European countries that has been replicated elsewhere. We have the problem of the vague definition of the harms to be covered by this law: there are still very broad categories of content, content that could be considered offensive, indecent or obscene, that could be targeted by this new piece of legislation.
There are also serious concerns about the immense powers it gives to the Office of Communications, Ofcom, in the preparation of codes of practice and in the implementation of these norms. This is a typical channel for abuse in other locations where rule of law guarantees are not there.
And even in the UK context, every time you create a channel that is too close to government authorities, you create risks.
There are also risks to privacy, of course, because the bill has lots of requirements on providing information that could undermine encryption, which is another aspect of concern that we see in other legislation around the world. So it has some of the typical features that we see in this sequence of norms that we have just described.
And I see a question from Nick; I'll come to you next. But just to add on to that briefly, sorry for the long answer: Marcelo referenced other European models. The one that we wanted to flag, of course, is the German NetzDG law, which is not only a major one, but one that has been replicated in dozens of countries in various forms.
One of the questions we're often asked is: if that law didn't have some of the dramatic consequences for protected speech that we feared, why do we need to be worried about this replication? And what we need to emphasise is that the replica laws often go far beyond what was in the German law in a variety of ways, both in terms of the types of speech affected, and also because the German law provides for fines for systemic failure rather than for specific posts. And importantly, and this is something that's coming up a lot in today's context, it's the company that's held liable under that law, not managers who are threatened with ending up in jail if they fail to take down a post.
So the bottom line is that the political, legal and economic context in which these laws are implemented really matters. A strong judiciary and mandated review can mitigate some of the effects of overbroad laws, but when we see them replicated in places without those guarantees, we see really disastrous consequences.
Nick, over to you for the next question.
I'm sorry, my Internet connection is so bad that I've missed chunks of what you've talked about, so I hope I'm not asking you to repeat anything. Are some of the mechanisms for protecting speech likely to affect the people you're trying to protect more than the people you're trying to protect them against? I just wonder if you could expand on how that works and where you find it most evident. And that leads me also to a question about where you see best-practice legislation here. You speak of the laws as raising particular issues and problems; is there a model code that anybody could point to as a good point of departure?
Thanks very much, Nick, and sorry about the sound quality. We will distribute the statement that we delivered, if that's helpful, later.
The first part of your question was about the fact that some of these measures that are designed to protect people can ultimately have a negative impact on the very people they're trying to protect. I don't have a specific case study that I can point to on that for you, but what I would say is that what we see is the real impact of these measures on oppositional speech, on civil society and on journalists. We see that happening from context to context, where the types of speech that are in many cases the most important for us to have public space for are the speech that's being affected, and those are the people that are often being detained and arrested.
The statistic I cited was from a study published by Vox that found that hate speech detectors were 1.5 times more likely to flag content by black people as inappropriate than content by white users. We can't take that one statistic too far, but the point is that we have to be very careful about using automated approaches, and we need human review of complex decisions. We can't expect that simply putting in place the right algorithm is going to get rid of everything we want to get rid of online, and any approach that tried to do that comprehensively would likely have really adverse impacts by sweeping in lots and lots of speech that really ought to be there, and needs to be there, for our societies to operate effectively.
The second part of your question is one we were hoping somebody would ask, because it's really important: we're quite critical of the existing models of legislation, so is there a model law that we would want to see replicated? In my notes was a call for exactly that. One of the things that we're really looking for is to get some of the states that are legislating right now to engage in a serious, thoughtful process: to bring in and consult with those that have expertise on these issues, to learn from the examples that we're citing here and many, many others, and to really make a better law. And then, once you've made the better law, really monitor it to see whether you've gotten it right and whether there have been mistakes.
The place where this is coming up right now, and it's a really important environment for this conversation, is within the European Union, which is considering a comprehensive Digital Services Act. We know that what the EU does in this space will be very influential; look at the impact that the GDPR on data protection, issued by the EU, had. So it will be influential both for good and for bad. And we do see some promising approaches, as we mentioned, in the Digital Services Act, referred to as the DSA, but there's still some work to be done, in our view, to come up with legislation that's the model we're looking for.
I'd point out a couple of issues in it that we still have. One is the use of trusted flaggers under the law: law enforcement agencies, for example, will be able themselves to remove content and bypass some of the procedural standards and judicial review that we think are really necessary. Also, we think this is an opportunity to look more carefully at micro-targeted advertising and its impacts on our societies, and we would encourage that to be picked up as well. And thirdly, we're looking at oversight rules and how oversight works within this type of legislation. In this instance, we really want to emphasise the need for oversight to be insulated from political and commercial influences, and so to really set in place an independent oversight mechanism.
I wanted to also emphasise, though, that within the EU context it's not just the DSA that is relevant to these discussions of freedom of expression online. There were new EU rules on removing terrorist content online that came into effect just on June 7th, and they give online platforms only one hour to act in removing or evaluating content. We're concerned that that short time limit will result in overbroad takedowns, and we've urged the European Commission to closely monitor the impact of those rules and revise them as needed. So that's an example of a place where the type of oversight I was mentioning would be really helpful.
I'm looking to see other questions and I don't see any off the top. Do any of my colleagues have things that I've missed that you'd like to add? Scott, anything from your end? Not seeing any. Tim, Marcelo?
Go ahead.
Just a short point. I think one clear point here is that clarity is essential when you regulate expression. With these overlapping norms and these different attempts from different sides, often done just as a reaction to different events, the system that accumulates is extremely confusing, and it's hard for the companies, hard for the users, and ultimately hard even for the courts to make up their minds about what is permitted, what's not, and how it's going to be implemented. So clarity is an aspect that we need to look at, and it's concerning any time a government takes action too fast and without consultation.
And I don't believe I see any other hands. Christophe Vogt? Sorry, my mic wasn't on. Christophe Vogt, over to you.
Christophe Vogt, Agence France-Presse. I just had a question: there seems to be a big tension between the speed of the Internet and the sheer size of the data that comes out, and the solution that you offer, which is, for complex problems, complex solutions: to have people looking into it and, second, to have independent courts deciding. This is all on a timespan that is completely different from what you have on the Internet, and meanwhile the message keeps spreading. These are very different speeds and very different possibilities to manage the data.
I'll take an initial stab, but I think my colleagues may want to come in on this. Christophe, sorry for not getting your name right the first time. I think it's really a valid and important question, and it's one where there isn't a slam-dunk answer that will solve the problems of the scale and speed required to do these things. We will always be trying to do things at scale, as quickly as needed, to protect rights while infringing on rights as minimally as possible; that's what the whole standards of necessity and proportionality are about.
We're not saying that every decision needs to be subjected to human review. What we're saying is that there are ways to bring human review into the process that can ensure that the algorithmic approaches are working effectively. And there are, of course, instances where things should be reviewed more than once. That's actually another thing that came out of the Facebook Oversight Board's review of the Trump case: it emerged that they do cross-checking on some cases. That gave rise to a lot of questions about how they decide which things to cross-check, but the idea that some sort of cross-checking is appropriate for certain types of content that could have enormous societal impact is something I think we can probably all agree on. So we need to find solutions that allow this to be done at scale and at speed.
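One way to picture the human review and cross-checking being described here: a minimal hypothetical sketch in Python, in which automated decisions on borderline or high-reach content are routed to people, with a second independent check for cases of potentially enormous societal impact. The thresholds and structure are assumptions for illustration, not any platform's documented process.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    audience_reach: int   # e.g. the author's follower count
    model_score: float    # automated classifier's violation score, 0..1

HIGH_IMPACT_REACH = 1_000_000  # hypothetical cut-off for "enormous impact"

def route(post: Post) -> str:
    """Decide who reviews a post before any action becomes final."""
    borderline = 0.4 < post.model_score < 0.9
    high_impact = post.audience_reach >= HIGH_IMPACT_REACH
    if not (borderline or high_impact):
        return "automated decision stands"
    if high_impact:
        # Cross-check: a second, independent human reviewer for content
        # that could have enormous societal impact.
        return "human review + independent cross-check"
    return "human review"

print(route(Post("example", audience_reach=5_000_000, model_score=0.5)))
```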
If we look at the German NetzDG law, I think that was one of the big questions, because it has a very quick timeline. And as I said, what we've seen is that while there have been some takedowns of protected speech, it has not had the huge ripple effect that it might have had. And it does, of course, involve a system for judicial review. So it's an example of something, there are varying opinions on this, that has worked in many of the ways the authors hoped it would, I think. And so we can learn from it, improve on it, and hopefully address those questions of scale and speed.
Would others like to come in, Tim?
Yes, thanks for the question. It's, of course, one of the core problems we face in this area, and as Peggy said, there is no simple answer; I think acknowledging that is really part of the solution. And that points to some important elements that regulation of any sort needs to ensure. We have decisions being made at mass scale, but we as the public know very little about how they are being made, what the impacts are, etcetera. So that points to the need for a massive increase in transparency about decision-making processes; only that can enable an appropriate level of accountability, so that people can actually go back to the companies and say: here was a problem, please fix it. That points to the need to have avenues for review, for remedies, etcetera, when mistakes are made. And that also points, again, to something we have referred to several times: the need to keep these processes as far removed as possible from undue political influence, hence the involvement of independent judicial bodies, and hence the need for oversight that is not linked to a ministry of some sort.
And that's actually something that we see happening all around the world now. Even the Online Safety Bill we mentioned before has one part where the Secretary of State can determine the strategic direction of Ofcom, which really collides with the idea of independence. And you see it in many other instruments; we have referred to Singapore, where various authorities can just go to Twitter and say: please take this down.
So there's not one single solution; we need a whole set. I don't think this will be easy, and I don't think it will be free of mistakes. But what we need to keep in mind is that there will certainly not be a solution if we abandon the fundamental principles we have built our societies on, and that is rule of law, democracy and human rights.
Thanks, Tim. I don't see any hands, but, Nick Cumming-Bruce, I see that your hand is up. I'm not sure if that's still up from before, or if you have another question.
I have another question, yeah, if I may.
Simply to ask whether we're reaching the point where we could use another mandate holder to essentially address these particular issues. Given the rapid expansion of issues and the extraordinary scope of the problems, at the moment it sort of seems to fall on the head of the freedom of expression mandate. But we're talking about issues that go beyond freedom of expression and that raise fairly complex issues. So I just wonder whether you think we've reached the point where we need another mandate, or some other form of ombudsman or neutral arbiter, to provide a point of reference for civil society and states.
So Nick's question was about whether we need another mandate, going beyond the Special Rapporteur on freedom of expression, and potentially more in place to look at these issues and provide guidance on them. This is a question that, you won't be surprised to hear, we've talked about a lot within our office.
I want to emphasise the work that the special procedures and mechanisms have done on these issues. The freedom of expression mandate has been exceptional in terms of the work done both by David Kaye and now by Irene Khan, but it is the case that across the mandates there's an enormous body of work relating to these issues of digital technologies and human rights. Obviously, the Special Rapporteur on counter-terrorism was, with David Kaye, writing to the Australian authorities on their Abhorrent Violent Material Act that was passed in the wake of Christchurch, a situation where in fact they wrote to request information on the law that had just been introduced, and by the time they were able to make their letter public, the law had already been adopted. That's an example of the sort of quick approach to legislating that I mentioned earlier.
But it's not just expression, freedom of association and the political and civil rights; there has also been really good reporting on these issues from the mandate holders on older persons, on health, on violence against women. So it's across the spectrum of the mandates that exist within the Human Rights Council. But I do think that it's not as accessible, and not pulled together, in the ways that might make it most useful to States and others that want to look at the space.
We've made one step in that direction, so it's my chance to advertise the digital hub that's been created, which pulls together all of those special procedures reports that relate to this. I'm going to let Scott give you the website address, because I don't have it in front of me, but that's one way that we're trying to address the gap that you mentioned.
But I wanted to mention, coming back to your earlier question about models, that we've also talked about the need for review of legislation and for more support and ability to advise governments as they're making these tough decisions. We do think that more space to do that, not just through a special rapporteur approach but through advisory opinions and engagement with states around these issues, could be really useful. The model we sometimes mention is that of the Venice Commission within the Council of Europe, which advises on rule of law and other issues; some sort of mechanism of that kind is something we've said might be useful. And obviously there's the new office of the Tech Envoy set up by the Secretary-General, a role that will obviously have some part to play in this space and will, I'm sure, be advising and supporting on this.
I should also mention that, as an office, we are working very diligently on how companies address these challenges. We have a big project called the B-Tech project that works with companies on how they apply the UN Guiding Principles on Business and Human Rights in the tech space. Because what we hear, of course, is that the Guiding Principles may make sense in a traditional supply chain or in extractive industries, but, the same sort of question that Christophe raised, how do they work given the scale and the speed of the digital world? So we're really trying to unpack those issues and to be very clear about what companies' responsibilities are under the Guiding Principles to do human rights due diligence to avoid the types of impacts that we're talking about. And that guidance even looks, for example, at business models: the extent to which companies need to look at whether their business models actually feed some of the offline harms that we're talking about here.
Scott, over to you for any additional comments and the link to the website.
I just put the link in the chat to UNTV; hopefully that's accessible to everyone, and if not, UNTV can pass it on.
I mean, I think you covered it, Peggy. I would just add that it's a first step in trying to pull together the really enormous and rapidly growing amount of information, guidance, recommendations and reports, not only from the special procedures and the different mandate holders, but also from the treaty bodies and from our own office. So there is a clearinghouse, if you will, of information out there. It's a first step, and I think there are still a number of gaps in what's been worked on by the special procedures. That's something we're looking at: where are those gaps, and where has guidance yet to be developed? What we're seeing initially is that there are a lot of gaps around economic and social rights. And the other area where I think there's room for improvement is coordination: perhaps, rather than adding an additional mandate holder, as so many of the mandate holders are already involved, it's a question of improving some of the communication and coordination going on between the different mandates that are working on the impact of digital tech on the respective rights issues that they're working on. Over.
Thanks, Scott.
And I should just add: I referenced the work by the special procedures, but it's all the human rights mechanisms; the treaty bodies have done enormous work in these areas as well, and we try to reference that in the hub too.
I don't see any more questions at this stage, so unless my colleagues tell me differently, we'll bring it to a close. We thank you all for your interest. As I said, we focused today on freedom of expression online, but there are lots of other issues in this space that we're looking at, and we would be very glad to do this sort of briefing in the future on other issues in the digital space, looking at things like the impacts of artificial intelligence on social and economic rights, facial recognition and its impact on policing, and the many other issues that keep us occupied quite a bit of the time.
Thanks very much for joining us.