29th October, 2020
Social Media Platforms’ Censorship in the Future
Social media companies merely provide a platform and are not legally liable for user-generated content; they can also remove any post at their discretion under their own policies. (A traditional information provider, such as a broadcaster or newspaper company, by contrast, is held legally liable for the material it publishes.)
A public hearing was held before the US Senate yesterday (28 October), at which the CEOs of three tech giants (Google, Facebook and Twitter) testified on Section 230 of the Communications Decency Act. For the background, please see the previous blog post.
The online hearing reportedly featured heated exchanges between senators and the heads of the three platforms (details can be found in various news reports). The major testimonies were as follows (all times EST):
*Since we only have a Japanese translation of these testimonies from the source below, the relevant English excerpts are reproduced at the bottom of this post.
“Without Section 230, I couldn’t have started this business.” (Zuckerberg) 12:45 *1
“Repeal Section 230!” (President Trump, tweeted during the hearing) 12:33
“This is nonsense and bullying of the private sector for electoral purposes.” (Democratic senator) 11:34 *2
“We are not the ref.” (the three tech CEOs) 11:09 *3
“(The United States adopted Section 230 early in the internet’s history,) and it has been foundational to US leadership in the tech sector.” (Google’s CEO, Pichai) 10:25 *4
“The time has come for that free pass to end.” (Chairman Wicker, Republican) 10:00 *5
*Sources:
(1) Nihon Keizai Shimbun (Nikkei), “米SNS公聴会詳報 「セクション230」巡り激しい応酬”, 29 October 2020, 3:04 a.m.
(2) “Tech CEOs Senate Testimony Transcript October 28”, 28 October 2020
The point I want to emphasize in this post is that censorship is carried out by human beings.
Whatever explanations each CEO may offer, the final decision-making authority in censorship rests with their executives, and the actual approval work (on or off) is performed by on-site staff. It is hard to believe that those decisions never reflect the staff's own ethics (which, in the Hunter Biden case, may amount to their politics).
An employee commutes to the office (or perhaps works from home under the COVID-19 pandemic), sits at a desk, and reviews posts. During breaks, he or she may chat with colleagues, in person or online: “I approved this kind of content today, but rejected that one,” then wraps up at closing time and heads home.
The next day brings the same work, and so a week goes by.
Sometimes a reviewer will be swayed by a colleague's opinion; sometimes a decision will simply follow the prevailing mood inside the company. It is unrealistic to expect anyone to judge like a saint on every item of a daily routine, and these calls are rarely all-or-nothing in the first place. Even with an in-house censorship policy, the outcomes will inevitably vary from person to person and from occasion to occasion.
To draw on our own experience, not with social media censorship but as the party being reviewed in online ad screening: the very same ad is sometimes approved by one reviewer and rejected by another, without a single word of the text changed.
Such cases are a small fraction of the total number of reviews, but the puzzlement of “wait, why was this rejected?” makes them memorable, so when it happens again it feels all too familiar. LOL.
In my opinion, the liability shield under Section 230 of the Communications Decency Act should be weakened, even though AI will certainly make censorship more accurate in the future. Tech platforms and their users should come to bear liability to some extent; or rather, such a scheme needs to be deliberately established.
As one idea, a new regulation could work as follows: social media platforms remain exempt from liability for any post that simply redistributes a TV or newspaper report without modification; but if any wording is added to or changed in the article being shared, the sweeping immunity is removed and both the platform and the poster bear a degree of liability.
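The rule proposed above can be sketched as a toy decision function. This is purely an illustration of the logic, not a legal or production system; the function names and the whitespace normalization are my own assumptions, not anything discussed at the hearing.

```python
# Toy sketch of the proposed immunity rule: a post that reproduces the
# original article verbatim keeps the platform's liability shield; any
# textual alteration removes it.

def normalize(text: str) -> str:
    """Collapse runs of whitespace so formatting differences alone don't matter."""
    return " ".join(text.split())

def immunity_applies(original_article: str, shared_post: str) -> bool:
    """Return True if the shared post is an unmodified copy of the article."""
    return normalize(shared_post) == normalize(original_article)

article = "The committee approved the subpoenas unanimously."
print(immunity_applies(article, article))                    # True: unmodified share
print(immunity_applies(article, article + " Total fraud!"))  # False: edited share
```

Under this sketch, an unmodified share keeps the exemption, while adding even a single word of commentary to the shared article would shift liability onto the platform and the poster.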
Nor should users assume that online anonymity entitles them to post anything, including hate speech and disinformation. It should be mandatory to implement mechanisms that can identify users (such as mobile phone number verification); in addition, the hurdle platforms face in disclosing user information should be lowered in proportion to the severity of the problem that has occurred, and the relevant law should be amended so that users themselves are punished accordingly.
However far digitalization advances worldwide and however accurate AI becomes, the final judgment about what is good and bad will rest with human beings. Whether that “final judgment” amounts to 10%, 5%, or just 1% depends on AI's capacity, which tech companies around the world are currently striving to raise from, say, 90% toward 100%. But beyond hate speech and disinformation, these problems can be life-threatening, extending to crime and suicide, so I believe the relevant law should be amended promptly to regulate platforms more strictly, without waiting for AI to catch up.
(For your information)
*1
Mark Zuckerberg: (02:44:39)
Sure, Senator. I do think that if, when we were getting started with building Facebook, if we were subject to a larger number of content lawsuits, because 230 didn’t exist, that would have likely made it prohibitive for me as a college student in a dorm room to get started with this enterprise.
*2
Senator Brian Schatz: (01:33:48)
What we are seeing today is an attempt to bully the CEOs of private companies into carrying out a hit job on a presidential candidate, by making sure that they push out foreign and domestic misinformation meant to influence the election. To our witnesses today, you and other tech leaders need to stand up to this immoral behavior. The truth is, that because some of my colleagues accuse you, your companies, and your employees of being biased or Liberal, you have institutionally bent over backwards and overcompensated.
Senator Brian Schatz: (01:35:09)
So, for the first time in my eight years in the United States Senate, I’m not going to use my time to ask any questions because this is nonsense and it’s not going to work this time. This play my colleagues are running did not start today, and it’s not just happening here in the Senate. It is a coordinated effort by Republicans across the government. Last May, President Trump issued an executive order designed to narrow the protections of Section 230 to discourage platforms from engaging in content moderation on their own sites. After it was issued, President Trump started tweeting that Section 230 should be repealed as if he understands Section 230.
*3
Senator Thune: (01:08:09)
My Democrat colleagues suggest that when we criticize the bias against conservatives that we’re somehow working the refs, but the analogy of working the refs assumes that it’s legitimate even to think of you as refs. It assumes that you three Silicon Valley CEOs get to decide what political speech gets amplified or suppressed, and it assumes that you’re the arbiters of truth, or at the very least the publishers making editorial decisions about speech. So yes or no, I would ask this of each of the three of you, are the Democrats correct that you all are the legitimate referees over our political speech. Mr. Zuckerberg, are you the ref?
Mark Zuckerberg: (01:09:44)
Senator, I certainly think not and I do not want us to have that role.
Senator Thune: (01:09:50)
Mr. Dorsey, are you the ref?
Jack Dorsey: (01:09:57)
No.
Senator Thune: (01:09:58)
Mr. Pichai, are you the ref?
Sundar Pichai: (01:10:03)
Senator, I do think we make content moderation decisions, but we are transparent about it and we do it to protect users, but we really believe and support maximizing freedom of expression.
*4
Sundar Pichai: (25:33)
We recognize that people come to our services with a broad spectrum of perspectives, and we are dedicated to building products that are helpful to users of all backgrounds and viewpoints. Let me be clear, we approach our work without political bias full stop. To do otherwise would be contrary to both our business interests and our mission, which compels us to make information accessible to every type of person, no matter where they live or what they believe. Of course, our ability to provide access to a wide range of information is only possible because of existing legal frameworks, like Section 230. The United States adopted Section 230 early in the internet’s history, and it has been foundational to US leadership in the tech sector. It protects the freedom to create and share content, while supporting the ability of platforms and services of all sizes to responsibly address harmful content.
*5
Chairman Wicker: (01:07)
The time has come for that free pass to end. After 24 years of Section 230 being the law of the land much has changed. The internet is no longer an emerging technology. The companies before us today are no longer scrappy startups, operating out of a garage or a dorm room. They are now among the world’s largest corporations, wielding immense power in our economy, culture and public discourse. Immense power. The applications they have created are connecting the world in unprecedented ways, far beyond what lawmakers could have imagined three decades ago. These companies are controlling the overwhelming flow of news and information that the public can share and access. One noteworthy example occurred just two weeks ago, after our subpoenas were unanimously approved. The New York Post, the country’s fourth largest newspaper ran a story revealing communications between Hunter Biden and a Ukrainian official.