
Tech firms must do more on extremism: World Economic Forum


  • Facebook, Twitter and Alphabet's Google will go under the microscope of U.S. lawmakers on Tuesday and Wednesday
  • General counsels will testify before three U.S. congressional committees on alleged Russian interference in the 2016 U.S. presidential election

By Reuters

Published: 00:05 EDT, 30 October 2017 | Updated: 11:24 EDT, 30 October 2017

U.S. tech firms such as Facebook and Twitter should be more aggressive in tackling extremism and political misinformation if they want to avoid government action, a report from the World Economic Forum said on Monday.

The study from the Swiss nonprofit organization adds to a chorus of calls for Silicon Valley to stem the spread of violent material from Islamic State militants and the use of their services by alleged Russian propagandists.

Facebook, Twitter and Alphabet's Google will go under the microscope of U.S. lawmakers on Tuesday and Wednesday when their general counsels will testify before three U.S. congressional committees on alleged Russian interference in the 2016 U.S. presidential election.


The report from the World Economic Forum's human rights council warns that tech companies risk government regulation that would limit freedom of speech unless they 'assume a more active self-governance role.'

It recommends that the companies conduct more thorough internal reviews of how their services can be misused and that they put in place more human oversight of content.


The German parliament in June approved a plan to fine social media networks up to 50 million euros if they fail to remove hateful postings promptly, a law that Monday's study said could potentially lead to the takedown of massive amounts of content.

ROBOT MODERATORS

Google has been using artificial intelligence moderators to monitor illegal videos on YouTube over the past month, and in that time it has more than doubled the number of offensive videos removed from the platform.

Google said: 'With over 400 hours of content uploaded to YouTube every minute, finding and taking action on violent extremist content poses a significant challenge.

Google has been using artificial intelligence moderators to monitor illegal videos on YouTube over the last month (stock image)

'But over the past month, our initial use of machine learning has more than doubled both the number of videos we've removed for violent extremism, as well as the rate at which we've taken this kind of content down.'

The robot moderators are now more accurate than Google's human moderators at flagging offensive content, according to the company.

The firm wrote: 'While these tools aren't perfect, and aren't right for every setting, in many cases our systems have proven more accurate than humans at flagging videos that need to be removed.'
