
YouTube & Facebook are struggling to keep billions under control

YouTube, Facebook and many other media platforms face the same problem: a lot of undesirable material is distributed through their channels. That has always been the case, but recently it has become a genuine threat, as extremists of all sorts have begun to use these channels to spread propaganda and violence-glorifying content. With new privacy laws being passed and advertising sponsors applying pressure, the media giants must either find better ways to handle the deluge of user posts or risk hefty fines. Artificial intelligence (AI) has been touted as the silver bullet, but are algorithms really the solution?

To give you an idea of the scale I’m referring to: 500 hours of video content are uploaded to YouTube every single minute – and counting. Reviewing and, where necessary, deleting it all would require hundreds of thousands of workers. It would be a golden opportunity to become a major employer – Google certainly has the funds. Instead of a measly 80,000 employees worldwide, 2,500,000 additional jobs could be created to give a few of those billions back to society. Naturally, that’s out of the question: profits would decline and shareholders would surely threaten self-immolation. That’s why Google is leaving the issue to technology.

Here’s the plan: human workers have flagged 2 million videos for deletion, adding markers that specify the reason in each case. Self-learning machines analyse that data, scanning both the audio and video tracks to learn to recognise people and objects in context. Even text overlays and political or religious symbols are detected. The objective: to find and remove violence-glorifying content, terrorist propaganda, hate speech, spam and, naturally, nudity.
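To make that a little more concrete, here is a minimal, purely illustrative sketch in Python of such a flagging pipeline. YouTube’s actual system is not public, so every name, feature and rule below is an invented stand-in for what a learned classifier would infer from the audio track, the video frames and any text overlays.

```python
# Purely illustrative sketch -- YouTube's real pipeline is not public.
# Hypothetical stand-in: human reviewers attach "markers" (reasons) to
# flagged videos, and a classifier predicts those markers from simple
# features extracted from the audio track, the frames and text overlays.

from dataclasses import dataclass

FLAG_CATEGORIES = ["violence", "terror_propaganda", "hate_speech", "spam", "nudity"]

@dataclass
class VideoFeatures:
    transcript_keywords: set      # e.g. words from speech-to-text on the audio track
    detected_symbols: set         # e.g. political/religious symbols spotted in frames
    overlay_text: str = ""        # text overlays recognised by OCR
    skin_pixel_ratio: float = 0.0 # crude proxy used here for "nudity"

def predict_flags(video: VideoFeatures) -> list:
    """Naive rule-based stand-in for the learned classifier."""
    flags = []
    if video.detected_symbols & {"swastika", "militia_banner"}:
        flags.append("terror_propaganda")
    if video.transcript_keywords & {"kill", "exterminate"}:
        flags.append("violence")
    if "buy now" in video.overlay_text.lower():
        flags.append("spam")
    if video.skin_pixel_ratio > 0.6:
        flags.append("nudity")
    return flags

# A war-crime documentary as the classifier might "see" it:
sample = VideoFeatures(
    transcript_keywords={"history", "kill"},
    detected_symbols={"swastika"},
    overlay_text="Documentary: the crimes of 1942",
    skin_pixel_ratio=0.1,
)
print(predict_flags(sample))  # ['terror_propaganda', 'violence'] -- the educational intent is invisible
```

The toy example already hints at the problem discussed further down: a documentary about war crimes triggers the same signals as the propaganda it documents, and intent is nowhere in the features.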

Today, artificial intelligence has already replaced much of the human workforce.

The algorithms are continuously refined with each iteration: which videos show a bombing, a swastika or an uncovered female breast? Censors have always been quite swift when it came to pornography, but other illegal content is now slowly coming into focus as well. Affected videos are marked and later wiped from the portal. Of over 8 million recently deleted videos, a whopping 6.6 million were identified by AI, while human workers and user feedback accounted for the rest. Many of those videos had not even become publicly viewable yet. But while the video portal is celebrating, the devil is in the detail.

Lately, problem cases have been piling up, because the technology doesn’t always behave as intended. War-crime documentaries made for educational purposes were erroneously deleted, and so were historical films. The algorithms detected depictions of inhuman practices but failed to grasp the intention behind them. Such are the limits of AI to this day: it can spot questionable content, but it can’t (yet) decipher the rationale behind it. The same applies to nudity: nude paintings, as common in the fine arts, also met with disapproval from the virtual jury and were likewise deleted. After all, how are algorithms supposed to tell the difference between artful nudity and obscene home videos? It seems the system can’t do without common (human) sense just yet.

Which of the countless online videos contain illegal content?

Satire is also beyond a machine’s understanding. While many of us can laugh at Monty Python’s Nazi jokes, computers are entirely devoid of a sense of humour: the closer a joke sticks to the “original”, the quicker it faces auto-deletion. That’s why many users see signs of a digital inquisition on the horizon. They welcome YouTube’s effort to stop being a cesspool for extremist, hateful or confused minds, but they criticise the AI’s shotgun approach. Today, investigative journalists, researchers and organisations that document war and other crimes face permanent suspension of their channels. Even G-rated garden-party videos are deleted because the AI misinterprets bare skin. By contrast, videos uploaded by paedophiles stay up, because those people know how to exploit the AI’s weaknesses through subtlety. No algorithm can (yet) decipher the many possible shades of a topic. Google has even penalised my own site by withholding AdSense adverts because I document crimes here: www.crimefiles.net

It seems human workers will remain indispensable for some time to come to weigh up those shades, and YouTube will have to comply with some form of binding standard to stay relevant. The platform will also have to become more open and transparent: at present, users receive no explanation as to why their videos were blocked. YouTube has vowed to respond faster to questions and to provide insights into how its guidelines are applied. That should be a given but, in the case of YouTube, it actually counts as progress. They’ve also recruited additional staff, if only in dribs and drabs. Apparently YouTube itself doesn’t trust its AI too much, and that is at least comforting.

What we would like to know: do you believe artificial intelligence should be adopted here, or is common (human) sense still necessary?

www.spy-drones.com

Henry Sapiecha