On Monday, YouTube published its first Community Guidelines enforcement report, a transparency update on the removal of content that violates its policies, which prohibit, for example, “pornography, incitement to violence, harassment, or hate speech”.
The report will be updated by the end of this year to include data on comments, speed of removal, and policy removal reasons.
Machine Learning to Address Violative Content
Machine learning (ML) algorithms help flag content for review and speed up removal in categories such as violent extremism and spam. According to the stats Google revealed, almost 6.7 million videos were first flagged by machines, with 76 percent removed before anyone even watched them.
The company also revealed that it removed almost 8.3 million videos from YouTube between October and December 2017. The majority of these videos were spam or adult content, and they represent a fraction of a percent of YouTube’s total views during this period.
You can see the chart below showing videos flagged for violent extremism since June 2017, when the ML algorithm was first introduced:
We removed over 8 million videos from YouTube during these months. The majority of these 8 million videos were mostly spam or people attempting to upload adult content – and represent a fraction of a percent of YouTube’s total views during this time period.
6.7 million were first flagged for review by machines rather than humans.
Of those 6.7 million videos, 76 percent were removed before they received a single view.
That’s not all: the systems also rely on human review to assess content that violates policy, and by the end of 2018 YouTube will have 10,000 people working to address violative content, including full-time specialists with expertise in violent extremism, counterterrorism, and human rights.
According to the report, human flaggers have flagged over 9.3 million videos for sexual, spam-related, or hateful and violent content. Almost 95 percent of this flagging was done by ordinary users, with much of the remainder coming from a group YouTube calls Trusted Flaggers.
To learn more about the human flagging and review process, watch the video below:
In addition, Google introduced a new “Reporting History” dashboard that lets users see the review status of videos they have flagged for violating the Community Guidelines. The dashboard is already available and also shows videos that were age-restricted rather than taken down entirely after review.