
September 4, 2018

Google Deploys AI To Identify Child Sexual Abuse Content

Taking a step forward in the fight against the online sexual exploitation of children, Google has announced that it is employing cutting-edge artificial intelligence (AI) to combat child sexual abuse material (CSAM) online.

At present, tools like Microsoft’s PhotoDNA can help flag material on internet platforms, but only if that material has already been identified and marked as abusive.
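To illustrate that limitation, here is a minimal sketch of how hash-matching tools of this kind work in principle: an image’s fingerprint is compared against a list of fingerprints of previously identified material, so anything never seen before passes through unflagged. The hashing function and list below are illustrative placeholders, not PhotoDNA’s actual algorithm.

```python
import hashlib

# Illustrative placeholder: systems such as PhotoDNA use a perceptual hash
# that tolerates resizing and re-encoding; a cryptographic hash is used here
# only to keep the sketch self-contained.
def fingerprint(image_bytes: bytes) -> str:
    return hashlib.sha256(image_bytes).hexdigest()

# Hashes of material that has already been reviewed and marked as abusive
# (hypothetical example values).
KNOWN_ABUSIVE_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def is_known_abusive(image_bytes: bytes) -> bool:
    # Only previously identified images can match; new, never-before-seen
    # material is invisible to this kind of check.
    return fingerprint(image_bytes) in KNOWN_ABUSIVE_HASHES
```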

The new toolkit uses deep neural networks for image processing and will significantly assist reviewers by prioritizing the content most likely to be CSAM for review. As a result, the system should reduce the number of harrowing images that human moderators have to look through.
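Conceptually, this kind of prioritization can be pictured as scoring each unreviewed image with a classifier and sorting the review queue so the highest-risk items reach human moderators first. The sketch below is a generic illustration under that assumption; the risk scores would come from a trained neural-network classifier, and none of the names here reflect Google’s actual model or API.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ReviewItem:
    image_id: str
    # Estimated probability (0 to 1) that the image is abusive. In practice
    # this would be produced by a trained deep neural network; here the
    # values are supplied directly for illustration.
    risk_score: float

def build_review_queue(items: List[ReviewItem]) -> List[ReviewItem]:
    # Highest estimated risk first, so human reviewers see the content
    # most likely to require action before anything else.
    return sorted(items, key=lambda item: item.risk_score, reverse=True)

if __name__ == "__main__":
    queue = build_review_queue([
        ReviewItem("img-001", 0.12),
        ReviewItem("img-002", 0.97),
        ReviewItem("img-003", 0.54),
    ])
    for item in queue:
        print(item.image_id, item.risk_score)
```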

“Quick identification of new images means that children who are being sexually abused today are much more likely to be identified and protected from further abuse,” Google wrote.

It will also increase the capacity to review content while requiring fewer people to be exposed to it. “We’ve seen firsthand that this system can help a reviewer find and take action on 700 percent more CSAM content over the same time period,” the company wrote.

The toolkit will be made available for free to non-governmental organizations (NGOs) and other industry partners via a new Content Safety API service, offered upon request.
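Because the Content Safety API is shared with approved partners on request, its interface is not spelled out here; the sketch below is purely hypothetical. The endpoint URL, field names, and response format are assumptions made for illustration and are not the service’s documented API.

```python
import json
import urllib.request

# Hypothetical endpoint and credentials -- the real Content Safety API is
# provided to approved partners on request, and its actual interface may
# differ entirely from this sketch.
API_URL = "https://example.com/content-safety/v1/classify"  # placeholder URL
API_KEY = "YOUR_API_KEY"  # placeholder credential

def classify_image(image_url: str) -> dict:
    """Submit an image reference and return the (assumed) risk assessment."""
    payload = json.dumps({"image_uri": image_url}).encode("utf-8")
    request = urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
    )
    with urllib.request.urlopen(request) as response:
        # Assumed response: a JSON object with a risk score the partner can
        # use to prioritize items in its own review queue.
        return json.load(response)
```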

Google noted, “By sharing this new technology, the identification of images could be speeded up, which in turn could make the internet a safe place for both victims and others.”

Google has been working with partners for years to combat online child sexual abuse, including the Technology Coalition, the WePROTECT Global Alliance, the Britain-based charity Internet Watch Foundation, and other NGOs.

“While technology alone is not a panacea for this societal challenge, this work marks a big step forward in helping more organizations do this challenging work at scale,” the company added.

Google says it will continue to fight the perpetrators of CSAM by investing in technology and in keeping its platforms and users safe from this abhorrent content, and that it looks forward to bringing even more industry partners on board.

Organizations interested in using the Content Safety API toolkit can make a request via this form.
