Google, in collaboration with Jigsaw, has released a new early-stage tool called Perspective that can be integrated into any publishing platform that enables online comments, helping filter out hurtful ones.
The new API, Perspective, uses machine learning to cross-reference comments against a human-generated Google database of abusive comments.
When a publisher integrates Perspective into their platform, the API compares every new comment posted against hundreds of thousands of comments that human reviewers have already labeled as "toxic" in Google's database. "Each time Perspective finds new examples of potentially toxic comments, or is provided with corrections from users, it can get better at scoring future comments," writes Google.
Once a comment is determined to be abusive, Perspective notifies the commenter in real time, as well as the publisher, who can then take further action. For example, "a publisher could flag comments for its own moderators to review," or "a publisher could provide tools to let a commenter see the potential toxicity of their comment as they write it." Publishers can also "allow readers to sort comments by toxicity themselves," making it easier to find great discussions hidden under toxic ones, the team at Google says.
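To make the integration concrete, here is a minimal Python sketch of how a publisher might build a request for Perspective's toxicity score and read the result. The endpoint path, payload shape, and response fields below are assumptions based on the publicly documented v1alpha1 `comments:analyze` method; consult the current Perspective API docs before relying on them.

```python
import json

# Assumed v1alpha1 endpoint for Perspective's comment analysis.
API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_analyze_request(comment_text):
    """Build the JSON body asking Perspective to score a comment for toxicity."""
    return {
        "comment": {"text": comment_text},
        "languages": ["en"],  # Perspective currently supports English only
        "requestedAttributes": {"TOXICITY": {}},
    }

def extract_toxicity(response):
    """Pull the 0..1 summary toxicity probability out of an analyze response."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# A hypothetical response, shaped like the documented API output,
# used here only to illustrate parsing:
sample_response = {
    "attributeScores": {
        "TOXICITY": {"summaryScore": {"value": 0.92, "type": "PROBABILITY"}}
    }
}

payload = build_analyze_request("you are a terrible person")
print(json.dumps(payload))
print(extract_toxicity(sample_response))  # 0.92
```

A publisher would POST that payload (with an API key) to the endpoint and use the returned score to flag, sort, or surface comments for moderators, as described above.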
As more sites integrate the API into their publishing platforms, Perspective will continue to learn and grow its database of toxic comments, making it even more effective over time.
Google revealed that this technology is still developing and is being tested by The New York Times, which reviews an average of 11,000 comments every day. But as more publishers integrate the API, it will be exposed to more comments, learning and improving its understanding of what makes certain comments toxic.
Google further notes that Perspective joins machine learning resources such as TensorFlow and the Cloud Machine Learning Platform that are already available to developers.
Perspective can currently flag comments only in English, but over time Google says it will deliver new languages and models that can "identify other perspectives, such as when comments are unsubstantial or off-topic."
In the screenshot below, you can see Perspective at work:
Today, more than 20 years after SHA-1 was first introduced, Google, in collaboration with the CWI Institute in Amsterdam, has announced the first practical technique for generating a collision.
"We are releasing two PDFs that have identical SHA-1 hashes but different content," says Google, explaining: "A collision occurs when two distinct pieces of data—a document, a binary, or a website's certificate—hash to the same digest. In practice, collisions should never occur for secure hash functions. However, if the hash algorithm has some flaws, as SHA-1 does, a well-funded attacker can craft a collision. The attacker could then use this collision to deceive systems that rely on hashes into accepting a malicious file in place of its benign counterpart. For example, two insurance contracts with drastically different terms."
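To see why a collision breaks this trust model, here is a minimal Python sketch of the digest comparison that hash-based systems perform. The two "contract" strings are made-up illustrations, not the actual colliding PDFs Google released:

```python
import hashlib

def sha1_hex(data: bytes) -> str:
    """Return the SHA-1 digest of `data` as a lowercase hex string."""
    return hashlib.sha1(data).hexdigest()

# For a secure hash function, distinct inputs should never share a digest
# in practice, so systems treat "same digest" as "same content".
contract_a = sha1_hex(b"insurance contract: payout $1,000")
contract_b = sha1_hex(b"insurance contract: payout $1,000,000")
print(contract_a == contract_b)  # False for these two inputs

# Google's attack produced two real PDFs with *different* content for which
# this exact comparison returns True, so any system relying on SHA-1 digests
# could be tricked into accepting one file in place of the other.
```

The same check guards Git objects, TLS certificates, and file-integrity tools, which is why a practical SHA-1 collision forces a migration to stronger hashes such as SHA-256.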
You can get the PDFs HERE.
The infographic below shows the first collision attack against SHA-1: