Facebook Plans Long Term Investment in AI to Detect Harmful Content


Advances in natural language processing (NLP) have helped Facebook create a digital common language for translation

Social networking giant Facebook announced that the company is planning long-term investments in the area of artificial intelligence (AI) for proactively identifying content that violates its policies.

Speaking at the company's F8 conference, held in San Jose, California, Manohar Paluri, Director of AI at Facebook, said, “To help us catch more of this problematic content, we’re working to make sure our AI systems can understand content with as little supervision as possible.”

Paluri further said that advances in natural language processing (NLP) have helped Facebook create a digital common language for translation, so that the company can detect harmful content across several languages.
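The idea behind a “digital common language” is that words from different languages are mapped into one shared vector space, so a classifier trained on one language can recognize equivalent content in another. The sketch below is illustrative only: the vectors, words, and threshold are invented, and this is not Facebook's actual system.

```python
import math

# Toy shared embedding space: words from different languages land near
# each other when they carry the same meaning. All vectors are invented
# for illustration.
shared_embeddings = {
    ("en", "hate"):  [0.90, 0.10, 0.20],
    ("es", "odio"):  [0.88, 0.12, 0.19],
    ("en", "puppy"): [0.10, 0.90, 0.30],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def same_meaning(w1, w2, threshold=0.95):
    """Treat two (language, word) pairs as equivalent if their
    shared-space vectors point in nearly the same direction."""
    return cosine(shared_embeddings[w1], shared_embeddings[w2]) >= threshold

print(same_meaning(("en", "hate"), ("es", "odio")))   # True: near-identical vectors
print(same_meaning(("en", "hate"), ("en", "puppy")))  # False: unrelated vectors
```

With a shared space like this, a rule or model that flags the English word would flag its Spanish counterpart for free, which is the cross-language benefit Paluri describes.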

Training models that integrate audio and visual signals has further improved results, he added.

The social media giant has also created a new object recognition method named ‘Panoptic FPN’, which helps AI-driven systems understand context from the backgrounds of photos.
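Panoptic segmentation, the task Panoptic FPN addresses, combines two views of an image: a semantic map that labels background “stuff” (sky, grass) and an instance map that labels distinct objects (“things”). The toy merge below only illustrates that combination on a 2x2 grid of invented labels; it is not the Panoptic FPN architecture itself.

```python
# Toy 2x2 "image". The semantic map labels every pixel with a background
# class; the instance map labels only pixels belonging to distinct
# objects. Labels and coordinates are invented for illustration.
semantic = [["sky", "sky"],
            ["grass", "grass"]]          # stuff (background context)
instances = {(1, 1): "dog#1"}            # things (object instances)

def panoptic_merge(semantic, instances):
    """Give every pixel exactly one label, with object instances
    taking precedence over background classes."""
    merged = []
    for r, row in enumerate(semantic):
        merged.append([instances.get((r, c), label)
                       for c, label in enumerate(row)])
    return merged

print(panoptic_merge(semantic, instances))
# [['sky', 'sky'], ['grass', 'dog#1']]
```

Because every pixel ends up labeled, a system can reason about an object together with its surroundings, which is how background context informs understanding.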

Currently, Facebook is facing probes around the world over privacy violations and the spread of biased and harmful content on its platforms.

Developing best practices

Speaking at the event, Joaquin Quinonero Candela, Director of Applied Machine Learning at Facebook, said that the company is developing best practices for fairness to ensure AI protects people and does not discriminate against them.

Candela further said, “When AI models are trained by humans on datasets involving people, there is an inherent representational risk. If the datasets contain limitations, flaws or other issues, the resulting models may perform differently for different people.”

To manage that risk, the company says it has created a new approach to inclusive AI: guidelines that help programmers and researchers design datasets, measure product performance, and test new systems through an inclusivity lens.
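One concrete way to apply such an inclusivity lens is to break a model's evaluation results down by group and check whether accuracy differs between them, which surfaces the representational risk Candela describes. The records, group names, and gap metric below are hypothetical, a minimal sketch rather than Facebook's guidelines.

```python
from collections import defaultdict

# Hypothetical evaluation records: (group, true_label, predicted_label).
# Groups and labels are invented for illustration.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 0, 0),
]

def accuracy_by_group(records):
    """Fraction of correct predictions, computed separately per group."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, truth, pred in records:
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}

def max_accuracy_gap(records):
    """Largest accuracy difference between any two groups; a big gap
    signals the model performs differently for different people."""
    accs = accuracy_by_group(records)
    return max(accs.values()) - min(accs.values())

print(accuracy_by_group(records))  # {'group_a': 0.75, 'group_b': 0.5}
print(max_accuracy_gap(records))   # 0.25
```

A dataset where one group's accuracy lags the others, as in this toy example, would prompt a closer look at that group's representation in the training data.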


