Tuesday, December 1, 2020

Facebook unveils 1st multilingual Machine Translation model


San Francisco | Tuesday, 2020 4:15:06 AM IST
Facebook on Monday introduced the first-ever open-source multilingual machine translation (MMT) model that can translate between any pair of 100 languages without relying on English data.

Called "M2M-100," it is trained on a total of 2,200 language directions, 10 times more than previous best English-centric multilingual models.

"Deploying M2M-100 will improve the quality of translations for billions of people, especially those that speak low-resource languages," Facebook AI said in a statement.

When translating, say, Chinese to French, most English-centric multilingual models train on Chinese to English and English to French, because English training data is the most widely available.

The new Facebook MMT model trains directly on Chinese-to-French data to better preserve meaning.
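A toy sketch (not Facebook's actual system) can illustrate why pivoting through English loses meaning: the hypothetical one-word lexicons below are invented for illustration, and the failure arises because an ambiguous English word collapses two source senses into one.

```python
# Hypothetical one-word lexicons (illustrative only, not real model data).
zh_to_en = {"河岸": "bank"}    # "riverbank" collapses to the ambiguous English "bank"
en_to_fr = {"bank": "banque"}  # the pivot then picks the financial sense
zh_to_fr = {"河岸": "rive"}    # direct Chinese-to-French data keeps the river sense

def pivot_translate(word: str) -> str:
    """English-centric route: source -> English -> target."""
    return en_to_fr[zh_to_en[word]]

def direct_translate(word: str) -> str:
    """Many-to-many route: source -> target, with no English pivot."""
    return zh_to_fr[word]

print(pivot_translate("河岸"))   # banque (wrong sense)
print(direct_translate("河岸"))  # rive (correct sense)
```

Real systems translate full sentences with neural models rather than word lexicons, but the sense-collapse problem the sketch shows is the same one direct training data avoids.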

It outperforms English-centric systems by 10 points on the widely used BLEU metric for evaluating machine translations.
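BLEU scores translations on a 0-100 scale by comparing machine output against reference translations, so a 10-point gain is substantial. A minimal single-reference sketch of the metric (clipped n-gram precisions, geometric mean, brevity penalty, with add-one smoothing as a simplifying assumption) looks like this; production evaluations use standardized tools instead.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate: str, reference: str, max_n: int = 4) -> float:
    """Minimal single-reference BLEU sketch, scaled 0..1."""
    cand, ref = candidate.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clip each n-gram count by its count in the reference.
        clipped = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        # Add-one smoothing so one empty n-gram order doesn't zero the score.
        log_precisions.append(math.log((clipped + 1) / (total + 1)))
    # Brevity penalty discourages overly short candidates.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(log_precisions) / max_n)

perfect = bleu("the cat sat on the mat", "the cat sat on the mat")
worse = bleu("a cat was on a mat", "the cat sat on the mat")
print(perfect, worse)  # 1.0 and a score well below it
```

Multiplying by 100 gives the "points" scale the article quotes.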

"We're also releasing the model, training, and evaluation setup to help other researchers reproduce and further advance multilingual models," the social network announced.

Using novel mining strategies to create translation data, Facebook built the first truly "many-to-many" data set with 7.5 billion sentences for 100 languages.
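A quick back-of-envelope calculation shows why "many-to-many" coverage dwarfs the English-centric kind: with n languages there are n × (n − 1) ordered directions, but an English-centric corpus only covers the 2 × (n − 1) directions into and out of English. (The 2,200 trained directions are a mined subset of the full 9,900.)

```python
from itertools import permutations

langs = ["en", "fr", "zh", "de", "ru"]  # small stand-in for the 100 languages

# Every ordered translation direction in a many-to-many setup.
many_to_many = list(permutations(langs, 2))

# An English-centric corpus only covers X -> en and en -> X.
english_centric = [(a, b) for a, b in many_to_many if "en" in (a, b)]

print(len(many_to_many), len(english_centric))  # 20 8

n = 100
print(n * (n - 1), 2 * (n - 1))  # 9900 198
```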

"We used several scaling techniques to build a universal model with 15 billion parameters, which captures information from related languages and reflects a more diverse script of languages and morphology," the company said.

One challenge in multilingual translation is that a single model must capture information across many different languages and diverse scripts.

To address this, Facebook saw a clear benefit of scaling the capacity of its model and adding language-specific parameters.

"The combination of dense scaling and language-specific sparse parameters (3.2 billion) enabled us to create an even better model, with 15 billion parameters."
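Those figures imply roughly 11.8 billion dense parameters shared by every direction, plus 3.2 billion sparse parameters that only the relevant languages activate. The sketch below is an assumed accounting, not Facebook's architecture or code: the four-way language-group split and its per-group sizes are invented for illustration, only the 15B and 3.2B totals come from the article.

```python
# Assumed parameter accounting (illustrative, not Facebook's actual design).
SHARED_DENSE = 11_800_000_000  # inferred: 15B total minus 3.2B sparse

# Hypothetical per-language-group sparse blocks summing to 3.2B.
GROUP_SPARSE = {
    "romance": 800_000_000,
    "slavic": 800_000_000,
    "cjk": 800_000_000,
    "indic": 800_000_000,
}

def params_active(group: str) -> int:
    """Parameters used for one language group: shared dense + its own sparse block."""
    return SHARED_DENSE + GROUP_SPARSE[group]

total_capacity = SHARED_DENSE + sum(GROUP_SPARSE.values())
print(total_capacity)        # 15000000000 total capacity
print(params_active("cjk"))  # 12600000000 active for one group
```

The point of the design is that total capacity can grow (here to 15B) while the compute for any single translation direction only touches the shared block plus one sparse block.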

For years, AI researchers have been working toward building a single universal model that can understand all languages across different tasks.

"A single model that supports all languages, dialects, and modalities will help us better serve more people, keep translations up to date, and create new experiences for billions of people equally," Facebook said.

--IANS na/

(341 Words)

2020-10-19-22:24:05 (IANS)

 