NLLB dataset by Meta AI available on Opus

This dataset was created based on metadata for mined bitext released by Meta AI. It contains bitext for 148 English-centric and 1465 non-English-centric language pairs, mined using the stopes mining library and the LASER3 encoders (Heffernan et al., 2022). The complete dataset is ~450 GB.

This release is based on the data package released on Hugging Face through AllenAI. More information about the instances for each language pair in the original data can be found in the dataset_infos.json file. Data was filtered based on language identification and emoji-based filtering, and, for some high-resource languages, using a language model. For more details on data filtering, please refer to Section 5.2 of NLLB Team et al. (2022). This release also includes data from CCMatrix for language pairs that are not updated in NLLB.


Speaking of Opus, I found out yesterday that there’s a neat set of tools for interacting with it: opustools · PyPI

opus_express in particular is quite neat: you just pick the language pair and the dataset(s), and it creates train/dev/test sets automatically!
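For anyone curious what that split involves under the hood, here's a rough sketch of the idea in plain Python on synthetic data. To be clear, this is not the opustools API (check the opustools docs or `opus_express --help` for the real interface); the function and data below are made up purely for illustration:

```python
import random

def split_bitext(pairs, dev_size=1000, test_size=1000, seed=42):
    """Split a list of (src, tgt) sentence pairs into train/dev/test.

    A hand-rolled sketch of the kind of split opus_express automates,
    not the opustools implementation itself.
    """
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)  # reproducible shuffle
    test = pairs[:test_size]
    dev = pairs[test_size:test_size + dev_size]
    train = pairs[test_size + dev_size:]
    return train, dev, test

# Synthetic example: 10k fake English-Finnish sentence pairs
corpus = [(f"en sentence {i}", f"fi lause {i}") for i in range(10_000)]
train, dev, test = split_bitext(corpus)
print(len(train), len(dev), len(test))  # 8000 1000 1000
```

The real tool does considerably more (quality filtering, avoiding overlap between sets, handling multiple collections), but the end product is the same three parallel files.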


This should be huge for low-resource languages.