DeepL Write/Grammarly Feature Question/Request

Dear LibreTranslate community,

I am ultra happy with LibreTranslate as such, but would like to see an additional mode like DeepL Write/Grammarly. This is not primarily focused on language translation, but on rephrasing/optimizing sentences within the same language. Does something like this exist in the LibreTranslate world or is it on the roadmap?

Thanks for your answers and great efforts for independent translation!

It’s an interesting use case. It’s currently not on the roadmap, but it would be a cool addition (probably implemented as a separate module). We’d welcome contributions.

Thanks for your answer! Do you know other projects like LibreTranslate that offer such a feature?

I’m not aware of other projects.

CTranslate2 has some features to customize decoding of the target text you could look at. Argos Translate gives you the ability to return multiple translations (argostranslate.translate.ITranslation.hypotheses() with num_hypotheses > 1) but doesn’t support all of the decoding features in CTranslate2.
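To make the Argos Translate side concrete, here is a small sketch of how those multiple hypotheses can be requested. It assumes an en→de language package is already installed (the language codes are illustrative), and the import is deferred so the snippet reads even without the package:

```python
def argos_hypotheses(text, from_code="en", to_code="de", n=4):
    """Return n candidate translations for `text` using Argos Translate.

    Assumes the from_code -> to_code package is already installed via
    argostranslate.package; the codes here are illustrative.
    """
    import argostranslate.translate as translate  # deferred import

    languages = translate.get_installed_languages()
    src = next(lang for lang in languages if lang.code == from_code)
    tgt = next(lang for lang in languages if lang.code == to_code)
    translation = src.get_translation(tgt)
    # ITranslation.hypotheses() returns Hypothesis objects (value + score)
    return translation.hypotheses(text, num_hypotheses=n)
```

Each returned hypothesis carries the translated text and a score, so a rephrasing UI could present them as alternatives to pick from.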

CTranslate2 Doc

The target_prefix argument can be used to force the start of the translation. Let’s say we want to replace the first occurrence of die by das in the translation.

Combining target_prefix with the return_alternatives flag returns alternative sequences just after the prefix.
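A hedged sketch of how these two options combine in CTranslate2’s Python API. The model path "ende_model" is a placeholder for a converted CTranslate2 model, and the tokens must match that model’s tokenization (SentencePiece-style here); the import is deferred so the sketch reads without the package installed:

```python
def alternatives_after_prefix(model_dir, source_tokens, prefix_tokens, n=5):
    """Return n alternative continuations that all start with prefix_tokens."""
    import ctranslate2  # deferred so the sketch can be read without the package

    translator = ctranslate2.Translator(model_dir, device="cpu")
    results = translator.translate_batch(
        [source_tokens],
        target_prefix=[prefix_tokens],  # force the start of the translation
        return_alternatives=True,       # expand alternatives after the prefix
        num_hypotheses=n,               # how many alternatives to return
    )
    # Each hypothesis begins with the forced prefix, then diverges.
    return results[0].hypotheses

# Example (requires a real model, e.g. an OPUS en->de model converted
# with ct2-opus-mt-converter):
#   alternatives_after_prefix("ende_model", ["▁How", "▁are", "▁you", "▁?"], ["▁Das"])
```

This is the mechanism the paper excerpt below the documentation quote builds on.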

“Translation Word-Level Auto-Completion: What Can We Achieve Out of the Box?” Paper

Inference Engine — We employ CTranslate2 (Klein et al., 2020) for sentence-level MT, as well as for translation auto-suggestions. To this end, we first convert OPUS models into the CTranslate2 format. After that, we utilize a number of CTranslate2 decoding features, including “alternatives at a position” and “auto-completion”. The translation options return_alternatives and num_hypotheses are essential for all our experiments; the former should be set to True while the latter determines the number of returned alternatives. These decoding options can be used with regular beam search, prefix-constrained decoding, and/or random sampling. If the decoding option return_alternatives is used along with target_prefix, the provided target left context is fed into the decoder in teacher forcing mode, then the engine expands the next N most likely words, and continues (auto-completes) the decoding for these N hypotheses independently.

The shared task investigates four context cases: (a) empty context, (b) right context only, (c) left context only, and (d) both the right and left contexts are provided. Hence, for all cases we returned multiple alternative translations, while for (c) and (d) we also returned another set of alternative auto-completions using the left context as a target prefix. In this sense, it is worth noting that we make use only of the left context, when available, and we do not use the right context at all, which we might investigate further in the future. To enhance diversity of translations, especially for (a) and (b), we applied random sampling with CTranslate2’s decoding option sampling_topk, with various sampling temperatures. Our experiments are further elaborated in Section 4 and Section 5.
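The decoding scheme the excerpt describes (teacher-force the left context, branch on the N most likely next words, then auto-complete each branch) can be illustrated without any MT library. The bigram “model” below is invented purely for illustration; a real engine would use the decoder’s actual next-token distribution:

```python
# Next-word distribution: token -> list of (candidate, probability), best first.
# Entirely made up for illustration.
BIGRAMS = {
    "<s>": [("the", 0.6), ("a", 0.4)],
    "the": [("cat", 0.5), ("dog", 0.3), ("end", 0.2)],
    "a":   [("dog", 0.7), ("cat", 0.3)],
    "cat": [("sat", 0.9), ("end", 0.1)],
    "dog": [("ran", 0.8), ("end", 0.2)],
    "sat": [("end", 1.0)],
    "ran": [("end", 1.0)],
}

def complete(prefix, n_alternatives=2, max_len=6):
    """Branch on the n most likely words after `prefix`, then finish each
    branch greedily -- a toy version of return_alternatives + target_prefix."""
    last = prefix[-1] if prefix else "<s>"
    hypotheses = []
    for word, _p in BIGRAMS[last][:n_alternatives]:      # expand N branches
        hyp = prefix + [word]
        while hyp[-1] != "end" and len(hyp) < max_len:   # greedy auto-completion
            hyp.append(BIGRAMS[hyp[-1]][0][0])
        hypotheses.append(hyp)
    return hypotheses

print(complete(["the"]))
# [['the', 'cat', 'sat', 'end'], ['the', 'dog', 'ran', 'end']]
```

All hypotheses share the forced prefix and diverge at the branching position, which is exactly the shape of output a word-level auto-completion UI needs.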

Speaking of DeepL:
One of the features I like in DeepL is that when you find an incorrect translation of a word, you can select the correct word in the target language. In this example I use two fictional languages, Montypython and foobar, where Montypython is the source and foobar is the target language.
source text in Montypython:

Spam eggs, Spam bacon.
Eggs Eggs Spam bacon.

incorrect translation in foobar:

Foo foobar, Bar foobar.
Foo Foo Bar foobar.

But I know from context that Spam bacon should actually be Bar Bar foo.
You select this part, choose the correct translation from the drop-down menu (it can also be typed into a textbox), and hit Enter.
After this, DeepL re-renders the text with the correct word and rearranges the sentence.
correct text in foobar:

Foo foobar, Bar Bar foo.
Foo Foobar Bar Bar foo.
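This workflow maps fairly directly onto the prefix-constrained decoding discussed earlier in the thread: keep the draft translation up to the user’s correction, splice in the corrected words, and let the engine regenerate the rest. A minimal, library-free sketch — toy_redecode is an invented stand-in; a real implementation would call the translator with this prefix as target_prefix:

```python
def apply_correction(target_tokens, fix_at, corrected_tokens, redecode):
    """Splice the user's correction into the draft translation and let the
    decoder regenerate everything after it (prefix-constrained decoding)."""
    prefix = target_tokens[:fix_at] + corrected_tokens
    return redecode(prefix)

def toy_redecode(prefix):
    # Invented stand-in: a real system would pass `prefix` as target_prefix
    # to the MT engine and let it complete the sentence.
    return prefix + ["foo", "."]

draft = ["Foo", "foobar", ",", "Bar", "foobar", "."]   # "Foo foobar, Bar foobar."
fixed = apply_correction(draft, 3, ["Bar", "Bar"], toy_redecode)
print(" ".join(fixed))  # Foo foobar , Bar Bar foo .
```

Because the decoder regenerates the tail of the sentence rather than doing a plain word swap, it can also rearrange the surrounding words, which matches the DeepL behavior described above.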