DeepL Write/Grammarly Feature Question/Request

CTranslate2 has some features for customizing decoding of the target text that you could look at. Argos Translate gives you the ability to return multiple translations (argostranslate.translate.ITranslation.hypotheses with num_hypotheses > 1) but doesn’t support all of the decoding features in CTranslate2.
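For example, here’s a minimal sketch of getting multiple hypotheses out of Argos Translate (the language pair is just an illustration, and it assumes the corresponding package is already installed):

```python
import argostranslate.translate

# Look up an installed translation, e.g. English -> Spanish
# (assumes the en->es package was installed beforehand).
installed = argostranslate.translate.get_installed_languages()
en = next(lang for lang in installed if lang.code == "en")
es = next(lang for lang in installed if lang.code == "es")
translation = en.get_translation(es)

# hypotheses() returns several candidate translations with scores.
for hypothesis in translation.hypotheses("How are you?", num_hypotheses=4):
    print(hypothesis.score, hypothesis.value)
```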

CTranslate2 Doc

https://opennmt.net/CTranslate2/decoding.html#autocompletion

The target_prefix argument can be used to force the start of the translation. Let’s say we want to replace the first occurrence of die by das in the translation:
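For reference, that looks roughly like this in the Python API (the model path and token lists are placeholders; real input has to be tokenized the same way the model was trained, e.g. into SentencePiece pieces):

```python
import ctranslate2

# Hypothetical English->German model converted to the CTranslate2 format.
translator = ctranslate2.Translator("ende_ctranslate2/")

source = ["▁The", "▁answer", "▁is", "▁simple", "."]
prefix = ["▁Das"]  # force the translation to start with "Das" instead of "Die"

results = translator.translate_batch([source], target_prefix=[prefix])
print(results[0].hypotheses[0])  # decoding continues after the forced prefix
```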

https://opennmt.net/CTranslate2/decoding.html#alternatives-at-a-position

Combining target_prefix with the return_alternatives flag returns alternative sequences just after the prefix:
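Continuing the same placeholder setup as above, enabling return_alternatives makes the engine branch on the first word after the prefix, with num_hypotheses controlling how many branches come back:

```python
import ctranslate2

translator = ctranslate2.Translator("ende_ctranslate2/")  # same placeholder model
source = ["▁The", "▁answer", "▁is", "▁simple", "."]
prefix = ["▁Das"]

results = translator.translate_batch(
    [source],
    target_prefix=[prefix],
    num_hypotheses=5,          # how many alternatives to return
    return_alternatives=True,  # branch on the word right after the prefix
)
for hypothesis in results[0].hypotheses:
    print(hypothesis)
```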

“Translation Word-Level Auto-Completion: What Can We Achieve Out of the Box?” Paper

Inference Engine

We employ CTranslate2 (Klein et al., 2020) for sentence-level MT, as well as for translation auto-suggestions. To this end, we first convert OPUS models into the CTranslate2 format. After that, we utilize a number of CTranslate2 decoding features, including “alternatives at a position” and “auto-completion”. The translation options return_alternatives and num_hypotheses are essential for all our experiments; the former should be set to True while the latter determines the number of returned alternatives. These decoding options can be used with regular beam search, prefix-constrained decoding, and/or random sampling. If the decoding option return_alternatives is used along with target_prefix, the provided target left context is fed into the decoder in teacher forcing mode, then the engine expands the next N most likely words, and continues (auto-completes) the decoding for these N hypotheses independently.

The shared task investigates four context cases: (a) empty context, (b) right context only, (c) left context only, and (d) both the right and left contexts are provided. Hence, for all cases we returned multiple alternative translations, while for (c) and (d) we also returned another set of alternative auto-completions using the left context as a target prefix. In this sense, it is worth noting that we make use only of the left context, when available, and we do not use the right context at all, which we might investigate further in the future. To enhance the diversity of translations, especially for (a) and (b), we applied random sampling with CTranslate2’s decoding option sampling_topk, with various sampling temperatures. Our experiments are further elaborated in Section 4 and Section 5.
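Based on that description, the diversity setup for the empty/right-context cases might look something like the sketch below. The parameter values, model path, and tokens are guesses for illustration, not the authors’ actual configuration:

```python
import ctranslate2

translator = ctranslate2.Translator("ende_ctranslate2/")  # placeholder model
source = ["▁The", "▁answer", "▁is", "▁simple", "."]

# Sample from the 10 most likely tokens at each step; higher temperatures
# flatten the distribution and give more varied suggestions.
for temperature in (0.7, 1.0, 1.3):
    results = translator.translate_batch(
        [source],
        beam_size=1,  # random sampling is used with greedy search
        sampling_topk=10,
        sampling_temperature=temperature,
        num_hypotheses=3,
        return_alternatives=True,
    )
    for hypothesis in results[0].hypotheses:
        print(temperature, hypothesis)
```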