Watermarking Language Models with Error Correcting Codes

Patrick Chao, Edgar Dobriban, Hamed Hassani

Abstract

Recent progress in large language models enables the creation of realistic machine-generated content. Watermarking is a promising approach to distinguish machine-generated text from human text, embedding statistical signals in the output that are ideally undetectable to humans. We propose a watermarking framework that encodes such signals through an error correcting code. Our method, termed robust binary code (RBC) watermark, introduces no distortion compared to the original probability distribution, and no noticeable degradation in quality. We evaluate our watermark on base and instruction fine-tuned models and find our watermark is robust to edits, deletions, and translations. We provide an information-theoretic perspective on watermarking, a powerful statistical test for detection and for generating p-values, and theoretical guarantees. Our empirical findings suggest our watermark is fast, powerful, and robust, comparing favorably to the state-of-the-art.

6.2 Robustness Experiments

In practice, a user may attempt to circumvent watermarks by editing the text generated by a language model. To emulate this, we apply three common perturbations that an adversary might use to evade a watermark, following prior work (Kuditipudi et al., 2023; Piet et al., 2023).

  1. Delete. We randomly delete 20% of the tokens.
  2. Swap. We replace a randomly chosen 20% of the tokens with random tokens.
  3. Translate. We translate the generated text to Russian and then back to English using Argos Translate (Finlay and Argos Translate, 2023).
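The Delete and Swap perturbations above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the fixed seed, and the toy vocabulary are our own assumptions.

```python
import random

def delete_tokens(tokens, frac=0.2, seed=0):
    """Delete perturbation: randomly remove a fraction of the tokens."""
    rng = random.Random(seed)
    n_remove = int(len(tokens) * frac)
    drop = set(rng.sample(range(len(tokens)), n_remove))
    return [t for i, t in enumerate(tokens) if i not in drop]

def swap_tokens(tokens, vocab, frac=0.2, seed=0):
    """Swap perturbation: replace a random fraction of the tokens
    with tokens drawn uniformly from the vocabulary."""
    rng = random.Random(seed)
    out = list(tokens)
    n_swap = int(len(tokens) * frac)
    for i in rng.sample(range(len(tokens)), n_swap):
        out[i] = rng.choice(vocab)
    return out
```

With `frac=0.2`, each call perturbs exactly one in five token positions, matching the high-noise regime used in the experiments.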

For our experiments, we perturb 20% of the tokens, a relatively high-noise regime in which one in five tokens is modified. For the translation perturbation, we translate to Russian and then back to English to obtain a strong attack, since Russian is linguistically more distant from English than, say, Spanish or French (see, e.g., Anttila, 1972).
In Table 3 and Figure 6, we evaluate the robustness of RBC and the distribution-shift watermarking scheme under these perturbations. Notably, the LDPC and one-to-one RBC watermarks are the most robust, achieving consistent detectability with ∼60 tokens.

On Evaluating The Performance of Watermarked Machine-Generated Texts Under Adversarial Attacks

Zesen Liu, Tianshuo Cong, Xinlei He, Qi Li

Abstract

Large Language Models (LLMs) excel in various applications, including text generation and complex tasks. However, the misuse of LLMs raises concerns about the authenticity and ethical implications of the content they produce, such as deepfake news, academic fraud, and copyright infringement. Watermarking techniques, which embed identifiable markers in machine-generated text, offer a promising solution to these issues by allowing for content verification and origin tracing. Unfortunately, the robustness of current LLM watermarking schemes under potential watermark removal attacks has not been comprehensively explored.
In this paper, to fill this gap, we first systematically review mainstream watermarking schemes and removal attacks on machine-generated text, and we categorize them into pre-text (before text generation) and post-text (after text generation) classes to enable diversified analyses. In our experiments, we evaluate eight watermarks (five pre-text, three post-text) and twelve attacks (two pre-text, ten post-text) across 87 scenarios. Evaluation results indicate that (1) KGW and Exponential watermarks offer high text quality and watermark retention but remain vulnerable to most attacks; (2) post-text attacks are more efficient and practical than pre-text attacks; (3) pre-text watermarks are generally more imperceptible, as they do not alter text fluency, unlike post-text watermarks; and (4) combined attack methods can significantly increase effectiveness, highlighting the need for more robust watermarking solutions. Our study underscores the vulnerabilities of current techniques and the necessity for developing more resilient schemes.

Translation attack. Besides the generative language model, we can use a translation model to translate the existing text into another language and then back into the original language (e.g., English to Russian to English). In this paper, we use the argos-translate model [13], which is based on OpenNMT [31].
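The round-trip translation attack can be sketched as a small wrapper over any machine-translation backend. This is a sketch under assumptions: the `translate(text, from_code, to_code)` callback signature is ours (Argos Translate exposes a similar `argostranslate.translate.translate` function, but actual use requires installed language packages), and the language codes are illustrative.

```python
def round_trip_attack(text, translate, src="en", pivot="ru"):
    """Round-trip translation attack: src -> pivot -> src.

    `translate` is any MT backend with a (text, from_code, to_code)
    signature, e.g. a thin wrapper around argostranslate.translate.translate
    (hypothetical wiring; depends on installed Argos language packages).
    """
    pivoted = translate(text, src, pivot)       # e.g. English -> Russian
    return translate(pivoted, pivot, src)       # e.g. Russian -> English
```

Because translation is lossy, the returned text paraphrases the original, which is what stresses the watermark's robustness.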