BLEU Score: Revision history


30 May 2024

  • 16:47, 30 May 2024 Ai (talk | contribs) 5,622 bytes +5,622 Created page with "== Introduction == The BLEU (Bilingual Evaluation Understudy) score is a metric used to evaluate the quality of text which has been machine-translated from one language to another. It was introduced by Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu in 2002. The BLEU score is one of the most widely used metrics for evaluating the performance of machine translation systems. It is based on the comparison of machine-generated translations to one or more referen..."
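
The creation summary above describes BLEU as a comparison of a machine-generated translation against one or more reference translations. As a minimal sketch of that idea (assuming NLTK's nltk.translate.bleu_score module is available; the candidate and reference sentences are hypothetical examples, not from the original page), a sentence-level BLEU score might be computed like this:

    # Minimal sketch: sentence-level BLEU with NLTK (assumed available).
    # The candidate and reference sentences are hypothetical examples.
    from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

    # One or more tokenized reference translations.
    references = [
        "the cat is on the mat".split(),
        "there is a cat on the mat".split(),
    ]

    # Tokenized machine-generated candidate translation.
    candidate = "the cat sat on the mat".split()

    # Smoothing avoids a zero score when some higher-order n-grams are missing.
    smoothing = SmoothingFunction().method1

    score = sentence_bleu(references, candidate, smoothing_function=smoothing)
    print(f"BLEU: {score:.4f}")

The score falls between 0 and 1, with higher values indicating closer n-gram overlap with the references.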