Metric for Evaluation of Translation with Explicit ORdering

METEOR (Metric for Evaluation of Translation with Explicit ORdering) is a metric for automatic evaluation of machine translation that calculates the similarity between a machine translation output and a reference translation based on unigram (word-to-word) matches.

METEOR evaluates a translation by computing a score based on explicit word-to-word matches between the translation and a given reference translation.

METEOR was introduced in the 2005 paper "METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments" by Banerjee and Lavie.

Apart from exact word matching, METEOR adds matching for synonyms and simple morphological variants of a word.
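As a rough illustration of matching beyond exact words, the sketch below checks two words for an exact match, a synonym match, or a crude morphological match. The synonym table and the suffix-stripping rule are stand-in assumptions for illustration only; the real metric uses WordNet synonyms and a proper stemmer.

```python
# Hypothetical tiny synonym table; real METEOR draws synonyms from WordNet.
SYNONYMS = {
    "fast": {"quick", "rapid"},
    "quick": {"fast", "rapid"},
}

def words_match(a: str, b: str) -> bool:
    """Match two words exactly, by synonym, or by a naive
    morphological reduction (a stand-in for real stemming)."""
    if a == b:
        return True
    if b in SYNONYMS.get(a, set()):
        return True
    # Crude plural stripping as a stand-in for a real stemmer.
    strip = lambda w: w[:-1] if w.endswith("s") else w
    return strip(a) == strip(b)
```

In the full metric, these looser matchers are applied in stages: exact matches are aligned first, then stems, then synonyms.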

METEOR takes into account both precision and recall, combining them in a harmonic mean weighted toward recall, and applies a penalty for fragmented word order (many short runs of matched words rather than a few long ones).
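The scoring described above can be sketched as follows, using only exact unigram matches (the real metric also matches stems and synonyms, and later versions add tunable weights). The formulas follow the original 2005 paper: Fmean = 10PR / (R + 9P), penalty = 0.5 · (chunks / matches)³, score = Fmean · (1 − penalty).

```python
def meteor_sketch(hypothesis: str, reference: str) -> float:
    """Simplified METEOR score using exact unigram matches only."""
    hyp, ref = hypothesis.split(), reference.split()

    # Greedy one-to-one alignment of exact matches, left to right.
    used = [False] * len(ref)
    alignment = []  # (hyp_index, ref_index) pairs
    for i, word in enumerate(hyp):
        for j, ref_word in enumerate(ref):
            if not used[j] and word == ref_word:
                used[j] = True
                alignment.append((i, j))
                break

    m = len(alignment)
    if m == 0:
        return 0.0

    precision = m / len(hyp)
    recall = m / len(ref)
    # Harmonic mean weighted 9:1 toward recall.
    fmean = 10 * precision * recall / (recall + 9 * precision)

    # Chunks: maximal runs of matches that are contiguous and
    # in the same order in both sentences.
    chunks = 1
    for (i1, j1), (i2, j2) in zip(alignment, alignment[1:]):
        if not (i2 == i1 + 1 and j2 == j1 + 1):
            chunks += 1

    penalty = 0.5 * (chunks / m) ** 3
    return fmean * (1 - penalty)
```

An identical hypothesis and reference form a single chunk, so the penalty is small and the score approaches 1; a hypothesis with the right words in scrambled order keeps the same Fmean but is penalized for its many chunks.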






Licensed under CC-BY-SA-4.0.