Machine translation supported by human post-editing or evaluation

Human-in-the-loop means using human feedback for additional training of translation engines. Human feedback can be obtained from different tasks, such as post-editing and quality assessment.

Other human-machine interactions are also considered human-in-the-loop:

  • Humans improve source content for better translatability
  • Humans label training data by domain or quality level
  • Humans translate as a fallback when the automated output is inadequate
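The fallback interaction above can be sketched as a simple routing rule. This is a minimal illustration, assuming a quality-estimation (QE) model that scores each translation between 0.0 (unusable) and 1.0 (perfect); the function name and threshold are illustrative, not a real API.

```python
# Sketch of a fallback-to-human routing rule. The QE score is assumed to
# come from a separate quality-estimation model (not shown here).

QE_THRESHOLD = 0.7  # illustrative cutoff; below this, route to a human

def route_segment(source: str, translation: str, qe_score: float) -> str:
    """Return 'machine' to publish the MT output, or 'human' to fall back.

    `source` and `translation` are unused in this sketch; they are kept
    only to show the shape of the interface.
    """
    if qe_score >= QE_THRESHOLD:
        return "machine"
    return "human"
```

In practice the threshold is tuned per content type: marketing copy may route to humans aggressively, while internal documentation may tolerate lower scores.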


The goal of human-in-the-loop is to improve the quality of machine translation output in all aspects:

  • Accuracy – eliminating factual errors and hallucinations
  • Fluency – making the language sound more natural for native speakers
  • Terminology and style – using appropriate terms and style in the given context


A typical workflow combines these tasks in sequence:

  1. Pre-editing: Before feeding text into the machine translation system, human editors may pre-edit the source text to ensure that it is clear, concise, and easy to translate.

  2. Machine translation: The pre-edited text is then fed into a machine translation system, which generates an initial translation.

  3. Post-editing: A human post-editor reviews the machine-generated translation and makes any necessary corrections to improve the quality of the final output. Professional translators perform this task.

  4. Quality assessment: Human quality assessors may evaluate the quality of the machine-generated translations and provide feedback to the machine translation system to help it improve over time. This task requires a checklist of error categories and weights.
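The weighted checklist in step 4 can be sketched as a simple scoring function. The error categories and weights below are illustrative assumptions (loosely modelled on MQM-style weighted scoring), not a standard.

```python
# Sketch of a weighted error checklist for quality assessment.
# Categories and weights are illustrative, not an industry standard.

ERROR_WEIGHTS = {
    "accuracy": 5.0,     # factual errors, hallucinations
    "fluency": 2.0,      # unnatural-sounding language
    "terminology": 3.0,  # wrong term for the domain
    "style": 1.0,        # tone or register mismatch
}

def quality_score(error_counts: dict, word_count: int) -> float:
    """Weighted error penalty per 100 words; lower is better."""
    penalty = sum(ERROR_WEIGHTS[cat] * n for cat, n in error_counts.items())
    return 100.0 * penalty / max(word_count, 1)
```

For example, one accuracy error and two fluency errors in a 100-word segment give a score of 5.0 + 2 × 2.0 = 9.0. Scores like this, aggregated across segments, form the feedback signal returned to the machine translation system.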



Licensed under CC-BY-SA-4.0.
