Machine translation supported by human post-editing or evaluation
Human-in-the-loop consists of using human feedback for additional training of translation engines. Human feedback can be obtained from different tasks:
- Humans correct machine translation output through post-editing – see adaptive machine translation
- Humans annotate errors in the machine translation output – see human evaluation metrics
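Post-edits from the first task can be turned into incremental training data by pairing each source segment with its corrected translation. The sketch below is illustrative: the `Feedback` record and the rule of keeping only changed segments are assumptions, not a standard API.

```python
from dataclasses import dataclass

@dataclass
class Feedback:
    source: str       # original source-language segment
    mt_output: str    # raw machine translation
    post_edit: str    # translator's corrected version

def to_training_pairs(feedback):
    """Keep only segments the post-editor actually changed;
    unchanged segments carry no corrective signal."""
    return [(f.source, f.post_edit) for f in feedback
            if f.post_edit.strip() != f.mt_output.strip()]

batch = [
    Feedback("Guten Morgen", "Good morning", "Good morning"),
    Feedback("Er hat den Vertrag gekündigt",
             "He quit the contract",
             "He terminated the contract"),
]
pairs = to_training_pairs(batch)  # only the corrected segment survives
```

Filtering out unchanged segments keeps the fine-tuning set focused on cases where the engine was actually wrong.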
Other human-machine interactions are also considered human-in-the-loop:
- Humans improve source content for better translatability
- Humans label training data to classify various domains or quality levels
- Falling back to human translation when the automated output is inadequate
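The fallback interaction can be sketched as a quality gate: accept the machine translation when an automatic quality estimate clears a threshold, otherwise route the segment to a human translator. The `estimate_quality` function and the 0.7 threshold below are hypothetical placeholders, not a real library call.

```python
def route_translation(segment, mt_translate, estimate_quality,
                      human_queue, threshold=0.7):
    """Accept the MT output if the (hypothetical) quality-estimation
    score clears the threshold; otherwise queue the segment for a
    human translator and return None."""
    candidate = mt_translate(segment)
    score = estimate_quality(segment, candidate)
    if score >= threshold:
        return candidate
    human_queue.append(segment)   # fallback to human translation
    return None

queue = []
accepted = route_translation("Hello", lambda s: "Hallo",
                             lambda s, t: 0.9, queue)
rejected = route_translation("Hard segment", lambda s: "???",
                             lambda s, t: 0.2, queue)
```

In practice the threshold would be tuned per language pair and content type against post-editing cost.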
The goal of human-in-the-loop is to improve the quality of machine translation output in all aspects:
- Accuracy – eliminating factual errors and hallucinations
- Fluency – making the language sound natural to native speakers
- Terminology and style – using terms and style appropriate to the given context
Pre-editing
Before feeding text into the machine translation system, human editors may pre-edit the source text to ensure that it is clear, concise, and easy to translate.
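Parts of pre-editing can be assisted by simple linting of the source text. The rules below are illustrative heuristics for translatability (sentence length, ambiguous pronouns, slash constructions), not an established pre-editing standard.

```python
import re

def pre_edit_flags(sentence, max_words=25):
    """Flag common translatability problems in a source sentence;
    the rules are illustrative heuristics only."""
    flags = []
    if len(sentence.split()) > max_words:
        flags.append("too long: consider splitting")
    if re.search(r"\b(it|this|that)\b", sentence, re.IGNORECASE):
        flags.append("ambiguous pronoun: make the referent explicit")
    if "/" in sentence:
        flags.append("slash construction: pick one alternative")
    return flags

flags = pre_edit_flags("This is unclear because it depends on context.")
```

A human pre-editor would still decide whether each flag is a real problem; the linter only narrows the search.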
Machine translation
The pre-edited text is then fed into a machine translation system, which generates an initial translation.
Post-editing
A human post-editor reviews the machine-generated translation and makes any necessary corrections to improve the quality of the final output. Professional translators perform this task.
Quality assessment
Human quality assessors may evaluate the quality of the machine-generated translations and provide feedback to the machine translation system to help it improve over time. This task requires a checklist of error categories and weights.
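A checklist of error categories and weights can be turned into a single score in the style of MQM-like weighted error counting. The categories, weights, and per-word normalization below are assumptions for illustration, not a fixed standard.

```python
# Illustrative weighted error scoring; categories, weights, and the
# per-100-words normalization are assumptions, not a fixed standard.
ERROR_WEIGHTS = {"accuracy": 5, "terminology": 3, "fluency": 1, "style": 1}

def quality_score(errors, word_count, max_score=100):
    """errors: list of (category, count) pairs annotated by assessors.
    Fewer and lighter errors per word yield a higher score."""
    penalty = sum(ERROR_WEIGHTS[cat] * n for cat, n in errors)
    return max(0.0, max_score - 100.0 * penalty / word_count)

score = quality_score([("accuracy", 1), ("fluency", 2)], word_count=200)
# penalty = 5 + 2 = 7, so score = 100 - 100 * 7 / 200 = 96.5
```

Weighting accuracy errors more heavily than fluency errors reflects the aspect ranking above: a factual error is costlier than an awkward phrase.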