WNMT 17

Workshop on Neural Machine Translation and Generation


Location

  • Vancouver, Canada

Schedule

9:30 Welcome and opening remarks
9:40 Keynote 1
The Neural Noisy Channel: Generative Models for Sequence to Sequence Modeling
Chris Dyer
10:30 ☕️
11:00 Keynote 2
Challenges in Neural Document Generation
Alexander Rush
11:50 Best Paper Session
Outstanding Paper
An Empirical Study of Adequate Vision Span for Attention-Based Neural Machine Translation
Raphael Shu, Hideki Nakayama
Best Paper
Stronger Baselines for Trustable Results in Neural Machine Translation
Michael Denkowski, Graham Neubig
12:20 🍴
13:40 Keynote 3
What is Neural MT Learning?
Kevin Knight
14:30 Keynote 4
Google's Neural Machine Translation system
Quoc Le
15:20 Poster session
15:30 ☕️ Poster session (continued)
An Empirical Study of Adequate Vision Span for Attention-Based Neural Machine Translation
Raphael Shu, Hideki Nakayama
Analyzing Neural MT Search and Model Performance
Jan Niehues, Eunah Cho, Thanh-Le Ha, Alex Waibel
Stronger Baselines for Trustable Results in Neural Machine Translation
Michael Denkowski, Graham Neubig
Six Challenges for Neural Machine Translation
Philipp Koehn, Rebecca Knowles
Cost Weighting for Neural Machine Translation Domain Adaptation
Boxing Chen, Colin Cherry, George Foster, Samuel Larkin
Detecting Untranslated Content for Neural Machine Translation
Isao Goto, Hideki Tanaka
Beam Search Strategies for Neural Machine Translation
Markus Freitag, Yaser Al-Onaizan
An Empirical Study of Mini-Batch Creation Strategies for Neural Machine Translation
Makoto Morishita, Yusuke Oda, Graham Neubig, Koichiro Yoshino, Katsuhito Sudoh, Satoshi Nakamura
Detecting Cross-Lingual Semantic Divergence for Neural Machine Translation
Marine Carpuat, Yogarshi Vyas, Xing Niu
Domain Aware Neural Dialogue System (extended abstract)
Sajal Choudhary, Prerna Srivastava, Joao Sedoc, Lyle Ungar
Interactive Beam Search for Visualizing Neural Machine Translation (extended abstract)
Jaesong Lee, JoongHwi Shin, Jun-Seok Kim
Graph Convolutional Encoders for Syntax-aware Neural Machine Translation (extended abstract)
Joost Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, Khalil Sima’an
Towards String-to-Tree Neural Machine Translation (cross-submission)
Roee Aharoni, Yoav Goldberg
What do Neural Machine Translation Models Learn about Morphology? (cross-submission)
Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, James Glass
Trainable Greedy Decoding for Neural Machine Translation (cross-submission)
Jiatao Gu, Kyunghyun Cho, Victor O.K. Li
16:10 Panel Discussion
Chris Dyer, Alexander Rush, Kevin Knight, Quoc Le, Kyunghyun Cho
17:30 Closing remarks

Tasks

  • Efficiency
  • Accuracy

Keynotes

“The Neural Noisy Channel: Generative Models for Sequence to Sequence Modeling”, Chris Dyer

The first statistical models of translation relied on Bayes’ rule to factorize the probability of an output translation given an input into two component probabilities: a target language model prior probability (how likely is a candidate output?), and an inverse translation probability (how likely is the observed input given a candidate output?). Although this factorization has largely been abandoned in favor of discriminative models that directly estimate the probability of producing an output translation given an input, these discriminative models suffer from a number of problems, including undesirable explaining-away effects during training (e.g., label bias) and difficulty learning from unpaired samples. In contrast, generative models based on the Bayes’ rule factorization must produce outputs that explain their inputs, and training with unpaired samples (i.e., target language monolingual corpora) is straightforward. I discuss the challenges and opportunities afforded by generative models of sequence to sequence transduction, reporting results on machine translation and abstractive summarization.
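For reference (this equation is not part of the abstract itself), the Bayes’ rule factorization described above can be written as a decoding objective with its two component probabilities made explicit:

```latex
% Noisy-channel decoding: choose the output y that best explains the input x.
% p(y)     -- target language model prior (how likely is a candidate output?)
% p(x | y) -- inverse translation model (how likely is the input given the output?)
% p(x) is constant with respect to y, so it is dropped from the argmax.
\hat{y} = \operatorname*{arg\,max}_{y} \, p(y \mid x)
        = \operatorname*{arg\,max}_{y} \, p(x \mid y) \, p(y)
```

Since the prior p(y) depends only on the target side, it can be estimated from monolingual corpora alone, which is why unpaired training data is straightforward to use under this factorization.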

“Challenges in Neural Document Generation”, Alexander Rush

Advances in neural machine translation have led to optimism for natural language generation in tasks such as summarization and dialogue, but it has been difficult to quantify what challenges remain in neural NLG. In this talk, I will discuss recent work on long-form data-to-document generation using a new dataset pairing comprehensive basketball game statistics with full game descriptions, a classic NLG task. While state-of-the-art NMT systems produce fluent output on this task, the generated documents are clearly insufficient and suffer from basic issues in discourse, reference, and referring expression generation. Recent tricks such as copy and coverage lead to clear improvements, but results for end-to-end generation are not yet competitive for long-form documents. Overall, neural document generation presents a difficult but interesting challenge that may require different techniques than standard NMT.
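The copy mechanism mentioned above is not spelled out in the abstract; one common formulation (in the pointer-generator style) mixes the decoder’s vocabulary distribution with its attention distribution over source tokens, so rare names and numbers from the input records can be copied directly. A minimal NumPy sketch of a single decoding step, with all names and shapes assumed for illustration:

```python
import numpy as np

def copy_step(p_vocab, attention, src_token_ids, p_gen):
    """Blend generation and copying for one decoding step.

    p_vocab:       (vocab_size,) decoder softmax over the output vocabulary
    attention:     (src_len,) attention weights over source positions (sums to 1)
    src_token_ids: (src_len,) vocabulary ids of the source tokens
    p_gen:         scalar in [0, 1], probability of generating vs. copying
    """
    # Generate from the vocabulary with probability p_gen ...
    p_final = p_gen * p_vocab
    # ... and copy attended source tokens with probability (1 - p_gen).
    # np.add.at accumulates correctly when a token appears at several positions.
    np.add.at(p_final, src_token_ids, (1.0 - p_gen) * attention)
    return p_final  # still a valid probability distribution

# Toy example: vocabulary of 4 types, a 2-token source sentence.
p = copy_step(np.array([0.6, 0.2, 0.1, 0.1]),
              np.array([0.5, 0.5]),
              np.array([3, 3]),
              p_gen=0.8)
assert abs(p.sum() - 1.0) < 1e-9
```

Coverage, the other trick named in the abstract, similarly accumulates attention across decoding steps and penalizes the model for re-attending to source records it has already described.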

“What is Neural MT Learning?”, Kevin Knight

In this talk, I will observe what neural MT decides to extract from source sentences, as a by-product of its end-to-end training. I will also speculate about the power of neural MT-style networks, both in general and with respect to how they are currently trained.

“Google’s Neural Machine Translation system”, Quoc Le

I will talk about the history of neural machine translation at Google and some of our recent work on deploying neural machine translation at scale.



Licensed under CC-BY-SA-4.0.
