pytorch/caffe2/python/models
Juan Miguel Pino 4d8a8c2e1e Implement dot attention
Summary:
Implement dot attention as described in https://arxiv.org/abs/1508.04025.
This saves computing the weighted encoder outputs in `rnn_cell.py`.
When the encoder and decoder dimensions differ, we apply an FC layer, which corresponds to the "general" scoring case below Figure 2 in the paper.
Refactored unit tests.
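The mechanism described above can be sketched in plain NumPy (this is an illustrative sketch, not the Caffe2 operator graph built in `rnn_cell.py`; the function name and signature are hypothetical). The score is the dot product between the decoder state and each encoder output; when dimensions differ, an optional projection matrix plays the role of the FC layer, giving the "general" scoring variant:

```python
import numpy as np

def dot_attention(decoder_state, encoder_outputs, proj=None):
    """Dot attention (Luong et al., 2015) -- a NumPy sketch.

    decoder_state:   (d_dec,)    current decoder hidden state
    encoder_outputs: (T, d_enc)  encoder hidden states, one per source position
    proj:            optional (d_dec, d_enc) matrix; applied when the encoder
                     and decoder dimensions differ (the "general" variant,
                     corresponding to the FC layer in the commit).
    """
    # Project the decoder state into the encoder space if needed.
    query = decoder_state if proj is None else decoder_state @ proj
    # Dot-product score against every encoder position: (T,)
    scores = encoder_outputs @ query
    # Softmax over source positions (shifted for numerical stability).
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Context vector: attention-weighted sum of encoder outputs, (d_enc,)
    context = weights @ encoder_outputs
    return context, weights

# Mismatched dimensions exercise the FC/projection path:
rng = np.random.default_rng(0)
ctx, w = dot_attention(rng.standard_normal(4),
                       rng.standard_normal((5, 3)),
                       proj=rng.standard_normal((4, 3)))
```

Because the scores are plain dot products, no per-step weighted encoder outputs need to be precomputed, which is the saving the summary refers to.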

Reviewed By: jhcross

Differential Revision: D5486976

fbshipit-source-id: f9e9aea675b3b072fbe631bc004199b90a9d95cb
2017-08-06 11:50:16 -07:00
seq2seq Implement dot attention 2017-08-06 11:50:16 -07:00
__sym_init__.py exec_net --> predict_net 2017-03-23 16:31:49 -07:00
download.py fix download progress bar's percentage exceed 100% 2017-04-20 10:41:06 -07:00
resnet.py Fix a few typos and grammars in comment 2017-06-14 18:22:39 -07:00
resnet_test.py fast simple-net memonger for C++ 2017-07-06 15:17:07 -07:00