Mirror of https://github.com/saymrwulf/pytorch.git (synced 2026-05-15 21:00:47 +00:00)
Summary: My commit bab5bc broke fp16 compute, as I had tested it only with the null input, which actually produced fp32 data (even though the dtype was given as float16). I had also confused the concepts of "float16 compute" and fp16 data. Issue #1408. This fixes those issues, tested on both Volta and M40 GPUs. It largely restores the previous code and fixes the null input to do FloatToHalf.

Reviewed By: pietern

Differential Revision: D6211849

fbshipit-source-id: 5b41cffdd605f61a438a4c34c56972ede9eee28e
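A minimal sketch of the distinction the fix restores, using Caffe2's Python API and the FloatToHalf operator named in the summary above; the blob names and shapes are illustrative, not taken from the commit:

```python
# Sketch: fp16 *data* vs. fp16 *compute*. A blob fed with float32 values
# stays float32 data, even if some dtype argument elsewhere says float16;
# only an explicit conversion such as FloatToHalf yields real fp16 data.
import numpy as np
from caffe2.python import core, workspace

# fp32 data fed into the workspace -- this is what the buggy null-input
# path effectively produced, regardless of the declared dtype.
x_fp32 = np.random.rand(4, 4).astype(np.float32)
workspace.FeedBlob("x_fp32", x_fp32)

# FloatToHalf produces genuine fp16 data, which is what the fixed
# null-input path now does.
workspace.RunOperatorOnce(
    core.CreateOperator("FloatToHalf", ["x_fp32"], ["x_fp16"])
)

x_fp16 = workspace.FetchBlob("x_fp16")
print(x_fp16.dtype)  # float16: the blob now actually holds half-precision data
```

Whether a kernel then *computes* in float16 or accumulates in float32 is a separate property of the operator, which is the distinction the summary says was conflated.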
Files:

- char_rnn.py
- lmdb_create_example.py
- resnet50_trainer.py