Mirror of https://github.com/saymrwulf/transformers.git, synced 2026-05-14 20:58:08 +00:00
Latest commit: [models] respect dtype of the model when instantiating it

* respect dtype of the model when instantiating it
* cleanup
* cleanup
* rework to handle non-float dtype
* fix
* switch to fp32 tiny model
* improve
* use dtype.is_floating_point
* Apply suggestions from code review (Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>)
* fix the doc
* recode to use explicit torch_dtype_auto_detect, torch_dtype args
* docs and tweaks
* docs and tweaks
* docs and tweaks
* merge 2 args, add docs
* fix
* fix
* better doc
* better doc

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
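The commit above mentions using `dtype.is_floating_point` to "rework to handle non-float dtype": when auto-detecting a checkpoint's dtype, non-float tensors (such as integer position-id buffers) must be skipped so they do not masquerade as the model's precision. A minimal sketch of that idea follows; the helper name `detect_torch_dtype` and the toy state dict are illustrative assumptions, not the actual implementation in the repository.

```python
import torch

def detect_torch_dtype(state_dict):
    # Hypothetical helper mimicking torch_dtype auto-detection:
    # return the dtype of the first floating-point tensor in the
    # checkpoint, ignoring non-float tensors (e.g. int64 buffers).
    for tensor in state_dict.values():
        if tensor.dtype.is_floating_point:
            return tensor.dtype
    return None

# Toy checkpoint: an integer buffer followed by fp16 weights.
state_dict = {
    "position_ids": torch.arange(4),                   # int64, skipped
    "weight": torch.ones(2, 2, dtype=torch.float16),   # first float tensor
}
print(detect_torch_dtype(state_dict))  # torch.float16
```

Skipping on `is_floating_point` is what lets an fp16 checkpoint with integer buffers still be detected as fp16 rather than falling over on the first non-float entry.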
Directory contents:

* callback.rst
* configuration.rst
* data_collator.rst
* deepspeed.rst
* feature_extractor.rst
* logging.rst
* model.rst
* optimizer_schedules.rst
* output.rst
* pipelines.rst
* processors.rst
* tokenizer.rst
* trainer.rst