Mirror of https://github.com/saymrwulf/transformers.git (synced 2026-05-15 21:01:19 +00:00)
* Simplify Tensor Parallel implementation with PyTorch TP
* Move tp_plan to config
* Lint
* Format and warning
* Disable copy-from check
* Conditionally get attr from config
* make fix-copies
* Move base_model_tp_plan to PretrainedConfig
* Move TP into from_pretrained
* Add device context for load
* Do not serialize
* Move _tp_plan setting to post_init
* Add has_tp_plan
* Add test_tp
* Add 'Multi-gpu inference' doc
* Add backward support for device type identification
* Auto-detect accelerator
* supports_tp_plan
* copyright year
* Fix copy
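The changes above move a tensor-parallel sharding plan (`tp_plan` / `base_model_tp_plan`) into the model config. A minimal sketch of what such a plan can look like, assuming it is a mapping from module-name patterns to PyTorch TP parallel styles; the concrete keys and the helper below are illustrative, not taken from the repository:

```python
# Hypothetical base_model_tp_plan: maps module-name patterns to PyTorch
# tensor-parallel styles ("colwise" / "rowwise"). The patterns shown are
# an assumption for illustration, not copied from the repo.
base_model_tp_plan = {
    "layers.*.self_attn.q_proj": "colwise",  # shard attention projections by columns
    "layers.*.self_attn.k_proj": "colwise",
    "layers.*.self_attn.v_proj": "colwise",
    "layers.*.self_attn.o_proj": "rowwise",  # output projection sharded by rows
    "layers.*.mlp.up_proj": "colwise",
    "layers.*.mlp.down_proj": "rowwise",
}

def supports_tp_plan(plan):
    """Illustrative helper: a model supports TP when a non-empty plan exists."""
    return bool(plan)

print(supports_tp_plan(base_model_tp_plan))  # prints: True
```

Per the 'Multi-gpu inference' doc item, a plan like this is meant to be consumed when the model is loaded (the commit moves TP into `from_pretrained`), with the process group typically launched via `torchrun`; the exact loading API may differ by library version.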
Directory listing:

| Name |
|---|
| ar |
| de |
| en |
| es |
| fr |
| hi |
| it |
| ja |
| ko |
| ms |
| pt |
| te |
| tr |
| zh |
| _config.py |