mirror of
https://github.com/saymrwulf/transformers.git
synced 2026-05-15 21:01:19 +00:00
* feat(gguf): add falcon q2 k
* fix(gguf): remove useless renaming
* feat(gguf): separate falcon 7b and 40b
* feat(gguf): apply fixup
* fix(test): error rebase
* feat(gguf): add fp16 weight comparison for falcon
* feat(gguf): test weight of all layers
* test(gguf): add falcon 40b under skip decorator
* feat(gguf): quick example for extracting model size
| Directory |
|---|
| aqlm_integration |
| autoawq |
| bnb |
| compressed_tensor |
| eetq_integration |
| fbgemm_fp8 |
| ggml |
| gptq |
| hqq |
| quanto_integration |
| torchao_integration |