transformers/tests/quantization
g-prz fe484726aa
Add falcon gguf (#33437)
* feat(gguf): add falcon q2 k
* fix(gguf): remove useless renaming
* feat(gguf): separate falcon 7b and 40b
* feat(gguf): apply fixup
* fix(test): error rebase
* feat(gguf): add fp16 weight comparison for falcon
* feat(gguf): test weight of all layers
* test(gguf): add falcon 40b under skip decorator
* feat(gguf): quick example for extracting model size
2024-10-02 14:10:39 +02:00
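The commits above add an fp16 weight comparison across all layers of the GGUF Falcon checkpoints. A minimal sketch of that dequantize-and-compare test pattern, using NumPy with a crude symmetric integer quantizer as a stand-in for the real GGUF dequantization (all names and the tolerance are illustrative, not taken from the test suite):

```python
import numpy as np

def fake_quantize(weights: np.ndarray, n_bits: int = 8) -> np.ndarray:
    """Round-trip weights through a crude symmetric integer quantization.

    Stand-in for dequantizing a GGUF tensor back to float for comparison.
    """
    scale = np.abs(weights).max() / (2 ** (n_bits - 1) - 1)
    q = np.round(weights / scale).astype(np.int32)
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
# Stand-in for one layer's fp16 reference weights.
reference = rng.standard_normal((4, 4)).astype(np.float16).astype(np.float32)
dequantized = fake_quantize(reference)

# The comparison step: each layer's dequantized weights must stay
# within some tolerance of the fp16 reference.
max_err = float(np.abs(dequantized - reference).max())
assert max_err < 1e-1, f"quantization error too large: {max_err}"
```

In the actual tests this check is repeated per layer over the model's state dict, so a regression in any single tensor's quantization mapping fails the test rather than being averaged away.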
aqlm_integration Cache: use batch_size instead of max_batch_size (#32657) 2024-08-16 11:48:45 +01:00
autoawq Skip tests properly (#31308) 2024-06-26 21:59:08 +01:00
bnb Enable BNB multi-backend support (#31098) 2024-09-24 03:40:56 -06:00
compressed_tensor HFQuantizer implementation for compressed-tensors library (#31704) 2024-09-25 14:31:38 +02:00
eetq_integration [FEAT]: EETQ quantizer support (#30262) 2024-04-22 20:38:58 +01:00
fbgemm_fp8 Fix FbgemmFp8Linear not preserving tensor shape (#33239) 2024-09-11 13:26:44 +02:00
ggml Add falcon gguf (#33437) 2024-10-02 14:10:39 +02:00
gptq 🚨 Remove dataset with restrictive license (#31452) 2024-06-17 17:56:51 +01:00
hqq Hqq serialization (#33141) 2024-09-30 14:47:18 +02:00
quanto_integration Skip tests properly (#31308) 2024-06-26 21:59:08 +01:00
torchao_integration Add TorchAOHfQuantizer (#32306) 2024-08-14 16:14:24 +02:00