transformers/tests
Latest commit: Matt | d5cf91b346 | Separate chat templates into a single file (#33957)
* Initial draft

* Add .jinja file loading for processors

* Add processor saving of naked chat template files

* make fixup

* Add save-load test for tokenizers

* stash commit

* Try popping the file

* make fixup

* Pop the arg correctly

* Add processor test

* Fix processor code

* stash commit

* Processor clobbers child tokenizer's chat template

* make fixup

* Split processor/tokenizer files to avoid interactions

* fix test

* Expand processor tests

* Rename arg to "save_raw_chat_template" across all classes

* Update processor warning

* Move templates to single file

* Improve testing for processor/tokenizer clashes

* Extend saving test

* Test file priority correctly

* make fixup

* Don't pop the chat template file before the slow tokenizer gets a look

* Remove breakpoint

* make fixup

* Fix error
Committed 2024-11-26 14:18:04 +00:00
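The commit above moves chat templates out of `tokenizer_config.json` and into a standalone `.jinja` file that processors and tokenizers can load. As a minimal sketch of what such a template does (the template text here is illustrative, not copied from the PR or any model repo), rendering one with `jinja2` looks like:

```python
from jinja2 import Template

# An illustrative chat template of the kind stored in a standalone .jinja file.
chat_template = (
    "{% for message in messages %}"
    "{{ message['role'] }}: {{ message['content'] }}\n"
    "{% endfor %}"
)

messages = [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hi there."},
]

# Render the conversation by iterating over the messages list.
rendered = Template(chat_template).render(messages=messages)
print(rendered)
```

Keeping the template in its own file means it can be edited and diffed as plain Jinja rather than as an escaped JSON string.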
agents | Agents: Small fixes in streaming to gradio + add tests (#34549) | 2024-11-11 20:52:09 +01:00
benchmark
bettertransformer
deepspeed | Trainer - deprecate tokenizer for processing_class (#32385) | 2024-10-02 14:08:46 +01:00
extended | [tests] skip tests for xpu (#33553) | 2024-09-19 19:28:04 +01:00
fixtures
fsdp | FSDP grad accum fix (#34645) | 2024-11-15 22:28:06 +01:00
generation | fix static cache data type miss-match (#34799) | 2024-11-25 16:59:38 +01:00
models | [Whisper] Fix whisper integration tests (#34111) | 2024-11-26 12:23:08 +01:00
optimization | fix: Fixed the 1st argument name in classmethods (#31907) | 2024-07-11 12:11:50 +01:00
peft_integration | [PEFT] Add warning for missing key in LoRA adapter (#34068) | 2024-10-24 17:56:40 +02:00
pipelines | allow unused input parameters passthrough when chunking in asr pipelines (#33889) | 2024-11-25 11:36:44 +01:00
quantization | Skipping aqlm non working inference tests till fix merged (#34865) | 2024-11-26 11:09:30 +01:00
repo_utils | Refactor CI: more explicit (#30674) | 2024-08-30 18:17:25 +02:00
sagemaker | Trainer - deprecate tokenizer for processing_class (#32385) | 2024-10-02 14:08:46 +01:00
tokenization | VLM: special multimodal Tokenizer (#34461) | 2024-11-04 16:37:51 +01:00
tp | Simplify Tensor Parallel implementation with PyTorch TP (#34184) | 2024-11-18 19:51:49 +01:00
trainer | Remove FSDP wrapping from sub-models. (#34452) | 2024-11-15 23:00:03 +01:00
utils | Fix: take into account meta device (#34134) | 2024-11-20 11:32:07 +01:00
__init__.py
test_backbone_common.py
test_configuration_common.py | Load sub-configs from composite configs (#34410) | 2024-11-05 11:34:01 +01:00
test_feature_extraction_common.py
test_image_processing_common.py | Add DetrImageProcessorFast (#34063) | 2024-10-21 09:05:05 -04:00
test_image_transforms.py
test_modeling_common.py | [Deberta/Deberta-v2] Refactor code base to support compile, export, and fix LLM (#22105) | 2024-11-25 10:43:16 +01:00
test_modeling_flax_common.py
test_modeling_tf_common.py | [TF] Fix Tensorflow XLA Generation on limited seq_len models (#33903) | 2024-10-05 16:20:50 +02:00
test_pipeline_mixin.py | Add image text to text pipeline (#34170) | 2024-10-31 15:48:11 -04:00
test_processing_common.py | Separate chat templates into a single file (#33957) | 2024-11-26 14:18:04 +00:00
test_sequence_feature_extraction_common.py | Separate chat templates into a single file (#33957) | 2024-11-26 14:18:04 +00:00
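The save-load tests added to `test_processing_common.py` and `test_tokenization_common.py` by #33957 exercise a round trip: the chat template is written out as its own file on save and picked up again on load. A hedged, stdlib-only sketch of that pattern (the file name `chat_template.jinja` is an assumption from context, not quoted from the PR):

```python
import tempfile
from pathlib import Path

# Illustrative template; real templates live in each model repo.
template = "{% for m in messages %}{{ m['role'] }}: {{ m['content'] }}\n{% endfor %}"

with tempfile.TemporaryDirectory() as save_dir:
    # "Saving": write the raw template as a standalone file (assumed name),
    # instead of embedding it as a JSON-escaped string in the config.
    path = Path(save_dir) / "chat_template.jinja"
    path.write_text(template)

    # "Loading": if the standalone file exists, it is read back verbatim.
    reloaded = path.read_text() if path.exists() else None
```

The round trip succeeding (`reloaded == template`) is the invariant the PR's tests check, including the file-priority cases where both a processor and its child tokenizer carry templates.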