transformers/tests
Yoni Gozlan fa56dcc2ab
Refactoring of ImageProcessorFast (#35069)
* add init and base image processing functions

* add add_fast_image_processor to transformers-cli

* add working fast image processor clip

* add fast image processor to doc, working tests

* remove "to be implemented" SigLip

* fix unprotected import

* fix unprotected vision import

* update ViTImageProcessorFast

* increase threshold for slow/fast equivalence

* add fast img blip

* add fast class in tests with cli

* improve cli

* add fast image processor convnext

* add LlavaPatchingMixin and fast image processor for llava_next and llava_onevision

* add device kwarg to ImagesKwargs for fast processing on cuda

* cleanup

* fix unprotected import

* group images by sizes and add batch processing

* Add batch equivalence tests, skip when center_crop is used

* cleanup

* update init and cli

* fix-copies

* refactor convnext, cleanup base

* fix

* remove patching mixins, add piped torchvision transforms for ViT

* fix unbatched processing

* fix f strings

* protect imports

* change llava onevision to class transforms (test)

* fix convnext

* improve formatting (following Pavel's review)

* fix handling device arg

* improve cli

* fix

* fix inits

* Add distinction between preprocess and _preprocess, and support for arbitrary kwargs through valid_extra_kwargs

* uniformize qwen2_vl fast

* fix docstrings

* add fast image processor llava

* remove min_pixels max_pixels from accepted size

* nit

* nit

* refactor fast image processors docstrings

* cleanup and remove fast class transforms

* update add fast image processor transformers cli

* cleanup docstring

* uniformize pixtral fast and make _process_image explicit

* fix prepare image structure llava next/onevision

* Use typed kwargs instead of explicit args

* nit fix import Unpack

* clearly separate pops and gets in base preprocess. Use explicit typed kwargs

* make qwen2_vl preprocess arguments hashable
2025-02-04 17:52:31 -05:00
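The "group images by sizes and add batch processing" step above can be sketched as follows. This is an illustrative standalone version, not the PR's actual code: the helper names `group_images_by_shape` and `reorder_images` are assumptions here, chosen to mirror the idea of stacking same-sized images into one tensor so the fast (torchvision/torch) path can process each group as a single batch.

```python
from collections import defaultdict

import torch
import torch.nn.functional as F


def group_images_by_shape(images):
    """Group image tensors by (H, W) so each group can be stacked and
    processed as one batch. Also records, per input image, which group
    it went to and its index there, so order can be restored later."""
    grouped = defaultdict(list)
    positions = []  # one (shape, index-within-group) entry per input image
    for image in images:
        shape = tuple(image.shape[-2:])
        positions.append((shape, len(grouped[shape])))
        grouped[shape].append(image)
    batches = {shape: torch.stack(imgs) for shape, imgs in grouped.items()}
    return batches, positions


def reorder_images(processed, positions):
    """Flatten processed batches back into the original input order."""
    return [processed[shape][i] for shape, i in positions]


# Mixed-size inputs: two 224x224 images and one 336x336 image.
images = [
    torch.rand(3, 224, 224),
    torch.rand(3, 336, 336),
    torch.rand(3, 224, 224),
]
batches, positions = group_images_by_shape(images)
# Each same-shape group is resized as a single batched call.
resized = {
    shape: F.interpolate(batch, size=(224, 224))
    for shape, batch in batches.items()
}
outputs = reorder_images(resized, positions)
```

Batching by shape is what makes the `device` kwarg worthwhile: one `interpolate` call per size group on CUDA amortizes kernel-launch overhead that per-image processing would pay repeatedly.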
agents use torch.testing.assertclose instead to get more details about error in cis (#35659) 2025-01-24 16:55:28 +01:00
bettertransformer use torch.testing.assertclose instead to get more details about error in cis (#35659) 2025-01-24 16:55:28 +01:00
deepspeed use torch.testing.assertclose instead to get more details about error in cis (#35659) 2025-01-24 16:55:28 +01:00
extended
fixtures
fsdp [tests] make cuda-only tests device-agnostic (#35607) 2025-01-13 14:48:39 +01:00
generation Add GOT-OCR 2.0 to Transformers (#34721) 2025-01-31 11:28:13 -05:00
models Refactoring of ImageProcessorFast (#35069) 2025-02-04 17:52:31 -05:00
optimization Update unwrap_and_save_reload_schedule to use weights_only=False (#35952) 2025-01-29 14:30:57 +01:00
peft_integration use torch.testing.assertclose instead to get more details about error in cis (#35659) 2025-01-24 16:55:28 +01:00
pipelines Output dicts support in text generation pipeline (#35092) 2025-01-29 14:44:46 +00:00
quantization Split and clean up GGUF quantization tests (#35502) 2025-01-27 15:46:57 +01:00
repo_utils
sagemaker
tokenization
tp Update-tp test (#35844) 2025-02-03 09:37:02 +01:00
trainer layernorm_decay_fix (#35927) 2025-02-04 11:01:49 +01:00
utils Display warning for unknown quants config instead of an error (#35963) 2025-02-04 15:17:01 +01:00
__init__.py
test_backbone_common.py
test_configuration_common.py
test_feature_extraction_common.py
test_image_processing_common.py Refactoring of ImageProcessorFast (#35069) 2025-02-04 17:52:31 -05:00
test_image_transforms.py
test_modeling_common.py Update tests regarding attention types after #35235 (#36024) 2025-02-04 18:04:47 +01:00
test_modeling_flax_common.py
test_modeling_tf_common.py
test_pipeline_mixin.py
test_processing_common.py
test_sequence_feature_extraction_common.py
test_tokenization_common.py apply_chat_template: consistent behaviour for return_assistant_tokens_mask=True return_tensors=True (#35582) 2025-02-04 10:27:52 +01:00
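Several directories above were migrated to `torch.testing.assert_close` (#35659). A minimal sketch of the motivation, with illustrative tensors (the real call sites are in the test files listed above): on failure, `assert_close` reports how many elements mismatch and the greatest absolute and relative differences, which is far more actionable in CI logs than a bare `assertTrue(torch.allclose(...))`.

```python
import torch

expected = torch.tensor([1.0, 2.0, 3.0])
actual = torch.tensor([1.0, 2.0, 3.0])

# Passes: assert_close applies dtype-aware rtol/atol defaults.
torch.testing.assert_close(actual, expected)

# Fails with a detailed message (mismatched element count, greatest
# absolute/relative difference) instead of a bare True/False.
try:
    torch.testing.assert_close(torch.tensor([1.0, 2.0]), torch.tensor([1.0, 2.5]))
    raised = False
except AssertionError as e:
    raised = True
    message = str(e)
```

Tolerances can still be pinned explicitly, e.g. `torch.testing.assert_close(a, b, rtol=1e-4, atol=1e-4)`, which is how equivalence thresholds like the slow/fast one in #35069 are typically expressed.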