Mirror of https://github.com/saymrwulf/transformers.git (synced 2026-05-14 20:58:08 +00:00)
* Add first draft
* Use appropriate gelu function
* More improvements
* More improvements
* More improvements
* Convert checkpoint
* More improvements
* Improve docs, remove print statements
* More improvements
* Add link
* Remove unused masking function
* Begin tokenizer
* do_lower_case
* Debug
* Set split_special_tokens=True
* Remove script
* Fix style
* Fix rebase
* Use same design as CLIP
* Add fast tokenizer
* Add SiglipTokenizer to init, remove extra_ids
* Improve conversion script
* Use smaller inputs in conversion script
* Update conversion script
* More improvements
* Add processor to conversion script
* Add tests
* Remove print statements
* Add tokenizer tests
* Fix more tests
* More improvements related to weight initialization
* More improvements
* Make more tests pass
* More improvements
* More improvements
* Add copied from
* Add canonicalize_text
* Enable fast tokenizer tests
* More improvements
* Fix most slow tokenizer tests
* Address comments
* Fix style
* Remove script
* Address some comments
* Add copied from to tests
* Add more copied from
* Add more copied from
* Add more copied from
* Remove is_flax_available
* More updates
* Address comment
* Remove SiglipTokenizerFast for now
* Add caching
* Remove umt5 test
* Add canonicalize_text inside _tokenize, thanks Arthur
* Fix image processor tests
* Skip tests which are not applicable
* Skip test_initialization
* More improvements
* Compare pixel values
* Fix doc tests, add integration test
* Add do_normalize
* Remove causal mask and leverage ignore copy
* Fix attention_mask
* Fix remaining tests
* Fix dummies
* Rename temperature and bias
* Address comments
* Add copied from to tokenizer tests
* Add SiglipVisionModel to auto mapping
* Add copied from to image processor tests
* Improve doc
* Remove SiglipVisionModel from index
* Address comments
* Improve docs
* Simplify config
* Add first draft
* Make it like mistral
* More improvements
* Fix attention_mask
* Fix output_attentions
* Add note in docs
* Convert multilingual model
* Convert large checkpoint
* Convert more checkpoints
* Add pipeline support, correct image_mean and image_std
* Use padding=max_length by default
* Make processor like llava
* Add code snippet
* Convert more checkpoints
* Set keep_punctuation_string=None as in OpenCLIP
* Set normalized=False for special tokens
* Fix doc test
* Update integration test
* Add figure
* Update organization
* Happy new year
* Use AutoModel everywhere

---------

Co-authored-by: patil-suraj <surajp815@gmail.com>
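The "Rename temperature and bias" item refers to SigLIP's sigmoid-based pairwise scoring: each image-text cosine similarity is passed through a learnable scale and bias and a sigmoid, rather than being softmax-normalized over the batch as in CLIP. A minimal pure-Python sketch of that scoring rule; the function name `siglip_pairwise_probs` and the list-of-lists embedding format are illustrative, not the library API:

```python
import math


def siglip_pairwise_probs(image_embeds, text_embeds, logit_scale, logit_bias):
    """Score every image/text pair independently with a sigmoid.

    Illustrative sketch: embeddings are plain lists of floats, and the
    scale/bias stand in for the model's learned scalar parameters.
    """
    probs = []
    for img in image_embeds:
        row = []
        for txt in text_embeds:
            # Cosine similarity of the two embeddings.
            dot = sum(a * b for a, b in zip(img, txt))
            norm_img = math.sqrt(sum(a * a for a in img))
            norm_txt = math.sqrt(sum(b * b for b in txt))
            logit = logit_scale * (dot / (norm_img * norm_txt)) + logit_bias
            # Sigmoid instead of softmax: each pair is scored on its own.
            row.append(1.0 / (1.0 + math.exp(-logit)))
        probs.append(row)
    return probs
```

Because each pair is scored independently, a row of probabilities need not sum to one, which is the key difference from CLIP's softmax contrastive objective.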
* benchmark
* bettertransformer
* deepspeed
* extended
* fixtures
* fsdp
* generation
* models
* optimization
* peft_integration
* pipelines
* quantization
* repo_utils
* sagemaker
* tokenization
* tools
* trainer
* utils
* __init__.py
* test_backbone_common.py
* test_cache_utils.py
* test_configuration_common.py
* test_configuration_utils.py
* test_feature_extraction_common.py
* test_feature_extraction_utils.py
* test_image_processing_common.py
* test_image_processing_utils.py
* test_image_transforms.py
* test_modeling_common.py
* test_modeling_flax_common.py
* test_modeling_flax_utils.py
* test_modeling_tf_common.py
* test_modeling_tf_utils.py
* test_modeling_utils.py
* test_pipeline_mixin.py
* test_sequence_feature_extraction_common.py
* test_tokenization_common.py
* test_tokenization_utils.py