transformers/docs/source/en
pglorio 33cb1f7b61
Add Zamba2 (#34517)
* First commit

* Finish model implementation

* First commit

* Finish model implementation

* Register zamba2

* generated modeling and configuration

* generated modeling and configuration

* added hybrid cache

* fix attention_mask in mamba

* dropped unused loras

* fix flash2

* config docstrings

* fix config and fwd pass

* make fixup fixes

* test_modeling_zamba2

* small fixes

* make fixup fixes

* Fix modular model converter

* added inheritances in modular, renamed zamba cache

* modular rebase

* new modular conversion

* fix generated modeling file

* fixed import for Zamba2RMSNormGated

* modular file cleanup

* make fixup and model tests

* dropped inheritance for Zamba2PreTrainedModel

* make fixup and unit tests

* Add inheritance of rope from GemmaRotaryEmbedding

* moved rope to model init

* drop del self.self_attn and del self.feed_forward

* fix tests

* renamed lora -> adapter

* rewrote adapter implementation

* fixed tests

* Fix torch_forward in mamba2 layer

* Fix torch_forward in mamba2 layer

* Fix torch_forward in mamba2 layer

* Dropped adapter in-place sum

* removed rope from attention init

* updated rope

* created get_layers method

* make fixup fix

* make fixup fixes

* make fixup fixes

* update to new attention standard

* update to new attention standard

* make fixup fixes

* minor fixes

* cache_position

* removed cache_position position_ids use_cache

* remove config from modular

* removed config from modular (2)

* import apply_rotary_pos_emb from llama

* fixed rope_kwargs

* Instantiate cache in Zamba2Model

* fix cache

* fix @slow decorator

* small fix in modular file

* Update docs/source/en/model_doc/zamba2.md

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* several minor fixes

* inherit mamba2decoder fwd and drop position_ids in mamba

* removed docstrings from modular

* reinstate zamba2 attention decoder fwd

* use regex for tied keys

* Revert "use regex for tied keys"

This reverts commit 9007a522b1f831df6d516a281c0d3fdd20a118f5.

* use regex for tied keys

* add cpu to slow forward tests

* dropped config.use_shared_mlp_adapter

* Update docs/source/en/model_doc/zamba2.md

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* re-convert from modular

---------

Co-authored-by: root <root@node-2.us-southcentral1-a.compute.internal>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
2025-01-27 10:51:23 +01:00
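Since the headline change in the squash log above is the new Zamba2 model, here is a minimal usage sketch for orientation; the checkpoint name is an assumption, and docs/source/en/model_doc/zamba2.md carries the canonical example:

```python
# Minimal sketch of loading the newly added Zamba2 model.
# NOTE: the checkpoint id "Zyphra/Zamba2-2.7B" is assumed here; see
# docs/source/en/model_doc/zamba2.md for the canonical usage example.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Zyphra/Zamba2-2.7B")
model = AutoModelForCausalLM.from_pretrained("Zyphra/Zamba2-2.7B")

# Standard causal-LM generation; tokenize a prompt and decode the output.
inputs = tokenizer("Zamba2 is a hybrid mamba2/attention model that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Per the log entries "added hybrid cache" and "Instantiate cache in Zamba2Model", the hybrid (mamba/attention) cache is created inside Zamba2Model itself, so no manual cache setup should be needed for `generate()`.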
internal Implement AsyncTextIteratorStreamer for asynchronous streaming (#34931) 2024-12-20 12:08:12 +01:00
main_classes HIGGS Quantization Support (#34997) 2024-12-23 16:54:49 +01:00
model_doc Add Zamba2 (#34517) 2025-01-27 10:51:23 +01:00
quantization Enable gptqmodel (#35012) 2025-01-15 14:22:49 +01:00
tasks [doctest] Fixes (#35863) 2025-01-26 15:26:38 -08:00
_config.py
_redirects.yml
_toctree.yml Granite Vision Support (#35579) 2025-01-23 17:15:52 +01:00
accelerate.md
add_new_model.md
add_new_pipeline.md [docs] Follow up register_pipeline (#35310) 2024-12-20 09:22:44 -08:00
agents.md
agents_advanced.md [doctest] Fixes (#35863) 2025-01-26 15:26:38 -08:00
attention.md
autoclass_tutorial.md
bertology.md
big_models.md
chat_templating.md [doctest] Fixes (#35863) 2025-01-26 15:26:38 -08:00
community.md
contributing.md
conversations.md
create_a_model.md
custom_models.md
debugging.md
deepspeed.md [doc] deepspeed universal checkpoint (#35015) 2025-01-09 09:50:51 -08:00
fast_tokenizers.md
fsdp.md Fix docs typos. (#35465) 2025-01-02 11:29:46 +01:00
generation_strategies.md [doctest] Fixes (#35863) 2025-01-26 15:26:38 -08:00
gguf.md Add Gemma2 GGUF support (#34002) 2025-01-03 14:50:07 +01:00
glossary.md
how_to_hack_models.md
hpo_train.md
index.md Add Zamba2 (#34517) 2025-01-27 10:51:23 +01:00
installation.md Enhanced Installation Section in README.md (#35094) 2025-01-14 08:05:08 -08:00
kv_cache.md [doctest] Fixes (#35863) 2025-01-26 15:26:38 -08:00
llm_optims.md Update llm_optims docs for sdpa_kernel (#35481) 2025-01-06 08:54:31 -08:00
llm_tutorial.md [chat] docs fix (#35840) 2025-01-22 14:32:27 +00:00
llm_tutorial_optimization.md
model_memory_anatomy.md
model_sharing.md
model_summary.md
modular_transformers.md Improve modular documentation (#35737) 2025-01-21 17:53:30 +01:00
multilingual.md
notebooks.md
pad_truncation.md
peft.md
perf_hardware.md
perf_infer_cpu.md
perf_infer_gpu_multi.md Fix image preview in multi-GPU inference docs (#35303) 2024-12-17 09:33:50 -08:00
perf_infer_gpu_one.md Add Zamba2 (#34517) 2025-01-27 10:51:23 +01:00
perf_torch_compile.md
perf_train_cpu.md
perf_train_cpu_many.md
perf_train_gpu_many.md
perf_train_gpu_one.md
perf_train_special.md
perf_train_tpu_tf.md
performance.md
perplexity.md
philosophy.md
pipeline_tutorial.md
pipeline_webserver.md
pr_checks.md
preprocessing.md
quicktour.md [chat] docs fix (#35840) 2025-01-22 14:32:27 +00:00
run_scripts.md
sagemaker.md
serialization.md
task_summary.md [doctest] Fixes (#35863) 2025-01-26 15:26:38 -08:00
tasks_explained.md
testing.md
tf_xla.md
tflite.md
tiktoken.md
tokenizer_summary.md
torchscript.md
trainer.md
training.md
troubleshooting.md