Mirror of https://github.com/saymrwulf/transformers.git, synced 2026-05-14 20:58:08 +00:00
Latest commit (squashed commit history):

* First commit
* Finish model implementation
* First commit
* Finish model implementation
* Register zamba2
* generated modeling and configuration
* generated modeling and configuration
* added hybrid cache
* fix attention_mask in mamba
* dropped unused loras
* fix flash2
* config docstrings
* fix config and fwd pass
* make fixup fixes
* text_modeling_zamba2
* small fixes
* make fixup fixes
* Fix modular model converter
* added inheritances in modular, renamed zamba cache
* modular rebase
* new modular conversion
* fix generated modeling file
* fixed import for Zamba2RMSNormGated
* modular file cleanup
* make fixup and model tests
* dropped inheritance for Zamba2PreTrainedModel
* make fixup and unit tests
* Add inheritance of rope from GemmaRotaryEmbedding
* moved rope to model init
* drop del self.self_attn and del self.feed_forward
* fix tests
* renamed lora -> adapter
* rewrote adapter implementation
* fixed tests
* Fix torch_forward in mamba2 layer
* Fix torch_forward in mamba2 layer
* Fix torch_forward in mamba2 layer
* Dropped adapter in-place sum
* removed rope from attention init
* updated rope
* created get_layers method
* make fixup fix
* make fixup fixes
* make fixup fixes
* update to new attention standard
* update to new attention standard
* make fixup fixes
* minor fixes
* cache_position
* removed cache_position postion_ids use_cache
* remove config from modular
* removed config from modular (2)
* import apply_rotary_pos_emb from llama
* fixed rope_kwargs
* Instantiate cache in Zamba2Model
* fix cache
* fix @slow decorator
* small fix in modular file
* Update docs/source/en/model_doc/zamba2.md
  Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* several minor fixes
* inherit mamba2decoder fwd and drop position_ids in mamba
* removed docstrings from modular
* reinstate zamba2 attention decoder fwd
* use regex for tied keys
* Revert "use regex for tied keys"
  This reverts commit 9007a522b1f831df6d516a281c0d3fdd20a118f5.
* use regex for tied keys
* add cpu to slow forward tests
* dropped config.use_shared_mlp_adapter
* Update docs/source/en/model_doc/zamba2.md
  Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* re-convert from modular

Co-authored-by: root <root@node-2.us-southcentral1-a.compute.internal>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
Directories:

* internal
* main_classes
* model_doc
* quantization
* tasks

Files:

* _config.py
* _redirects.yml
* _toctree.yml
* accelerate.md
* add_new_model.md
* add_new_pipeline.md
* agents.md
* agents_advanced.md
* attention.md
* autoclass_tutorial.md
* bertology.md
* big_models.md
* chat_templating.md
* community.md
* contributing.md
* conversations.md
* create_a_model.md
* custom_models.md
* debugging.md
* deepspeed.md
* fast_tokenizers.md
* fsdp.md
* generation_strategies.md
* gguf.md
* glossary.md
* how_to_hack_models.md
* hpo_train.md
* index.md
* installation.md
* kv_cache.md
* llm_optims.md
* llm_tutorial.md
* llm_tutorial_optimization.md
* model_memory_anatomy.md
* model_sharing.md
* model_summary.md
* modular_transformers.md
* multilingual.md
* notebooks.md
* pad_truncation.md
* peft.md
* perf_hardware.md
* perf_infer_cpu.md
* perf_infer_gpu_multi.md
* perf_infer_gpu_one.md
* perf_torch_compile.md
* perf_train_cpu.md
* perf_train_cpu_many.md
* perf_train_gpu_many.md
* perf_train_gpu_one.md
* perf_train_special.md
* perf_train_tpu_tf.md
* performance.md
* perplexity.md
* philosophy.md
* pipeline_tutorial.md
* pipeline_webserver.md
* pr_checks.md
* preprocessing.md
* quicktour.md
* run_scripts.md
* sagemaker.md
* serialization.md
* task_summary.md
* tasks_explained.md
* testing.md
* tf_xla.md
* tflite.md
* tiktoken.md
* tokenizer_summary.md
* torchscript.md
* trainer.md
* training.md
* troubleshooting.md