Mirror of https://github.com/saymrwulf/transformers.git, synced 2026-05-14 20:58:08 +00:00.
* first adding diffllama
* add Diff Attention and other parts, but still with errors
* complete making the attention Diff-Attention
* fix some bugs that may have been caused by transformers-cli while adding the model
* fix a bug caused by forgetting the KV cache
* Update src/transformers/models/diffllama/modeling_diffllama.py: you don't need to divide by 2 if we use the same number of attention heads as LLaMA; instead you can just split in forward (sketched after this list). Co-authored-by: Minho Ryu <ryumin93@gmail.com>
* Update src/transformers/models/diffllama/modeling_diffllama.py: fit to the changed "num_heads // 2" place. Co-authored-by: Minho Ryu <ryumin93@gmail.com>
* Update src/transformers/models/diffllama/modeling_diffllama.py: the new code is more meaningful than before. Co-authored-by: Minho Ryu <ryumin93@gmail.com>
* Update src/transformers/models/diffllama/modeling_diffllama.py: the new code is more meaningful than before. Co-authored-by: Minho Ryu <ryumin93@gmail.com>
* Update src/transformers/models/diffllama/modeling_diffllama.py: fit to the changed "num_heads // 2" place. Co-authored-by: Minho Ryu <ryumin93@gmail.com>
* Update src/transformers/models/diffllama/modeling_diffllama.py: fix dividing twice by sqrt(self.head_dim). Co-authored-by: Minho Ryu <ryumin93@gmail.com>
* Update src/transformers/models/diffllama/modeling_diffllama.py: fix dividing twice by sqrt(self.head_dim). Co-authored-by: Minho Ryu <ryumin93@gmail.com>
* Update src/transformers/models/diffllama/modeling_diffllama.py: fit to the changed "num_heads // 2" place, and make it more visible. Co-authored-by: Minho Ryu <ryumin93@gmail.com>
* found that the attention was still mis-implemented relative to the paper as of e072544a3bfc69b8a903e062729f861108ffecd3
* re-implemented
* adding groupnorm. Co-authored-by: Minho Ryu <ryumin93@gmail.com>
* align with transformers code style. Co-authored-by: Minho Ryu <ryumin93@gmail.com>
* fix typo. Co-authored-by: Minho Ryu <ryumin93@gmail.com>
* adding groupnorm. Co-authored-by: Minho Ryu <ryumin93@gmail.com>
* change SdpaAttention to DiffSdpaAttention. Co-authored-by: Minho Ryu <ryumin93@gmail.com>
* fix bug
* Update src/transformers/models/diffllama/modeling_diffllama.py: resolve the "not same outputs" problem. Co-authored-by: Minho Ryu <ryumin93@gmail.com>
* fix bugs in the placement of 'GroupNorm with scale', etc.
* Revert "fix bugs in the placement of 'GroupNorm with scale', etc."; this reverts commit 26307d92f6acd55e9fe89f2facff350f05760960
* simplify multiple attention (matmul) operations into one by repeating value_states (sketched after this list). Co-authored-by: Minho Ryu <ryumin93@gmail.com>
* simplify multiple attention (matmul) operations into one by repeating value_states. Co-authored-by: Minho Ryu <ryumin93@gmail.com>
* simplify multiple attention (matmul) operations into one by repeating value_states. Co-authored-by: Minho Ryu <ryumin93@gmail.com>
* remove a missed type
* add the diffllama model_doc
* apply make style/quality
* apply review comments about the model
* apply review comments about the tests
* place diffllama alphabetically in src/transformers/__init__.py
* fix forgotten code
* support parameters that are not initialized with a standard deviation of 0 by the conventional method
* add DiffLlamaConfig to CONFIG_CLASSES_TO_IGNORE_FOR_DOCSTRING_CHECKPOINT_CHECK in utils/check_config_docstrings.py
* remove an unused property of the config
* add to the supported model list
* add to the SDPA supported model list
* fix copyright, remove pretraining_tensor_parallel, and modify for the initialization test
* remove unused imports, etc.
* empty commit
* empty commit
* empty commit
* apply modular transformers, but with bugs
* revert the previous commit
* create src/transformers/models/diffllama/modular_diffllama.py (see the second sketch after this list)
* run utils/modular_model_converter.py
* empty commit
* leaner modular diffllama
* remove more and more in modular_diffllama.py
* remove more and more in modular_diffllama.py
* resolve missing docstring entries
* force reset
* convert modular

Co-authored-by: Minho Ryu <ryumin93@gmail.com>
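The thread running through these commits is differential attention: two softmax attention maps are computed from query/key heads that are split in forward (keeping LLaMA's full head count), subtracted with a learned weight lambda, then normalized per head and rescaled. Below is a minimal sketch of that forward pass under the conventions the commits describe: a single division by sqrt(head_dim), splitting in forward rather than halving num_heads, and the repeated-value_states single matmul. All names and shapes here (diff_attention, headwise_rms_norm, the lambda parameters) are illustrative assumptions, not the actual modeling_diffllama.py code.

```python
import math

import torch
import torch.nn.functional as F


def headwise_rms_norm(x, eps=1e-6):
    # Stand-in for the "GroupNorm with scale" the commits mention:
    # normalize each head's output over its channel dimension.
    return x * torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + eps)


def diff_attention(q, k, v, lam_q1, lam_k1, lam_q2, lam_k2, lam_init):
    # q, k: (batch, 2*num_heads, seq, head_dim) -- the full LLaMA head count,
    # split into two groups here in forward rather than halving num_heads.
    # v: (batch, num_heads, seq, 2*head_dim) -- value heads are twice as wide.
    bsz, two_h, seq, head_dim = q.shape

    # Scale once by sqrt(head_dim) (one commit fixes dividing twice).
    scores = q @ k.transpose(-2, -1) / math.sqrt(head_dim)
    causal = torch.full((seq, seq), float("-inf"), device=q.device).triu(1)
    attn = F.softmax(scores + causal, dim=-1)  # (batch, 2*num_heads, seq, seq)

    # lambda is reparameterized through two learnable vector pairs.
    lam = (torch.exp((lam_q1 * lam_k1).sum())
           - torch.exp((lam_q2 * lam_k2).sum()) + lam_init)

    # "Repeating value_states" folds the two attention-times-value products
    # into one matmul covering all 2*num_heads score maps.
    out = attn @ v.repeat(1, 2, 1, 1)       # (batch, 2*num_heads, seq, 2*head_dim)
    out1, out2 = out.chunk(2, dim=1)        # two (batch, num_heads, seq, 2*head_dim)
    out = out1 - lam * out2                 # the differential step

    out = (1 - lam_init) * headwise_rms_norm(out)  # normalize, then fixed scale
    return out.transpose(1, 2).reshape(bsz, seq, -1)


# Toy shapes only; real hyperparameters would come from DiffLlamaConfig.
B, H, T, D = 2, 4, 5, 8
q = torch.randn(B, 2 * H, T, D)
k = torch.randn(B, 2 * H, T, D)
v = torch.randn(B, H, T, 2 * D)
lam_params = [torch.randn(D) * 0.1 for _ in range(4)]
print(diff_attention(q, k, v, *lam_params, lam_init=0.8).shape)  # (2, 5, 64)
```

Repeating value_states doubles the head axis of the values so that every score map hits a single matmul, after which the two halves are split along the head axis and subtracted; this trades a little memory for one fused product instead of two.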
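The final commits port the model to modular transformers: a small modular_diffllama.py declares the model by inheriting from existing classes, and utils/modular_model_converter.py expands it into the standalone modeling_diffllama.py. The snippet below is only a hypothetical flavor of that layout; the class bodies and the choice of LLaMA parents are assumptions, not the real modular_diffllama.py.

```python
# Hypothetical modular_*.py body: inherit from an existing model's classes and
# let `python utils/modular_model_converter.py` unroll the inheritance into a
# self-contained modeling_diffllama.py.
from transformers.models.llama.modeling_llama import LlamaMLP, LlamaModel


class DiffLlamaMLP(LlamaMLP):
    pass  # unchanged from LLaMA; the converter copies the parent code in


class DiffLlamaModel(LlamaModel):
    pass  # only genuinely different modules (e.g. the attention) get real bodies
```

Keeping the modular file lean ("remove more and more in modular_diffllama.py") works because anything not overridden is inherited and unrolled verbatim by the converter.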
| Name |
|---|
| internal/ |
| main_classes/ |
| model_doc/ |
| quantization/ |
| tasks/ |
| _config.py |
| _redirects.yml |
| _toctree.yml |
| accelerate.md |
| add_new_model.md |
| add_new_pipeline.md |
| agents.md |
| agents_advanced.md |
| attention.md |
| autoclass_tutorial.md |
| benchmarks.md |
| bertology.md |
| big_models.md |
| chat_templating.md |
| community.md |
| contributing.md |
| conversations.md |
| create_a_model.md |
| custom_models.md |
| debugging.md |
| deepspeed.md |
| fast_tokenizers.md |
| fsdp.md |
| generation_strategies.md |
| gguf.md |
| glossary.md |
| how_to_hack_models.md |
| hpo_train.md |
| index.md |
| installation.md |
| kv_cache.md |
| llm_optims.md |
| llm_tutorial.md |
| llm_tutorial_optimization.md |
| model_memory_anatomy.md |
| model_sharing.md |
| model_summary.md |
| modular_transformers.md |
| multilingual.md |
| notebooks.md |
| pad_truncation.md |
| peft.md |
| perf_hardware.md |
| perf_infer_cpu.md |
| perf_infer_gpu_multi.md |
| perf_infer_gpu_one.md |
| perf_torch_compile.md |
| perf_train_cpu.md |
| perf_train_cpu_many.md |
| perf_train_gpu_many.md |
| perf_train_gpu_one.md |
| perf_train_special.md |
| perf_train_tpu_tf.md |
| performance.md |
| perplexity.md |
| philosophy.md |
| pipeline_tutorial.md |
| pipeline_webserver.md |
| pr_checks.md |
| preprocessing.md |
| quicktour.md |
| run_scripts.md |
| sagemaker.md |
| serialization.md |
| task_summary.md |
| tasks_explained.md |
| testing.md |
| tf_xla.md |
| tflite.md |
| tiktoken.md |
| tokenizer_summary.md |
| torchscript.md |
| trainer.md |
| training.md |
| troubleshooting.md |