transformers/docs/source/en
Latest commit 96bf3d6cc5 by 松本和真
Add diffllama (#34083)
* first pass at adding diffllama

* add Diff Attention and other components, but still with errors

* complete making the attention Diff-Attention

* fix some bugs that may have been caused by transformers-cli while adding the model

* fix a bug caused by forgetting the KV cache...

* Update src/transformers/models/diffllama/modeling_diffllama.py

You don't need to divide by 2 if we use the same number of attention heads as Llama; instead you can just split in forward.

Co-authored-by: Minho Ryu <ryumin93@gmail.com>
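
A rough sketch of the idea in this suggestion, with hypothetical names and shapes (not the actual modeling code): keep Llama's `num_heads` in the config and only split the projected states into the two differential-attention groups inside `forward`.

```python
import torch

def split_projected_heads(query_states: torch.Tensor, num_heads: int, head_dim: int):
    # Hypothetical helper: keep Llama's num_heads in the config and only split the
    # projected states into the two differential-attention groups in forward.
    bsz, q_len, _ = query_states.size()
    # (bsz, q_len, num_heads * head_dim) -> (bsz, num_heads, q_len, head_dim)
    query_states = query_states.view(bsz, q_len, num_heads, head_dim).transpose(1, 2)
    # Split the heads in half: q1 feeds the "positive" softmax map, q2 the "negative" one.
    q1, q2 = torch.chunk(query_states, 2, dim=1)  # each (bsz, num_heads // 2, q_len, head_dim)
    return q1, q2
```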

* Update src/transformers/models/diffllama/modeling_diffllama.py

adapt to the changed "num_heads // 2" placement

Co-authored-by: Minho Ryu <ryumin93@gmail.com>

* Update src/transformers/models/diffllama/modeling_diffllama.py

the new code is more meaningful than before

Co-authored-by: Minho Ryu <ryumin93@gmail.com>

* Update src/transformers/models/diffllama/modeling_diffllama.py

the new code is more meaningful than before

Co-authored-by: Minho Ryu <ryumin93@gmail.com>

* Update src/transformers/models/diffllama/modeling_diffllama.py

adapt to the changed "num_heads // 2" placement

Co-authored-by: Minho Ryu <ryumin93@gmail.com>

* Update src/transformers/models/diffllama/modeling_diffllama.py

fix dividing by sqrt(self.head_dim) twice

Co-authored-by: Minho Ryu <ryumin93@gmail.com>

* Update src/transformers/models/diffllama/modeling_diffllama.py

fix dividing by sqrt(self.head_dim) twice

Co-authored-by: Minho Ryu <ryumin93@gmail.com>
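
Illustrative sketch of the bug class being fixed here (not the actual diff): the attention scores were divided by sqrt(self.head_dim) twice, once when forming the scores and again later, so the fix is to apply the scaling exactly once.

```python
import math
import torch

def attention_scores(q1: torch.Tensor, k1: torch.Tensor, head_dim: int) -> torch.Tensor:
    # Apply the 1/sqrt(head_dim) scaling exactly once; the buggy path effectively
    # divided by sqrt(head_dim) here and then once more on the already-scaled scores.
    return torch.matmul(q1, k1.transpose(-2, -1)) / math.sqrt(head_dim)
```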

* Update src/transformers/models/diffllama/modeling_diffllama.py

adapt to the changed "num_heads // 2" placement and make it more visible

Co-authored-by: Minho Ryu <ryumin93@gmail.com>

* I found the attention was still mis-implemented relative to the paper as of e072544a3bfc69b8a903e062729f861108ffecd3.

* re-implemented

* adding groupnorm

Co-authored-by: Minho Ryu <ryumin93@gmail.com>
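
For context, a minimal sketch of the differential attention being re-implemented here, following the formulation in the paper; the argument names and the per-head norm passed in as `groupnorm` are assumptions, not the actual modeling code.

```python
import torch
import torch.nn.functional as F

def diff_attention(q1, q2, k1, k2, value_states, lambda_full, groupnorm, lambda_init):
    # Two softmax attention maps are computed from the split heads; the second is
    # subtracted with weight lambda_full, then the per-head norm ("GroupNorm") and a
    # (1 - lambda_init) rescale are applied, as in the Differential Transformer paper.
    scale = q1.size(-1) ** -0.5
    attn1 = F.softmax(torch.matmul(q1, k1.transpose(-2, -1)) * scale, dim=-1)
    attn2 = F.softmax(torch.matmul(q2, k2.transpose(-2, -1)) * scale, dim=-1)
    attn = attn1 - lambda_full * attn2          # differential attention map
    out = torch.matmul(attn, value_states)      # (bsz, num_heads // 2, q_len, value dim)
    out = groupnorm(out)                        # normalize each head independently
    return out * (1.0 - lambda_init)
```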

* align with transformers code style

Co-authored-by: Minho Ryu <ryumin93@gmail.com>

* fix typo

Co-authored-by: Minho Ryu <ryumin93@gmail.com>

* adding groupnorm

Co-authored-by: Minho Ryu <ryumin93@gmail.com>

* change SdpaAttention to DiffSdpaAttention

Co-authored-by: Minho Ryu <ryumin93@gmail.com>

* fix bug

* Update src/transformers/models/diffllama/modeling_diffllama.py

resolve the "outputs are not the same" problem

Co-authored-by: Minho Ryu <ryumin93@gmail.com>

* fix bugs in the placement of "GroupNorm with scale", etc.

* Revert "fix bugs in the placement of "GroupNorm with scale", etc."

This reverts commit 26307d92f6acd55e9fe89f2facff350f05760960.

* simplify multiple attention (matmul) operations into one by repeating value_states

Co-authored-by: Minho Ryu <ryumin93@gmail.com>

* simplify multiple attention (matmul) operations into one by repeating value_states

Co-authored-by: Minho Ryu <ryumin93@gmail.com>

* simplify multiple attention (matmul) operations into one by repeating value_states

Co-authored-by: Minho Ryu <ryumin93@gmail.com>
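
One way to read this simplification, sketched with assumed names and shapes (not the merged code): stack the two attention maps along the head axis, repeat value_states to match, and replace the two attention-times-value matmuls with a single one.

```python
import torch

def fused_attention_matmul(attn1, attn2, value_states, lambda_full):
    # attn1 / attn2: (bsz, n, q_len, kv_len); value_states: (bsz, n, kv_len, head_dim)
    attn = torch.cat([attn1, attn2], dim=1)      # (bsz, 2n, q_len, kv_len)
    values = value_states.repeat(1, 2, 1, 1)     # repeat so one matmul covers both maps
    out = torch.matmul(attn, values)             # a single matmul instead of two
    out1, out2 = torch.chunk(out, 2, dim=1)
    return out1 - lambda_full * out2             # differential combination
```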

* remove missed type

* add diffllama model_doc

* apply make style/quality

* apply review comments about the model

* apply review comments about the tests

* place diffllama alphabetically in src/transformers/__init__.py

* fix forgotten code

* support parameters that are not initialized with standard deviation 0 by the conventional method

* add DiffLlamaConfig to CONFIG_CLASSES_TO_IGNORE_FOR_DOCSTRING_CHECKPOINT_CHECK in utils/check_config_docstrings.py

* remove unused property of config

* add to supported model list

* add to the sdpa supported model list

* fix copyright, remove pretraining_tensor_parallel, and modify for initialization test

* remove unused imports, etc.

* empty commit

* empty commit

* empty commit

* apply modular transformers but with bugs

* revert prev commit

* create src/transformers/models/diffllama/modular_diffllama.py

* run utils/modular_model_converter.py
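
For readers unfamiliar with modular transformers: the modular file declares the DiffLlama classes largely by inheriting from the Llama ones, and utils/modular_model_converter.py expands it into the full modeling_diffllama.py. A hedged illustration of what such a file can look like (class choices assumed, not the actual contents):

```python
# Hypothetical excerpt of a lean modular file: components that are identical to
# Llama are declared by inheritance, and the converter generates the expanded code.
from transformers.models.llama.modeling_llama import LlamaMLP, LlamaRMSNorm

class DiffLlamaMLP(LlamaMLP):
    pass

class DiffLlamaRMSNorm(LlamaRMSNorm):
    pass
```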

* empty commit

* leaner modular diffllama

* remove more and more from modular_diffllama.py

* remove more and more from modular_diffllama.py

* resolve missing docstring entries

* force reset

* convert modular

---------

Co-authored-by: Minho Ryu <ryumin93@gmail.com>
2025-01-07 11:34:56 +01:00
| Name | Last commit message | Last commit date |
|---|---|---|
| internal | Implement AsyncTextIteratorStreamer for asynchronous streaming (#34931) | 2024-12-20 12:08:12 +01:00 |
| main_classes | HIGGS Quantization Support (#34997) | 2024-12-23 16:54:49 +01:00 |
| model_doc | Add diffllama (#34083) | 2025-01-07 11:34:56 +01:00 |
| quantization | HIGGS Quantization Support (#34997) | 2024-12-23 16:54:49 +01:00 |
| tasks | Improved Documentation Of Audio Classification (#35368) | 2024-12-20 09:17:28 -08:00 |
| _config.py | Add optimized PixtralImageProcessorFast (#34836) | 2024-11-28 16:04:05 +01:00 |
| _redirects.yml | | |
| _toctree.yml | Add diffllama (#34083) | 2025-01-07 11:34:56 +01:00 |
| accelerate.md | | |
| add_new_model.md | | |
| add_new_pipeline.md | [docs] Follow up register_pipeline (#35310) | 2024-12-20 09:22:44 -08:00 |
| agents.md | Multiple typo fixes in Tutorials docs (#35035) | 2024-12-02 15:26:34 +00:00 |
| agents_advanced.md | Multiple typo fixes in Tutorials docs (#35035) | 2024-12-02 15:26:34 +00:00 |
| attention.md | | |
| autoclass_tutorial.md | [docs] Increase visibility of torch_dtype="auto" (#35067) | 2024-12-04 09:18:44 -08:00 |
| benchmarks.md | | |
| bertology.md | | |
| big_models.md | | |
| chat_templating.md | Fix typo in chat template example (#35250) | 2024-12-12 16:53:21 -08:00 |
| community.md | | |
| contributing.md | | |
| conversations.md | | |
| create_a_model.md | | |
| custom_models.md | | |
| debugging.md | | |
| deepspeed.md | | |
| fast_tokenizers.md | | |
| fsdp.md | Fix docs typos. (#35465) | 2025-01-02 11:29:46 +01:00 |
| generation_strategies.md | Adaptive dynamic number of speculative tokens (#34156) | 2024-12-05 17:07:33 +01:00 |
| gguf.md | Add Gemma2 GGUF support (#34002) | 2025-01-03 14:50:07 +01:00 |
| glossary.md | | |
| how_to_hack_models.md | | |
| hpo_train.md | | |
| index.md | Add diffllama (#34083) | 2025-01-07 11:34:56 +01:00 |
| installation.md | docs: HUGGINGFACE_HUB_CACHE -> HF_HUB_CACHE (#34904) | 2024-11-26 09:37:18 -08:00 |
| kv_cache.md | [docs] add a comment that offloading requires CUDA GPU (#35055) | 2024-12-04 07:48:34 -08:00 |
| llm_optims.md | Update llm_optims docs for sdpa_kernel (#35481) | 2025-01-06 08:54:31 -08:00 |
| llm_tutorial.md | | |
| llm_tutorial_optimization.md | [docs] add explanation to release_memory() (#34911) | 2024-11-27 07:47:28 -08:00 |
| model_memory_anatomy.md | | |
| model_sharing.md | [docs] update not-working model revision (#34682) | 2024-11-11 07:09:31 -08:00 |
| model_summary.md | | |
| modular_transformers.md | Improve modular transformers documentation (#35322) | 2024-12-20 09:16:02 -08:00 |
| multilingual.md | | |
| notebooks.md | | |
| pad_truncation.md | | |
| peft.md | | |
| perf_hardware.md | | |
| perf_infer_cpu.md | [docs] Increase visibility of torch_dtype="auto" (#35067) | 2024-12-04 09:18:44 -08:00 |
| perf_infer_gpu_multi.md | Fix image preview in multi-GPU inference docs (#35303) | 2024-12-17 09:33:50 -08:00 |
| perf_infer_gpu_one.md | Add diffllama (#34083) | 2025-01-07 11:34:56 +01:00 |
| perf_torch_compile.md | [docs] use device-agnostic instead of cuda (#35047) | 2024-12-03 10:53:45 -08:00 |
| perf_train_cpu.md | [doc] use full path for run_qa.py (#34914) | 2024-11-26 09:23:44 -08:00 |
| perf_train_cpu_many.md | [doc] use full path for run_qa.py (#34914) | 2024-11-26 09:23:44 -08:00 |
| perf_train_gpu_many.md | Multiple typo fixes in Tutorials docs (#35035) | 2024-12-02 15:26:34 +00:00 |
| perf_train_gpu_one.md | | |
| perf_train_special.md | | |
| perf_train_tpu_tf.md | | |
| performance.md | Simplify Tensor Parallel implementation with PyTorch TP (#34184) | 2024-11-18 19:51:49 +01:00 |
| perplexity.md | [docs] use device-agnostic API instead of cuda (#34913) | 2024-11-26 09:23:34 -08:00 |
| philosophy.md | | |
| pipeline_tutorial.md | [docs] Increase visibility of torch_dtype="auto" (#35067) | 2024-12-04 09:18:44 -08:00 |
| pipeline_webserver.md | | |
| pr_checks.md | | |
| preprocessing.md | | |
| quicktour.md | [docs] Increase visibility of torch_dtype="auto" (#35067) | 2024-12-04 09:18:44 -08:00 |
| run_scripts.md | | |
| sagemaker.md | | |
| serialization.md | | |
| task_summary.md | | |
| tasks_explained.md | fix: Wrong task mentioned in docs (#34757) | 2024-11-18 18:42:28 +00:00 |
| testing.md | [tests] add XPU part to testing (#34778) | 2024-11-18 09:59:11 -08:00 |
| tf_xla.md | | |
| tflite.md | | |
| tiktoken.md | Updated documentation and added conversion utility (#34319) | 2024-11-25 18:44:09 +01:00 |
| tokenizer_summary.md | | |
| torchscript.md | | |
| trainer.md | Fix callback key name (#34762) | 2024-11-18 18:41:12 +00:00 |
| training.md | [docs] Increase visibility of torch_dtype="auto" (#35067) | 2024-12-04 09:18:44 -08:00 |
| troubleshooting.md | | |