transformers/docs/source/en

Latest commit 50189e36a6 by João Marcelo: Add I-JEPA (#33125)
* first draft

* add IJepaEmbeddings class

* fix copy-from for IJepa model

* add weight conversion script

* update attention class names in IJepa model

* style changes

* Add push_to_hub option to convert_ijepa_checkpoint function

* add initial tests for I-JEPA

* minor style changes to conversion script

* make fixup related

* rename conversion script

* Add I-JEPA to sdpa docs

* minor fixes

* adjust conversion script

* update conversion script

* adjust sdpa docs

* [run_slow] ijepa

* [run-slow] ijepa

* formatting issues

* adjust modeling to modular code

* add IJepaModel to objects to ignore in docstring checks

* [run-slow] ijepa

* fix formatting issues

* add usage instruction snippet to docs

* change pos encoding, add checkpoint for doc

* add verify logits for all models

* [run-slow] ijepa

* update docs to include image feature extraction instructions

* remove pooling layer from IJepaModel in image classification class

* [run-slow] ijepa

* remove pooling layer from IJepaModel constructor

* update docs

* [run-slow] ijepa

* [run-slow] ijepa

* small changes

* [run-slow] ijepa

* style adjustments

* update copyright in init file

* adjust modular ijepa

* [run-slow] ijepa
Committed 2024-12-05 16:14:46 +01:00
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| internal/ | Automatic compilation in generate: do not rely on inner function (#34923) | 2024-12-03 11:20:31 +01:00 |
| main_classes/ | VLM: special multimodal Tokenizer (#34461) | 2024-11-04 16:37:51 +01:00 |
| model_doc/ | Add I-JEPA (#33125) | 2024-12-05 16:14:46 +01:00 |
| quantization/ | [docs] Increase visibility of torch_dtype="auto" (#35067) | 2024-12-04 09:18:44 -08:00 |
| tasks/ | [docs] fix example code bug (#35054) | 2024-12-03 09:18:39 -08:00 |
| _config.py | Add optimized PixtralImageProcessorFast (#34836) | 2024-11-28 16:04:05 +01:00 |
| _redirects.yml | | |
| _toctree.yml | Add I-JEPA (#33125) | 2024-12-05 16:14:46 +01:00 |
| accelerate.md | | |
| add_new_model.md | Model addition timeline (#33762) | 2024-09-27 17:15:13 +02:00 |
| add_new_pipeline.md | | |
| agents.md | Multiple typo fixes in Tutorials docs (#35035) | 2024-12-02 15:26:34 +00:00 |
| agents_advanced.md | Multiple typo fixes in Tutorials docs (#35035) | 2024-12-02 15:26:34 +00:00 |
| attention.md | | |
| autoclass_tutorial.md | [docs] Increase visibility of torch_dtype="auto" (#35067) | 2024-12-04 09:18:44 -08:00 |
| benchmarks.md | | |
| bertology.md | | |
| big_models.md | | |
| chat_templating.md | Add a doc section on writing generation prompts (#34248) | 2024-10-21 14:35:57 +01:00 |
| community.md | | |
| contributing.md | | |
| conversations.md | | |
| create_a_model.md | | |
| custom_models.md | | |
| debugging.md | | |
| deepspeed.md | | |
| fast_tokenizers.md | | |
| fsdp.md | | |
| generation_strategies.md | Self-speculation (Layer-Skip Llama) (#34240) | 2024-11-19 12:20:07 +00:00 |
| gguf.md | Add Nemotron GGUF Loading Support (#34725) | 2024-11-21 11:37:34 +01:00 |
| glossary.md | | |
| how_to_hack_models.md | [Docs] Add Developer Guide: How to Hack Any Transformers Model (#33979) | 2024-10-07 10:08:20 +02:00 |
| hpo_train.md | Trainer - deprecate tokenizer for processing_class (#32385) | 2024-10-02 14:08:46 +01:00 |
| index.md | Add I-JEPA (#33125) | 2024-12-05 16:14:46 +01:00 |
| installation.md | docs: HUGGINGFACE_HUB_CACHE -> HF_HUB_CACHE (#34904) | 2024-11-26 09:37:18 -08:00 |
| kv_cache.md | [docs] add a comment that offloading requires CUDA GPU (#35055) | 2024-12-04 07:48:34 -08:00 |
| llm_optims.md | [docs] Increase visibility of torch_dtype="auto" (#35067) | 2024-12-04 09:18:44 -08:00 |
| llm_tutorial.md | Fix: typo (#33880) | 2024-10-02 09:12:21 +01:00 |
| llm_tutorial_optimization.md | [docs] add explanation to release_memory() (#34911) | 2024-11-27 07:47:28 -08:00 |
| model_memory_anatomy.md | Enable BNB multi-backend support (#31098) | 2024-09-24 03:40:56 -06:00 |
| model_sharing.md | [docs] update not-working model revision (#34682) | 2024-11-11 07:09:31 -08:00 |
| model_summary.md | | |
| modular_transformers.md | Improve modular converter (#33991) | 2024-10-08 14:53:58 +02:00 |
| multilingual.md | | |
| notebooks.md | | |
| pad_truncation.md | | |
| peft.md | | |
| perf_hardware.md | | |
| perf_infer_cpu.md | [docs] Increase visibility of torch_dtype="auto" (#35067) | 2024-12-04 09:18:44 -08:00 |
| perf_infer_gpu_multi.md | Simplify Tensor Parallel implementation with PyTorch TP (#34184) | 2024-11-18 19:51:49 +01:00 |
| perf_infer_gpu_one.md | Add I-JEPA (#33125) | 2024-12-05 16:14:46 +01:00 |
| perf_torch_compile.md | [docs] use device-agnostic instead of cuda (#35047) | 2024-12-03 10:53:45 -08:00 |
| perf_train_cpu.md | [doc] use full path for run_qa.py (#34914) | 2024-11-26 09:23:44 -08:00 |
| perf_train_cpu_many.md | [doc] use full path for run_qa.py (#34914) | 2024-11-26 09:23:44 -08:00 |
| perf_train_gpu_many.md | Multiple typo fixes in Tutorials docs (#35035) | 2024-12-02 15:26:34 +00:00 |
| perf_train_gpu_one.md | Corrected max number for bf16 in transformer/docs (#33658) | 2024-09-25 19:20:51 +02:00 |
| perf_train_special.md | | |
| perf_train_tpu_tf.md | | |
| performance.md | Simplify Tensor Parallel implementation with PyTorch TP (#34184) | 2024-11-18 19:51:49 +01:00 |
| perplexity.md | [docs] use device-agnostic API instead of cuda (#34913) | 2024-11-26 09:23:34 -08:00 |
| philosophy.md | | |
| pipeline_tutorial.md | [docs] Increase visibility of torch_dtype="auto" (#35067) | 2024-12-04 09:18:44 -08:00 |
| pipeline_webserver.md | | |
| pr_checks.md | | |
| preprocessing.md | | |
| quicktour.md | [docs] Increase visibility of torch_dtype="auto" (#35067) | 2024-12-04 09:18:44 -08:00 |
| run_scripts.md | [docs] refine the doc for train with a script (#33423) | 2024-09-12 10:16:12 -07:00 |
| sagemaker.md | | |
| serialization.md | | |
| task_summary.md | | |
| tasks_explained.md | fix: Wrong task mentioned in docs (#34757) | 2024-11-18 18:42:28 +00:00 |
| testing.md | [tests] add XPU part to testing (#34778) | 2024-11-18 09:59:11 -08:00 |
| tf_xla.md | | |
| tflite.md | | |
| tiktoken.md | Updated documentation and added conversion utility (#34319) | 2024-11-25 18:44:09 +01:00 |
| tokenizer_summary.md | | |
| torchscript.md | | |
| trainer.md | Fix callback key name (#34762) | 2024-11-18 18:41:12 +00:00 |
| training.md | [docs] Increase visibility of torch_dtype="auto" (#35067) | 2024-12-04 09:18:44 -08:00 |
| troubleshooting.md | | |