# transformers/docs/source/en

Latest commit: 2024-11-18 18:42:28 +00:00
| Name | Last commit message | Last commit date |
|---|---|---|
| internal/ | SynthID: better example (#34372) | 2024-10-25 11:46:46 +01:00 |
| main_classes/ | VLM: special multimodal Tokenizer (#34461) | 2024-11-04 16:37:51 +01:00 |
| model_doc/ | VLMs: patch_size -> num_image_tokens in processing (#33424) | 2024-11-18 13:21:07 +01:00 |
| quantization/ | [docs] add XPU besides CUDA, MPS etc. (#34777) | 2024-11-18 09:58:50 -08:00 |
| tasks/ | Fix broken link (#34618) | 2024-11-18 14:13:26 +01:00 |
| _config.py | [Doc]: Broken link in Kubernetes doc (#33879) | 2024-10-04 11:20:56 +02:00 |
| _redirects.yml | | |
| _toctree.yml | Add OLMo November 2024 (#34551) | 2024-11-18 10:43:10 +01:00 |
| accelerate.md | | |
| add_new_model.md | Model addition timeline (#33762) | 2024-09-27 17:15:13 +02:00 |
| add_new_pipeline.md | | |
| agents.md | Fix method name which changes in tutorial (#34252) | 2024-10-21 14:21:52 -03:00 |
| agents_advanced.md | Agents: turn any Space into a Tool with Tool.from_space() (#34561) | 2024-11-10 12:22:40 +01:00 |
| attention.md | | |
| autoclass_tutorial.md | | |
| benchmarks.md | | |
| bertology.md | | |
| big_models.md | | |
| chat_templating.md | Add a doc section on writing generation prompts (#34248) | 2024-10-21 14:35:57 +01:00 |
| community.md | | |
| contributing.md | | |
| conversations.md | | |
| create_a_model.md | | |
| custom_models.md | | |
| debugging.md | | |
| deepspeed.md | | |
| fast_tokenizers.md | | |
| fsdp.md | | |
| generation_strategies.md | [docs] add xpu device check (#34684) | 2024-11-13 14:16:59 -08:00 |
| gguf.md | Add GGUF for Mamba (#34200) | 2024-10-30 16:52:17 +01:00 |
| glossary.md | | |
| how_to_hack_models.md | [Docs] Add Developer Guide: How to Hack Any Transformers Model (#33979) | 2024-10-07 10:08:20 +02:00 |
| hpo_train.md | Trainer - deprecate tokenizer for processing_class (#32385) | 2024-10-02 14:08:46 +01:00 |
| index.md | Add OLMo November 2024 (#34551) | 2024-11-18 10:43:10 +01:00 |
| installation.md | | |
| kv_cache.md | Cache: don't show warning in forward passes when past_key_values is None (#33541) | 2024-09-19 12:02:46 +01:00 |
| llm_optims.md | Remove graph breaks for torch.compile() in flash_attention_forward when Lllama Model is padding free tuned (#33932) | 2024-10-24 11:02:54 +02:00 |
| llm_tutorial.md | Fix: typo (#33880) | 2024-10-02 09:12:21 +01:00 |
| llm_tutorial_optimization.md | Enable BNB multi-backend support (#31098) | 2024-09-24 03:40:56 -06:00 |
| model_memory_anatomy.md | Enable BNB multi-backend support (#31098) | 2024-09-24 03:40:56 -06:00 |
| model_sharing.md | [docs] update not-working model revision (#34682) | 2024-11-11 07:09:31 -08:00 |
| model_summary.md | | |
| modular_transformers.md | Improve modular converter (#33991) | 2024-10-08 14:53:58 +02:00 |
| multilingual.md | | |
| notebooks.md | | |
| pad_truncation.md | | |
| peft.md | | |
| perf_hardware.md | | |
| perf_infer_cpu.md | | |
| perf_infer_gpu_one.md | Add OLMo November 2024 (#34551) | 2024-11-18 10:43:10 +01:00 |
| perf_torch_compile.md | | |
| perf_train_cpu.md | update doc (#34478) | 2024-10-31 15:59:23 -07:00 |
| perf_train_cpu_many.md | update doc (#34478) | 2024-10-31 15:59:23 -07:00 |
| perf_train_gpu_many.md | | |
| perf_train_gpu_one.md | Corrected max number for bf16 in transformer/docs (#33658) | 2024-09-25 19:20:51 +02:00 |
| perf_train_special.md | | |
| perf_train_tpu_tf.md | | |
| performance.md | | |
| perplexity.md | Fix perplexity computation in perplexity.md (#34387) | 2024-10-29 11:10:10 +01:00 |
| philosophy.md | | |
| pipeline_tutorial.md | | |
| pipeline_webserver.md | | |
| pr_checks.md | | |
| preprocessing.md | | |
| quicktour.md | [docs] fix typo (#34235) | 2024-10-22 09:46:07 -07:00 |
| run_scripts.md | | |
| sagemaker.md | | |
| serialization.md | | |
| task_summary.md | | |
| tasks_explained.md | fix: Wrong task mentioned in docs (#34757) | 2024-11-18 18:42:28 +00:00 |
| testing.md | [tests] add XPU part to testing (#34778) | 2024-11-18 09:59:11 -08:00 |
| tf_xla.md | | |
| tflite.md | | |
| tiktoken.md | | |
| tokenizer_summary.md | | |
| torchscript.md | | |
| trainer.md | Fix callback key name (#34762) | 2024-11-18 18:41:12 +00:00 |
| training.md | [docs] make empty_cache device-agnostic (#34774) | 2024-11-18 09:58:26 -08:00 |
| troubleshooting.md | | |