| Name | Last commit message | Last commit date |
| --- | --- | --- |
| agents | Change is_soundfile_availble to is_soundfile_available (#35030) | 2025-01-03 14:37:42 +01:00 |
| benchmark | | |
| commands | | |
| data | Enhance DataCollatorForLanguageModeling with Configurable Token Replacement Probabilities (#35251) | 2025-01-14 17:01:10 +00:00 |
| generation | Add future import for Py < 3.10 (#35666) | 2025-01-15 12:45:43 +00:00 |
| integrations | Fix flex_attention in training mode (#35605) | 2025-01-10 11:49:12 +01:00 |
| kernels | | |
| loss | Fix multi-gpu loss (#35395) | 2025-01-09 10:14:31 +01:00 |
| models | Add future import for Py < 3.10 (#35666) | 2025-01-15 12:45:43 +00:00 |
| onnx | | |
| pipelines | Pipeline: simple API for assisted generation (#34504) | 2025-01-08 17:08:02 +00:00 |
| quantizers | Enable gptqmodel (#35012) | 2025-01-15 14:22:49 +01:00 |
| sagemaker | | |
| utils | Enable gptqmodel (#35012) | 2025-01-15 14:22:49 +01:00 |
| __init__.py | Add-helium (#35669) | 2025-01-13 18:41:15 +01:00 |
| activations.py | | |
| activations_tf.py | | |
| audio_utils.py | Delete redundancy for loop checks. (#35288) | 2024-12-16 13:36:27 +00:00 |
| cache_utils.py | Add Cohere2 model (#35224) | 2024-12-13 09:35:50 +01:00 |
| configuration_utils.py | Enable different torch dtype in sub models (#34873) | 2025-01-13 13:42:08 +01:00 |
| convert_graph_to_onnx.py | | |
| convert_pytorch_checkpoint_to_tf2.py | Aurevoir PyTorch 1 (#35358) | 2024-12-20 14:36:31 +01:00 |
| convert_slow_tokenizer.py | Add-helium (#35669) | 2025-01-13 18:41:15 +01:00 |
| convert_slow_tokenizers_checkpoints_to_fast.py | | |
| convert_tf_hub_seq_to_seq_bert_to_pytorch.py | | |
| debug_utils.py | | |
| dependency_versions_check.py | | |
| dependency_versions_table.py | Bump torch requirement to >= 2 (#35479) | 2025-01-08 15:59:32 +01:00 |
| dynamic_module_utils.py | | |
| feature_extraction_sequence_utils.py | | |
| feature_extraction_utils.py | Option to set 'non_blocking' for to(device) in BatchEncoding and BatchFeature (#34883) | 2024-12-09 11:29:04 +01:00 |
| file_utils.py | Change is_soundfile_availble to is_soundfile_available (#35030) | 2025-01-03 14:37:42 +01:00 |
| hf_argparser.py | | |
| hyperparameter_search.py | | |
| image_processing_base.py | Reuse "if not" logic in image_processing. (#35405) | 2025-01-03 14:44:57 +01:00 |
| image_processing_utils.py | | |
| image_processing_utils_fast.py | | |
| image_transforms.py | | |
| image_utils.py | Chat template: return vectorized output in processors (#34275) | 2025-01-10 11:05:29 +01:00 |
| keras_callbacks.py | | |
| modelcard.py | | |
| modeling_attn_mask_utils.py | bugfix: torch.export failure caused by _make_causal_mask (#35291) | 2024-12-20 14:37:04 +01:00 |
| modeling_flash_attention_utils.py | 🚨All attention refactor🚨 (#35235) | 2024-12-18 16:53:39 +01:00 |
| modeling_flax_outputs.py | | |
| modeling_flax_pytorch_utils.py | Aurevoir PyTorch 1 (#35358) | 2024-12-20 14:36:31 +01:00 |
| modeling_flax_utils.py | | |
| modeling_gguf_pytorch_utils.py | Fix : Nemotron Processor in GGUF conversion (#35708) | 2025-01-15 14:25:44 +01:00 |
| modeling_outputs.py | | |
| modeling_rope_utils.py | More model refactoring! (#35359) | 2025-01-09 11:09:09 +01:00 |
| modeling_tf_outputs.py | | |
| modeling_tf_pytorch_utils.py | Aurevoir PyTorch 1 (#35358) | 2024-12-20 14:36:31 +01:00 |
| modeling_tf_utils.py | | |
| modeling_utils.py | Clean-up composite configs (#34603) | 2025-01-15 10:04:07 +01:00 |
| optimization.py | | |
| optimization_tf.py | | |
| processing_utils.py | Chat template: return vectorized output in processors (#34275) | 2025-01-10 11:05:29 +01:00 |
| pytorch_utils.py | Aurevoir PyTorch 1 (#35358) | 2024-12-20 14:36:31 +01:00 |
| safetensors_conversion.py | Change back to Thread for SF conversion (#35236) | 2024-12-12 16:05:04 +01:00 |
| testing_utils.py | Enable gptqmodel (#35012) | 2025-01-15 14:22:49 +01:00 |
| tf_utils.py | | |
| time_series_utils.py | | |
| tokenization_utils.py | | |
| tokenization_utils_base.py | Removed some duplicated code (#35637) | 2025-01-13 12:34:21 +01:00 |
| tokenization_utils_fast.py | [tokenizers] Ensure that add_prefix_space is propagated to backend_tokenizer.pre_tokenizer (#35593) | 2025-01-09 17:46:50 +01:00 |
| trainer.py | Update doc for metric_for_best_model when save_strategy="best". (#35389) | 2025-01-08 16:32:35 +01:00 |
| trainer_callback.py | Let EarlyStoppingCallback not require load_best_model_at_end (#35101) | 2025-01-10 10:25:32 -05:00 |
| trainer_pt_utils.py | Aurevoir PyTorch 1 (#35358) | 2024-12-20 14:36:31 +01:00 |
| trainer_seq2seq.py | Replace tokenizer to processing_class in Seq2SeqTrainer (#35452) | 2025-01-07 09:51:12 +00:00 |
| trainer_utils.py | | |
| training_args.py | Update doc for metric_for_best_model when save_strategy="best". (#35389) | 2025-01-08 16:32:35 +01:00 |
| training_args_seq2seq.py | [docs] Remove sortish_sampler (#35539) | 2025-01-07 12:06:19 -08:00 |
| training_args_tf.py | | |