Mirror of https://github.com/saymrwulf/transformers.git (synced 2026-05-14 20:58:08 +00:00)
* First version - OPT model
* Final changes
- set use_cache to False
* few changes
- remove commented block
* few changes
- remove unnecessary files
* fix style issues
* few changes
- remove a test file
- added the logits test
* Update src/transformers/models/auto/tokenization_auto.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* add gen tests
* few changes
- rm mask filling example on docstring
* few changes
- remove useless args
* some changes
- more tests should pass now
- needs to clean more
- documentation still needs to be done
* fix code quality
* major changes
- change attention architecture to BART-like
- modify some tests
- style fix
* rm useless classes
- remove opt for:
- QA
- cond generation
- seq classif
* Removed autodoc calls to non-existent classes
Tokenizers are not implemented
* Update src/transformers/__init__.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Update src/transformers/__init__.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Update src/transformers/models/auto/modeling_tf_auto.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Replaced OPTTokenizer with GPT2 tokenizer
* added GPT2Tokenizer.from_pretrained("patrickvonplaten/opt_gpt2_tokenizer")
* Removed OPTTokenizer
* make style
* Make style replaces `...).unsqueeze(` by `>>>).unsqueeze(`
* make repo consistency
* Removed PretrainedOPTModel
* fix opt.mdx removed other heads
* fix init, removed 3 heads
* removed heads
* finished cleaning head
* removed sequence classification and question answering
* removed unused imports
* removed useless dummy object for QA, SC and CG
* removed tests for the removed dummy objects for QA, SC and CG
* Removed head_mask, which was using encoder layers that don't exist
* fixed test
* fix line
* added OPT to toctree
* Updated model path with pushed weights
* fix model path
* fixed code quality
* fixed embeddings and generation tests
* update paths
* clean comments
* removed OPTClassificationHead for sentence classification
* renamed hidden layer
* renamed num layers to standard num_hidden_layers
* num_attention_heads fix
* changes for 125m
* add first version for 125m
* add first version - flax
* add new version
* causal LM output
* replace output type with BaseModelOutputWithPastAndCrossAttentions
* revert working config from 150m to 350m
* clean
* removed decoder input ids
* fixed embed dim
* more embed_dim issues
* make style + removed enc_dec test
* update flax model
* removed troublesome copy
* added is_encoder_decoder=False to config
* added set_input_embeddings function to model class
* requires torch on embed test
* use head_mask instead of the decoder head_mask input param; solves a test
* 8 test remaining, update
* Updated create_and_check_decoder_model_past_large_inputs
* Make style
* update opt tokenizer with condition
* make style
* See if I can push
* some clean up
* remove linear head hack
* save intermediate
* save correct attention
* add copied from from bart
* Update src/transformers/models/opt/modeling_opt.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* fix part of the reviews
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* same changes in naming / conversion
* correct mask
* more fixes
* delete FlaxOPT and TfOPT
* clean traces of Flax and Tf
* fix mask
* fixed positional embedding length when past key value is provided
* get 125m, 6.7b to work
* Added do_layer_norm
* solved mismatch in load dictionary
* clean up prepare opt input dict
* fixed past key value as bool
* fix previous
* fixed return dict False tuple issue
* All tests are passing
* Make style
* Ignore OPTDecoder as not tested
* make fix-copies
* make repo consistency
* small fix
* removed useless @torch.no_grad decorator
* make style
* fix previous opt test
* style
* make style
* added opt documentation
* update OPT_PRETRAINED_MODEL_ARCHIVE_LIST
* up
* more fixes
* model & config work
* Update src/transformers/models/opt/modeling_opt.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/transformers/models/opt/modeling_opt.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/transformers/models/opt/modeling_opt.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* added comment on padding hack (+2)
* cleanup
* review update
* docstring for missing arg
* Update docs/source/en/model_doc/opt.mdx
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update docs/source/en/model_doc/opt.mdx
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update docs/source/en/model_doc/opt.mdx
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/transformers/models/opt/__init__.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* update pretrained map
* update path and tests
* make style
* styling
* make consistency
* add gpt2 tok new
* more tok fixes
* Update src/transformers/models/auto/tokenization_auto.py
* Update docs/source/en/model_doc/opt.mdx
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update docs/source/en/model_doc/opt.mdx
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update docs/source/en/model_doc/opt.mdx
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/opt/modeling_opt.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update tests/models/opt/test_modeling_opt.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/opt/modeling_opt.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/opt/modeling_opt.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/opt/modeling_opt.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/opt/modeling_opt.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/opt/modeling_opt.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update based on reviews
* Apply suggestions from code review
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
* make style
* make tokenizer auto tests pass
* apply Lysandre suggestion
* finish tests
* add some good tokenizer tests
* improve docs slightly
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
Co-authored-by: ArthurZucker <arthur.zucker@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
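For reference, a minimal sketch of how the model added by this PR would be loaded with the GPT-2 tokenizer mentioned in the commit log. The `OPTForCausalLM` class and the `facebook/opt-350m` checkpoint are assumptions inferred from the commits above (only the `patrickvonplaten/opt_gpt2_tokenizer` repo is named on this page):

```python
# Minimal usage sketch, assuming the OPTForCausalLM head and the
# facebook/opt-350m checkpoint exist as hinted at in the commit log.
from transformers import GPT2Tokenizer, OPTForCausalLM

tokenizer = GPT2Tokenizer.from_pretrained("patrickvonplaten/opt_gpt2_tokenizer")
model = OPTForCausalLM.from_pretrained("facebook/opt-350m")

# Tokenize a prompt and generate a short continuation.
inputs = tokenizer("Hello, I am a language model and", return_tensors="pt")
generated = model.generate(**inputs, max_length=30)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```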
404 lines · 11 KiB · YAML
- sections:
  - local: index
    title: 🤗 Transformers
  - local: quicktour
    title: Quick tour
  - local: installation
    title: Installation
  title: Get started
- sections:
  - local: pipeline_tutorial
    title: Pipelines for inference
  - local: autoclass_tutorial
    title: Load pretrained instances with an AutoClass
  - local: preprocessing
    title: Preprocess
  - local: training
    title: Fine-tune a pretrained model
  - local: accelerate
    title: Distributed training with 🤗 Accelerate
  - local: model_sharing
    title: Share a model
  title: Tutorials
- sections:
  - local: fast_tokenizers
    title: "Use tokenizers from 🤗 Tokenizers"
  - local: create_a_model
    title: Create a custom architecture
  - local: custom_models
    title: Sharing custom models
  - sections:
    - local: tasks/sequence_classification
      title: Text classification
    - local: tasks/token_classification
      title: Token classification
    - local: tasks/question_answering
      title: Question answering
    - local: tasks/language_modeling
      title: Language modeling
    - local: tasks/translation
      title: Translation
    - local: tasks/summarization
      title: Summarization
    - local: tasks/multiple_choice
      title: Multiple choice
    - local: tasks/audio_classification
      title: Audio classification
    - local: tasks/asr
      title: Automatic speech recognition
    - local: tasks/image_classification
      title: Image classification
    title: Fine-tune for downstream tasks
  - local: run_scripts
    title: Train with a script
  - local: sagemaker
    title: Run training on Amazon SageMaker
  - local: multilingual
    title: Inference for multilingual models
  - local: converting_tensorflow_models
    title: Converting TensorFlow Checkpoints
  - local: serialization
    title: Export 🤗 Transformers models
  - local: performance
    title: 'Performance and Scalability: How To Fit a Bigger Model and Train It Faster'
  - local: big_models
    title: Instantiating a big model
  - local: parallelism
    title: Model Parallelism
  - local: benchmarks
    title: Benchmarks
  - local: migration
    title: Migrating from previous packages
  - local: troubleshooting
    title: Troubleshoot
  - local: debugging
    title: Debugging
  - local: notebooks
    title: "🤗 Transformers Notebooks"
  - local: community
    title: Community
  - local: contributing
    title: How to contribute to transformers?
  - local: add_new_model
    title: "How to add a model to 🤗 Transformers?"
  - local: add_new_pipeline
    title: "How to add a pipeline to 🤗 Transformers?"
  - local: testing
    title: Testing
  - local: pr_checks
    title: Checks on a Pull Request
  title: How-to guides
- sections:
  - local: philosophy
    title: Philosophy
  - local: glossary
    title: Glossary
  - local: task_summary
    title: Summary of the tasks
  - local: model_summary
    title: Summary of the models
  - local: tokenizer_summary
    title: Summary of the tokenizers
  - local: pad_truncation
    title: Padding and truncation
  - local: bertology
    title: BERTology
  - local: perplexity
    title: Perplexity of fixed-length models
  title: Conceptual guides
- sections:
  - sections:
    - local: main_classes/callback
      title: Callbacks
    - local: main_classes/configuration
      title: Configuration
    - local: main_classes/data_collator
      title: Data Collator
    - local: main_classes/keras_callbacks
      title: Keras callbacks
    - local: main_classes/logging
      title: Logging
    - local: main_classes/model
      title: Models
    - local: main_classes/text_generation
      title: Text Generation
    - local: main_classes/onnx
      title: ONNX
    - local: main_classes/optimizer_schedules
      title: Optimization
    - local: main_classes/output
      title: Model outputs
    - local: main_classes/pipelines
      title: Pipelines
    - local: main_classes/processors
      title: Processors
    - local: main_classes/tokenizer
      title: Tokenizer
    - local: main_classes/trainer
      title: Trainer
    - local: main_classes/deepspeed
      title: DeepSpeed Integration
    - local: main_classes/feature_extractor
      title: Feature Extractor
    title: Main Classes
  - sections:
    - local: model_doc/albert
      title: ALBERT
    - local: model_doc/auto
      title: Auto Classes
    - local: model_doc/bart
      title: BART
    - local: model_doc/barthez
      title: BARThez
    - local: model_doc/bartpho
      title: BARTpho
    - local: model_doc/beit
      title: BEiT
    - local: model_doc/bert
      title: BERT
    - local: model_doc/bertweet
      title: Bertweet
    - local: model_doc/bert-generation
      title: BertGeneration
    - local: model_doc/bert-japanese
      title: BertJapanese
    - local: model_doc/big_bird
      title: BigBird
    - local: model_doc/bigbird_pegasus
      title: BigBirdPegasus
    - local: model_doc/blenderbot
      title: Blenderbot
    - local: model_doc/blenderbot-small
      title: Blenderbot Small
    - local: model_doc/bort
      title: BORT
    - local: model_doc/byt5
      title: ByT5
    - local: model_doc/camembert
      title: CamemBERT
    - local: model_doc/canine
      title: CANINE
    - local: model_doc/convnext
      title: ConvNeXT
    - local: model_doc/clip
      title: CLIP
    - local: model_doc/convbert
      title: ConvBERT
    - local: model_doc/cpm
      title: CPM
    - local: model_doc/ctrl
      title: CTRL
    - local: model_doc/data2vec
      title: Data2Vec
    - local: model_doc/deberta
      title: DeBERTa
    - local: model_doc/deberta-v2
      title: DeBERTa-v2
    - local: model_doc/decision_transformer
      title: Decision Transformer
    - local: model_doc/deit
      title: DeiT
    - local: model_doc/detr
      title: DETR
    - local: model_doc/dialogpt
      title: DialoGPT
    - local: model_doc/distilbert
      title: DistilBERT
    - local: model_doc/dit
      title: DiT
    - local: model_doc/dpr
      title: DPR
    - local: model_doc/dpt
      title: DPT
    - local: model_doc/electra
      title: ELECTRA
    - local: model_doc/encoder-decoder
      title: Encoder Decoder Models
    - local: model_doc/flaubert
      title: FlauBERT
    - local: model_doc/flava
      title: FLAVA
    - local: model_doc/fnet
      title: FNet
    - local: model_doc/fsmt
      title: FSMT
    - local: model_doc/funnel
      title: Funnel Transformer
    - local: model_doc/glpn
      title: GLPN
    - local: model_doc/herbert
      title: HerBERT
    - local: model_doc/ibert
      title: I-BERT
    - local: model_doc/imagegpt
      title: ImageGPT
    - local: model_doc/layoutlm
      title: LayoutLM
    - local: model_doc/layoutlmv2
      title: LayoutLMV2
    - local: model_doc/layoutxlm
      title: LayoutXLM
    - local: model_doc/led
      title: LED
    - local: model_doc/longformer
      title: Longformer
    - local: model_doc/luke
      title: LUKE
    - local: model_doc/lxmert
      title: LXMERT
    - local: model_doc/marian
      title: MarianMT
    - local: model_doc/maskformer
      title: MaskFormer
    - local: model_doc/m2m_100
      title: M2M100
    - local: model_doc/mbart
      title: MBart and MBart-50
    - local: model_doc/megatron-bert
      title: MegatronBERT
    - local: model_doc/megatron_gpt2
      title: MegatronGPT2
    - local: model_doc/mluke
      title: mLUKE
    - local: model_doc/mobilebert
      title: MobileBERT
    - local: model_doc/mpnet
      title: MPNet
    - local: model_doc/mt5
      title: MT5
    - local: model_doc/nystromformer
      title: Nyströmformer
    - local: model_doc/openai-gpt
      title: OpenAI GPT
    - local: model_doc/opt
      title: OPT
    - local: model_doc/gpt2
      title: OpenAI GPT2
    - local: model_doc/gptj
      title: GPT-J
    - local: model_doc/gpt_neo
      title: GPT Neo
    - local: model_doc/hubert
      title: Hubert
    - local: model_doc/perceiver
      title: Perceiver
    - local: model_doc/pegasus
      title: Pegasus
    - local: model_doc/phobert
      title: PhoBERT
    - local: model_doc/plbart
      title: PLBart
    - local: model_doc/poolformer
      title: PoolFormer
    - local: model_doc/prophetnet
      title: ProphetNet
    - local: model_doc/qdqbert
      title: QDQBert
    - local: model_doc/rag
      title: RAG
    - local: model_doc/realm
      title: REALM
    - local: model_doc/reformer
      title: Reformer
    - local: model_doc/rembert
      title: RemBERT
    - local: model_doc/regnet
      title: RegNet
    - local: model_doc/resnet
      title: ResNet
    - local: model_doc/retribert
      title: RetriBERT
    - local: model_doc/roberta
      title: RoBERTa
    - local: model_doc/roformer
      title: RoFormer
    - local: model_doc/segformer
      title: SegFormer
    - local: model_doc/sew
      title: SEW
    - local: model_doc/sew-d
      title: SEW-D
    - local: model_doc/speech-encoder-decoder
      title: Speech Encoder Decoder Models
    - local: model_doc/speech_to_text
      title: Speech2Text
    - local: model_doc/speech_to_text_2
      title: Speech2Text2
    - local: model_doc/splinter
      title: Splinter
    - local: model_doc/squeezebert
      title: SqueezeBERT
    - local: model_doc/swin
      title: Swin Transformer
    - local: model_doc/t5
      title: T5
    - local: model_doc/t5v1.1
      title: T5v1.1
    - local: model_doc/tapas
      title: TAPAS
    - local: model_doc/tapex
      title: TAPEX
    - local: model_doc/transfo-xl
      title: Transformer XL
    - local: model_doc/trocr
      title: TrOCR
    - local: model_doc/unispeech
      title: UniSpeech
    - local: model_doc/unispeech-sat
      title: UniSpeech-SAT
    - local: model_doc/van
      title: VAN
    - local: model_doc/vilt
      title: ViLT
    - local: model_doc/vision-encoder-decoder
      title: Vision Encoder Decoder Models
    - local: model_doc/vision-text-dual-encoder
      title: Vision Text Dual Encoder
    - local: model_doc/vit
      title: Vision Transformer (ViT)
    - local: model_doc/vit_mae
      title: ViTMAE
    - local: model_doc/visual_bert
      title: VisualBERT
    - local: model_doc/wav2vec2
      title: Wav2Vec2
    - local: model_doc/wav2vec2_phoneme
      title: Wav2Vec2Phoneme
    - local: model_doc/wavlm
      title: WavLM
    - local: model_doc/xglm
      title: XGLM
    - local: model_doc/xlm
      title: XLM
    - local: model_doc/xlm-prophetnet
      title: XLM-ProphetNet
    - local: model_doc/xlm-roberta
      title: XLM-RoBERTa
    - local: model_doc/xlm-roberta-xl
      title: XLM-RoBERTa-XL
    - local: model_doc/xlnet
      title: XLNet
    - local: model_doc/xlsr_wav2vec2
      title: XLSR-Wav2Vec2
    - local: model_doc/xls_r
      title: XLS-R
    - local: model_doc/yolos
      title: YOLOS
    - local: model_doc/yoso
      title: YOSO
    title: Models
  - sections:
    - local: internal/modeling_utils
      title: Custom Layers and Utilities
    - local: internal/pipelines_utils
      title: Utilities for pipelines
    - local: internal/tokenization_utils
      title: Utilities for Tokenizers
    - local: internal/trainer_utils
      title: Utilities for Trainer
    - local: internal/generation_utils
      title: Utilities for Generation
    - local: internal/file_utils
      title: General Utilities
    title: Internal Helpers
  title: API