| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| albert | | |
| auto | Prepare transformers for v0.8.0 huggingface-hub release (#17716) | 2022-06-21 11:51:18 -04:00 |
| bart | [Generate Tests] Make sure no tokens are force-generated (#18053) | 2022-07-07 15:08:34 +02:00 |
| barthez | | |
| bartpho | | |
| beit | skip some test_multi_gpu_data_parallel_forward (#18188) | 2022-07-20 15:54:44 +02:00 |
| bert | Add a TF in-graph tokenizer for BERT (#17701) | 2022-06-27 12:06:21 +01:00 |
| bert_generation | Black preview (#17217) | 2022-05-12 16:25:55 -04:00 |
| bert_japanese | Black preview (#17217) | 2022-05-12 16:25:55 -04:00 |
| bertweet | | |
| big_bird | Use higher value for hidden_size in Flax BigBird test (#17822) | 2022-06-24 19:31:30 +02:00 |
| bigbird_pegasus | Black preview (#17217) | 2022-05-12 16:25:55 -04:00 |
| blenderbot | [Generate Tests] Make sure no tokens are force-generated (#18053) | 2022-07-07 15:08:34 +02:00 |
| blenderbot_small | [Generate Tests] Make sure no tokens are force-generated (#18053) | 2022-07-07 15:08:34 +02:00 |
| bloom | BLOOM minor fixes small test (#18175) | 2022-07-18 19:18:19 +02:00 |
| bort | | |
| byt5 | | |
| camembert | | |
| canine | Black preview (#17217) | 2022-05-12 16:25:55 -04:00 |
| clip | Fx support for multiple model architectures (#17393) | 2022-05-31 10:02:55 +02:00 |
| codegen | Update expected values in CodeGen tests (#17888) | 2022-07-01 15:33:36 +02:00 |
| convbert | | |
| convnext | has_attentions - consistent test skipping logic and tf tests (#17495) | 2022-06-09 09:50:03 +02:00 |
| cpm | | |
| ctrl | Fix CTRL tests (#17508) | 2022-06-01 16:27:23 +02:00 |
| cvt | has_attentions - consistent test skipping logic and tf tests (#17495) | 2022-06-09 09:50:03 +02:00 |
| data2vec | skip some test_multi_gpu_data_parallel_forward (#18188) | 2022-07-20 15:54:44 +02:00 |
| deberta | fix train_new_from_iterator in the case of byte-level tokenizers (#17549) | 2022-06-08 15:30:41 +02:00 |
| deberta_v2 | Fx support for Deberta-v[1-2], Hubert and LXMERT (#17539) | 2022-06-07 18:05:20 +02:00 |
| decision_transformer | Update expected values in DecisionTransformerModelIntegrationTest (#18016) | 2022-07-05 14:53:43 +02:00 |
| deit | Add TF DeiT implementation (#17806) | 2022-07-13 18:04:08 +01:00 |
| detr | | |
| distilbert | | |
| dit | | |
| dpr | | |
| dpt | | |
| electra | | |
| encoder_decoder | Update TF(Vision)EncoderDecoderModel PT/TF equivalence tests (#18073) | 2022-07-18 15:29:14 +02:00 |
| flaubert | | |
| flava | has_attentions - consistent test skipping logic and tf tests (#17495) | 2022-06-09 09:50:03 +02:00 |
| fnet | Black preview (#17217) | 2022-05-12 16:25:55 -04:00 |
| fsmt | Not use -1e4 as attn mask (#17306) | 2022-06-20 16:16:16 +02:00 |
| funnel | | |
| glpn | | |
| gpt2 | TF: XLA beam search + most generation-compatible models are now also XLA-generate-compatible (#17857) | 2022-06-29 12:41:01 +01:00 |
| gpt_neo | fix train_new_from_iterator in the case of byte-level tokenizers (#17549) | 2022-06-08 15:30:41 +02:00 |
| gpt_neox | skip some gpt_neox tests that require 80G RAM (#17923) | 2022-07-01 09:04:38 -04:00 |
| gptj | TF: GPT-J compatible with XLA generation (#17986) | 2022-07-06 15:02:07 +01:00 |
| groupvit | Adding GroupViT Models (#17313) | 2022-06-28 20:51:47 +02:00 |
| herbert | | |
| hubert | Fx support for Deberta-v[1-2], Hubert and LXMERT (#17539) | 2022-06-07 18:05:20 +02:00 |
| ibert | fix train_new_from_iterator in the case of byte-level tokenizers (#17549) | 2022-06-08 15:30:41 +02:00 |
| imagegpt | Enabling imageGPT auto feature extractor. (#16871) | 2022-05-24 12:30:46 +02:00 |
| layoutlm | Fx support for multiple model architectures (#17393) | 2022-05-31 10:02:55 +02:00 |
| layoutlmv2 | Fix some typos. (#17560) | 2022-07-11 05:00:13 -04:00 |
| layoutlmv3 | Fix some typos. (#17560) | 2022-07-11 05:00:13 -04:00 |
| layoutxlm | Fix LayoutXLMProcessorTest (#17506) | 2022-06-01 16:26:37 +02:00 |
| led | fix train_new_from_iterator in the case of byte-level tokenizers (#17549) | 2022-06-08 15:30:41 +02:00 |
| levit | Add skip logic for attentions test - Levit (#17633) | 2022-06-10 12:46:30 +02:00 |
| longformer | fix train_new_from_iterator in the case of byte-level tokenizers (#17549) | 2022-06-08 15:30:41 +02:00 |
| longt5 | Mark slow test as such | 2022-07-11 12:48:57 -04:00 |
| luke | Debug LukeForMaskedLM (#17499) | 2022-06-01 10:03:06 -04:00 |
| lxmert | Fx support for Deberta-v[1-2], Hubert and LXMERT (#17539) | 2022-06-07 18:05:20 +02:00 |
| m2m_100 | Fx support for multiple model architectures (#17393) | 2022-05-31 10:02:55 +02:00 |
| marian | [Generate Tests] Make sure no tokens are force-generated (#18053) | 2022-07-07 15:08:34 +02:00 |
| maskformer | Fix test_inference_instance_segmentation_head (#17872) | 2022-06-24 19:36:45 +02:00 |
| mbart | [Generate Tests] Make sure no tokens are force-generated (#18053) | 2022-07-07 15:08:34 +02:00 |
| mbart50 | Black preview (#17217) | 2022-05-12 16:25:55 -04:00 |
| mctct | M-CTC-T Model (#16402) | 2022-06-08 00:33:07 +02:00 |
| megatron_bert | | |
| megatron_gpt2 | | |
| mluke | Black preview (#17217) | 2022-05-12 16:25:55 -04:00 |
| mobilebert | Black preview (#17217) | 2022-05-12 16:25:55 -04:00 |
| mobilevit | add MobileViT model (#17354) | 2022-06-29 16:07:51 -04:00 |
| mpnet | | |
| mt5 | Fix expected loss values in some (m)T5 tests (#18177) | 2022-07-18 15:26:21 +02:00 |
| mvp | Add MVP model (#17787) | 2022-06-29 09:30:55 -04:00 |
| nezha | speed up test (#18106) | 2022-07-12 04:28:28 -04:00 |
| nllb | NLLB tokenizer (#18126) | 2022-07-18 08:12:34 -04:00 |
| nystromformer | | |
| openai | | |
| opt | Adding OPTForSeqClassification class (#18123) | 2022-07-20 10:14:21 +02:00 |
| pegasus | [Generate Tests] Make sure no tokens are force-generated (#18053) | 2022-07-07 15:08:34 +02:00 |
| perceiver | Black preview (#17217) | 2022-05-12 16:25:55 -04:00 |
| phobert | | |
| plbart | Fx support for multiple model architectures (#17393) | 2022-05-31 10:02:55 +02:00 |
| poolformer | has_attentions - consistent test skipping logic and tf tests (#17495) | 2022-06-09 09:50:03 +02:00 |
| prophetnet | Black preview (#17217) | 2022-05-12 16:25:55 -04:00 |
| qdqbert | | |
| rag | Avoid GPU OOM for a TF Rag test (#17638) | 2022-06-10 18:50:29 +02:00 |
| realm | Black preview (#17217) | 2022-05-12 16:25:55 -04:00 |
| reformer | Black preview (#17217) | 2022-05-12 16:25:55 -04:00 |
| regnet | TF implementation of RegNets (#17554) | 2022-06-29 13:45:14 +01:00 |
| rembert | | |
| resnet | Add TF ResNet model (#17427) | 2022-07-04 10:59:15 +01:00 |
| retribert | fix retribert's test_torch_encode_plus_sent_to_model (#17231) | 2022-05-17 14:33:13 +02:00 |
| roberta | fix train_new_from_iterator in the case of byte-level tokenizers (#17549) | 2022-06-08 15:30:41 +02:00 |
| roformer | | |
| segformer | | |
| sew | | |
| sew_d | | |
| speech_encoder_decoder | | |
| speech_to_text | Fx support for multiple model architectures (#17393) | 2022-05-31 10:02:55 +02:00 |
| speech_to_text_2 | Fx support for multiple model architectures (#17393) | 2022-05-31 10:02:55 +02:00 |
| splinter | Fix Splinter test (#17854) | 2022-06-24 16:26:14 +02:00 |
| squeezebert | | |
| swin | Improve vision models (#17731) | 2022-06-24 11:34:51 +02:00 |
| t5 | Fix expected loss values in some (m)T5 tests (#18177) | 2022-07-18 15:26:21 +02:00 |
| tapas | Add magic method to our TF models to convert datasets with column inference (#17160) | 2022-06-06 15:53:49 +01:00 |
| tapex | | |
| trajectory_transformer | Add trajectory transformer (#17141) | 2022-05-17 19:07:43 -04:00 |
| transfo_xl | Add magic method to our TF models to convert datasets with column inference (#17160) | 2022-06-06 15:53:49 +01:00 |
| trocr | Fx support for multiple model architectures (#17393) | 2022-05-31 10:02:55 +02:00 |
| unispeech | | |
| unispeech_sat | | |
| van | has_attentions - consistent test skipping logic and tf tests (#17495) | 2022-06-09 09:50:03 +02:00 |
| vilt | Black preview (#17217) | 2022-05-12 16:25:55 -04:00 |
| vision_encoder_decoder | Update TF(Vision)EncoderDecoderModel PT/TF equivalence tests (#18073) | 2022-07-18 15:29:14 +02:00 |
| vision_text_dual_encoder | | |
| visual_bert | | |
| vit | Improve vision models (#17731) | 2022-06-24 11:34:51 +02:00 |
| vit_mae | Fix some typos. (#17560) | 2022-07-11 05:00:13 -04:00 |
| wav2vec2 | Black preview (#17217) | 2022-05-12 16:25:55 -04:00 |
| wav2vec2_conformer | [Test] Fix W2V-Conformer integration test (#17303) | 2022-05-17 18:20:36 +02:00 |
| wav2vec2_phoneme | | |
| wav2vec2_with_lm | | |
| wavlm | | |
| xglm | Fx support for multiple model architectures (#17393) | 2022-05-31 10:02:55 +02:00 |
| xlm | | |
| xlm_prophetnet | Black preview (#17217) | 2022-05-12 16:25:55 -04:00 |
| xlm_roberta | Black preview (#17217) | 2022-05-12 16:25:55 -04:00 |
| xlm_roberta_xl | | |
| xlnet | Return scalar losses instead of per-sample means (#18013) | 2022-07-04 17:26:19 +01:00 |
| yolos | Improve vision models (#17731) | 2022-06-24 11:34:51 +02:00 |
| yoso | fix train_new_from_iterator in the case of byte-level tokenizers (#17549) | 2022-06-08 15:30:41 +02:00 |
| __init__.py | | |