diff --git a/model_cards/allenai/wmt19-de-en-6-6-base/README.md b/model_cards/allenai/wmt19-de-en-6-6-base/README.md
index 303a11cb6..c946ad9f2 100644
--- a/model_cards/allenai/wmt19-de-en-6-6-base/README.md
+++ b/model_cards/allenai/wmt19-de-en-6-6-base/README.md
@@ -61,7 +61,7 @@ Pretrained weights were left identical to the original model released by allenai
 Here are the BLEU scores:
 
 model | transformers
--------|---------|----------
+-------|---------
 wmt19-de-en-6-6-base | 38.37
 
 The score was calculated using this code:
diff --git a/model_cards/allenai/wmt19-de-en-6-6-big/README.md b/model_cards/allenai/wmt19-de-en-6-6-big/README.md
index 515e1d674..f675f899a 100644
--- a/model_cards/allenai/wmt19-de-en-6-6-big/README.md
+++ b/model_cards/allenai/wmt19-de-en-6-6-big/README.md
@@ -61,7 +61,7 @@ Pretrained weights were left identical to the original model released by allenai
 Here are the BLEU scores:
 
 model | transformers
--------|---------|----------
+-------|---------
 wmt19-de-en-6-6-big | 39.9
 
 The score was calculated using this code:
diff --git a/scripts/fsmt/gen-card-allenai-wmt19.py b/scripts/fsmt/gen-card-allenai-wmt19.py
index b6bb97d6a..4df5ca054 100755
--- a/scripts/fsmt/gen-card-allenai-wmt19.py
+++ b/scripts/fsmt/gen-card-allenai-wmt19.py
@@ -85,7 +85,7 @@ Pretrained weights were left identical to the original model released by allenai
 Here are the BLEU scores:
 
 model | transformers
--------|---------|----------
+-------|---------
 {model_name} | {scores[model_name][1]}
 
 The score was calculated using this code: