From 26dc6593f314a7cf8fd8dd1dc752efb4eba7bc00 Mon Sep 17 00:00:00 2001
From: Vishal Singh
Date: Wed, 18 Nov 2020 23:49:32 +0530
Subject: [PATCH] Update README.md (#8544)

Modified the Model in Action section. The class `AutoModelWithLMHead` is
deprecated, so it was changed to `AutoModelForSeq2SeqLM` for
encoder-decoder models. Removed the duplicate eos token.
---
 model_cards/mrm8488/t5-base-finetuned-squadv2/README.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/model_cards/mrm8488/t5-base-finetuned-squadv2/README.md b/model_cards/mrm8488/t5-base-finetuned-squadv2/README.md
index f199273e7..d842e6562 100644
--- a/model_cards/mrm8488/t5-base-finetuned-squadv2/README.md
+++ b/model_cards/mrm8488/t5-base-finetuned-squadv2/README.md
@@ -51,13 +51,13 @@ The training script is a slightly modified version of [this one](https://colab.r
 ## Model in Action 🚀

 ```python
-from transformers import AutoModelWithLMHead, AutoTokenizer
+from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

 tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-squadv2")
-model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-base-finetuned-squadv2")
+model = AutoModelForSeq2SeqLM.from_pretrained("mrm8488/t5-base-finetuned-squadv2")

 def get_answer(question, context):
-  input_text = "question: %s context: %s " % (question, context)
+  input_text = "question: %s context: %s" % (question, context)
   features = tokenizer([input_text], return_tensors='pt')

   output = model.generate(input_ids=features['input_ids'],
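The patch's prompt change is easy to miss: it drops the trailing space (and, per the commit message, a duplicate eos token) from the T5 input string, since the T5 tokenizer appends `</s>` on its own. A minimal sketch of the patched prompt builder — `build_input` is a hypothetical helper name, not part of the README:

```python
def build_input(question, context):
    # SQuAD-style T5 prompt as it reads after the patch: no trailing
    # space and no explicit "</s>", because the T5 tokenizer appends
    # the eos token itself when encoding (hence "duplicate eos token").
    return "question: %s context: %s" % (question, context)

print(build_input("Who patched the README?", "Vishal Singh patched the README."))
# → question: Who patched the README? context: Vishal Singh patched the README.
```

Keeping the prompt free of a hand-written eos token means the encoded sequence ends with exactly one `</s>`, matching what the model saw during fine-tuning.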