diff --git a/model_cards/google/roberta2roberta_L-24_wikisplit/README.md b/model_cards/google/roberta2roberta_L-24_wikisplit/README.md
index 8ba18aaeb..8d4a2b380 100644
--- a/model_cards/google/roberta2roberta_L-24_wikisplit/README.md
+++ b/model_cards/google/roberta2roberta_L-24_wikisplit/README.md
@@ -17,6 +17,9 @@ Disclaimer: The model card has been written by the Hugging Face team.
 
 You can use this model for sentence splitting, *e.g.*
 
+**IMPORTANT**: The model was not trained on the `"` (double quotation mark) character, so before tokenizing the text
+it is advised to replace each `"` (double quotation mark) with two `'` (single quotation mark) characters.
+
 ```python
 from transformers import AutoTokenizer, AutoModelForSeq2SeqLM