mirror of
https://github.com/saymrwulf/transformers.git
synced 2026-05-14 20:58:08 +00:00
Update README.md (#7444)
Hi, just corrected the example code, added 2 links and fixed some typos
This commit is contained in:
parent
74d8d69bd4
commit
205bf0b7ea
1 changed file with 12 additions and 3 deletions
@@ -12,15 +12,22 @@ widget:

 ## Model description

-This model performs sentiment analysis on Italian political twitter sentences. It was trained starting from an instance of "bert-base-italian-uncased-xxl" and fine-tuned on an Italian dataset of tweets.
+This model performs sentiment analysis on Italian political twitter sentences. It was trained starting from an instance of "bert-base-italian-uncased-xxl" and fine-tuned on an Italian dataset of tweets. You can try it out at https://www.unideeplearning.com/twitter_sa/ (in italian!)

 #### Hands-on

 ```python
 import torch
 from torch import nn
 from transformers import AutoTokenizer, AutoModelForSequenceClassification

-text = "Giueseppe Rossi è un pessimo politico"
 tokenizer = AutoTokenizer.from_pretrained("unideeplearning/polibert_sa")
 model = AutoModelForSequenceClassification.from_pretrained("unideeplearning/polibert_sa")

+text = "Giuseppe Rossi è un pessimo politico"
 input_ids = tokenizer.encode(text, add_special_tokens=True, return_tensors= 'pt')

 logits, = model(input_ids)
@@ -41,4 +48,6 @@ print(prob.argmax().tolist())

 ## Acknowledgments

 Thanks to the support from:
-the [Hugging Face](https://huggingface.co/), Unione Professionisti (https://www.unioneprofessionisti.com/)
+the [Hugging Face](https://huggingface.co/), https://www.unioneprofessionisti.com
+
+https://www.unideeplearning.com/
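For context, the second hunk's header shows the README ends its example with `print(prob.argmax().tolist())`, i.e. the logits are turned into class probabilities and the top class is printed. That post-processing step can be sketched in isolation with a dummy logits tensor (the tensor values are illustrative stand-ins for `model(input_ids)` output, so no model download is needed):

```python
import torch
from torch import nn

# Dummy logits standing in for the model output; shape (batch=1, num_labels=3).
logits = torch.tensor([[2.3, -1.1, 0.4]])

# Softmax over the label dimension converts logits to probabilities summing to 1.
prob = nn.functional.softmax(logits, dim=1)

# Index of the highest-probability class as a plain Python int.
pred = prob.argmax().tolist()
print(pred)  # 0 here, since the first logit is the largest
```

With the real checkpoint, `logits` would come from the `model(input_ids)` call shown in the diff; note that recent `transformers` versions return a model-output object, so `model(input_ids).logits` may be needed instead of tuple unpacking.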