| language | datasets |
|---|---|
| en | |
# RoBERTa-base-finetuned-yelp-polarity
This is a RoBERTa-base checkpoint fine-tuned on binary sentiment classification from Yelp polarity. It achieves 98.08% accuracy on the test set.
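A minimal inference sketch using the `transformers` pipeline API. The model id is assumed from this card's repository path (`VictorSanh/roberta-base-finetuned-yelp-polarity`); running it downloads the checkpoint, so it needs network access.

```python
# Untested sketch: load the fine-tuned checkpoint for sentiment
# classification. The model id is an assumption based on the card's path.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="VictorSanh/roberta-base-finetuned-yelp-polarity",
)

# Returns a list of dicts with "label" and "score" keys.
print(classifier("The food was great and the staff were friendly."))
```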
## Hyper-parameters
We used the following hyper-parameters to train the model on one GPU:
```
num_train_epochs = 2.0
learning_rate = 1e-05
weight_decay = 0.0
adam_epsilon = 1e-08
max_grad_norm = 1.0
per_device_train_batch_size = 32
gradient_accumulation_steps = 1
warmup_steps = 3500
seed = 42
```
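To make the schedule concrete, the hyper-parameters above can be related to the training step count. This sketch assumes the `yelp_polarity` train split has 560,000 examples (the public dataset's size, not stated in this card); everything else comes from the values listed above.

```python
# Sketch: derive the effective batch size, total optimizer steps, and
# warmup fraction from the card's hyper-parameters, assuming a
# 560,000-example train split and a single GPU.
hparams = {
    "num_train_epochs": 2.0,
    "per_device_train_batch_size": 32,
    "gradient_accumulation_steps": 1,
    "warmup_steps": 3500,
}

train_examples = 560_000  # assumed size of the yelp_polarity train split

# One GPU, so no data-parallel multiplier.
effective_batch_size = (
    hparams["per_device_train_batch_size"] * hparams["gradient_accumulation_steps"]
)
steps_per_epoch = train_examples // effective_batch_size
total_steps = int(steps_per_epoch * hparams["num_train_epochs"])
warmup_fraction = hparams["warmup_steps"] / total_steps

print(effective_batch_size)  # 32
print(total_steps)           # 35000
print(warmup_fraction)       # 0.1
```

Under these assumptions, the 3500 warmup steps correspond to roughly the first 10% of training.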