creating readme for bert-base-mongolian-uncased (#7440)
This commit is contained in: parent 381443c096, commit 0c2b9fa831
1 changed file with 54 additions and 0 deletions
model_cards/bayartsogt/bert-base-mongolian-uncased/README.md (Normal file, 54 additions)
---
language: "mn"
tags:
- bert
- mongolian
- uncased
---

# BERT-BASE-MONGOLIAN-UNCASED
[Link to Official Mongolian-BERT repo](https://github.com/tugstugi/mongolian-bert)

## Model description
This repository contains pre-trained Mongolian [BERT](https://arxiv.org/abs/1810.04805) models trained by [tugstugi](https://github.com/tugstugi), [enod](https://github.com/enod) and [sharavsambuu](https://github.com/sharavsambuu).
Special thanks to [nabar](https://github.com/nabar), who provided 5x TPUs.

This repository is based on the following open-source projects: [google-research/bert](https://github.com/google-research/bert/),
[huggingface/pytorch-pretrained-BERT](https://github.com/huggingface/pytorch-pretrained-BERT) and [yoheikikuta/bert-japanese](https://github.com/yoheikikuta/bert-japanese).

#### How to use

```python
from transformers import pipeline, AlbertTokenizer, BertForMaskedLM

# the checkpoint ships a SentencePiece vocabulary, so it is loaded with AlbertTokenizer
tokenizer = AlbertTokenizer.from_pretrained('bayartsogt/bert-base-mongolian-uncased')
model = BertForMaskedLM.from_pretrained('bayartsogt/bert-base-mongolian-uncased')

## declare task ##
pipe = pipeline(task="fill-mask", model=model, tokenizer=tokenizer)

## example ##
# a Mongolian sentence with one masked token
input_ = 'Миний [MASK] хоол идэх нь тун чухал.'

output_ = pipe(input_)
for prediction in output_:
    print(prediction)
```
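Each entry in `output_` describes one candidate for the masked position (its score, token id, and the completed sequence). For readers who want to see what the pipeline does under the hood, here is a minimal sketch, not taken from the original card, of running the same fill-mask step directly against the model; it assumes the same checkpoint and a transformers/torch version where the tokenizer is callable and model outputs expose `.logits`.

```python
import torch
from transformers import AlbertTokenizer, BertForMaskedLM

tokenizer = AlbertTokenizer.from_pretrained('bayartsogt/bert-base-mongolian-uncased')
model = BertForMaskedLM.from_pretrained('bayartsogt/bert-base-mongolian-uncased')
model.eval()

text = 'Миний [MASK] хоол идэх нь тун чухал.'
inputs = tokenizer(text, return_tensors='pt')

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# find the [MASK] position and take the 5 highest-scoring replacement tokens
mask_positions = (inputs['input_ids'][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
top_ids = logits[0, mask_positions[0]].topk(5).indices
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
```

The printed tokens should roughly match the top suggestions returned by the pipeline call above.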

## Training data
Mongolian Wikipedia and the 700-million-word Mongolian news dataset [[Pretraining Procedure](https://github.com/tugstugi/mongolian-bert#pre-training)]

### BibTeX entry and citation info

```bibtex
@misc{mongolian-bert,
  author = {Tuguldur, Erdene-Ochir and Gunchinish, Sharavsambuu and Bataa, Enkhbold},
  title = {BERT Pretrained Models on Mongolian Datasets},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/tugstugi/mongolian-bert/}}
}
```