From d15be2216c2637e51efbce598708e5f9fd8a71d8 Mon Sep 17 00:00:00 2001
From: Jannes <36601086+jannesgg@users.noreply.github.com>
Date: Tue, 21 Jul 2020 19:27:13 +0200
Subject: [PATCH] Create README.md (#5879)
---
.../jannesg/takalane_zul_roberta/README.md | 50 +++++++++++++++++++
1 file changed, 50 insertions(+)
create mode 100644 model_cards/jannesg/takalane_zul_roberta/README.md
diff --git a/model_cards/jannesg/takalane_zul_roberta/README.md b/model_cards/jannesg/takalane_zul_roberta/README.md
new file mode 100644
index 000000000..3a0ab0c30
--- /dev/null
+++ b/model_cards/jannesg/takalane_zul_roberta/README.md
@@ -0,0 +1,50 @@
+---
+language:
+- zul
+thumbnail: https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg
+tags:
+- zul
+- fill-mask
+- pytorch
+- roberta
+- lm-head
+- masked-lm
+license: mit
+---
+
+# Takalani Sesame - Zulu 🇿🇦
+
+
+
+## Model description
+
+Takalani Sesame (named after the South African version of Sesame Street) is a project that aims to promote the use of South African languages in NLP and, in particular, to explore techniques that help low-resource languages reach performance comparable to that of higher-resource languages.
+
+## Intended uses & limitations
+
+#### How to use
+
+```python
+from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline
+
+tokenizer = AutoTokenizer.from_pretrained("jannesg/takalane_zul_roberta")
+model = AutoModelWithLMHead.from_pretrained("jannesg/takalane_zul_roberta")
+
+# Query masked-token predictions; RoBERTa tokenizers use "<mask>" as the mask token
+fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
+```
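For intuition, fill-mask prediction amounts to taking the model's logits at the masked position and ranking vocabulary entries by softmax probability. A minimal self-contained sketch with a toy vocabulary and made-up logits (the words and scores below are illustrative, not outputs of this model):

```python
import numpy as np

def top_k_predictions(logits, vocab, k=3):
    """Rank a toy vocabulary by softmax probability of the mask-position logits."""
    exp = np.exp(logits - np.max(logits))  # numerically stable softmax
    probs = exp / exp.sum()
    order = np.argsort(probs)[::-1][:k]   # indices of the k highest probabilities
    return [(vocab[i], float(probs[i])) for i in order]

vocab = ["umfana", "intombi", "isikole"]  # toy Zulu words, for illustration only
logits = np.array([2.0, 1.0, 0.5])        # made-up scores, not model output
print(top_k_predictions(logits, vocab, k=2))
```

The real pipeline does the same ranking over the model's full vocabulary.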
+
+#### Limitations and bias
+
+The model is trained on a relatively small corpus, so performance may be limited; updates will be added continuously to improve it.
+
+## Training data
+
+Data was collected from [https://wortschatz.uni-leipzig.de/en](https://wortschatz.uni-leipzig.de/en).
+
+**Sentences:** 410,000
+
+## Training procedure
+
+No preprocessing was applied; the model was trained with standard Hugging Face hyperparameters.
+
+## Author
+
+Jannes Germishuys [website](http://jannesgg.github.io)