From abc43ffbfff69dc91f354c34f1c7c5b48a5c1502 Mon Sep 17 00:00:00 2001
From: Antti Virtanen
Date: Mon, 16 Dec 2019 18:08:00 +0200
Subject: [PATCH] Add pretrained model documentation for FinBERT.

---
 docs/source/pretrained_models.rst | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/docs/source/pretrained_models.rst b/docs/source/pretrained_models.rst
index c6b990f21..7d037da34 100644
--- a/docs/source/pretrained_models.rst
+++ b/docs/source/pretrained_models.rst
@@ -79,6 +79,14 @@ Here is the full list of the currently provided pretrained models together with
 |                   | ``bert-base-japanese-char-whole-word-masking``             | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
 |                   |                                                            | | Trained on Japanese text using Whole-Word-Masking. Text is tokenized into characters.                                               |
 |                   |                                                            |   (see `details on cl-tohoku repository `__).                                                                                         |
+|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
+|                   | ``bert-base-finnish-cased-v1``                             | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
+|                   |                                                            | | Trained on cased Finnish text.                                                                                                      |
+|                   |                                                            |   (see `details on turkunlp.org `__).                                                                                                 |
+|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
+|                   | ``bert-base-finnish-uncased-v1``                           | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
+|                   |                                                            | | Trained on uncased Finnish text.                                                                                                    |
+|                   |                                                            |   (see `details on turkunlp.org `__).                                                                                                 |
 +-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
 | GPT               | ``openai-gpt``                                             | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
 |                   |                                                            | | OpenAI GPT English model                                                                                                            |