From 32f5de10a01e2489cb0295d752f76ad81b20c5cb Mon Sep 17 00:00:00 2001
From: Julien Chaumond
Date: Wed, 23 Feb 2022 11:40:06 -0500
Subject: [PATCH] [doc] custom_models: mention security features of the Hub
 (#15768)

* custom_models: tiny doc addition

* mention security feature earlier in the section

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
---
 docs/source/custom_models.mdx | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/docs/source/custom_models.mdx b/docs/source/custom_models.mdx
index a83c90b20..20bf93c68 100644
--- a/docs/source/custom_models.mdx
+++ b/docs/source/custom_models.mdx
@@ -304,8 +304,9 @@ See the [sharing tutorial](model_sharing) for more information on the push to Hu
 ## Using a model with custom code
 
 You can use any configuration, model or tokenizer with custom code files in its repository with the auto-classes and
-the `from_pretrained` method. The only thing is that you have to add an extra argument to make sure you have read the
-online code and trust the author of that model, to avoid executing malicious code on your machine:
+the `from_pretrained` method. All files and code uploaded to the Hub are scanned for malware (refer to the [Hub security](https://huggingface.co/docs/hub/security#malware-scanning) documentation for more information), but you should still
+review the model code and author to avoid executing malicious code on your machine. Set `trust_remote_code=True` to use
+a model with custom code:
 
 ```py
 from transformers import AutoModelForImageClassification
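For context, the loading pattern this patch documents looks roughly like the sketch below. The repository id is illustrative (a repo containing custom model code is assumed), and running the call requires the `transformers` library plus network access to the Hub:

```python
from transformers import AutoModelForImageClassification

# trust_remote_code=True opts in to executing the custom code files
# shipped in the model repository. As the patch's wording advises,
# review that code and its author before enabling the flag, even
# though the Hub scans uploads for malware.
# "sgugger/custom-resnet50d" is an illustrative repo id here.
model = AutoModelForImageClassification.from_pretrained(
    "sgugger/custom-resnet50d", trust_remote_code=True
)
```

Without `trust_remote_code=True`, the auto-classes refuse to load a model whose repository defines custom code, which is the safety default the documentation change is explaining.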