From de099564e32400a593071d7aa4b9a99bb3fe6f0f Mon Sep 17 00:00:00 2001
From: Edward Yang
Date: Mon, 27 Aug 2018 21:03:38 -0700
Subject: [PATCH] Minor copy-edit on README

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10931

Reviewed By: cpuhrsch

Differential Revision: D9526248

fbshipit-source-id: 2401a0c1cd8c5e680c6d2b885298fa067d08f2c3
---
 CONTRIBUTING.md | 2 +-
 README.md       | 3 +--
 2 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index e409c57a151..11a219d6b32 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -23,7 +23,7 @@ If you are not familiar with creating a Pull Request, here are some guides:
 
 To develop PyTorch on your machine, here are some tips:
 
-1. Uninstall all existing pytorch installs
+1. Uninstall all existing PyTorch installs:
 ```
 conda uninstall pytorch
 pip uninstall torch
diff --git a/README.md b/README.md
index 77f94142e03..b909001edc6 100644
--- a/README.md
+++ b/README.md
@@ -105,8 +105,7 @@ We hope you never spend hours debugging your code because of bad stack traces or
 
 PyTorch has minimal framework overhead. We integrate acceleration libraries
 such as Intel MKL and NVIDIA (cuDNN, NCCL) to maximize speed.
 At the core, its CPU and GPU Tensor and neural network backends
-(TH, THC, THNN, THCUNN) are written as independent libraries with a C99 API.
-They are mature and have been tested for years.
+(TH, THC, THNN, THCUNN) are mature and have been tested for years.
 Hence, PyTorch is quite fast – whether you run small or large
 neural networks.