transformers/examples/test_examples.py

# coding=utf-8
# Copyright 2018 HuggingFace Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
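# These example tests are typically run from the repository root, e.g.
# `python -m pytest -sv examples/test_examples.py`, after installing the example requirements.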
import argparse
import logging
import os
import sys
from unittest.mock import patch
import torch
from transformers.file_utils import is_apex_available
from transformers.testing_utils import TestCasePlus, torch_device
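# Each example script lives in its own sub-directory; putting those directories on sys.path lets
# the scripts be imported below as plain modules and driven in-process.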
SRC_DIRS = [
os.path.join(os.path.dirname(__file__), dirname)
for dirname in ["text-generation", "text-classification", "language-modeling", "question-answering"]
]
sys.path.extend(SRC_DIRS)
if SRC_DIRS is not None:
import run_generation
import run_glue
import run_language_modeling
import run_pl_glue
import run_squad
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger()
def get_setup_file():
parser = argparse.ArgumentParser()
parser.add_argument("-f")
args = parser.parse_args()
return args.f
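# --fp16 in these example scripts relies on NVIDIA apex, so mixed precision is only exercised
# when the test device is CUDA and apex is importable.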
def is_cuda_and_apex_available():
is_using_cuda = torch.cuda.is_available() and torch_device == "cuda"
return is_using_cuda and is_apex_available()
class ExamplesTests(TestCasePlus):
def test_run_glue(self):
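# Mirror the example's logging to stdout so that pytest can capture and display it.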
stream_handler = logging.StreamHandler(sys.stdout)
logger.addHandler(stream_handler)
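# get_auto_remove_tmp_dir() comes from TestCasePlus and returns a temporary directory that is
# cleaned up automatically when the test finishes.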
tmp_dir = self.get_auto_remove_tmp_dir()
testargs = f"""
run_glue.py
--model_name_or_path distilbert-base-uncased
--data_dir ./tests/fixtures/tests_samples/MRPC/
--output_dir {tmp_dir}
--overwrite_output_dir
--task_name mrpc
--do_train
--do_eval
--per_device_train_batch_size=2
--per_device_eval_batch_size=1
--learning_rate=1e-4
--max_steps=10
--warmup_steps=2
--seed=42
--max_seq_length=128
"""
output_dir = "./tests/fixtures/tests_samples/temp_dir_{}".format(hash(testargs))
testargs += "--output_dir " + output_dir
testargs = testargs.split()
if is_cuda_and_apex_avaliable():
testargs.append("--fp16")
with patch.object(sys, "argv", testargs):
result = run_glue.main()
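# The 0.75 threshold below only makes sense for accuracy/F1-style metrics, so drop the loss first.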
del result["eval_loss"]
for value in result.values():
self.assertGreaterEqual(value, 0.75)
def test_run_pl_glue(self):
stream_handler = logging.StreamHandler(sys.stdout)
logger.addHandler(stream_handler)
tmp_dir = self.get_auto_remove_tmp_dir()
testargs = f"""
run_pl_glue.py
--model_name_or_path bert-base-cased
--data_dir ./tests/fixtures/tests_samples/MRPC/
--output_dir {tmp_dir}
--task mrpc
--do_train
--do_predict
--train_batch_size=32
--learning_rate=1e-4
--num_train_epochs=1
--seed=42
--max_seq_length=128
""".split()
if torch.cuda.is_available():
testargs += ["--fp16", "--gpus=1"]
with patch.object(sys, "argv", testargs):
result = run_pl_glue.main()
# for now just testing that the script can run to a completion
self.assertGreater(result["acc"], 0.25)
#
# TODO: this fails on CI - doesn't get acc/f1>=0.75:
#
# # remove all the various *loss* attributes
# result = {k: v for k, v in result.items() if "loss" not in k}
# for k, v in result.items():
# self.assertGreaterEqual(v, 0.75, f"({k})")
#
def test_run_language_modeling(self):
stream_handler = logging.StreamHandler(sys.stdout)
logger.addHandler(stream_handler)
tmp_dir = self.get_auto_remove_tmp_dir()
testargs = f"""
run_language_modeling.py
--model_name_or_path distilroberta-base
--model_type roberta
--mlm
--line_by_line
--train_data_file ./tests/fixtures/sample_text.txt
--eval_data_file ./tests/fixtures/sample_text.txt
--output_dir {tmp_dir}
--overwrite_output_dir
--do_train
--do_eval
--num_train_epochs=1
"""
output_dir = "./tests/fixtures/tests_samples/temp_dir_{}".format(hash(testargs))
testargs += "--output_dir " + output_dir
testargs = testargs.split()
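# Respect the test suite's device selection: if it is not running on CUDA, make sure the example
# does not grab a GPU either.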
if torch_device != "cuda":
testargs.append("--no_cuda")
with patch.object(sys, "argv", testargs):
result = run_language_modeling.main()
self.assertLess(result["perplexity"], 35)
def test_run_squad(self):
stream_handler = logging.StreamHandler(sys.stdout)
logger.addHandler(stream_handler)
tmp_dir = self.get_auto_remove_tmp_dir()
testargs = f"""
run_squad.py
--model_type=distilbert
--model_name_or_path=sshleifer/tiny-distilbert-base-cased-distilled-squad
--data_dir=./tests/fixtures/tests_samples/SQUAD
--output_dir {tmp_dir}
--overwrite_output_dir
--max_steps=10
--warmup_steps=2
--do_train
--do_eval
--version_2_with_negative
--learning_rate=2e-4
--per_gpu_train_batch_size=2
--per_gpu_eval_batch_size=1
--seed=42
""".split()
with patch.object(sys, "argv", testargs):
result = run_squad.main()
self.assertGreaterEqual(result["f1"], 25)
self.assertGreaterEqual(result["exact"], 21)
def test_generation(self):
stream_handler = logging.StreamHandler(sys.stdout)
logger.addHandler(stream_handler)
testargs = ["run_generation.py", "--prompt=Hello", "--length=10", "--seed=42"]
if is_cuda_and_apex_available():
testargs.append("--fp16")
model_type, model_name = (
"--model_type=gpt2",
"--model_name_or_path=sshleifer/tiny-gpt2",
)
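# Generate from a tiny GPT-2 checkpoint; the test only checks that a non-trivial sequence
# (at least 10 items) comes back.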
with patch.object(sys, "argv", testargs + [model_type, model_name]):
result = run_generation.main()
self.assertGreaterEqual(len(result[0]), 10)