Update from Facebook (#8887)
* add opencl + fpga context
adds an opencl context inside caffe2/fb which can be used for fpga access
* [Caffe2] Force tensor inference checks to be triggered during testing
We've started to rely on TensorInference functions more for different analysis. This diff ensures that the TensorInference function's result matches what is expected from the definition of the operator.
* Enable building //caffe2:torch with @mode/opt
In @mode/opt, python runs out of a PAR, which breaks a lot of
assumptions in the code about where templates/ folders live relative
to __file__. Rather than introduce hacks with parutil, I simply turn
template_path into a parameter for all the relevant functions and
thread it through from the top level.
* [Caffe2] Fix cost models for DotProduct and Div. Update Tensor Inference for dot product
As title. DotProduct's schema states that the output is a 1-D tensor (https://caffe2.ai/docs/operators-catalogue.html#dotproduct), though the code suggests it is either 0- or 1-D depending on the inputs. The TensorInference function is defined to match the implementation.
* [SG-MoE] Add an option to make the experts NOT as components
* [nomnigraph] Rename and fixup convertToNeuralNetOperator API
This will make things a bit cleaner
* no longer symlink THNN.h and THCUNN.h
* forced decoder network (onnx export)
Closes https://github.com/pytorch/translate/pull/95
Add networks in ensemble_export.py to create a forced decoding network from PyTorch NMT checkpoints. This network takes an arbitrary numberized (source, target) pair and returns the model score for the translation, including penalties.
Vocabulary reduction networks are also supported, but note that target indices which are not in the possible_translation_tokens generated for the source input will be trea
* Revert schema change to fix production models
Revert schema change to fix production models
* MockLogDeviceReader - rebase on FIX
# Goal
1) Build a make_mock_log_device_reader using make_mock_reader
2) Replace the real log_device_reader here: https://fburl.com/raihwf1p
# Log by D8151734
Real log_device_reader:
```
I0529 20:29:05.373108 954994 tensor.h:839] Tensor print_net/log of type std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >. Dims: (): read_net/ParseOpenTrainingRow:0
I0529 20:29:05.373244 954994 tensor.h:839] Tensor read_net/ParseOpenTrainin
```
* [C2/D2][1/n]: Nonnegative-Constrained Optimization -- log barrier
implement log barrier as a regularization method
* Add teacher weight screening.
Add teacher weight screening according to teacher labels. If the teacher label is zero, we do not use the distill loss in the objective function.
* Add NormalizerContext
See task for more detail. This implementation is a copy of what exists for RegularizerContext except for how the parameters are defined in the model_definition thrift file.
I'll try an alternative implementation which overrides the default arguments of functions instead like for argscopes in tensorflow.
https://github.com/pytorch/pytorch/compare/master...MaximeBoucher:update-from-facebook-0939578c068c?expand=1
* Adding cosine similarity option in dot processor
Add pairwise cosine similarity option in dot product.
Add an option to concatenate dot product and cosine similarity.
Add test cases.
* [nomnigraph][redo] Concat elim for sparseNN
Same as D7962948, which was reverted because Operator Schema was not
defined
* [pytorch] Revert pytorch/pytorch#7918 'Release GIL when copying to shared memory', breaks ASAN
Revert this pytorch diff that breaks ASAN when running Filament in dev mode; in opt mode it gives "bad file descriptor" errors. Looks like a race when copying tensors to shared memory in multiple mp.Queue's (which spawn separate threads).
https://github.com/pytorch/pytorch/pull/7918/files
* [nomnigraph][mobile] Enable nomnigraph by default, use -Oz on nomnigraph related code to reduce code size
enables nomnigraph and reduces codesize
* [Warmup] Allow both offline incremental training and online training
Change the plan name on the saving side and reading side to support both training types
This diff depends on D8128530 and D8168651.
* Revert D7802642: [Warmup] Allow both offline incremental training and online training
This reverts commit afc213cf9b36cecf75333a788391c4d09f4afccc
@bypass-lint
An infra SEV is better than not reverting this diff.
If you copy this password, see you in SEV Review!
@cause_a_sev_many_files
* Add legacy grad logic to fix div op on old graphs.
Add legacy grad logic to fix div op on old graphs.
* Correctly propagate operator failures
Propagate errors from operators that throw exceptions and return false
* Revert D8374829: [caffe2][nomnigraph][redo] Concat elim for sparseNN
This reverts commit 6dda028c463e54bb5c32188bbbe9202107e188a5
@bypass-lint
An infra SEV is better than not reverting this diff.
If you copy this password, see you in SEV Review!
@cause_a_sev_many_files
* [Caffe2] Added extra_info to core.DeviceOption(), enforced extra_info to be inherited in scope.DeviceScope
extra_info is a newly defined field in the DeviceOption proto. This diff adds extra_info to core.DeviceOption(), and in scope.DeviceScope() it enforces that the new scope inherits the extra_info from the old scope.
* [opt] hgdirsync wasn't enabled, merge diverged code
Here's the damage (P59732616): xplat was basically left behind, but it had the change from assert to CAFFE_ENFORCE
* OMP parallelism over RoIs for RoIAlign op
Simpler to parallelize over RoIs. Shouldn't affect other uses as it relies on
the number of OMP threads set during startup.
PR: https://github.com/pytorch/pytorch/pull/8562
* Use int64_t for shape in FillOps
to avoid overflow of int32
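The overflow this diff guards against is easy to see; a minimal sketch (the shape is hypothetical) of why computing element counts in 32-bit integers is unsafe:

```python
import numpy as np

# Hypothetical fill shape whose element count exceeds the int32 range.
dims = [50_000, 50_000]

# Accumulate the product in int64, as the fixed FillOps do.
num_elements = int(np.prod(dims, dtype=np.int64))  # 2_500_000_000

# A 32-bit signed integer tops out at 2_147_483_647, so doing this
# arithmetic in int32 would silently wrap around.
assert num_elements > np.iinfo(np.int32).max
```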
* Implement Rotated RoIAlign op
Based on Rotated RPNs as explained in https://arxiv.org/abs/1703.01086.
The idea is simple - orientation/angle is added as an RPN
anchor parameter and then the angle is further regressed similar to bbox
coords. There are some additional changes related to NMS and IoU, but besides
that it's a direct extension to Faster-RCNN. Further details in https://fb.quip.com/sZHlA1iMfWPZ.
RoIs are represented in [center_x, center_y, width, height, angle] format.
`angle` repre
* Rotated RoIAlign op CUDA forward implementation
CUDA forward impl for D8415490
* RoIAlignRotated op CUDA backward pass implementation
TSIA
* All remaining fixes to eliminate process_github.sh
Most of this diff has already been reviewed separately, except for the parts relating to _thnn/utils.py and _utils._internal.py
remove skipIf(True, 'Fbcode') line from process_github.sh
replace sed of cpp file with #ifdef to control cudnnDestroy use
undo sync-time deletion of .gitattributes, remove process_github.sh
switch to using _utils._internal rather than try-import-except
This diff also fixes the open-source bug where rebuilds have
* Back out "Revert D7802642: [Warmup] Allow both offline incremental training and online training"
Original commit changeset: 7707d2efe60e. The original diff was backed out because the online trainer package was backed out. This code only works with the new online trainer package.
* [easy] improve error log in adagrad op
as title
* re-allow use of thnn_h_path
This fixes cffi usage in OSS
* [4/4] [tum] parallelizing LayerNorm for GPU full sync
as title
* add compile=False to pytorch tests, remove hack with pyc
* Add shape and type inference for RowWiseArgMax operator
See title
* Revert D8515341: Back out "Revert D7802642: [Warmup] Allow both offline incremental training and online training"
This reverts commit 78167eeef0af16b60f72c82f9dcdda9b41b4dcbd
@bypass-lint
An infra SEV is better than not reverting this diff.
If you copy this password, see you in SEV Review!
@cause_a_sev_many_files
* [fix-flaky-test] mock_hive_reader_test flaky, because GlobalCounter collects local counts intervally
# Problem
`MockHiveReader` uses `GlobalCounter` to limit `max_examples`.
The GlobalCounter on the server node collects local counts from worker nodes every 1 sec.
This 1-second delay makes it impossible to limit reads exactly to `max_examples`; the total will reliably exceed it.
# Plan
Given,
```
Expected num_examples = max_examples + num_examples/sec (Read Speed) x 1 sec (GlobalCounter Sync Int
```
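The overshoot bound in the (truncated) formula can be made concrete; a small worked example with hypothetical numbers for the read speed and sync interval:

```python
# Hypothetical numbers illustrating the overshoot bound:
#   expected = max_examples + read_speed * sync_interval
max_examples = 1000
read_speed = 250       # examples/sec read by workers (hypothetical)
sync_interval = 1.0    # GlobalCounter sync period, in seconds

# Up to one sync interval's worth of reads can land before the
# server-side counter notices the limit has been reached.
expected_num_examples = max_examples + read_speed * sync_interval
assert expected_num_examples == 1250.0
```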
* [Caffe2] Fix FCGradient cost inference. Prevent overflow in cost inference
FCGradient missed a factor 2 in the `num_outputs == 3` case. Overflow was occurring with flop calculation for FC. Changed types to `uint64_t` to prevent future problems.
* Fix binary ops with empty inputs
Fix binary ops with empty inputs
* Support the filling of input blob with provided data
as title for Biz Integrity case
* Back out "Revert D8515341: Back out "Revert D7802642: [Warmup] Allow both offline incremental training and online training""
Original commit changeset: 30c55dd38816. The original diff was reverted due to introducing a bad integration test. Fixed the integration test.
* [c2][easy] improve pack ops error loggings
as desc.
* Add ShapeTypeInference for LpNorm operator
As desc
* Shard test_nn to reduce runtime for each test target
Closes https://github.com/pytorch/pytorch/pull/8793
The current test_nn would time out and be disabled in GreenWarden, and we need to have an option to split it up in order to pass the stress test. Right now GreenWarden roughly allows running 100 test cases in test_nn before timing out, and here we have an option to divide test_nn into 30 shards (with ~40 tests in each shard) to allow for some test suite growth in the future.
* Change default caffe2_streams_per_gpu to 1
* Remove IN_SANDCASTLE from common.py and test_nn.py
We prefer to disable the failing tests through Sandcastle UI instead.
* Add a new class for an updated prof_dag.proto
This diff contains:
- An updated prof_dag.proto that contains blob profiles.
- A class to deserialize this information (serialization is in a follow up diff)
- Update to separate profiling information from NeuralNet (and use it as part of the class above).
- Unit tests
* Lambdarank for SparseNN
This diff adds a lambda_rank_layer for SparseNN.
changes include
1) Adds support for multi sessions in c2 op
2) Adds support for two different loss functions in c2 op
3) Unit tests for op
* Revert D8586950: Back out "Revert D8515341: Back out "Revert D7802642: [Warmup] Allow both offline incremental training and online training""
This reverts commit 012220ed63eccc35659a57b31d16a3625da6317b
@bypass-lint
An infra SEV is better than not reverting this diff.
If you copy this password, see you in SEV Review!
@cause_a_sev_many_files
* [easy] A few fixups to multithread predictor benchmark
(1) support perf on T6 server
(2) remove dead code
* fix a bug about the map size
as title
* Fix reduce sum on in-place case.
Fix reduce sum on in-place case.
* [Warmup] Reland reverted diff Allow both offline incremental training and online training
Closes https://github.com/pytorch/pytorch/pull/8827
fix net transform integration test. Allow offline and online trainer to coexist D7802642.
* Add StoreHandlerNotAvailableException
Add an exception for a store that is not available or has been
deleted.
* Use exception handling for fault tolerance, missing KV store
Remove status blobs to communication ops so that exceptions propagate on
failure.
* [C2/D2][2/n]: Nonnegative-Constrained Optimization -- bounded grad proj
for simple bounded constrained optimization, including non-negative box constraints.
* [GanH]: Adaptive Weighting with More Estimations
With the implemented positivity optimization, we now learn adaptive weights with different
parameterizations.
This improves parameter estimation and training stability.
* Revert some changes for landing
* Remove AutoNoGIL in StorageSharing
* Temporarily disable net_tests
* Revert "[Caffe2] Force tensor inference checks to be triggered during testing"
This reverts commit 67ef05c22b2f71b4a489695384932f968384a2a4.
* Revert "Fix reduce sum on in-place case."
This reverts commit 6cb8a8e1b3db7b6d20941b0053e3f3836068eb64.
* Revert "Revert "Fix reduce sum on in-place case.""
This reverts commit 130a257c0893dc09f4bd6e6a45d112261807fd2c.
# @package utils
# Module caffe2.python.utils

from caffe2.proto import caffe2_pb2
from caffe2.python.compatibility import container_abcs
from future.utils import viewitems
from google.protobuf.message import DecodeError, Message
from google.protobuf import text_format
import sys
import copy
import functools
import numpy as np
from six import integer_types, binary_type, text_type, string_types


OPTIMIZER_ITERATION_NAME = "optimizer_iteration"
ITERATION_MUTEX_NAME = "iteration_mutex"

def OpAlmostEqual(op_a, op_b, ignore_fields=None):
    '''
    Two ops are considered equal if they are identical except for the
    fields listed in `ignore_fields`.
    '''
    ignore_fields = ignore_fields or []
    if not isinstance(ignore_fields, list):
        ignore_fields = [ignore_fields]

    assert all(isinstance(f, text_type) for f in ignore_fields), (
        'Expect each field is text type, but got {}'.format(ignore_fields))

    def clean_op(op):
        op = copy.deepcopy(op)
        for field in ignore_fields:
            if op.HasField(field):
                op.ClearField(field)
        return op

    op_a = clean_op(op_a)
    op_b = clean_op(op_b)
    return op_a == op_b or str(op_a) == str(op_b)

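The clean-then-compare pattern in OpAlmostEqual can be illustrated without protobufs; a minimal sketch with plain dicts standing in for OperatorDef messages (the field names are hypothetical):

```python
import copy

def almost_equal(a, b, ignore_fields=None):
    # Mirror OpAlmostEqual: deep-copy each value, drop the ignored
    # fields, then compare what remains.
    ignore_fields = ignore_fields or []

    def clean(d):
        d = copy.deepcopy(d)
        for field in ignore_fields:
            d.pop(field, None)
        return d

    return clean(a) == clean(b)

op_a = {"type": "Relu", "name": "relu_1", "input": ["x"]}
op_b = {"type": "Relu", "name": "relu_2", "input": ["x"]}
assert almost_equal(op_a, op_b, ignore_fields=["name"])  # names ignored
assert not almost_equal(op_a, op_b)                      # names compared
```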
def CaffeBlobToNumpyArray(blob):
    if (blob.num != 0):
        # old style caffe blob.
        return (np.asarray(blob.data, dtype=np.float32)
                .reshape(blob.num, blob.channels, blob.height, blob.width))
    else:
        # new style caffe blob.
        return (np.asarray(blob.data, dtype=np.float32)
                .reshape(blob.shape.dim))

def Caffe2TensorToNumpyArray(tensor):
    if tensor.data_type == caffe2_pb2.TensorProto.FLOAT:
        return np.asarray(
            tensor.float_data, dtype=np.float32).reshape(tensor.dims)
    elif tensor.data_type == caffe2_pb2.TensorProto.DOUBLE:
        return np.asarray(
            tensor.double_data, dtype=np.float64).reshape(tensor.dims)
    elif tensor.data_type == caffe2_pb2.TensorProto.INT64:
        return np.asarray(
            tensor.int64_data, dtype=np.int64).reshape(tensor.dims)
    elif tensor.data_type == caffe2_pb2.TensorProto.INT32:
        return np.asarray(
            tensor.int32_data, dtype=np.int).reshape(tensor.dims)  # pb.INT32=>np.int use int32_data
    elif tensor.data_type == caffe2_pb2.TensorProto.INT16:
        return np.asarray(
            tensor.int32_data, dtype=np.int16).reshape(tensor.dims)  # pb.INT16=>np.int16 use int32_data
    elif tensor.data_type == caffe2_pb2.TensorProto.UINT16:
        return np.asarray(
            tensor.int32_data, dtype=np.uint16).reshape(tensor.dims)  # pb.UINT16=>np.uint16 use int32_data
    elif tensor.data_type == caffe2_pb2.TensorProto.INT8:
        return np.asarray(
            tensor.int32_data, dtype=np.int8).reshape(tensor.dims)  # pb.INT8=>np.int8 use int32_data
    elif tensor.data_type == caffe2_pb2.TensorProto.UINT8:
        return np.asarray(
            tensor.int32_data, dtype=np.uint8).reshape(tensor.dims)  # pb.UINT8=>np.uint8 use int32_data
    else:
        # TODO: complete the data type: bool, float16, byte, string
        raise RuntimeError(
            "Tensor data type not supported yet: " + str(tensor.data_type))

def NumpyArrayToCaffe2Tensor(arr, name=None):
    tensor = caffe2_pb2.TensorProto()
    tensor.dims.extend(arr.shape)
    if name:
        tensor.name = name
    if arr.dtype == np.float32:
        tensor.data_type = caffe2_pb2.TensorProto.FLOAT
        tensor.float_data.extend(list(arr.flatten().astype(float)))
    elif arr.dtype == np.float64:
        tensor.data_type = caffe2_pb2.TensorProto.DOUBLE
        tensor.double_data.extend(list(arr.flatten().astype(np.float64)))
    elif arr.dtype == np.int64:
        tensor.data_type = caffe2_pb2.TensorProto.INT64
        tensor.int64_data.extend(list(arr.flatten().astype(np.int64)))
    elif arr.dtype == np.int or arr.dtype == np.int32:
        tensor.data_type = caffe2_pb2.TensorProto.INT32
        tensor.int32_data.extend(arr.flatten().astype(np.int).tolist())
    elif arr.dtype == np.int16:
        tensor.data_type = caffe2_pb2.TensorProto.INT16
        tensor.int32_data.extend(list(arr.flatten().astype(np.int16)))  # np.int16=>pb.INT16 use int32_data
    elif arr.dtype == np.uint16:
        tensor.data_type = caffe2_pb2.TensorProto.UINT16
        tensor.int32_data.extend(list(arr.flatten().astype(np.uint16)))  # np.uint16=>pb.UINT16 use int32_data
    elif arr.dtype == np.int8:
        tensor.data_type = caffe2_pb2.TensorProto.INT8
        tensor.int32_data.extend(list(arr.flatten().astype(np.int8)))  # np.int8=>pb.INT8 use int32_data
    elif arr.dtype == np.uint8:
        tensor.data_type = caffe2_pb2.TensorProto.UINT8
        tensor.int32_data.extend(list(arr.flatten().astype(np.uint8)))  # np.uint8=>pb.UINT8 use int32_data
    else:
        # TODO: complete the data type: bool, float16, byte, string
        raise RuntimeError(
            "Numpy data type not supported yet: " + str(arr.dtype))
    return tensor

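Both conversion functions rely on TensorProto's convention that sub-32-bit integer types (int8/uint8/int16/uint16) travel in the `int32_data` field. The widen-then-narrow round trip is lossless; a pure-NumPy sketch of the idea:

```python
import numpy as np

original = np.array([-3, 0, 7, 127, -128], dtype=np.int8)

# Serialize: widen to the int32 representation that int32_data stores.
stored = original.astype(np.int32).tolist()

# Deserialize: narrow back to int8, as Caffe2TensorToNumpyArray does.
restored = np.asarray(stored, dtype=np.int8)

assert restored.dtype == np.int8
assert (restored == original).all()
```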
def MakeArgument(key, value):
    """Makes an argument based on the value type."""
    argument = caffe2_pb2.Argument()
    argument.name = key
    iterable = isinstance(value, container_abcs.Iterable)

    # Fast tracking common use case where a float32 array of tensor parameters
    # needs to be serialized. The entire array is guaranteed to have the same
    # dtype, so no per-element checking is necessary and there is no need to
    # convert each element separately.
    if isinstance(value, np.ndarray) and value.dtype.type is np.float32:
        argument.floats.extend(value.flatten().tolist())
        return argument

    if isinstance(value, np.ndarray):
        value = value.flatten().tolist()
    elif isinstance(value, np.generic):
        # Convert a numpy scalar to a native python type. `item()` replaces
        # the deprecated `np.asscalar`.
        value = value.item()

    if type(value) is float:
        argument.f = value
    elif type(value) in integer_types or type(value) is bool:
        # We make a relaxation that a boolean variable will also be stored as
        # an int.
        argument.i = value
    elif isinstance(value, binary_type):
        argument.s = value
    elif isinstance(value, text_type):
        argument.s = value.encode('utf-8')
    elif isinstance(value, caffe2_pb2.NetDef):
        argument.n.CopyFrom(value)
    elif isinstance(value, Message):
        argument.s = value.SerializeToString()
    elif iterable and all(type(v) in [float, np.float_] for v in value):
        argument.floats.extend(
            v.item() if type(v) is np.float_ else v for v in value
        )
    elif iterable and all(
        type(v) in integer_types or type(v) in [bool, np.int_] for v in value
    ):
        argument.ints.extend(
            v.item() if type(v) is np.int_ else v for v in value
        )
    elif iterable and all(
        isinstance(v, binary_type) or isinstance(v, text_type) for v in value
    ):
        argument.strings.extend(
            v.encode('utf-8') if isinstance(v, text_type) else v
            for v in value
        )
    elif iterable and all(isinstance(v, caffe2_pb2.NetDef) for v in value):
        argument.nets.extend(value)
    elif iterable and all(isinstance(v, Message) for v in value):
        argument.strings.extend(v.SerializeToString() for v in value)
    else:
        if iterable:
            raise ValueError(
                "Unknown iterable argument type: key={} value={}, value "
                "type={}[{}]".format(
                    key, value, type(value), set(type(v) for v in value)
                )
            )
        else:
            raise ValueError(
                "Unknown argument type: key={} value={}, value type={}".format(
                    key, value, type(value)
                )
            )
    return argument

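A hypothetical sketch (not the caffe2 API) that mirrors MakeArgument's dispatch order, to show which `caffe2_pb2.Argument` field each Python value would land in: scalars fill the singular fields (`f`/`i`/`s`), homogeneous iterables fill the repeated ones (`floats`/`ints`/`strings`).

```python
import numpy as np

def classify_argument(value):
    """Return which Argument field MakeArgument would populate (sketch only)."""
    if isinstance(value, np.ndarray) and value.dtype.type is np.float32:
        return 'floats'  # fast path: bulk-extend with no per-element checks
    if isinstance(value, np.ndarray):
        value = value.flatten().tolist()
    if isinstance(value, (bool, int)):
        return 'i'  # booleans are relaxed to ints, as in MakeArgument
    if isinstance(value, float):
        return 'f'
    if isinstance(value, (bytes, str)):
        return 's'
    if all(isinstance(v, float) for v in value):
        return 'floats'
    if all(isinstance(v, (bool, int)) for v in value):
        return 'ints'
    if all(isinstance(v, (bytes, str)) for v in value):
        return 'strings'
    raise ValueError("Unknown argument type: {}".format(type(value)))

assert classify_argument(0.5) == 'f'
assert classify_argument(True) == 'i'
assert classify_argument([1, 2, 3]) == 'ints'
assert classify_argument(np.zeros(3, dtype=np.float32)) == 'floats'
```

Note that a non-float32 ndarray is flattened to a Python list first and then classified per element, exactly as in the function above.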
def TryReadProtoWithClass(cls, s):
    """Reads a protobuffer with the given proto class.

    Inputs:
      cls: a protobuffer class.
      s: a string of either binary or text protobuffer content.

    Outputs:
      proto: the protobuffer of cls

    Throws:
      google.protobuf.message.DecodeError: if we cannot decode the message.
    """
    obj = cls()
    try:
        text_format.Parse(s, obj)
        return obj
    except (text_format.ParseError, UnicodeDecodeError):
        obj.ParseFromString(s)
        return obj

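The same "try the text format first, fall back to binary" pattern, sketched with `json` (text) and `pickle` (binary) standing in for protobuf's `text_format.Parse` and `ParseFromString`, so the snippet runs without a protobuf dependency:

```python
import json
import pickle

def try_read_payload(raw):
    # Text parse first; a binary payload fails to decode/parse and
    # falls through to the binary deserializer, as in TryReadProtoWithClass.
    try:
        return json.loads(raw.decode('utf-8'))
    except (ValueError, UnicodeDecodeError):
        return pickle.loads(raw)

assert try_read_payload(b'{"a": 1}') == {"a": 1}
assert try_read_payload(pickle.dumps({"a": 1})) == {"a": 1}
```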
def GetContentFromProto(obj, function_map):
    """Gets a specific field from a protocol buffer that matches the given
    class.
    """
    for cls, func in viewitems(function_map):
        if type(obj) is cls:
            return func(obj)

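A minimal sketch of the function_map dispatch used above: handlers are keyed by exact type, and the first key whose class matches `type(obj)` wins. Note the `type(obj) is cls` test, so subclasses (e.g. `bool` vs `int`) do not match.

```python
def get_content(obj, function_map):
    # Exact-type dispatch: returns None when no class matches.
    for cls, func in function_map.items():
        if type(obj) is cls:
            return func(obj)

handlers = {int: lambda x: x * 2, str: len}
assert get_content(21, handlers) == 42
assert get_content("abcd", handlers) == 4
assert get_content(1.5, handlers) is None  # no exact match falls through
```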
def GetContentFromProtoString(s, function_map):
    for cls, func in viewitems(function_map):
        try:
            obj = TryReadProtoWithClass(cls, s)
            return func(obj)
        except DecodeError:
            continue
    else:
        raise DecodeError("Cannot find a fit protobuffer class.")

def ConvertProtoToBinary(proto_class, filename, out_filename):
    """Convert a text file of the given protobuf class to binary."""
    with open(filename) as f:
        proto = TryReadProtoWithClass(proto_class, f.read())
    # SerializeToString() returns bytes, so the output file must be opened
    # in binary mode.
    with open(out_filename, 'wb') as fid:
        fid.write(proto.SerializeToString())

def GetGPUMemoryUsageStats():
    """Get GPU memory usage stats from CUDAContext/HIPContext. This requires
    the flag --caffe2_gpu_memory_tracking to be enabled."""
    from caffe2.python import workspace, core
    workspace.RunOperatorOnce(
        core.CreateOperator(
            "GetGPUMemoryUsage",
            [],
            ["____mem____"],
            device_option=core.DeviceOption(workspace.GpuDeviceType, 0),
        ),
    )
    b = workspace.FetchBlob("____mem____")
    return {
        'total_by_gpu': b[0, :],
        'max_by_gpu': b[1, :],
        'total': np.sum(b[0, :]),
        'max_total': np.sum(b[1, :]),
    }

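The GetGPUMemoryUsage op fills the blob with a 2 x num_gpus array: row 0 holds current usage per GPU, row 1 the peak. A mock blob (values made up for illustration) shows how the stats dict above aggregates it, without needing a GPU:

```python
import numpy as np

mock_blob = np.array([[1024, 2048],    # current bytes on GPU 0, GPU 1
                      [4096, 8192]])   # peak bytes on GPU 0, GPU 1
stats = {
    'total_by_gpu': mock_blob[0, :],
    'max_by_gpu': mock_blob[1, :],
    'total': np.sum(mock_blob[0, :]),
    'max_total': np.sum(mock_blob[1, :]),
}
assert stats['total'] == 3072
assert stats['max_total'] == 12288
```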
def ResetBlobs(blobs):
    from caffe2.python import workspace, core
    workspace.RunOperatorOnce(
        core.CreateOperator(
            "Free",
            list(blobs),
            list(blobs),
            device_option=core.DeviceOption(caffe2_pb2.CPU),
        ),
    )

class DebugMode(object):
    '''
    This class allows you to drop into an interactive debugger
    if there is an unhandled exception in your python script

    Example of usage:

    def main():
        # your code here
        pass

    if __name__ == '__main__':
        from caffe2.python.utils import DebugMode
        DebugMode.run(main)
    '''

    @classmethod
    def run(cls, func):
        try:
            return func()
        except KeyboardInterrupt:
            raise
        except Exception:
            import pdb

            print(
                'Entering interactive debugger. Type "bt" to print '
                'the full stacktrace. Type "help" to see command listing.')
            print(sys.exc_info()[1])
            print()

            pdb.post_mortem()
            sys.exit(1)
            raise

def raiseIfNotEqual(a, b, msg):
    if a != b:
        raise Exception("{}. {} != {}".format(msg, a, b))

def debug(f):
    '''
    Use this method to decorate your function with DebugMode's functionality.

    Example:

        @debug
        def test_foo(self):
            raise Exception("Bar")
    '''

    @functools.wraps(f)
    def wrapper(*args, **kwargs):
        def func():
            return f(*args, **kwargs)
        return DebugMode.run(func)

    return wrapper

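A minimal, self-contained sketch of the @debug pattern above: wrap the call so a single hook sees every unhandled exception, while `functools.wraps` preserves the wrapped function's metadata. Here the hook just records the error instead of launching pdb, so the sketch stays non-interactive.

```python
import functools

def debug_sketch(f):
    @functools.wraps(f)
    def wrapper(*args, **kwargs):
        try:
            return f(*args, **kwargs)
        except Exception as e:
            wrapper.last_error = e  # stand-in for pdb.post_mortem()
            raise
    wrapper.last_error = None
    return wrapper

@debug_sketch
def boom():
    raise ValueError("Bar")

assert boom.__name__ == 'boom'  # functools.wraps keeps the original name
try:
    boom()
except ValueError:
    pass
assert isinstance(boom.last_error, ValueError)
```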
def BuildUniqueMutexIter(
    init_net,
    net,
    iter=None,
    iter_mutex=None,
    iter_val=0
):
    '''
    Often, a mutex-guarded iteration counter is needed. This function creates
    a mutex iter in the net uniquely (if the iter already exists, it does
    nothing).

    This function returns the iter blob.
    '''
    iter = iter if iter is not None else OPTIMIZER_ITERATION_NAME
    iter_mutex = iter_mutex if iter_mutex is not None else ITERATION_MUTEX_NAME
    from caffe2.python import core
    if not init_net.BlobIsDefined(iter):
        # Add training operators.
        with core.DeviceScope(
            core.DeviceOption(caffe2_pb2.CPU,
                              extra_info=["device_type_override:cpu"])
        ):
Update from Facebook (#8887)
* add opencl + fpga context
adds an opencl context inside caffe2/fb which can be used for fpga access
* [Caffe2] Force tensor inference checks to be triggered during testing
We've started to rely on TensorInference functions more for different analysis. This diff ensures that the TensorInference function's result matches what is expected from the definition of the operator.
* Enable building //caffe2:torch with @mode/opt
In @mode/opt, python runs out of a PAR, which breaks a lot of
assumptions in the code about where templates/ folders live relative
to __file__. Rather than introduce hacks with parutil, I simply turn
template_path into a parameter for all the relevant functions and
thread it through from the top level.
* [Caffe2] Fix cost models for DotProduct and Div. Update Tensor Inference for dot product
As title. DotProduct states that output is a 1-D tensor (https://caffe2.ai/docs/operators-catalogue.html#dotproduct) though code suggests it is either 0- or 1-D depending on inputs. TensorInference defined to support implementation.
* [SG-MoE] Add an option to make the experts NOT as components
* [nomnigraph] Rename and fixup convertToNeuralNetOperator API
This will make things a bit cleaner
* no longer symlink THNN.h and THCUNN.h
* forced decoder network (onnx export)
Closes https://github.com/pytorch/translate/pull/95
Add networks in ensemble_export.py to create a forced decoding network from PyTorch NMT checkpoints. This network takes an arbitrary numberized (source, target) pair and returns the model score for the translation, including penalties.
Vocabulary reduction networks are also supported, but note that target indices which are not in the possible_translation_tokens generated for the source input will be trea
* Revert schema change to fix production models
Revert schema change to fix production models
* MockLogDeviceReader - rebase on FIX
# Goal
1), Build a make_mock_log_device_reader using make_mock_reader
2), Replace the real log_device_reader here: https://fburl.com/raihwf1p
# Log by D8151734
Real log_device_reader:
```
I0529 20:29:05.373108 954994 tensor.h:839] Tensor print_net/log of type std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >. Dims: (): read_net/ParseOpenTrainingRow:0
I0529 20:29:05.373244 954994 tensor.h:839] Tensor read_net/ParseOpenTrainin
* [C2/D2][1/n]: Nonnegative-Constrained Optimization -- log barrier
implement log barrier as a regularization method
* Add teacher weight screening.
Add teacher weight sceening according to teacher labels. If teacher label is zero, we do not use the distill loss in the objective function.
* Add NormalizerContext
See task for more detail. This implementation is a copy of what exists for RegularizerContext except for how the parameters are defined in the model_definition thrift file.
I'll try an alternative implementation which overrides the default arguments of functions instead like for argscopes in tensorflow.
https://github.com/pytorch/pytorch/compare/master...MaximeBoucher:update-from-facebook-0939578c068c?expand=1
* Adding cosine similarity option in dot processor
Add pairwise cosine similarity option in dot product.
Add an option to concate dot product and cosine similarity.
Add test cases.
* [nomnigraph][redo] Concat elim for sparseNN
Same as D7962948, which was reverted because Operator Schema was not
defined
* [pytorch] Revert pytorch/pytorch#7918 'Release GIL when copying to shared memory', breaks ASAN
Revert this pytorch diff that breaks ASAN when running Filament in dev mode; in opt mode it gives "bad file descriptor" errors. Looks like a race when copying tensors to shared memory in multiple mp.Queue's (which spawn separate threads).
https://github.com/pytorch/pytorch/pull/7918/files
* [nomnigraph][mobile] Enable nomnigraph by default, use -Oz on nomnigraph related code to reduce code size
enables nomnigraph and reduces codesize
* [Warmup] Allow both offline incremental training and online training
Change plan name on saving side and reading side to support both training type
This diff depends on D8128530 and D8168651.
* Revert D7802642: [Warmup] Allow both offline incremental training and online training
This reverts commit afc213cf9b36cecf75333a788391c4d09f4afccc
@bypass-lint
An infra SEV is better than not reverting this diff.
If you copy this password, see you in SEV Review!
@cause_a_sev_many_files
* Add legacy grad logic to fix div op on old graphs.
Add legacy grad logic to fix div op on old graphs.
* Correctly propagate operator failures
Propagate errors from operators that throw exceptions and return false
* Revert D8374829: [caffe2][nomnigraph][redo] Concat elim for sparseNN
This reverts commit 6dda028c463e54bb5c32188bbbe9202107e188a5
@bypass-lint
An infra SEV is better than not reverting this diff.
If you copy this password, see you in SEV Review!
@cause_a_sev_many_files
* [Caffe2] Added extra_info to core.DeviceOption(), enforced extra_info to be inherited in scope.DeviceScope
extra_info is a newly defined field in DeviceOption proto. This diff added extra_info to the core.DeviceOption(). And, In scope.DeviceScope(), this diff enforce the new scope to inherit the extra_info from old scope.
* [opt] hgdirsync wasn't enabled, merge diverged code
Here's the damage, P59732616 basically xplat was left behind but had
the change from assert to CAFFE_ENFORCE
* OMP parallelism over RoIs for RoIAlign op
Simpler to parallelize over RoIs. Shouldn't affect other uses as it relies on
the number of OMP threads set during startup.
PR: https://github.com/pytorch/pytorch/pull/8562
* Use int64_t for shape in FillOps
to avoid overflow of int32
* Implement Rotated RoIAlign op
Based on Rotated RPNs as explained in https://arxiv.org/abs/1703.01086.
The idea is simple - orientation/angle is added as an RPN
anchor parameter and then the angle is further regressed similar to bbox
coords. There are some additional changes related to NMS and IoU, but besides
that it's a direct extension to Faster-RCNN. Further details in https://fb.quip.com/sZHlA1iMfWPZ.
RoIs are represented in [center_x, center_y, width, height, angle] format.
`angle` repre
* Rotated RoIAlign op CUDA forward implementation
CUDA forward impl for D8415490
* RoIAlignRotated op CUDA backward pass implementation
TSIA
* All remaining fixes to eliminate process_github.sh
Most of this diff has already been reviewed separately, except for the parts relating to _thnn/utils.py and _utils._internal.py
remove skipIf(True, 'Fbcode') line from process_github.sh
replace sed of cpp file with #ifdef to control cudnnDestroy use
undo sync-time deletion of .gitattributes, remove process_github.sh
switch to using _utils._internal rather than try-import-except
This diff also fixes the open-source bug where rebuilds have
* Back out "Revert D7802642: [Warmup] Allow both offline incremental training and online training"
Original commit changeset: 7707d2efe60e The original diff is backout becuase the online trainer package is backed out. This code would only work with new online trainer package
* [easy] improve error log in adagrad op
as title
* re-allow use of thnn_h_path
This fixes cffi usage in OSS
* [4/4] [tum] paralyzing layerNorm for GPU full sync
as title
* add compile=False to pytorch tests, remove hack with pyc
* Add shape and type inference for RowWiseArgMax operator
See title
* Revert D8515341: Back out "Revert D7802642: [Warmup] Allow both offline incremental training and online training"
This reverts commit 78167eeef0af16b60f72c82f9dcdda9b41b4dcbd
@bypass-lint
An infra SEV is better than not reverting this diff.
If you copy this password, see you in SEV Review!
@cause_a_sev_many_files
* [fix-flaky-test] mock_hive_reader_test flaky, because GlobalCounter collects local counts intervally
# Problem
`MockHiveReader` uses `GlobalCounter` to limit `max_examples`.
The `GlobalCounter` on the server node collects local counts from worker nodes every 1 sec.
This 1 sec delay makes it impossible to stop exactly at `max_examples`; the reader will inevitably exceed it.
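The overshoot bound this implies can be written down directly (all numbers here are hypothetical, purely for illustration):

```python
# With a 1 s counter sync interval, the reader can pull up to
# read_speed * sync_interval extra examples past the limit before the
# server-side count catches up.
max_examples = 10_000
read_speed = 2_500           # examples/sec (hypothetical)
sync_interval = 1.0          # seconds between GlobalCounter syncs
expected_upper_bound = max_examples + read_speed * sync_interval
print(expected_upper_bound)  # 12500.0
```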
# Plan
Given,
```
Expected num_examples = max_examples + num_examples/sec (Read Speed) x 1 sec (GlobalCounter Sync Int
```
* [Caffe2] Fix FCGradient cost inference. Prevent overflow in cost inference
FCGradient missed a factor of 2 in the `num_outputs == 3` case. Overflow was occurring in the FLOP calculation for FC. Changed types to `uint64_t` to prevent future problems.
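For reference, the usual dense-layer FLOP count is the multiply-accumulate formula below; this is an assumption about the shape of the cost model, since the diff's exact formula isn't shown here:

```python
# Standard FC forward cost: input (M, K) times weight (N, K) is
# M*N*K multiplies plus M*N*K adds, i.e. 2*M*N*K FLOPs. A missing
# factor of 2 in a gradient case halves the estimate for that pass.
def fc_flops(M, K, N):
    return 2 * M * N * K

print(fc_flops(64, 1024, 4096))  # 536870912
```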
* Fix binary ops with empty inputs
Fix binary ops with empty inputs
* Support the filling of input blob with provided data
as title for Biz Integrity case
* Back out "Revert D8515341: Back out "Revert D7802642: [Warmup] Allow both offline incremental training and online training""
Original commit changeset: 30c55dd38816. The original diff was reverted because it introduced a bad integration test. The integration test has been fixed.
* [c2][easy] improve pack ops error loggings
as desc.
* Add ShapeTypeInference for LpNorm operator
As desc
* Shard test_nn to reduce runtime for each test target
Closes https://github.com/pytorch/pytorch/pull/8793
The current test_nn would time out and be disabled in GreenWarden, and we need to have an option to split it up in order to pass the stress test. Right now GreenWarden roughly allows running 100 test cases in test_nn before timing out, and here we have an option to divide test_nn into 30 shards (with ~40 tests in each shard) to allow for some test suite growth in the future.
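One way to split a suite into stable, disjoint shards is to hash each test name; this is a hypothetical sketch, and the actual sharding mechanism in the diff may differ:

```python
import zlib

# Assign each test to one of num_shards buckets by a stable hash of its
# name, so every shard runs a deterministic, disjoint subset.
def shard_of(test_name, num_shards=30):
    return zlib.crc32(test_name.encode("utf-8")) % num_shards

tests = ["test_conv2d", "test_linear", "test_dropout", "test_embedding"]
my_shard = 0
selected = [t for t in tests if shard_of(t) == my_shard]
```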
* Change default caffe2_streams_per_gpu to 1
* Remove IN_SANDCASTLE from common.py and test_nn.py
We prefer to disable the failing tests through Sandcastle UI instead.
* Add a new class for an updated prof_dag.proto
This diff contains:
- An updated prof_dag.proto that contains blob profiles.
- A class to deserialize this information (serialization is in a follow up diff)
- Update to separate profiling information from NeuralNet (and use it as part of the class above).
- Unit tests
* Lambdarank for SparseNN
This diff adds a lambda_rank_layer for SparseNN.
changes include
1) Adds support for multi sessions in c2 op
2) Adds support for two different loss functions in c2 op
3) Unit tests for op
* Revert D8586950: Back out "Revert D8515341: Back out "Revert D7802642: [Warmup] Allow both offline incremental training and online training""
This reverts commit 012220ed63eccc35659a57b31d16a3625da6317b
@bypass-lint
An infra SEV is better than not reverting this diff.
If you copy this password, see you in SEV Review!
@cause_a_sev_many_files
* [easy] A few fixups to multithread predictor benchmark
(1) support perf on T6 server
(2) remove dead code
* fix a bug about the map size
as title
* Fix reduce sum on in-place case.
Fix reduce sum on in-place case.
* [Warmup] Reland reverted diff Allow both offline incremental training and online training
Closes https://github.com/pytorch/pytorch/pull/8827
Fix the net transform integration test. Allow the offline and online trainers to coexist (D7802642).
* Add StoreHandlerNotAvailableException
Add an exception for a store that is not available or has been
deleted.
* Use exception handling for fault tolerance, missing KV store
Remove status blobs to communication ops so that exceptions propagate on
failure.
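The pattern reads roughly as follows in a Python-side mimic; the names here are illustrative, not the actual caffe2 API:

```python
# Instead of threading status blobs through communication ops, a dedicated
# exception type lets failures propagate to the caller, which can retry
# or fail over.
class StoreHandlerNotAvailableError(RuntimeError):
    """Raised when the underlying KV store is unreachable or deleted."""

def read_key(store, key):
    if store is None:
        raise StoreHandlerNotAvailableError("store handler deleted")
    return store[key]

try:
    read_key(None, "rank0/address")
except StoreHandlerNotAvailableError as e:
    print("recovering:", e)
```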
* [C2/D2][2/n]: Nonnegative-Constrained Optimization -- bounded grad proj
for simple bounded constrained optimization, incl non-negative box constraints.
* [GanH]: Adaptive Weighting with More Estimations
With the positivity optimization implemented, we now learn adaptive weights under different
parameterizations.
This improves parameter estimation and training stability.
* Revert some changes for landing
* Remove AutoNoGIL in StorageSharing
* Temporarily disable net_tests
* Revert "[Caffe2] Force tensor inference checks to be triggered during testing"
This reverts commit 67ef05c22b2f71b4a489695384932f968384a2a4.
* Revert "Fix reduce sum on in-place case."
This reverts commit 6cb8a8e1b3db7b6d20941b0053e3f3836068eb64.
* Revert "Revert "Fix reduce sum on in-place case.""
This reverts commit 130a257c0893dc09f4bd6e6a45d112261807fd2c.
        iteration = init_net.ConstantFill(
            [],
            iter,
            shape=[1],
            value=iter_val,
            dtype=core.DataType.INT64,
        )
        iter_mutex = init_net.CreateMutex([], [iter_mutex])
        net.AtomicIter([iter_mutex, iteration], [iteration])
    else:
        iteration = init_net.GetBlobRef(iter)
    return iteration

def EnumClassKeyVals(cls):
    # cls can only be derived from object
    assert type(cls) == type
    # Enum attribute keys are all capitalized and values are strings
    enum = {}
    for k in dir(cls):
        if k == k.upper():
            v = getattr(cls, k)
            if isinstance(v, string_types):
                assert v not in enum.values(), (
                    "Failed to resolve {} as Enum: "
                    "duplicate entries {}={}, {}={}".format(
                        cls, k, v, [key for key in enum if enum[key] == v][0], v
                    )
                )
                enum[k] = v
    return enum

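A minimal self-contained sketch of how EnumClassKeyVals behaves (with `string_types` narrowed to `str` so the snippet runs standalone; the class name here is illustrative):

```python
# Collect capitalized string attributes of a plain class into a dict,
# mirroring EnumClassKeyVals above.
def enum_class_key_vals(cls):
    enum = {}
    for k in dir(cls):
        if k == k.upper():
            v = getattr(cls, k)
            if isinstance(v, str):
                assert v not in enum.values()
                enum[k] = v
    return enum

class StorageOrder(object):
    NHWC = "NHWC"
    NCHW = "NCHW"
    note = "lowercase key, skipped"

print(enum_class_key_vals(StorageOrder))  # {'NCHW': 'NCHW', 'NHWC': 'NHWC'}
```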
def ArgsToDict(args):
    """
    Convert a list of arguments to a name, value dictionary. Assumes that
    each argument has a name. Otherwise, the argument is skipped.
    """
    ans = {}
    for arg in args:
        if not arg.HasField("name"):
            continue
        for d in arg.DESCRIPTOR.fields:
            if d.name == "name":
                continue
            if d.label == d.LABEL_OPTIONAL and arg.HasField(d.name):
                ans[arg.name] = getattr(arg, d.name)
                break
            elif d.label == d.LABEL_REPEATED:
                list_ = getattr(arg, d.name)
                if len(list_) > 0:
                    ans[arg.name] = list_
                    break
        else:
            ans[arg.name] = None
    return ans

def NHWC2NCHW(tensor):
    assert tensor.ndim >= 1
    return tensor.transpose((0, tensor.ndim - 1) + tuple(range(1, tensor.ndim - 1)))


def NCHW2NHWC(tensor):
    assert tensor.ndim >= 2
    return tensor.transpose((0,) + tuple(range(2, tensor.ndim)) + (1,))
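A usage sketch of the two layout helpers above, restated standalone with numpy (the shapes are arbitrary examples): NHWC2NCHW moves the channel axis to position 1, and NCHW2NHWC moves it back to the end.

```python
import numpy as np

def nhwc2nchw(tensor):
    # Move the last (channel) axis to position 1.
    assert tensor.ndim >= 1
    return tensor.transpose((0, tensor.ndim - 1) + tuple(range(1, tensor.ndim - 1)))

def nchw2nhwc(tensor):
    # Move the channel axis (position 1) to the end.
    assert tensor.ndim >= 2
    return tensor.transpose((0,) + tuple(range(2, tensor.ndim)) + (1,))

x = np.zeros((2, 5, 7, 3))            # N=2, H=5, W=7, C=3
y = nhwc2nchw(x)
print(y.shape)                        # (2, 3, 5, 7)
print(nchw2nhwc(y).shape)             # (2, 5, 7, 3)
```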