# @package optimizer
# Module caffe2.python.optimizer
import copy
import logging
from collections import defaultdict, namedtuple
import numpy as np
from caffe2.proto import caffe2_pb2
from caffe2.python import core, scope, utils, workspace
from caffe2.python.modeling import parameter_info
from past.builtins import basestring
_LEARNING_RATE_INJECTION = "lr_injection"
AuxOptimizerParams = namedtuple("AuxOptimizerParams", ["local", "shared"])
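# AuxOptimizerParams collects auxiliary blobs an optimizer creates next to the
# parameters it updates. As a hedged reading of how it is used below: `local`
# holds per-parameter state and `shared` holds state reused across parameters;
# __init__ starts both lists empty.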
_optimizer_instance_count = defaultdict(int)
FP16_ENGINES = ["SIMD_Q_FP16", "SIMD_Q_STOC_FP16", "SIMD_Q_STOC_MKL_FP16"]
logger = logging.getLogger(__name__)
class Optimizer(object):
def __init__(self):
self._aux_params = AuxOptimizerParams(local=[], shared=[])
self._instance_num = _optimizer_instance_count[self.__class__.__name__]
_optimizer_instance_count[self.__class__.__name__] += 1
self._lr_multiplier = None
self._local_lr_multiplier = None
self._local_lr_multiplier_on_gpu = False
"""
|
2017-05-26 05:01:54 +00:00
|
|
|
Adds optimization operators to the net for given parameter and its gradient
|
|
|
|
|
Parameter is specified by either 'param' being a ParameterInfo object.
|
|
|
|
|
In this case param.grad has to be set
|
|
|
|
|
|
|
|
|
|
Or by 'param' being a BlobReference and 'grad' being a BlobReference for its
|
|
|
|
|
gradient.
|
2020-09-10 02:35:22 +00:00
|
|
|
"""
def __call__(self, net, param_init_net, param, grad=None):
if grad is None:
assert isinstance(
param, parameter_info.ParameterInfo
), "Expected parameter to be of type ParameterInfo, got {}".format(param)
assert param.grad is not None
else:
if isinstance(param, basestring):
param = core.BlobReference(param)
param = parameter_info.ParameterInfo(param_id=None, param=param, grad=grad)
self._run(net, param_init_net, param)
def _run(self, net, param_init_net, param_info):
raise Exception("Not Implemented")
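    # Minimal subclass sketch (hypothetical names; the update ops a concrete
    # optimizer would emit are only hinted at, and build_lr's return value is
    # deliberately not relied on here):
    #
    #     class MySimpleSgd(Optimizer):
    #         def __init__(self, base_learning_rate):
    #             super(MySimpleSgd, self).__init__()
    #             self.base_learning_rate = base_learning_rate
    #
    #         def _run(self, net, param_init_net, param_info):
    #             # create or reuse the (negated) learning-rate blob
    #             self.build_lr(
    #                 net, param_init_net,
    #                 base_learning_rate=self.base_learning_rate,
    #             )
    #             # ... emit update ops for param_info.blob / param_info.grad ...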
def get_cpu_blob_name(self, base_str, node_name=""):
classname = self.__class__.__name__
return "%s_%d_%s%s_cpu" % (classname, self._instance_num, base_str, node_name)
def get_gpu_blob_name(self, base_str, gpu_id, node_name):
classname = self.__class__.__name__
return "%s_%d_%s%s_gpu%d" % (
classname,
self._instance_num,
base_str,
node_name,
gpu_id,
)
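    # For illustration (concrete names assumed, not taken from this excerpt):
    # an optimizer whose class is SgdOptimizer and whose instance number is 0
    # gets "SgdOptimizer_0_lr_cpu" from get_cpu_blob_name("lr") and
    # "SgdOptimizer_0_lr_gpu0" from get_gpu_blob_name("lr", 0, "").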
@property
def attributes(self):
# return a dict that contains attributes related to init args only
attr = copy.deepcopy(self.__dict__)
del attr["_instance_num"]
return attr
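    # Explanatory note (not in the original source): `attributes` deep-copies
    # self.__dict__ and drops only _instance_num, so for the base class it
    # contains _aux_params, _lr_multiplier, _local_lr_multiplier and
    # _local_lr_multiplier_on_gpu exactly as set in __init__.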
def make_unique_blob_name(self, base_str):
"""
Returns a blob name that will be unique to the current device
and optimizer instance.
"""
current_scope = scope.CurrentDeviceScope()
if current_scope is None:
return self.get_cpu_blob_name(base_str)
if core.IsGPUDeviceType(current_scope.device_type):
return self.get_gpu_blob_name(
base_str, current_scope.device_id, current_scope.node_name
)
else:
return self.get_cpu_blob_name(base_str, current_scope.node_name)
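    # Hedged illustration (the device option values are made up for this
    # note): inside
    #     with core.DeviceScope(core.DeviceOption(caffe2_pb2.CUDA, 0)):
    # make_unique_blob_name("lr") resolves through get_gpu_blob_name and ends
    # in "_gpu0", while with no device scope it falls back to the CPU form.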
def build_lr(
self,
net,
param_init_net,
base_learning_rate,
learning_rate_blob=None,
policy="fixed",
iter_val=0,
**kwargs
):
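        # Hedged illustration (caller shown for orientation only; its wiring
        # lives outside this excerpt): a call such as
        # build_sgd(model, 0.01, policy='step', stepsize=1, gamma=0.999) is
        # expected to reach this method with base_learning_rate=0.01, while
        # stepsize/gamma travel through **kwargs to the LearningRate op below.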
if learning_rate_blob is None:
learning_rate_blob = self.make_unique_blob_name("lr")
iteration = utils.BuildUniqueMutexIter(param_init_net, net, iter_val=iter_val)
if not net.BlobIsDefined(learning_rate_blob):
# There is one interesting thing here: since we are minimizing, we are
# doing "descent" so the learning rate is set to be negative.
lr = net.LearningRate(
[iteration],
learning_rate_blob,
base_lr=-base_learning_rate,
policy=policy,
**kwargs
)
else:
lr = net.GetBlobRef(learning_rate_blob)
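        # Explanatory note (not in the original source): because the default
        # learning_rate_blob name is unique per (device, optimizer instance),
        # the BlobIsDefined check above lets every parameter handled by the
        # same optimizer on the same device share a single LearningRate op
        # instead of creating one per parameter.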
if self._lr_multiplier is not None:
lr_multiplier = net.CopyFromCPUInput(
self._lr_multiplier, self.make_unique_blob_name("lr_multiplier")
            )

            lr = net.Mul(
                [lr, lr_multiplier],
                self.make_unique_blob_name("scaled_lr"),
                broadcast=1,
            )

        if self._local_lr_multiplier is not None:
            current_scope = scope.CurrentDeviceScope()
            # The local multiplier (e.g. the one produced by LARS) may still be
            # a CPU blob while this code runs under a GPU device scope; copy it
            # to the GPU in that case.
            if (
                current_scope is not None
                and core.IsGPUDeviceType(current_scope.device_type)
                and not self._local_lr_multiplier_on_gpu
            ):
                local_lr_multiplier = net.CopyFromCPUInput(
                    self._local_lr_multiplier,
                    self.make_unique_blob_name("local_lr_multiplier"),
                )
            else:
                local_lr_multiplier = self._local_lr_multiplier

            # broadcast=1 lets the one-element multiplier broadcast against lr.
            lr = net.Mul(
                [lr, local_lr_multiplier],
                self.make_unique_blob_name("local_scaled_lr"),
                broadcast=1,
            )

        return lr, iteration
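
        # Usage sketch (illustrative, not part of the original module): the
        # scaling above is what turns a registered multiplier blob into the lr
        # blob consumed by the parameter update ops. Assuming an Optimizer
        # subclass such as SgdOptimizer and an add_lr_multiplier() helper that
        # sets self._lr_multiplier, the flow is roughly:
        #
        #     sgd = SgdOptimizer(base_learning_rate=0.1)
        #     sgd.add_lr_multiplier(my_multiplier_blob)
        #     lr, iteration = sgd.build_lr(
        #         net, param_init_net, base_learning_rate=0.1, policy="fixed"
        #     )
        #     # lr is the policy-driven learning rate scaled by lr_multiplier
        #     # (and further by local_lr_multiplier when one is set).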
|
2017-03-08 02:44:45 +00:00
|
|
|
|
Update from facebook (#7451)
* [bootcamp] Improve "Shape" operator to support axes specification
To improve .shape operator of Caffe2 to support x.shape(tensor, axes), which takes an optional int array "axes" as input. For example, x.shape(tensor, [1, 0]) will return the dimension for axis 1 and 0 following the specified order. For current version, "axes" input allows duplications and can have arbitrary length.
* Back out "Add barrier net that runs before training nets"
Original commit changeset: b373fdc9c30f. Need additional changes to some callers to support barrier failures.
* Change warning to verbose log to reduce log spam
The `LOG(WARNING)` was a bit spammy for regular use so lets just make it a `VLOG`.
* Extract the shared code from different caffe2_benchmark binaries
The OSS benchmark and Internal benchmark will share most functions in the benchmark.
* Support MFR in sequence training
As titled.
* Make knowledge distillation work with using logged prediction feature as teacher label.
1) Add loading raw dense feature as teacher label.
2) Optional calibration function for teacher label
3) Add teacher label into generic unit test
4) Deprecated TTSN workflow version using feature_options to config teacher label
* [C2/CUDA]: unjoined cross entropy sigmoid
as desc
* Add async_scheduling executor into deferrable_net_exec_test
Add async_scheduling into tests and fix some exception cases
* Fix Event disabled error
When disabling event in RNN ops make sure we don't call Finish on disabled
event from op's RunAsync
* cuda ensure cpu output op can handle both TensorCPU and TensorCUDA
as desc.
* [C2 Core] Infer input device option in C2 hypothesis_test checkers
Improve how we default input blob device options.
Previously it defaults as where op lives but it is not necessarily the case.
For example:
CopyCPUToGPU
* [C2 Op]SplitByLengthsOp CPU/GPU implementation
[C2 Op]SplitByLengthsOp CPU/GPU implementation
* fix undefined symbol error
not sure why we're getting undefined symbol even with link_whole = True
Need to figure out why but need this workaround for now
* Add tools in DAIPlayground platform to help debugging models
Add additional tools to allow Plauground override individual method defined in AnyExp. This will allow user to create module that specificly change certain default method behavior. An example included in this diff is deactivating test model and checkpointing. When debugging any model problems, switching off components helps me quickly narrow down the location of the bug. The technique is extensively used in task T27038712 (Steady memory increase in EDPM, eventually resulting in gloo/cuda.cu:34: out of memory)
* add shape and type inference for int8 conversion operator
* Fix flaky test for group_norm
Fix flaky test for group_norm
* Fix group_norm_op_test flaky
Fix group_norm_op_test flaky
* Implementation of composite learning rate policy
In many state-of-the-arts deep learning works, people use a simple trick to
schedule the learning rate: use a fixed learning rate until error plateaus
and then switch to a different fixed learning rate, and so on. In this diff,
we implemented a simple version of the composite learning rate. The user gives
a set of learning rates policies and corresponding iteration nums, and the
optimizer will change the learning rate policy based on the number of iterations so far.
For example, the user give two learning rate policies, one is FixedLearningRate
and PolyLearningRate, with an iteration number of 1k. Then the first 1k iteration,
we use FixedLearningRate. For the following iterations, we use PolyLearningRate.
* Split two use cases of CachedReader into two classes, DBFileReader and CachedReader
# Use Cases:
1). input: DB file -> output: DatasetReader.
Use DBFileReader.
2). input: Reader -> build cache DB file -> output: DatasetReader.
Use CachedReader.
# Changes to CachedReader:
1). Move db_path to the constructor.
Because in mock reader. cache will always be built ahead.
# Changes to tests:
1). Make a separate TestCase class for CachedReader and DBFileReader.
2). Make it possible to add more test functions by adding setUp, tearDown and _make_temp_path.
3). Make delete db_path more general. `db_path` could be a file for `log_file_db`, but could also be a directory for `leveldb`.
* Back out "On Mobile phones, call GlobalInit with no arguments in predictor in case we need to perform initialization"
Original commit changeset: 4489c6133f11
* Fix LARS bug
Fixed a bug in the LARS implementation which caused all subsequent blobs not using LARS to have the LARS learning rate multiplier applied to them.
* [tum] support sparse init & add uniformFill option
as title
* Propagate exception for async nets
Capture the exception when an exception is thrown in async nets and re-throw it after wait(). This allows exceptions to be propagated up to the caller.
This diff was a part of D7752068. We split the diff so that C2 core files changes are in a separate diff.
* Automatic update of fbcode/onnx to 69894f207dfcd72d1e70497d387201cec327efbc
Previous import was 403ccfbd0161c38f0834413d790bad0874afbf9a
Included changes:
- **[69894f2](https://github.com/onnx/onnx/commit/69894f2)**: Use op schema.all tensor types in random like definitions (#865) <Scott McKay>
- **[b9d6b90](https://github.com/onnx/onnx/commit/b9d6b90)**: Clarify random like operators (#846) <Scott McKay>
- **[fc6b5fb](https://github.com/onnx/onnx/commit/fc6b5fb)**: Refactor shape inference implementation (#855) <anderspapitto>
- **[b7d8dc8](https://github.com/onnx/onnx/commit/b7d8dc8)**: fix cmake warning message (#863) <Eric S. Yu>
- **[f585c5d](https://github.com/onnx/onnx/commit/f585c5d)**: add pytorch-operator test for tile (#831) <Wenhao Hu>
- **[993fe70](https://github.com/onnx/onnx/commit/993fe70)**: add install step (#832) <Eric S. Yu>
- **[68bc26c](https://github.com/onnx/onnx/commit/68bc26c)**: add type inference for traditional ml ops except classifier ops. (#857) <Ke Zhang>
- **[9cc0cda](https://github.com/onnx/onnx/commit/9cc0cda)**: fix string representation of scalar types (#858) <G. Ramalingam>
- **[1078925](https://github.com/onnx/onnx/commit/1078925)**: fix y in pow test case to scalar (#852) <Wenhao Hu>
- **[c66fb6f](https://github.com/onnx/onnx/commit/c66fb6f)**: Add some math function shape inference (#845) <anderspapitto>
- **[ff667d1](https://github.com/onnx/onnx/commit/ff667d1)**: Refactor return type and docs for ONNXIFI_BACKEND_DIRECTX_ID (#853) <Marat Dukhan>
- **[11c6876](https://github.com/onnx/onnx/commit/11c6876)**: clear initializer names when clear initializer (#849) <Wenhao Hu>
- **[73c34ae](https://github.com/onnx/onnx/commit/73c34ae)**: Clarify FeatureVectorizer description. (#843) <Scott McKay>
- **[1befb9b](https://github.com/onnx/onnx/commit/1befb9b)**: Remove useless text in docs (#850) <Lu Fang>
- **[e84788f](https://github.com/onnx/onnx/commit/e84788f)**: Fix SELU attributes' default values (#839) <Lu Fang>
- **[ebac046](https://github.com/onnx/onnx/commit/ebac046)**: Add tile test case (#823) <Wenhao Hu>
- **[8b7a925](https://github.com/onnx/onnx/commit/8b7a925)**: a few more shape inference functions (#772) <anderspapitto>
- **[9718f42](https://github.com/onnx/onnx/commit/9718f42)**: Make the coefficient non optional for LinearClassifier (#836) <Jaliya Ekanayake>
- **[ef083d0](https://github.com/onnx/onnx/commit/ef083d0)**: Add save_tensor and load_tensor functions for Protos (#770) <Lu Fang>
- **[45ceb55](https://github.com/onnx/onnx/commit/45ceb55)**: Check if CMAKE_BUILD_TYPE set before project(). (#812) <Sergii Dymchenko>
- **[4b3d2b0](https://github.com/onnx/onnx/commit/4b3d2b0)**: [WIP] reenable shape inference tests (#834) <anderspapitto>
- **[22d17ee](https://github.com/onnx/onnx/commit/22d17ee)**: RNN tests: LSTM, GRU, SimpleRNN (#739) <Peyman Manikashani>
- **[de65b95](https://github.com/onnx/onnx/commit/de65b95)**: dimension denotation (#443) <Tian Jin>
- **[eccc76e](https://github.com/onnx/onnx/commit/eccc76e)**: fix field number issue in onnx operator proto and enable its build (#829) <Ke Zhang>
- **[d582beb](https://github.com/onnx/onnx/commit/d582beb)**: disable shape inference test to unbreak ci (#830) <Lu Fang>
- **[485b787](https://github.com/onnx/onnx/commit/485b787)**: function proto for composite op. (#802) <Ke Zhang>
- **[cd58928](https://github.com/onnx/onnx/commit/cd58928)**: specify defaults for attributes of Affine op (#820) <G. Ramalingam>
- **[7ee2cf9](https://github.com/onnx/onnx/commit/7ee2cf9)**: merge the dummy backend back into the main one (#743) <anderspapitto>
- **[1c03a5a](https://github.com/onnx/onnx/commit/1c03a5a)**: [Proposal] ONNX Interface for Framework Integration (previously ONNX Backend API) header and docs (#551) <Marat Dukhan>
- **[3769a98](https://github.com/onnx/onnx/commit/3769a98)**: Rename real model test case from VGG-16 to ZFNet (#821) <Lu Fang>
* [C2]ReluN Op
relu n op.
tf reference: https://www.tensorflow.org/api_docs/python/tf/nn/relu6
* Call destructor when assigning a blob value
* Add executor overrides
Add executor overrides flag to enable migration to async_scheduling executor
* Add barrier net that runs before training nets - attempt #2
Add a synchronize barrier net that is run before training nets. With this net, shards that are faster will wait for other shards before starting training. This reduces the chances of the faster shards timing out during GLOO AllReduce.
Removed explicit data_parallel_model.py.synchronize call in holmes workflow.
This change was landed previously but caused errors for some EDPM workflows - See https://fb.facebook.com/groups/1426530000692545/permalink/1906766366002237/ - because EDPM assumes any call to CreateOrCloneCommonWorld and Gloo ops is wrapped in exception handlers, but in this case the exception thrown in the barrier init net is not handled.
To address this issue, we add _CreateOrCloneCommonWorld to the param_init_net instead of a new barrier init net. Since errors in the param_init_net run are handled gracefully with re-rendezvous, this should fix the problem.
* Handle empty nets in async_scheduling
Make sure we don't get stuck on empty nets
* use CUDA_ARCH for conditional compile
* [C2 fix] infer function for ensure_cpu_output_op
* Update group_norm test to reduce flaky test
* Fix lr_multiplier for GPU
    def add_lr_multiplier(self, lr_multiplier):
        """
        Set the global learning rate multiplier. If a multiplier already
        existed, this will overwrite the existing multiplier. The multiplier is
        used for all future calls to _run(), unless it is overwritten.
        """
        self._lr_multiplier = lr_multiplier
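As the docstring notes, the multiplier is global to the optimizer and a later call simply replaces the earlier one. A minimal sketch of that behavior, using the SgdOptimizer subclass defined later in this module; the blob names passed in are hypothetical:

```python
from caffe2.python.optimizer import SgdOptimizer

sgd = SgdOptimizer(base_learning_rate=0.01)
sgd.add_lr_multiplier("warmup_multiplier")      # e.g. a blob produced by a warm-up schedule
sgd.add_lr_multiplier("global_lr_multiplier")   # overwrites the previous multiplier
```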
    def _add_local_lr_multiplier(self, local_lr_multiplier, is_gpu_blob=False):
        """
        Set the local learning rate multiplier. This local multiplier is
        multiplied with the global learning rate multiplier if it exists. As
        with the global learning rate multiplier, this multiplier will be
        used for all future calls to _run(), so please call
        _clear_local_lr_multiplier() at the beginning of the optimizer's _run()
        before optionally calling this function.
        """
        self._local_lr_multiplier = local_lr_multiplier
        self._local_lr_multiplier_on_gpu = is_gpu_blob

    def _clear_local_lr_multiplier(self):
        self._local_lr_multiplier = None
        self._local_lr_multiplier_on_gpu = False
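The docstrings above describe the intended composition: the effective step size is the base learning rate scaled by the global multiplier (if any) and then by the per-parameter local multiplier (e.g. the LARS output that SgdOptimizer below passes in). A minimal plain-Python sketch of that assumption, with illustrative values rather than Caffe2 blobs:

```python
base_lr = 0.01
global_multiplier = 0.5   # e.g. a warm-up factor set via add_lr_multiplier
local_multiplier = 2.0    # e.g. a per-parameter factor such as a LARS ratio

effective_lr = base_lr * global_multiplier * local_multiplier
print(effective_lr)  # 0.01 * 0.5 * 2.0 = 0.01
```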
    @staticmethod
    def dedup(net, sparse_dedup_aggregator, grad):
        assert isinstance(
            grad, core.GradientSlice
        ), "Dedup only works for sparse gradient, got {}".format(grad)
        if sparse_dedup_aggregator:
            return net.DeduplicateGradientSlices(
                grad, aggregator=sparse_dedup_aggregator
            )
        else:
            return grad
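For intuition, a sparse gradient (GradientSlice) is an (indices, values) pair, and deduplication merges rows that share an index before the update is applied. A rough NumPy sketch of the idea, assuming a sum aggregation (the actual aggregation is whatever `sparse_dedup_aggregator` selects):

```python
import numpy as np

indices = np.array([3, 7, 3])                              # row 3 appears twice
values = np.array([[1.0, 1.0], [2.0, 2.0], [0.5, 0.5]])

uniq, inverse = np.unique(indices, return_inverse=True)
deduped = np.zeros((len(uniq), values.shape[1]))
np.add.at(deduped, inverse, values)                        # sum duplicate rows together

# uniq == [3, 7]; deduped == [[1.5, 1.5], [2.0, 2.0]]
```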
    def get_auxiliary_parameters(self):
        """Returns a list of auxiliary parameters.

        Returns:
            aux_params: A namedtuple, AuxParams.

            aux_params.local stores a list of blobs. Each blob is a local
            auxiliary parameter. A local auxiliary parameter is a parameter in
            parallel to a learning rate parameter. Take adagrad as an example,
            the local auxiliary parameter is the squared sum parameter, because
            every learning rate has a squared sum associated with it.

            aux_params.shared also stores a list of blobs. Each blob is a shared
            auxiliary parameter. A shared auxiliary parameter is a parameter
            that is shared across all the learning rate parameters. Take adam as
            an example, the iteration parameter is a shared parameter, because
            all the learning rates share the same iteration parameter.
        """
        return self._aux_params
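A hedged sketch of how a caller might use this, for example to make sure optimizer state is saved alongside the weights; `optimizer_instance` and the checkpointing step are illustrative, not a prescribed API:

```python
aux = optimizer_instance.get_auxiliary_parameters()

# Per-parameter state (e.g. Adagrad's squared-sum blobs) plus state shared by
# all parameters (e.g. Adam's iteration counter).
blobs_to_checkpoint = list(aux.local) + list(aux.shared)
for blob in blobs_to_checkpoint:
    print("would checkpoint:", blob)
```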
    # TODO(xlwang): In transfer learning, a parameter initialized from a pretrained
    # model might require a different learning rate than an otherwise initialized one.
    # To this end, here we implement a python solution where
    # `base_learning_rate` is scaled by `scale`, by calling
    # `scale_learning_rate`; alternatively, we could achieve the same effect by
    # rewriting the LearningRate operator in C++.
    # Note that it is the responsibility of the specific optimizer to decide what
    # logic should be used for `scale_learning_rate`.
    def scale_learning_rate(self, *args, **kwargs):
        raise NotImplementedError(
            "Optimizer Need to Implement `scale_learning_rate` method."
        )

    def create_lars_inputs(self, param_init_net, weight_decay, trust, lr_max):
        wd = param_init_net.ConstantFill(
            [], "weight_decay", shape=[1], value=weight_decay
        )
        trust = param_init_net.ConstantFill([], "trust", shape=[1], value=trust)
        lr_max = param_init_net.ConstantFill([], "lr_max", shape=[1], value=lr_max)
        return wd, trust, lr_max

class SgdOptimizer(Optimizer):
    def __init__(
        self,
        base_learning_rate=0.01,
        policy="fixed",
        momentum=0.0,
        nesterov=True,
        sparse_dedup_aggregator=None,
        lars=None,
        **kwargs
    ):
        super(SgdOptimizer, self).__init__()
        self.base_learning_rate = base_learning_rate
        self.policy = policy
        self.momentum = momentum
        self.nesterov = nesterov
        self.sparse_dedup_aggregator = sparse_dedup_aggregator
        self.lars = lars
        self.init_kwargs = kwargs

    def _run(self, net, param_init_net, param_info):
        param = param_info.blob
        grad = param_info.grad
        if self.base_learning_rate == 0:
            return
        assert (
            self.base_learning_rate > 0
        ), "Expect positive base learning rate, got {}".format(self.base_learning_rate)
        self._clear_local_lr_multiplier()

        # TODO(zqq): support LARS for sparse parameters
        if self.lars is not None and not isinstance(grad, core.GradientSlice):
            assert self.lars >= 0, "Lars offset must be nonnegative, got {}".format(
                self.lars
            )
            wd, trust, lr_max = self.create_lars_inputs(
                param_init_net, 0.0, 1.0, np.finfo(np.float32).max
            )
            lr_lars_multiplier = net.Lars(
                [param, grad, wd, trust, lr_max],
                self.make_unique_blob_name(str(param) + "_lars"),
                offset=self.lars,
                lr_min=0.0,
            )
            current_scope = scope.CurrentDeviceScope()
            self._add_local_lr_multiplier(
                lr_lars_multiplier,
                is_gpu_blob=(
                    current_scope is not None
                    and core.IsGPUDeviceType(current_scope.device_type)
                ),
            )

        # We need negative sign for LR when used directly with WeightedSum
        # below.
        lr_sign = -1 if self.momentum else 1
        lr, _ = self.build_lr(
            net,
            param_init_net,
            base_learning_rate=self.base_learning_rate * lr_sign,
            policy=self.policy,
            **(self.init_kwargs)
        )

        dev = scope.CurrentDeviceScope()
        if dev is None:
            dev = core.DeviceOption(caffe2_pb2.CPU)

        # Each GPU/CPU must have its own ONE blob, thus modify the name
        # to include device information.
        ONE = param_init_net.ConstantFill(
            [],
            "ONE_{}_{}{}".format(dev.device_type, dev.device_id, dev.node_name),
            shape=[1],
            value=1.0,
        )

        self._aux_params.shared.append(ONE)

        if self.momentum > 0:
            momentum_data = param_init_net.ConstantFill(
                param, str(param) + "_momentum", value=0.0
            )
            self._aux_params.local.append(momentum_data)

        if isinstance(grad, core.GradientSlice):
            grad = self.dedup(net, self.sparse_dedup_aggregator, grad)
            if self.momentum > 0.0:
                net.SparseMomentumSGDUpdate(
                    [grad.values, momentum_data, lr, param, grad.indices],
                    [grad.values, momentum_data, param],
                    momentum=self.momentum,
                    nesterov=self.nesterov,
                )
            else:
                net.ScatterWeightedSum(
                    [param, ONE, grad.indices, grad.values, lr], param
                )
        else:
            if self.momentum > 0.0:
                net.MomentumSGDUpdate(
                    [grad, momentum_data, lr, param],
                    [grad, momentum_data, param],
                    momentum=self.momentum,
                    nesterov=self.nesterov,
                )
            else:
                coeff = lr

                net.WeightedSum([param, ONE, grad, coeff], param)

    def scale_learning_rate(self, scale):
        self.base_learning_rate *= scale
        return
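A hedged usage sketch of this class: constructing the optimizer directly and rescaling its base learning rate, e.g. for a fine-tuning pass. Values are illustrative; in a full workflow the optimizer is normally attached to a model by helper functions elsewhere in this module (a build_sgd-style helper), which is not shown here.

```python
from caffe2.python.optimizer import SgdOptimizer

sgd = SgdOptimizer(base_learning_rate=0.01, policy="fixed", momentum=0.9, nesterov=True)

# Rescale the base LR, e.g. when fine-tuning from a pretrained checkpoint.
sgd.scale_learning_rate(0.1)   # base_learning_rate is now 0.01 * 0.1
```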

class MultiPrecisionSgdOptimizer(SgdOptimizer):
    def __init__(
        self,
        base_learning_rate=0.1,
        momentum=0.0,
        policy="fixed",
        nesterov=True,
        sparse_dedup_aggregator=None,
        **kwargs
    ):
        super(MultiPrecisionSgdOptimizer, self).__init__(
            base_learning_rate=base_learning_rate,
            policy=policy,
            momentum=momentum,
            nesterov=nesterov,
            sparse_dedup_aggregator=sparse_dedup_aggregator,
            **kwargs
        )

    def _run(self, net, param_init_net, param_info):
        param = param_info.blob
        param_fp32 = (
            param_info.blob_copy[core.DataType.FLOAT]
            if param_info.blob_copy is not None
            else None
        )

        # If we have a straight fp32 parameter, run the base class
        if param_fp32 is None:
            return SgdOptimizer._run(self, net, param_init_net, param_info)

        grad = param_info.grad
        if self.base_learning_rate == 0:
            return
        assert (
            self.base_learning_rate > 0
        ), "Expect positive base learning rate, got {}".format(self.base_learning_rate)

        lr, _ = self.build_lr(
            net,
            param_init_net,
            base_learning_rate=-self.base_learning_rate,
            policy=self.policy,
            **(self.init_kwargs)
        )

        momentum_data = param_init_net.ConstantFill(
            param_fp32, str(param) + "_momentum", value=0.0
        )
        self._aux_params.local.append(momentum_data)

        assert not isinstance(
            grad, core.GradientSlice
        ), "MultiPrecisionSgd does not support sparse gradients"

        # Copy gradient to fp32
        grad_fp32 = net.HalfToFloat(grad, grad + "_fp32")

        # update (fused) in fp32
        net.MomentumSGDUpdate(
            [grad_fp32, momentum_data, lr, param_fp32],
            [grad_fp32, momentum_data, param_fp32],
            momentum=self.momentum,
            nesterov=self.nesterov,
        )

        # Copy updated param back to fp16
        net.FloatToHalf(param_fp32, param)
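The flow above keeps the model weights in fp16 while maintaining and updating an fp32 master copy. Roughly, and ignoring momentum for brevity, the per-step arithmetic is assumed to look like this NumPy sketch (all values illustrative):

```python
import numpy as np

param_fp16 = np.zeros(4, dtype=np.float16)      # weights seen by the model
param_fp32 = param_fp16.astype(np.float32)      # fp32 master copy
grad_fp16 = np.full(4, 0.25, dtype=np.float16)  # illustrative gradient
lr = 0.01

grad_fp32 = grad_fp16.astype(np.float32)        # HalfToFloat
param_fp32 -= lr * grad_fp32                    # update in fp32 (momentum omitted)
param_fp16 = param_fp32.astype(np.float16)      # FloatToHalf back for the model
```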

class FP16SgdOptimizer(SgdOptimizer):
    def __init__(
        self,
        base_learning_rate=0.1,
        momentum=0.0,
        policy="fixed",
        nesterov=True,
        weight_decay=0.0001,
        sparse_dedup_aggregator=None,
        **kwargs
    ):
        super(FP16SgdOptimizer, self).__init__(
            base_learning_rate=base_learning_rate,
            policy=policy,
            momentum=momentum,
            nesterov=nesterov,
            sparse_dedup_aggregator=sparse_dedup_aggregator,
            **kwargs
        )
        self.weight_decay = weight_decay

    def _run(self, net, param_init_net, param_info, fp32_update=False):

        fp32_update_flag = 0
        param_name = str(param_info.blob)

        # should only be triggered in FP16 training by SpatialBN, which
        # requires FP32 params in CuDNN.
        if param_name.find("spatbn") != -1:
            fp32_update = True

        if fp32_update:
            # doing a 32bit update
            # Have to assume param_info.blob is FP32 as there is no way
            # (that I currently know of) to query a blob's type in python
            fp32_update_flag = 1
            param = param_info.blob
            param_fp32 = param_info.blob
        else:
            if param_info.blob_copy is None:
                # doing a 32bit update
                # Have to assume param_info.blob is FP32 as there is no way
                # (that I currently know of) to query a blob's type in python
                fp32_update_flag = 1
                param = param_info.blob
                param_fp32 = param_info.blob
            else:
                if core.DataType.FLOAT in param_info.blob_copy:
                    param = param_info.blob
                    param_fp32 = param_info.blob_copy[core.DataType.FLOAT]
                elif core.DataType.FLOAT16 in param_info.blob_copy:
                    param = param_info.blob_copy[core.DataType.FLOAT16]
                    param_fp32 = param_info.blob
                else:
                    raise AssertionError(
                        "Unrecognized parameter format to be updated "
                        "by FP16 Optimizer. Parameter: {}".format(param_info.name)
                    )

        grad = param_info.grad

        if self.base_learning_rate == 0:
            return
        assert (
            self.base_learning_rate > 0
        ), "Expect positive base learning rate, got {}".format(self.base_learning_rate)

        lr, _ = self.build_lr(
            net,
            param_init_net,
            base_learning_rate=-self.base_learning_rate,
            policy=self.policy,
            **(self.init_kwargs)
        )

        momentum_data_fp32 = param_init_net.ConstantFill(
            param_fp32, str(param) + "_momentum_fp32", value=0.0
        )

        momentum_data = param_init_net.FloatToHalf(
            momentum_data_fp32, str(param) + "_momentum"
        )

        self._aux_params.local.append(momentum_data)

        assert not isinstance(
            grad, core.GradientSlice
        ), "FP16Sgd does not support sparse gradients"

        if fp32_update_flag == 0:
            net.FP16MomentumSGDUpdate(
                [grad, momentum_data, lr, param],
                [grad, momentum_data, param],
                momentum=self.momentum,
                nesterov=self.nesterov,
                weight_decay=self.weight_decay,
            )
        else:
            # flag set to 1, therefore doing FP32 update
            net.FP32MomentumSGDUpdate(
                [grad, momentum_data_fp32, lr, param],
                [grad, momentum_data_fp32, param],
                momentum=self.momentum,
                nesterov=self.nesterov,
                weight_decay=self.weight_decay,
            )

class WeightDecayBuilder(Optimizer):
    def __init__(self, weight_decay):
        self.weight_decay = weight_decay

    def _run(self, net, param_init_net, param_info):
        dev = scope.CurrentDeviceScope()
        if dev is None:
            dev = core.DeviceOption(caffe2_pb2.CPU)

        ONE = param_init_net.ConstantFill(
            [], "ONE_{}_{}".format(dev.device_type, dev.device_id), shape=[1], value=1.0
        )
        WD = param_init_net.ConstantFill(
            [],
            "wd_{}_{}".format(dev.device_type, dev.device_id),
            shape=[1],
            value=self.weight_decay,
        )

        if isinstance(param_info.grad, core.GradientSlice):
            raise ValueError("Weight decay does not yet support sparse gradients")
        else:
            net.WeightedSum(
                [param_info.grad, ONE, param_info.blob, WD], param_info.grad
            )
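The WeightedSum call above folds plain L2 weight decay into the gradient in place, which corresponds to grad = 1 * grad + weight_decay * param. A small NumPy sketch with illustrative values:

```python
import numpy as np

weight_decay = 0.0001
param = np.array([1.0, -2.0, 3.0], dtype=np.float32)
grad = np.array([0.1, 0.1, 0.1], dtype=np.float32)

# Equivalent arithmetic for WeightedSum([grad, ONE, param, WD], grad):
grad = 1.0 * grad + weight_decay * param
```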

class AdagradOptimizer(Optimizer):
    def __init__(
        self,
        alpha=0.01,
        epsilon=1e-4,
        decay=1,
        weight_decay=0.0,
        policy="fixed",
        sparse_dedup_aggregator=None,
        rowWise=False,
        engine="",
        lars=None,
        output_effective_lr=False,
        output_effective_lr_and_update=False,
        pruning_options=None,
        swa_options=None,
        weight_scale=None,
        counter_halflife=-1,
        **kwargs
    ):
        for k, v in locals().items():
            logger.info("AdagradOptimizer: input arguments: {}: {}".format(k, v))

        super(AdagradOptimizer, self).__init__()
        self.alpha = alpha
        self.epsilon = epsilon
        self.decay = decay
        self.weight_decay = float(weight_decay)
        self.policy = policy
        self.sparse_dedup_aggregator = sparse_dedup_aggregator
        self.rowWise = rowWise
        self.engine = engine
        self.lars = lars
[Caffe2][fbcode=>GH sync] Update from facebook 4323b18ce13c (#7116)
* [fix] Re-enable events in RNN ops
We added event disabling in RNN ops earlier because back then we didn't use
events; with current use cases this is no longer true
(https://fburl.com/8vd0lp8y)
* use ops with cude impl
* Revert D7729695: [caffe2][fix] Re-enable events in RNN ops
This reverts commit 4b215c7496fb724656ff4c776933a15bdbbcde5e
@bypass-lint
An infra SEV is better than not reverting this diff.
If you copy this password, see you in SEV Review!
@cause_a_sev_many_files
* [observer] Clean up observer_config.h
#accept2ship
* [1/n] Refactor dataio_test.py
Replace code duplication with a common function
* Add barrier net that runs before training nets
Add a synchronize barrier net that is run before training nets. With this net, shards that are faster will wait for other shards before starting training. This reduces the chances of the faster shards timing out during GLOO AllReduce.
Removed explicit data_parallel_model.py.synchronize call in holmes workflow. Similar change in speech/asr_training workflow will come in another diff.
* Support the dnnlowp backend in caffe2_benchmark
This is for SHARE operator latency evaluation
* Migrate integral_image_op to main caffe2
migrate integral_image_op(GPU version) given by https://fburl.com/yvqezigi
to caffe2/caffe2/operators and implement its CPU version. Write up a test
using the hypothesis_test mechanism
* [pos_disc, fbcode] Implement unjoined lr loss
As explained in https://our.intern.facebook.com/intern/wiki/Model_Based_Calibration/, when the dataset is a joined data set, where labels might change later, we need to use unjoined logloss.
The implementation is almost the same as in Sigrid (https://fburl.com/1trngsls), where
loss = y (log(p) - log(1-p)) + (1-y)(log(1-p)) = xy - (1-y)x - (1-y)log(1+exp(-x))
For x < 0, to ensure stability and avoid overflow, we reformulate the above exp as
loss = xy - (1-y)x - (1-y)x + (1-y)log(1+exp(x)) = xy + (1-y)log(1+exp(x))
Then the final expression becomes
loss = xy + (y - 1) x (x >= 0) - (1 - y) log(1 + exp(x - 2 x (x >= 0)))
where y is the true label, x is the dot product and p = logistic(x).
This kind of implementation is aligned with the current implementation of the original cross entropy in
https://phabricator.intern.facebook.com/diffusion/FBS/browse/master/fbcode/caffe2/caffe2/operators/cross_entropy_op.cc;0bae3b5d0f825897c5e0dd0ff10f489d7271bf25$7-13
* Keep the array to fix the conflict
* [C2] Compute Adagrad effective LR
The AdagradWithLR op outputs an extra blob which is contains the average effective learning rate across all weights in this blob.
* Open-source extractMetaNetDef & runGlobalInitialization, add new Predictor constructor from db file, and add run_map_outputs
1. Open-source extractMetaNetDef and runGlobalInitialization, for use in
2. new Predictor constructor from db file.
3. Add new run function that returns outputs as TensorMap
* Disable eigen cpu
Disable eigen cpu in transpose and reduce
* Introduce request_only/object_only property of ModelLayer
by default this is False
* A simple TC Caffe2 benchmark
We can run tunner, get MappingOptions and then use them to
compare against cuBLAS
currently broken due to LLVM issues. How to run:
hg checkout eec1ab31b59c03b8deded1c755a9abaf8c45be01
add D7401202
add D7434625
add D7506031
add D7540728
buck run @mode/dev-nosan tc/tc/benchmarks_python:caffe2_benchmark
* Move Caffe2 feature_maps_ops to open source
Need feature maps operators in open source project facebookresearch/BlueWhale
* Manually fix the conflicts in channel shuffle op
* Fix the inconsistency between different gh and fbcode
* Skip Adagrad GPU Test (Because some gpu implementation is missing)
* Fix another test to make sure it won't run on gpu when implementation is not available yet
2018-05-02 03:49:00 +00:00
|
|
|
        self.output_effective_lr = output_effective_lr
        self.output_effective_lr_and_update = output_effective_lr_and_update
        self.counter_halflife = counter_halflife
        self.init_kwargs = kwargs
        self.weight_scale = weight_scale

        self._process_pruning_options(pruning_options)
        self._process_swa_options(swa_options)

    def _process_swa_options(self, swa_options):
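        """Parse the optional Stochastic Weight Averaging (SWA) configuration.

        ``swa_options`` is expected to be a dict (or None); the keys read below
        are ``swa_avg_start_it``, ``swa_avg_end_it``, ``swa_feedback_start_it``,
        ``swa_feedback_step`` and ``swa_feedback_end_it``.  Illustrative example
        (values are hypothetical):
        ``{"swa_avg_start_it": 1000, "swa_avg_end_it": 5000}``.
        """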
        self.swa_enabled = True if swa_options else False
        if self.swa_enabled:
            self.swa_avg_start_it = swa_options.get("swa_avg_start_it", None)
            self.swa_avg_end_it = swa_options.get("swa_avg_end_it", None)
            self.swa_feedback_start_it = swa_options.get("swa_feedback_start_it", None)
            self.swa_feedback_step = swa_options.get("swa_feedback_step", None)
            self.swa_feedback_end_it = swa_options.get("swa_feedback_end_it", None)

    def _process_pruning_options(self, pruning_options):
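        """Parse the optional pruning configuration.

        ``pruning_options`` must be a dict (or None).  A mask can be supplied
        either directly as a numpy array (``mask_tensor``) or from a db
        (``mask_db_path``, ``mask_db_type``, ``mask_blob_name``), but not both.
        Alternatively, iteration-driven pruning can be configured through
        ``prune_delays``, ``prune_ratios`` and ``prune_block_size``.  A sketch
        with hypothetical values:
        ``{"prune_delays": [1000, 2000], "prune_ratios": [0.5, 0.8]}``.
        """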
        self.use_mask = False

        if pruning_options is None:
            pruning_options = {}
        else:
            assert isinstance(pruning_options, dict), (
                "pruning_options can only "
                "be provided as a dictionary, currently: {}".format(pruning_options)
            )

        self.mask_tensor = pruning_options.get("mask_tensor", None)
        self.mask_db_path = pruning_options.get("mask_db_path", None)
        self.mask_db_type = pruning_options.get("mask_db_type", None)
        self.mask_blob_name = pruning_options.get("mask_blob_name", None)
        self.prune_delays = pruning_options.get("prune_delays", [])
        self.prune_ratios = pruning_options.get("prune_ratios", [])
        self.prune_block_size = pruning_options.get("prune_block_size", 1)

        if self.mask_tensor is not None:
            assert (
                type(self.mask_tensor) is np.ndarray
            ), "mask_tensor must be a numpy array!"
            assert self.mask_db_path is None, (
                "mask can be provided through either a numpy array "
                "or a db path, not both"
            )
            assert self.mask_db_type is None, (
                "mask can be provided through either a numpy array "
                "or a db path, not both"
            )
            assert self.mask_blob_name is None, (
                "mask can be provided through either a numpy array "
                "or a db path, not both"
            )
            self.use_mask = True

        if self.mask_db_path is not None or self.mask_db_type is not None:
            assert self.mask_db_path is not None, (
                "when mask is provided through db, "
                "db path, db type, and blob name are all needed"
            )
            assert self.mask_db_type is not None, (
                "when mask is provided through db, "
                "db path, db type, and blob name are all needed"
            )
            assert self.mask_tensor is None, (
                "mask can be provided through either a numpy array "
                "or a db path, not both"
            )
            self.use_mask = True

        if self.prune_delays:
            assert self.prune_ratios is not None and len(self.prune_delays) == len(
                self.prune_ratios
            ), "Prune Delays and prune ratios should be of the same length"
            assert (
                self.mask_tensor is None
            ), "Mask Tensor should be None with prune ratios"
            assert (
                self.mask_db_path is None
            ), "Mask DB Path should be None with prune ratios"
            self.use_mask = True

    def _run(self, net, param_init_net, param_info):
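        """Add the Adagrad update (and any auxiliary state) for one parameter
        to ``net`` / ``param_init_net``.  The exact operator used depends on
        whether the gradient is dense or a ``GradientSlice`` and on the
        rowWise / masking / pruning settings processed above.
        """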
        param = param_info.blob
        grad = param_info.grad

        if self.alpha <= 0:
            return

        self._clear_local_lr_multiplier()

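        # For dense parameters with LARS enabled, compute a per-parameter
        # learning-rate multiplier (layer-wise adaptive rate scaling) and
        # register it as a local LR multiplier.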
        if self.lars is not None and not isinstance(grad, core.GradientSlice):
            assert (
                self.weight_decay == 0
            ), "weight decay is not implemented for LARS yet"
            assert self.lars >= 0, "Lars offset must be nonnegative, got {}".format(
                self.lars
            )
            wd, trust, lr_max = self.create_lars_inputs(
                param_init_net, 0.0, 1.0, np.finfo(np.float32).max
            )
            lr_lars_multiplier = net.Lars(
                [param, grad, wd, trust, lr_max],
                self.make_unique_blob_name(str(param) + "_lars"),
                offset=self.lars,
                lr_min=0.0,
            )

            current_scope = scope.CurrentDeviceScope()
            self._add_local_lr_multiplier(
                lr_lars_multiplier,
                is_gpu_blob=(
                    current_scope is not None
                    and core.IsGPUDeviceType(current_scope.device_type)
                ),
            )

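        # Build the learning-rate blob according to the configured policy;
        # build_lr also returns the iteration blob, which is reused below for
        # SWA, weight scaling and the optional update counter.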
        lr, lr_iteration = self.build_lr(
            net,
            param_init_net,
            base_learning_rate=self.alpha,
            policy=self.policy,
            **(self.init_kwargs)
        )
        iteration = lr_iteration
        if self.counter_halflife > 0:
            self._aux_params.shared.append(iteration)

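        # Allocate the Adagrad accumulator ("squared sum") blob.  For rowWise
        # Adagrad a single accumulator value is kept per row, so only the
        # first dimension of the parameter shape is needed; regular Adagrad
        # keeps an accumulator with the full parameter shape.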
        if self.rowWise:
            logger.info(
                "Using engine {} for rowWise Adagrad to train param {}".format(
                    self.engine, param
                )
            )

            shapes, types = workspace.InferShapesAndTypes([param_init_net])
            if str(param) not in shapes:
                # Type/shape inference is not available for this param, fallback
                # on Shape/Slice logic
                shape = param_init_net.Shape(param, str(param) + "_shape")
                num_rows = param_init_net.Slice(
                    [shape], str(shape) + "_numrows", starts=[0], ends=[1]
                )
                param_squared_sum = param_init_net.ConstantFill(
                    num_rows,
                    str(param) + "_avg_squared_sum",
                    input_as_shape=1,
                    value=0.0,
                )
            else:
                param_squared_sum = param_init_net.ConstantFill(
                    [],
                    str(param) + "_avg_squared_sum",
                    shape=[shapes[str(param)][0]],
                    value=0.0,
                )
        else:
            logger.info(
                "Using engine {} for regular Adagrad to train param {}".format(
                    self.engine, param
                )
            )

            if self.engine in FP16_ENGINES:
                assert (
                    self.weight_decay == 0
                ), "weight decay is not tested for engine: {}".format(self.engine)

                shapes, types = workspace.InferShapesAndTypes([param_init_net])
                assert str(param) in shapes, shapes
                shape = shapes[str(param)]

                param_squared_sum = param_init_net.Float16ConstantFill(
                    [], str(param) + "_squared_sum", value=0.0, shape=shape
                )
            else:
                param_squared_sum = param_init_net.ConstantFill(
                    [param], str(param) + "_squared_sum", value=0.0
                )

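        # When masking is enabled, initialize the mask blob from whichever
        # source was configured: an in-memory numpy tensor, a serialized db,
        # or (for delayed pruning) an empty placeholder that is filled in
        # during training.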
        if self.use_mask is True:
            assert (
                self.weight_decay == 0
            ), "weight decay is not implemented for use_mask yet"

            if self.mask_tensor is not None:
                if not isinstance(grad, core.GradientSlice):
                    mask_blob = param_init_net.GivenTensorFill(
                        [],
                        [str(param) + "_mask"],
                        values=self.mask_tensor,
                        shape=self.mask_tensor.shape,
                    )
                else:
                    self.mask_tensor = self.mask_tensor.astype(np.uint8)
                    mask_blob = param_init_net.GivenTensorBoolFill(
                        [],
                        [str(param) + "_mask"],
                        values=self.mask_tensor,
                        shape=self.mask_tensor.shape,
                    )
                    mask_blob = param_init_net.Cast(mask_blob, to=core.DataType.UINT8)
                    mask_changed_blob = param_init_net.ConstantFill(
                        [],
                        [str(param) + "_mask_changed_blob"],
                        value=False,
                        dtype=core.DataType.BOOL,
                        shape=[1],
                    )
            elif (
                self.mask_db_path is not None or self.mask_db_type is not None
            ):  # mask is provided through a db file
                # if mask_blob_name is not given use the param name to derive mask name
                self.mask_blob_name = self.mask_blob_name or str(param) + "_mask"

                mask_blob = param_init_net.Load(
                    [],
                    self.mask_blob_name,
                    db=self.mask_db_path,
                    db_type=self.mask_db_type,
                    absolute_path=True,
                )

                if isinstance(grad, core.GradientSlice):
                    mask_changed_blob = param_init_net.ConstantFill(
                        [],
                        [str(param) + "_mask_changed_blob"],
                        value=False,
                        dtype=core.DataType.BOOL,
                        shape=[1],
                    )
            elif self.prune_delays:
                last_mask_updated_iter = param_init_net.ConstantFill(
                    [],
                    [str(param) + "_last_mask_updated_iter"],
                    value=-1,
                    dtype=core.DataType.INT64,
                    shape=[1],
                )

                if isinstance(grad, core.GradientSlice):
                    raise AssertionError(
                        "Prune Delays and Prune Ratios are currently not supported "
                        "for sparse operators"
                    )
                else:
                    mask_blob = param_init_net.GivenTensorFill(
                        [],
                        [str(param) + "_empty_mask"],
                        values=[],
                        dtype=core.DataType.FLOAT,
                        shape=[0],
                    )
            else:
                raise NotImplementedError(
                    "If mask is used, it needs a numpy array or a db file or "
                    "a delay iter needs to be provided"
                )

        self._aux_params.local.append(param_squared_sum)
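        # With a positive counter_halflife, also keep per-row update counters
        # (used by the RowWiseCounter op added below for sparse updates).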
        if self.counter_halflife > 0:
            shapes, types = workspace.InferShapesAndTypes([param_init_net])
            if str(param) not in shapes:
                shape = param_init_net.Shape(param, str(param) + "_shape")
                num_rows = param_init_net.Slice(
                    [shape], str(shape) + "_numrows", starts=[0], ends=[1]
                )
                update_counter = param_init_net.ConstantFill(
                    num_rows,
                    str(param) + "_update_counter",
                    input_as_shape=1,
                    value=0.0,
                    dtype=core.DataType.DOUBLE,
                )
                prev_update_iter = param_init_net.ConstantFill(
                    num_rows,
                    str(param) + "_prev_update_iter",
                    input_as_shape=1,
                    value=0,
                    dtype=core.DataType.INT64,
                )
            else:
                update_counter = param_init_net.ConstantFill(
                    [],
                    str(param) + "_update_counter",
                    shape=[shapes[str(param)][0]],
                    value=0.0,
                    dtype=core.DataType.DOUBLE,
                )
                prev_update_iter = param_init_net.ConstantFill(
                    [],
                    str(param) + "_prev_update_iter",
                    shape=[shapes[str(param)][0]],
                    value=0,
                    dtype=core.DataType.INT64,
                )
            self._aux_params.local.append(update_counter)
            self._aux_params.local.append(prev_update_iter)

        if self.rowWise:
            assert isinstance(grad, core.GradientSlice), (
                "If SparseAdagrad with rowWise=True, gradient must be "
                "a GradientSlice. Please ensure that rowWise is not enabled "
                "for the dense Adagrad optimizer, as it is not supported."
            )

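        # Determine the effective weight decay for this parameter: decay is
        # skipped for 1-d parameters (typically biases), for both sparse and
        # dense gradients.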
        shapes, _ = workspace.InferShapesAndTypes([param_init_net])
        param_shape = shapes[str(param)]
        weight_decay = 0.0
        if isinstance(grad, core.GradientSlice):
            if len(param_shape) == 1:
                weight_decay = 0.0
                logger.warning(
                    "SKIPPING weight decay on 1d sparse param: {}.shape is {}".format(
                        str(param), param_shape
                    )
                )
            else:
                weight_decay = self.weight_decay
        else:
            # Skip weight decay for 1d parameters
            if len(param_shape) == 1:
                weight_decay = 0.0
                logger.warning(
                    "SKIPPING weight decay on 1d dense param: {}.shape is {}".format(
                        str(param), param_shape
                    )
                )
            else:
                weight_decay = self.weight_decay
        logger.info(
            "weight_decay for {} (shape:{}): {}".format(
                str(param), param_shape, weight_decay
            )
        )

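        # Sparse gradients (GradientSlice) are deduplicated and dispatched to
        # SparseAdagrad / RowWiseSparseAdagrad or their masked variants;
        # dense gradients use the Adagrad / MaskedAdagrad operators below.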
        if isinstance(grad, core.GradientSlice):
            assert (
                self.decay == 1.0
            ), "Decay is not implemented for SparseAdagrad and must be set to 1"
            grad = self.dedup(net, self.sparse_dedup_aggregator, grad)

            input_args = [param, param_squared_sum, grad.indices, grad.values, lr]
            output_args = [param, param_squared_sum]
            if self.rowWise:
                if self.use_mask is True:
                    op = "MaskedRowWiseSparseAdagrad"
                    assert (
                        weight_decay == 0
                    ), "weight decay is not implemented for {} yet".format(op)
                    input_args += [mask_blob, mask_changed_blob]
                else:
                    op = "RowWiseSparseAdagrad"
            else:
                if self.use_mask is True:
                    op = "MaskedSparseAdagrad"
                    assert (
                        weight_decay == 0
                    ), "weight decay is not implemented for {} yet".format(op)
                    input_args += [mask_blob, mask_changed_blob]
                else:
                    op = "SparseAdagrad"
            logger.info("using {} for {}".format(op, str(param)))

            if self.prune_delays:
                input_args += [lr_iteration, last_mask_updated_iter]
                output_args += [mask_blob, last_mask_updated_iter]

            if weight_decay > 0:
                net.__getattr__(op)(
                    input_args,
                    output_args,
                    epsilon=self.epsilon,
                    weight_decay=weight_decay,
                    engine=self.engine,
                )
            else:
                net.__getattr__(op)(
                    input_args, output_args, epsilon=self.epsilon, engine=self.engine
                )
            if self.counter_halflife > 0:
                net.RowWiseCounter(
                    [prev_update_iter, update_counter, grad.indices, iteration],
                    [prev_update_iter, update_counter],
                    counter_halflife=self.counter_halflife,
                )
        else:
            input_args = [param, param_squared_sum, grad, lr]
            output_args = [param, param_squared_sum]

            if self.output_effective_lr_and_update:
                assert (
                    self.use_mask is False
                ), "MaskedAdagrad doesn't support outputting effective_lr_and_update"
                output_args.append(str(param) + "_effective_lr")
                output_args.append(str(param) + "_update")
            elif self.output_effective_lr:
                assert (
                    self.use_mask is False
                ), "MaskedAdagrad doesn't support outputting effective_lr"
                output_args.append(str(param) + "_effective_lr")

            if self.use_mask is True:
                input_args += [mask_blob]

            if self.prune_delays:
                input_args += [lr_iteration, last_mask_updated_iter]
                output_args += [mask_blob, last_mask_updated_iter]

            if self.use_mask:
                assert (
                    weight_decay == 0
                ), "weight decay is not implemented for use_mask yet"
                net.MaskedAdagrad(
                    input_args,
                    output_args,
                    epsilon=self.epsilon,
                    decay=float(self.decay),
                    block_size=self.prune_block_size,
                    delays=self.prune_delays,
                    prune_ratios=self.prune_ratios,
                    engine=self.engine,
                )
            else:
                if weight_decay > 0:
                    net.Adagrad(
                        input_args,
                        output_args,
                        epsilon=self.epsilon,
                        decay=float(self.decay),
                        weight_decay=weight_decay,
                        engine=self.engine,
                    )
                else:
                    net.Adagrad(
                        input_args,
                        output_args,
                        epsilon=self.epsilon,
                        decay=float(self.decay),
                        engine=self.engine,
                    )

if self.swa_enabled:
|
|
|
|
|
param_swa = str(param) + "_swa"
|
|
|
|
|
if not param_init_net.BlobIsDefined(param_swa):
|
2020-09-10 02:35:22 +00:00
|
|
|
param_init_net.ConstantFill([param], param_swa, value=0.0)
|
2020-04-09 19:46:58 +00:00
|
|
|
self._aux_params.local.append(param_swa)
|
2020-03-20 15:13:24 +00:00
|
|
|
|
|
|
|
|
net.SWA(
|
2020-03-21 04:34:39 +00:00
|
|
|
[param, param_swa, lr_iteration],
|
2020-03-20 15:13:24 +00:00
|
|
|
[param, param_swa],
|
|
|
|
|
avg_start=self.swa_avg_start_it,
|
|
|
|
|
avg_end=self.swa_avg_end_it,
|
|
|
|
|
feedback_start=self.swa_feedback_start_it,
|
|
|
|
|
feedback_step=self.swa_feedback_step,
|
|
|
|
|
feedback_end=self.swa_feedback_end_it,
|
2020-04-09 19:46:58 +00:00
|
|
|
)
|
2020-03-21 04:34:39 +00:00
|
|
|
if self.weight_scale:
|
|
|
|
|
net.WeightScale(
|
|
|
|
|
[param, lr_iteration],
|
|
|
|
|
[param],
|
|
|
|
|
stepsize=self.weight_scale.stepsize,
|
|
|
|
|
upper_bound_iter=self.weight_scale.upper_bound_iter,
|
2020-09-10 02:35:22 +00:00
|
|
|
scale=float(self.weight_scale.scale),
|
|
|
|
|
)
|
2020-03-21 04:34:39 +00:00
|
|
|
if self.weight_scale.to_aux:
|
|
|
|
|
net.WeightScale(
|
|
|
|
|
[param_squared_sum, lr_iteration],
|
|
|
|
|
[param_squared_sum],
|
|
|
|
|
stepsize=self.weight_scale.stepsize,
|
|
|
|
|
upper_bound_iter=self.weight_scale.upper_bound_iter,
|
2020-09-10 02:35:22 +00:00
|
|
|
scale=float(self.weight_scale.scale),
|
|
|
|
|
)
|
2020-03-20 15:13:24 +00:00
|
|
|
|
2017-05-09 20:14:07 +00:00
|
|
|
def scale_learning_rate(self, scale):
|
|
|
|
|
self.alpha *= scale
|
|
|
|
|
return
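All of the branches above (masked/pruning updates, plain Adagrad with and without weight decay, SWA averaging, weight scaling) are driven from the same entry point: attaching the optimizer to a model and running its nets. Below is a minimal end-to-end sketch using the `build_adagrad` helper defined later in this module; the toy model is hypothetical, and the `weight_decay` keyword is assumed to be forwarded to `AdagradOptimizer`, as the branch above suggests.
```
import numpy as np
from caffe2.python import brew, model_helper, optimizer, workspace

# Hypothetical toy model: one FC layer trained with a squared L2 loss.
model = model_helper.ModelHelper(name="adagrad_example")
fc = brew.fc(model, "data", "fc", dim_in=4, dim_out=1)
dist = model.net.SquaredL2Distance([fc, "label"], "dist")
loss = model.net.AveragedLoss(dist, "loss")
model.AddGradientOperators([loss])

# Attach Adagrad; extra kwargs are forwarded to AdagradOptimizer.
optimizer.build_adagrad(
    model, base_learning_rate=0.01, epsilon=1e-4, weight_decay=1e-5
)

workspace.FeedBlob("data", np.random.rand(8, 4).astype(np.float32))
workspace.FeedBlob("label", np.random.rand(8, 1).astype(np.float32))
workspace.RunNetOnce(model.param_init_net)
workspace.CreateNet(model.net)
workspace.RunNet(model.net)  # one Adagrad step over every optimized parameter
```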
|
|
|
|
|
|
2017-03-08 02:44:45 +00:00
|
|
|
|
2018-07-14 01:40:56 +00:00
|
|
|
class WngradOptimizer(Optimizer):
|
2020-09-10 02:35:22 +00:00
|
|
|
def __init__(
|
|
|
|
|
self,
|
|
|
|
|
alpha=1.0,
|
|
|
|
|
epsilon=1e-9,
|
|
|
|
|
policy="fixed",
|
|
|
|
|
sparse_dedup_aggregator=None,
|
|
|
|
|
engine="",
|
|
|
|
|
moment_init=100.0,
|
|
|
|
|
lars=None,
|
|
|
|
|
output_effective_lr=False,
|
|
|
|
|
output_effective_lr_and_update=False,
|
|
|
|
|
**kwargs
|
|
|
|
|
):
|
2018-07-14 01:40:56 +00:00
|
|
|
super(WngradOptimizer, self).__init__()
|
|
|
|
|
self.alpha = alpha
|
|
|
|
|
self.epsilon = epsilon
|
|
|
|
|
self.policy = policy
|
|
|
|
|
self.sparse_dedup_aggregator = sparse_dedup_aggregator
|
|
|
|
|
self.engine = engine
|
|
|
|
|
self.moment_init = moment_init
|
|
|
|
|
self.lars = lars
|
|
|
|
|
self.output_effective_lr = output_effective_lr
|
|
|
|
|
self.output_effective_lr_and_update = output_effective_lr_and_update
|
|
|
|
|
self.init_kwargs = kwargs
|
|
|
|
|
|
|
|
|
|
def _run(self, net, param_init_net, param_info):
|
|
|
|
|
param = param_info.blob
|
|
|
|
|
grad = param_info.grad
|
|
|
|
|
|
|
|
|
|
if self.alpha <= 0:
|
|
|
|
|
return
|
|
|
|
|
|
|
|
|
|
self._clear_local_lr_multiplier()
|
|
|
|
|
|
|
|
|
|
if self.lars is not None and not isinstance(grad, core.GradientSlice):
|
2020-09-10 02:35:22 +00:00
|
|
|
assert self.lars >= 0, "Lars offset must be nonnegative, got {}".format(
|
|
|
|
|
self.lars
|
|
|
|
|
)
|
2018-08-02 18:41:15 +00:00
|
|
|
wd, trust, lr_max = self.create_lars_inputs(
|
2020-09-10 02:35:22 +00:00
|
|
|
param_init_net, 0.0, 1.0, np.finfo(np.float32).max
|
|
|
|
|
)
|
2018-07-14 01:40:56 +00:00
|
|
|
lr_lars_multiplier = net.Lars(
|
2018-08-02 18:41:15 +00:00
|
|
|
[param, grad, wd, trust, lr_max],
|
2018-07-14 01:40:56 +00:00
|
|
|
self.make_unique_blob_name(str(param) + "_lars"),
|
2018-08-02 18:41:15 +00:00
|
|
|
offset=self.lars,
|
2020-09-10 02:35:22 +00:00
|
|
|
lr_min=0.0,
|
|
|
|
|
)
|
2018-07-14 01:40:56 +00:00
|
|
|
current_scope = scope.CurrentDeviceScope()
|
|
|
|
|
self._add_local_lr_multiplier(
|
|
|
|
|
lr_lars_multiplier,
|
2020-09-10 02:35:22 +00:00
|
|
|
is_gpu_blob=(
|
|
|
|
|
current_scope is not None
|
|
|
|
|
and core.IsGPUDeviceType(current_scope.device_type)
|
|
|
|
|
),
|
2018-07-14 01:40:56 +00:00
|
|
|
)
|
|
|
|
|
|
|
|
|
|
lr, _ = self.build_lr(
|
2020-09-10 02:35:22 +00:00
|
|
|
net,
|
|
|
|
|
param_init_net,
|
2018-07-14 01:40:56 +00:00
|
|
|
base_learning_rate=self.alpha,
|
|
|
|
|
policy=self.policy,
|
|
|
|
|
**(self.init_kwargs)
|
|
|
|
|
)
|
|
|
|
|
|
|
|
|
|
moment = param_init_net.ConstantFill(
|
2020-09-10 02:35:22 +00:00
|
|
|
[], str(param) + "_moment", shape=[1], value=self.moment_init
|
2018-07-14 01:40:56 +00:00
|
|
|
)
|
|
|
|
|
|
|
|
|
|
self._aux_params.local.append(moment)
|
|
|
|
|
|
|
|
|
|
if isinstance(grad, core.GradientSlice):
|
|
|
|
|
grad = self.dedup(net, self.sparse_dedup_aggregator, grad)
|
|
|
|
|
net.SparseWngrad(
|
|
|
|
|
[param, moment, grad.indices, grad.values, lr],
|
|
|
|
|
[param, moment],
|
|
|
|
|
epsilon=self.epsilon,
|
2020-09-10 02:35:22 +00:00
|
|
|
engine=self.engine,
|
2018-07-14 01:40:56 +00:00
|
|
|
)
|
|
|
|
|
else:
|
|
|
|
|
output_args = [param, moment]
|
|
|
|
|
if self.output_effective_lr_and_update:
|
2020-09-10 02:35:22 +00:00
|
|
|
output_args.append(str(param) + "_effective_lr")
|
|
|
|
|
output_args.append(str(param) + "_update")
|
2018-07-14 01:40:56 +00:00
|
|
|
elif self.output_effective_lr:
|
2020-09-10 02:35:22 +00:00
|
|
|
output_args.append(str(param) + "_effective_lr")
|
2018-07-14 01:40:56 +00:00
|
|
|
|
|
|
|
|
net.Wngrad(
|
|
|
|
|
[param, moment, grad, lr],
|
|
|
|
|
output_args,
|
|
|
|
|
epsilon=self.epsilon,
|
2020-09-10 02:35:22 +00:00
|
|
|
engine=self.engine,
|
2018-07-14 01:40:56 +00:00
|
|
|
)
|
|
|
|
|
|
|
|
|
|
def scale_learning_rate(self, scale):
|
|
|
|
|
self.alpha *= scale
|
|
|
|
|
return
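Like the other optimizers in this module, `WngradOptimizer` is normally attached through its `build_wngrad` helper defined further down. A hedged sketch, assuming `model` is a `ModelHelper` that already has a loss and gradient operators (as in the Adagrad sketch above):
```
from caffe2.python import optimizer

# base_learning_rate maps to alpha; moment_init seeds the shared accumulator.
optimizer.build_wngrad(
    model,
    base_learning_rate=1.0,
    policy="fixed",
    moment_init=100.0,
    output_effective_lr=True,  # also emit "<param>_effective_lr" blobs
)
```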
|
|
|
|
|
|
|
|
|
|
|
2020-04-15 06:01:58 +00:00
|
|
|
class StormOptimizer(Optimizer):
|
2020-09-10 02:35:22 +00:00
|
|
|
def __init__(
|
|
|
|
|
self,
|
|
|
|
|
lr=0.1,
|
|
|
|
|
momentum=10.0,
|
|
|
|
|
beta=0.1,
|
|
|
|
|
grad_sq_init=0.01,
|
|
|
|
|
policy="fixed",
|
|
|
|
|
sparse_dedup_aggregator=None,
|
|
|
|
|
lars=None,
|
|
|
|
|
**kwargs
|
|
|
|
|
):
|
2020-04-15 06:01:58 +00:00
|
|
|
"""Constructor function to add STORM Optimizer
|
|
|
|
|
|
|
|
|
|
Args:
|
|
|
|
|
lr: learning rate scaling (called k in the original paper)
|
|
|
|
|
momentum: momentum scaling (called c in the original paper)
|
|
|
|
|
beta: initial value of denominator in adaptive learning rate (
|
|
|
|
|
called w in the original paper)
|
|
|
|
|
grad_sq_init: initial value of gradient squared accumulator.
|
|
|
|
|
policy: specifies how learning rate should be applied, options are
|
|
|
|
|
'fixed', 'step', 'exp', etc.
|
|
|
|
|
sparse_dedup_aggregator: specifies deduplication strategy for
|
|
|
|
|
gradient slices; it applies only when sparse gradients are used. Options
|
|
|
|
|
include 'mean' and 'sum'.
|
|
|
|
|
lars: lars offset.
|
|
|
|
|
"""
|
|
|
|
|
super(StormOptimizer, self).__init__()
|
|
|
|
|
self.lr = lr
|
|
|
|
|
self.momentum = momentum
|
|
|
|
|
self.beta = beta
|
|
|
|
|
self.grad_sq_init = grad_sq_init
|
|
|
|
|
self.policy = policy
|
|
|
|
|
self.sparse_dedup_aggregator = sparse_dedup_aggregator
|
|
|
|
|
self.lars = lars
|
|
|
|
|
self.init_kwargs = kwargs
|
|
|
|
|
|
|
|
|
|
def _run(self, net, param_init_net, param_info):
|
|
|
|
|
param = param_info.blob
|
|
|
|
|
grad = param_info.grad
|
|
|
|
|
|
|
|
|
|
if self.lr <= 0:
|
|
|
|
|
return
|
|
|
|
|
|
|
|
|
|
self._clear_local_lr_multiplier()
|
|
|
|
|
|
|
|
|
|
if self.lars is not None and not isinstance(grad, core.GradientSlice):
|
2020-09-10 02:35:22 +00:00
|
|
|
assert self.lars >= 0, "Lars offset must be nonnegative, got {}".format(
|
|
|
|
|
self.lars
|
|
|
|
|
)
|
2020-04-15 06:01:58 +00:00
|
|
|
wd, trust, lr_max = self.create_lars_inputs(
|
2020-09-10 02:35:22 +00:00
|
|
|
param_init_net, 0.0, 1.0, np.finfo(np.float32).max
|
|
|
|
|
)
|
2020-04-15 06:01:58 +00:00
|
|
|
lr_lars_multiplier = net.Lars(
|
|
|
|
|
[param, grad, wd, trust, lr_max],
|
2020-09-10 02:35:22 +00:00
|
|
|
self.make_unique_blob_name(str(param) + "_lars"),
|
2020-04-15 06:01:58 +00:00
|
|
|
offset=self.lars,
|
2020-09-10 02:35:22 +00:00
|
|
|
lr_min=0.0,
|
|
|
|
|
)
|
2020-04-15 06:01:58 +00:00
|
|
|
current_scope = scope.CurrentDeviceScope()
|
|
|
|
|
self._add_local_lr_multiplier(
|
|
|
|
|
lr_lars_multiplier,
|
2020-09-10 02:35:22 +00:00
|
|
|
is_gpu_blob=(
|
|
|
|
|
current_scope is not None
|
|
|
|
|
and core.IsGPUDeviceType(current_scope.device_type)
|
|
|
|
|
),
|
2020-04-15 06:01:58 +00:00
|
|
|
)
|
|
|
|
|
|
|
|
|
|
lr, _ = self.build_lr(
|
2020-09-10 02:35:22 +00:00
|
|
|
net,
|
|
|
|
|
param_init_net,
|
2020-04-15 06:01:58 +00:00
|
|
|
base_learning_rate=self.lr,
|
|
|
|
|
policy=self.policy,
|
|
|
|
|
**(self.init_kwargs)
|
|
|
|
|
)
|
|
|
|
|
|
2020-09-10 02:35:22 +00:00
|
|
|
moment = param_init_net.ConstantFill(param, str(param) + "_moment", value=0.0)
|
2020-04-15 06:01:58 +00:00
|
|
|
self._aux_params.local.append(moment)
|
|
|
|
|
|
|
|
|
|
grad_sq_sum = param_init_net.ConstantFill(
|
2020-09-10 02:35:22 +00:00
|
|
|
[], str(param) + "_grad_sq_sum", shape=[1], value=self.grad_sq_init
|
|
|
|
|
)
|
2020-04-15 06:01:58 +00:00
|
|
|
self._aux_params.local.append(grad_sq_sum)
|
|
|
|
|
|
|
|
|
|
if isinstance(grad, core.GradientSlice):
|
|
|
|
|
grad = self.dedup(net, self.sparse_dedup_aggregator, grad)
|
|
|
|
|
net.SparseStorm(
|
|
|
|
|
[param, moment, grad_sq_sum, grad.values, grad.indices, lr],
|
|
|
|
|
[param, moment, grad_sq_sum],
|
|
|
|
|
momentum=self.momentum,
|
2020-09-10 02:35:22 +00:00
|
|
|
beta=self.beta,
|
2020-04-15 06:01:58 +00:00
|
|
|
)
|
|
|
|
|
else:
|
|
|
|
|
net.Storm(
|
|
|
|
|
[param, moment, grad_sq_sum, grad, lr],
|
|
|
|
|
[param, moment, grad_sq_sum],
|
|
|
|
|
momentum=self.momentum,
|
2020-09-10 02:35:22 +00:00
|
|
|
beta=self.beta,
|
2020-04-15 06:01:58 +00:00
|
|
|
)
|
|
|
|
|
|
|
|
|
|
def scale_learning_rate(self, scale):
|
|
|
|
|
self.lr *= scale
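A corresponding hedged sketch for STORM via `build_storm` (defined later in this module), again assuming a prepared `model` with gradient operators; the keywords mirror the constructor arguments documented above:
```
from caffe2.python import optimizer

# lr scales the step size; momentum and beta control the variance-reduced
# update and the adaptive denominator; grad_sq_init seeds the accumulator.
optimizer.build_storm(
    model,
    base_learning_rate=0.1,
    momentum=10.0,
    beta=0.1,
    grad_sq_init=0.01,
)
```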
|
|
|
|
|
|
|
|
|
|
|
2018-07-25 03:01:20 +00:00
|
|
|
class AdadeltaOptimizer(Optimizer):
|
2020-09-10 02:35:22 +00:00
|
|
|
def __init__(
|
|
|
|
|
self,
|
|
|
|
|
alpha=0.01,
|
|
|
|
|
epsilon=1e-4,
|
|
|
|
|
decay=0.95,
|
|
|
|
|
policy="fixed",
|
|
|
|
|
sparse_dedup_aggregator=None,
|
|
|
|
|
engine="",
|
|
|
|
|
**kwargs
|
|
|
|
|
):
|
2018-07-25 03:01:20 +00:00
|
|
|
"""Constructor function to add Adadelta Optimizer
|
|
|
|
|
|
|
|
|
|
Args:
|
|
|
|
|
alpha: learning rate
|
|
|
|
|
epsilon: attribute of Adadelta to avoid numerical issues
|
|
|
|
|
decay: attribute of Adadelta to decay the squared gradient sum
|
|
|
|
|
policy: specifies how learning rate should be applied, options are
|
|
|
|
|
"fixed", "step", "exp", etc.
|
|
|
|
|
sparse_dedup_aggregator: specifies deduplication strategy for
|
|
|
|
|
gradient slices; it applies only when sparse gradients are used. Options
|
|
|
|
|
include "mean" and "sum".
|
|
|
|
|
engine: the engine used, options include "", "CUDNN", etc.
|
|
|
|
|
"""
|
|
|
|
|
super(AdadeltaOptimizer, self).__init__()
|
|
|
|
|
self.alpha = alpha
|
|
|
|
|
self.epsilon = epsilon
|
|
|
|
|
self.decay = decay
|
|
|
|
|
self.policy = policy
|
|
|
|
|
self.sparse_dedup_aggregator = sparse_dedup_aggregator
|
|
|
|
|
self.engine = engine
|
|
|
|
|
self.init_kwargs = kwargs
|
|
|
|
|
|
|
|
|
|
def _run(self, net, param_init_net, param_info):
|
|
|
|
|
param = param_info.blob
|
|
|
|
|
grad = param_info.grad
|
|
|
|
|
|
|
|
|
|
if self.alpha <= 0:
|
|
|
|
|
return
|
|
|
|
|
|
|
|
|
|
lr, _ = self.build_lr(
|
2020-09-10 02:35:22 +00:00
|
|
|
net,
|
|
|
|
|
param_init_net,
|
2018-07-25 03:01:20 +00:00
|
|
|
base_learning_rate=self.alpha,
|
|
|
|
|
policy=self.policy,
|
|
|
|
|
**(self.init_kwargs)
|
|
|
|
|
)
|
|
|
|
|
|
|
|
|
|
moment = param_init_net.ConstantFill(
|
2020-09-10 02:35:22 +00:00
|
|
|
[param], str(param) + "_squared_moment", value=0.0
|
|
|
|
|
)
|
2018-07-25 03:01:20 +00:00
|
|
|
|
|
|
|
|
moment_update = param_init_net.ConstantFill(
|
2020-09-10 02:35:22 +00:00
|
|
|
[param], str(param) + "_squared_moment_update", value=0.0
|
|
|
|
|
)
|
2018-07-25 03:01:20 +00:00
|
|
|
|
|
|
|
|
self._aux_params.local.append(moment)
|
|
|
|
|
self._aux_params.local.append(moment_update)
|
|
|
|
|
|
|
|
|
|
if isinstance(grad, core.GradientSlice):
|
|
|
|
|
grad = self.dedup(net, self.sparse_dedup_aggregator, grad)
|
|
|
|
|
net.SparseAdadelta(
|
2020-09-10 02:35:22 +00:00
|
|
|
[param, moment, moment_update, grad.indices, grad.values, lr],
|
|
|
|
|
[param, moment, moment_update],
|
2018-07-25 03:01:20 +00:00
|
|
|
epsilon=self.epsilon,
|
|
|
|
|
decay=self.decay,
|
2020-09-10 02:35:22 +00:00
|
|
|
engine=self.engine,
|
|
|
|
|
)
|
2018-07-25 03:01:20 +00:00
|
|
|
else:
|
|
|
|
|
net.Adadelta(
|
|
|
|
|
[param, moment, moment_update, grad, lr],
|
|
|
|
|
[param, moment, moment_update],
|
|
|
|
|
epsilon=self.epsilon,
|
|
|
|
|
decay=self.decay,
|
2020-09-10 02:35:22 +00:00
|
|
|
engine=self.engine,
|
2018-07-25 03:01:20 +00:00
|
|
|
)
|
|
|
|
|
|
|
|
|
|
def scale_learning_rate(self, scale):
|
|
|
|
|
self.alpha *= scale
|
|
|
|
|
return
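A hedged usage sketch via `build_adadelta` (defined later in this module), assuming a prepared `model` with gradient operators:
```
from caffe2.python import optimizer

# decay controls the running averages of squared gradients and updates;
# epsilon guards against division by zero.
optimizer.build_adadelta(model, base_learning_rate=0.01, epsilon=1e-4, decay=0.95)
```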
|
|
|
|
|
|
|
|
|
|
|
2017-03-08 02:44:45 +00:00
|
|
|
class FtrlOptimizer(Optimizer):
|
2020-09-10 02:35:22 +00:00
|
|
|
def __init__(
|
|
|
|
|
self,
|
|
|
|
|
alpha=0.01,
|
|
|
|
|
beta=1e-4,
|
|
|
|
|
lambda1=0,
|
|
|
|
|
lambda2=0,
|
|
|
|
|
sparse_dedup_aggregator=None,
|
|
|
|
|
engine="",
|
|
|
|
|
):
|
2017-04-17 17:06:49 +00:00
|
|
|
super(FtrlOptimizer, self).__init__()
|
2017-03-08 02:44:45 +00:00
|
|
|
self.alpha = alpha
|
|
|
|
|
self.beta = beta
|
|
|
|
|
self.lambda1 = lambda1
|
|
|
|
|
self.lambda2 = lambda2
|
|
|
|
|
self.sparse_dedup_aggregator = sparse_dedup_aggregator
|
|
|
|
|
self.engine = engine
|
|
|
|
|
|
2017-05-26 05:01:54 +00:00
|
|
|
def _run(self, net, param_init_net, param_info):
|
|
|
|
|
param = param_info.blob
|
|
|
|
|
grad = param_info.grad
|
|
|
|
|
|
2017-03-08 02:44:45 +00:00
|
|
|
if self.alpha <= 0:
|
|
|
|
|
return
|
|
|
|
|
|
|
|
|
|
nz = param_init_net.ConstantFill(
|
2020-09-10 02:35:22 +00:00
|
|
|
[param], str(param) + "_ftrl_nz", extra_shape=[2], value=0.0
|
2017-03-08 02:44:45 +00:00
|
|
|
)
|
2017-04-17 17:06:49 +00:00
|
|
|
self._aux_params.local.append(nz)
|
2017-03-08 02:44:45 +00:00
|
|
|
if isinstance(grad, core.GradientSlice):
|
|
|
|
|
grad = self.dedup(net, self.sparse_dedup_aggregator, grad)
|
|
|
|
|
net.SparseFtrl(
|
|
|
|
|
[param, nz, grad.indices, grad.values],
|
|
|
|
|
[param, nz],
|
|
|
|
|
engine=self.engine,
|
|
|
|
|
alpha=self.alpha,
|
|
|
|
|
beta=self.beta,
|
|
|
|
|
lambda1=self.lambda1,
|
2020-09-10 02:35:22 +00:00
|
|
|
lambda2=self.lambda2,
|
2017-03-08 02:44:45 +00:00
|
|
|
)
|
|
|
|
|
else:
|
|
|
|
|
net.Ftrl(
|
|
|
|
|
[param, nz, grad],
|
|
|
|
|
[param, nz],
|
|
|
|
|
engine=self.engine,
|
|
|
|
|
alpha=self.alpha,
|
|
|
|
|
beta=self.beta,
|
|
|
|
|
lambda1=self.lambda1,
|
2020-09-10 02:35:22 +00:00
|
|
|
lambda2=self.lambda2,
|
2017-03-08 02:44:45 +00:00
|
|
|
)
|
|
|
|
|
|
2017-05-09 20:14:07 +00:00
|
|
|
def scale_learning_rate(self, scale):
|
|
|
|
|
self.alpha *= scale
|
|
|
|
|
return
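FTRL is attached through `build_ftrl` (defined later in this module); note that the helper takes an `engine` argument rather than a positional base learning rate, and the per-coordinate step size is set with `alpha`. A hedged sketch, assuming a prepared `model`:
```
from caffe2.python import optimizer

# engine="SIMD" requires the Ftrl_ENGINE_SIMD / SparseFtrl_ENGINE_SIMD ops;
# pass engine="" when they are not built in.
optimizer.build_ftrl(
    model,
    engine="",
    alpha=0.01,
    beta=1e-4,
    lambda1=0.001,  # L1 strength
    lambda2=0.001,  # L2 strength
)
```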
|
|
|
|
|
|
|
|
|
|
|
2018-07-06 20:38:36 +00:00
|
|
|
class GFtrlOptimizer(Optimizer):
|
|
|
|
|
"""Group Lasso FTRL Optimizer."""
|
|
|
|
|
|
2020-09-10 02:35:22 +00:00
|
|
|
def __init__(
|
|
|
|
|
self,
|
|
|
|
|
alpha=0.01,
|
|
|
|
|
beta=1e-4,
|
|
|
|
|
lambda1=0,
|
|
|
|
|
lambda2=0,
|
|
|
|
|
sparse_dedup_aggregator=None,
|
|
|
|
|
engine="",
|
|
|
|
|
):
|
2018-07-06 20:38:36 +00:00
|
|
|
super(GFtrlOptimizer, self).__init__()
|
|
|
|
|
self.alpha = alpha
|
|
|
|
|
self.beta = beta
|
|
|
|
|
self.lambda1 = lambda1
|
|
|
|
|
self.lambda2 = lambda2
|
|
|
|
|
self.sparse_dedup_aggregator = sparse_dedup_aggregator
|
|
|
|
|
self.engine = engine
|
|
|
|
|
|
|
|
|
|
def _run(self, net, param_init_net, param_info):
|
|
|
|
|
param = param_info.blob
|
|
|
|
|
grad = param_info.grad
|
|
|
|
|
|
|
|
|
|
if self.alpha <= 0:
|
|
|
|
|
return
|
|
|
|
|
|
|
|
|
|
nz = param_init_net.ConstantFill(
|
2020-09-10 02:35:22 +00:00
|
|
|
[param], str(param) + "_gftrl_nz", extra_shape=[2], value=0.0
|
2018-07-06 20:38:36 +00:00
|
|
|
)
|
|
|
|
|
self._aux_params.local.append(nz)
|
|
|
|
|
net.GFtrl(
|
|
|
|
|
[param, nz, grad],
|
|
|
|
|
[param, nz],
|
|
|
|
|
engine=self.engine,
|
|
|
|
|
alpha=self.alpha,
|
|
|
|
|
beta=self.beta,
|
|
|
|
|
lambda1=self.lambda1,
|
2020-09-10 02:35:22 +00:00
|
|
|
lambda2=self.lambda2,
|
2018-07-06 20:38:36 +00:00
|
|
|
)
|
|
|
|
|
|
|
|
|
|
def scale_learning_rate(self, scale):
|
|
|
|
|
self.alpha *= scale
|
|
|
|
|
return
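A hedged sketch for the group-lasso variant via `build_gftrl` (defined later in this module), assuming a prepared `model`; note that `GFtrlOptimizer._run` above has no `GradientSlice` branch, so it is intended for dense gradients:
```
from caffe2.python import optimizer

# lambda1 acts as the group-sparsity (group lasso) strength.
optimizer.build_gftrl(
    model, engine="", alpha=0.01, beta=1e-4, lambda1=0.001, lambda2=0.001
)
```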
|
|
|
|
|
|
|
|
|
|
|
2017-03-08 02:44:45 +00:00
|
|
|
class AdamOptimizer(Optimizer):
|
2020-09-10 02:35:22 +00:00
|
|
|
def __init__(
|
|
|
|
|
self,
|
|
|
|
|
alpha=0.001,
|
|
|
|
|
beta1=0.9,
|
|
|
|
|
beta2=0.999,
|
|
|
|
|
epsilon=1e-8,
|
|
|
|
|
policy="fixed",
|
|
|
|
|
use_lr_adaption=False,
|
|
|
|
|
lr_alpha=0.01,
|
|
|
|
|
normalized_lr_adaption=True,
|
|
|
|
|
sparse_dedup_aggregator=None,
|
|
|
|
|
rowWise=False,
|
|
|
|
|
engine="",
|
|
|
|
|
enableRAdam=False,
|
|
|
|
|
**kwargs
|
|
|
|
|
):
|
2017-04-17 17:06:49 +00:00
|
|
|
super(AdamOptimizer, self).__init__()
|
2017-03-08 02:44:45 +00:00
|
|
|
self.alpha = alpha
|
|
|
|
|
self.beta1 = beta1
|
|
|
|
|
self.beta2 = beta2
|
|
|
|
|
self.epsilon = epsilon
|
|
|
|
|
self.policy = policy
|
Update from facebook (#7696)
* Fix handling of empty batches in SumReduceDimsOp
As titled
* Deferrable async_scheduling finishRun fix
Proper order of finishing run operations in deferrable_async_scheduling net
* Simplify exception handling in async_scheduling
Simplify exception handling, no need to busy wait, thread that processes the
last task can finish the run
* [C2]worker_coordinator_memorize_worker_ids
As titled. This is related to T28689868, where the number of blobs we want to create is equal to the number of worker ids
* Add unit test for nets with no type set
* Ignore total length argument in sympolic_pad_packed_sequence
1- There was a mistake in the code that total_length was added to the wrong symbolic function (pack_padded_sequence) instead of (pad_packed_sequence)
2- No need to throw an exception if total_length is given since it is only used to enable data_parallel training on multi-gpus and doesn't have anything to do with onnx export, so just ignore it. https://fburl.com/tk4gciqp
* Add support for MKLDNN to async_scheduling
Just add MKLDNN as a possible CPU option to async_scheduling's pool function
* [AuFL][ensemble] support branch output for prediction
This diff supports using predictions from different branches and thus enables model ensembling (not fully independent).
* Fix a bug in add_loss in layer_model_helper
As titled.
* Support lradaption for adam
1.lr adaption operator
2.apply to dense adam
* Perf tweaks for async_scheduling
Restore single pool option + remove unnecessary (no-ops) calls
* add quantization to SparseSimdAdagradOp
add a bunch of quantization signatures to SparseSimdAdagradOp, implementations to come next
* [sr] [codemod] Change all SR callsites to use new API
@allow-large-files
This diff refactors all callsites of SR to use the slightly changed API introduced in the diff below. Really what this means is that you need to include the correct header. Also if you were using `ClientFactory::newFactory` you need to not prefix it with `ClientFactory::`.
```
cd ~/fbsource/fbcode
find ./ -type f -exec sed -i -e 's:#include "servicerouter/client/cpp2/ClientFactory.h":#include "servicerouter/client/cpp2/ServiceRouter.h":' -e 's:#include <servicerouter/client/cpp2/ClientFactory.h>:#include <servicerouter/client/cpp2/ServiceRouter.h>:' -e 's/ClientFactory::newFactory(/newFactory(/g' {} \;
```
Also manually fixed spots that couldn't be done automatically (or broke because they depended on transitive includes).
* Back out "Fix handling of empty batches in SumReduceDimsOp"
Original commit changeset: 282da1730cc2 This commit is blocking the
Github->fbcode sync, which really needs to get merged ASAP. D7881937 which this
diff depends on will be reverted in the sync D7990948 which causes this to
break. The sync diff cannot be patched with this reversion because it must be
landed against base revision 5c8c099 , and D7881937 must not be included in the
sync diff because it is breaking GPU tests that are not available in sandcastle
: https://ci.pytorch.org/jenkins/job/caffe2-builds/job/py2-cuda8.0-cudnn6-ubuntu16.04-test/3638/console
for one example.
* Add the flow to support operator benchmark
1) generate model with the operator 2) upload to everstore 3) generate model spec into json file 4) start running the benchmark
* [tum][gpu] Connect DPM trainer with flow and unit tests
This diff:
- Fix some small bugs for Yiming's recent changes to parallelizer, so it suits real use cases.
- Add correct tags to the TUM code, so we can do data parallel transform
- pass extra info when instantiation.
- add unit test for using DPM in TUM model
After this diff, we can do simple box, multi-gpu fully-sync trainer for TUM in Fblearner workflow, but may still need to do speed benchmarking.
* w/o normalized lradaption for adam dense only
The previous lr adaption includes a normalization step when performing the dot product operation. This is not exactly same as what is proposed in the paper. I add normalization as an option. Without it, the operator performs exactly what the paper proposed. With the option, we add the normalization step
* [fb] Use SharedPromise in DeferrableAsyncSchedulingNet
This code is to simplify DeferrableAsyncSchedulingNet by removing condition
variable + small fixes
* [tum] implement cuda sparseLengthsMean and LengthsMean
as title
* Adding an optional parameter to allow use of protobufs in InferShapesAndTypes function.
Adding an optional parameter to allow use of protobufs in InferShapesAndTypes function.
* Move feature_to_index to FeatureSpec.feature_to_index
move feature_to_index to FeatureSpec.feature_to_index to avoid override other fields
* [Caffe2] Rename bytes_moved to bytes_written
Just a rename in preparation for supporting bytes_read.
* [c2] fix ReduceFrontSumOp for empty case by setting 0
otherwise, it may use the results from last iteration when it's empty batch.
* [Caffe2] [Int8] Improve Intel CPU performance
* [Easy] Improve PrependDim op logging
as titled
* DBFileReader expand db_path using os.path.expanduser(..)
Since there are a lot of possible use cases of `DBFileReader` to read from user home path, like `~/local/sample.db`, I want to save people's trouble of calling `os.path.expanduser(db_path)` themselves.
* [Caffe2] Add bytes_read to cost structure
We're adding analytical read bytes to cost functions. This extends the structure accordingly for all CostInference defined operators.
Additionally, some small bug fixes were performed:
1) Cost functions now extract type information of operands instead of assuming float
* Fix sleef on aarch64 for hhvm
@bypass-lint
Rename flag
* Remove duplicated part in caffe2/ideep/operators/conv_op.cc
should be sync error
* Rename test helper function test_adagrad_sparse_helper to adagrad_sparse_test_helper to avoid confusing pytest
2018-05-20 06:10:48 +00:00
|
|
|
self.use_lr_adaption = use_lr_adaption
|
|
|
|
|
self.lr_alpha = lr_alpha
|
|
|
|
|
self.normalized_lr_adaption = normalized_lr_adaption
|
2017-03-08 02:44:45 +00:00
|
|
|
self.sparse_dedup_aggregator = sparse_dedup_aggregator
|
2018-01-17 03:23:25 +00:00
|
|
|
self.rowWise = rowWise
|
2017-03-08 02:44:45 +00:00
|
|
|
self.engine = engine
|
2019-11-18 23:20:27 +00:00
|
|
|
self.enableRAdam = enableRAdam
|
2017-03-08 02:44:45 +00:00
|
|
|
self.init_kwargs = kwargs
|
|
|
|
|
|
2017-05-26 05:01:54 +00:00
|
|
|
def _run(self, net, param_init_net, param_info):
|
|
|
|
|
param = param_info.blob
|
|
|
|
|
grad = param_info.grad
|
|
|
|
|
|
2017-03-08 02:44:45 +00:00
|
|
|
if self.alpha <= 0:
|
|
|
|
|
return
|
|
|
|
|
|
2017-04-17 17:06:49 +00:00
|
|
|
lr, iteration = self.build_lr(
|
2020-09-10 02:35:22 +00:00
|
|
|
net,
|
|
|
|
|
param_init_net,
|
2017-03-08 02:44:45 +00:00
|
|
|
base_learning_rate=self.alpha,
|
|
|
|
|
policy=self.policy,
|
|
|
|
|
**(self.init_kwargs)
|
|
|
|
|
)
|
|
|
|
|
|
2020-09-10 02:35:22 +00:00
|
|
|
m1 = param_init_net.ConstantFill([param], param + "_first_moment", value=0.0)
|
2018-05-20 06:10:48 +00:00
|
|
|
|
2018-01-17 03:23:25 +00:00
|
|
|
if self.rowWise:
|
|
|
|
|
shapes, types = workspace.InferShapesAndTypes([param_init_net])
|
|
|
|
|
m2 = param_init_net.ConstantFill(
|
2020-09-10 02:35:22 +00:00
|
|
|
[], param + "_avg_second_moment", shape=[shapes[param][0]], value=0.0
|
2018-01-17 03:23:25 +00:00
|
|
|
)
|
|
|
|
|
else:
|
|
|
|
|
m2 = param_init_net.ConstantFill(
|
2020-09-10 02:35:22 +00:00
|
|
|
[param], param + "_second_moment", value=0.0
|
2018-01-17 03:23:25 +00:00
|
|
|
)
|
|
|
|
|
|
2017-04-17 17:06:49 +00:00
|
|
|
self._aux_params.shared.append(iteration)
|
|
|
|
|
self._aux_params.local.append(m1)
|
|
|
|
|
self._aux_params.local.append(m2)
|
2018-01-17 03:23:25 +00:00
|
|
|
|
|
|
|
|
if self.rowWise:
|
2020-09-10 02:35:22 +00:00
|
|
|
assert isinstance(grad, core.GradientSlice), (
|
|
|
|
|
"If SparseAdam with rowWise=True, gradient must be "
|
|
|
|
|
"a gradientslice. PLease ensure that rowWise is not enabled "
|
|
|
|
|
"for the dense Adam optimizer, as it is not supported."
|
|
|
|
|
)
|
2018-09-17 17:14:08 +00:00
|
|
|
|
|
|
|
|
output_blobs = [param, m1, m2]
|
|
|
|
|
if self.use_lr_adaption:
|
2020-09-10 02:35:22 +00:00
|
|
|
effective_grad = str(param) + "_effective_grad"
|
2018-09-17 17:14:08 +00:00
|
|
|
output_blobs.append(effective_grad)
|
|
|
|
|
|
2017-03-08 02:44:45 +00:00
|
|
|
if isinstance(grad, core.GradientSlice):
|
|
|
|
|
grad = self.dedup(net, self.sparse_dedup_aggregator, grad)
|
2018-01-17 03:23:25 +00:00
|
|
|
if self.rowWise:
|
2020-09-10 02:35:22 +00:00
|
|
|
op = "RowWiseSparseAdam"
|
2018-01-17 03:23:25 +00:00
|
|
|
else:
|
2020-09-10 02:35:22 +00:00
|
|
|
op = "SparseAdam"
|
2018-09-17 17:14:08 +00:00
|
|
|
|
2019-11-18 23:20:27 +00:00
|
|
|
# Currently, only SparseAdam supports RAdam; other Adam ops will add support later
|
2020-09-10 02:35:22 +00:00
|
|
|
if op == "SparseAdam":
|
2019-11-18 23:20:27 +00:00
|
|
|
net.__getattr__(op)(
|
|
|
|
|
[param, m1, m2, grad.indices, grad.values, lr, iteration],
|
|
|
|
|
output_blobs,
|
|
|
|
|
beta1=self.beta1,
|
|
|
|
|
beta2=self.beta2,
|
|
|
|
|
epsilon=self.epsilon,
|
2020-09-10 02:35:22 +00:00
|
|
|
enableRAdam=self.enableRAdam,
|
|
|
|
|
)
|
2019-11-18 23:20:27 +00:00
|
|
|
else:
|
2020-09-10 02:35:22 +00:00
|
|
|
assert (
|
|
|
|
|
not self.enableRAdam
|
|
|
|
|
), "Currently, RowWiseSparseAdam is not supported by RAdam!"
|
2019-11-18 23:20:27 +00:00
|
|
|
net.__getattr__(op)(
|
|
|
|
|
[param, m1, m2, grad.indices, grad.values, lr, iteration],
|
|
|
|
|
output_blobs,
|
|
|
|
|
beta1=self.beta1,
|
|
|
|
|
beta2=self.beta2,
|
2020-09-10 02:35:22 +00:00
|
|
|
epsilon=self.epsilon,
|
|
|
|
|
)
|
2019-11-18 23:20:27 +00:00
|
|
|
|
2018-09-17 17:14:08 +00:00
|
|
|
if self.use_lr_adaption:
|
|
|
|
|
net.LearningRateAdaption(
|
|
|
|
|
[lr, grad.values, effective_grad],
|
|
|
|
|
[lr],
|
|
|
|
|
lr_alpha=self.lr_alpha,
|
2020-09-10 02:35:22 +00:00
|
|
|
normalized_lr_adaption=self.normalized_lr_adaption,
|
|
|
|
|
)
|
2017-03-08 02:44:45 +00:00
|
|
|
|
|
|
|
|
else:
|
2018-09-17 17:14:08 +00:00
|
|
|
net.Adam(
|
|
|
|
|
[param, m1, m2, grad, lr, iteration],
|
|
|
|
|
output_blobs,
|
|
|
|
|
beta1=self.beta1,
|
|
|
|
|
beta2=self.beta2,
|
2020-09-10 02:35:22 +00:00
|
|
|
epsilon=self.epsilon,
|
|
|
|
|
)
|
2018-05-20 06:10:48 +00:00
|
|
|
if self.use_lr_adaption:
|
2018-09-17 17:14:08 +00:00
|
|
|
net.LearningRateAdaption(
|
|
|
|
|
[lr, grad, effective_grad],
|
|
|
|
|
[lr],
|
|
|
|
|
lr_alpha=self.lr_alpha,
|
2020-09-10 02:35:22 +00:00
|
|
|
normalized_lr_adaption=self.normalized_lr_adaption,
|
|
|
|
|
)
|
2017-03-08 02:44:45 +00:00
|
|
|
|
2017-05-09 20:14:07 +00:00
|
|
|
def scale_learning_rate(self, scale):
|
|
|
|
|
self.alpha *= scale
|
|
|
|
|
return
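A hedged sketch via `build_adam` (defined later in this module), assuming a prepared `model`. Dense and sparse gradients are dispatched automatically; `rowWise=True` and `enableRAdam=True` only apply to sparse (`GradientSlice`) gradients, as the assertions above enforce:
```
from caffe2.python import optimizer

optimizer.build_adam(
    model,
    base_learning_rate=0.001,
    beta1=0.9,
    beta2=0.999,
    epsilon=1e-8,
)
```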
|
2017-03-08 02:44:45 +00:00
|
|
|
|
2017-05-30 18:54:51 +00:00
|
|
|
|
2017-08-31 01:26:41 +00:00
|
|
|
class YellowFinOptimizer(Optimizer):
|
|
|
|
|
"""YellowFin: An automatic tuner for momentum SGD
|
|
|
|
|
|
|
|
|
|
See https://arxiv.org/abs/1706.03471 for more details. This implementation
|
|
|
|
|
has a separate learning rate and momentum for each parameter."""
|
|
|
|
|
|
2020-09-10 02:35:22 +00:00
|
|
|
def __init__(
|
|
|
|
|
self,
|
|
|
|
|
alpha=0.1,
|
|
|
|
|
mu=0.0,
|
|
|
|
|
beta=0.999,
|
|
|
|
|
curv_win_width=20,
|
|
|
|
|
zero_debias=True,
|
|
|
|
|
epsilon=0.1 ** 6,
|
|
|
|
|
policy="fixed",
|
|
|
|
|
sparse_dedup_aggregator=None,
|
|
|
|
|
**kwargs
|
|
|
|
|
):
|
2017-08-31 01:26:41 +00:00
|
|
|
super(YellowFinOptimizer, self).__init__()
|
|
|
|
|
self.alpha = alpha
|
|
|
|
|
self.mu = mu
|
|
|
|
|
self.beta = beta
|
|
|
|
|
self.curv_win_width = curv_win_width
|
|
|
|
|
self.zero_debias = zero_debias
|
|
|
|
|
self.epsilon = epsilon
|
|
|
|
|
self.policy = policy
|
|
|
|
|
self.sparse_dedup_aggregator = sparse_dedup_aggregator
|
|
|
|
|
self.init_kwargs = kwargs
|
|
|
|
|
|
|
|
|
|
def _run(self, net, param_init_net, param_info):
|
|
|
|
|
|
|
|
|
|
# Note: This is the number of persistent scalars in the YellowFin optimizer.
|
|
|
|
|
# It should always match the number of scalars actually being used; the same
|
|
|
|
|
# number must be used by the YellowFin operator implementation.
|
|
|
|
|
SCALARS_MEMORY_SIZE = 5
|
|
|
|
|
|
|
|
|
|
param = param_info.blob
|
|
|
|
|
grad = param_info.grad
|
2020-09-10 02:35:22 +00:00
|
|
|
moment = param_init_net.ConstantFill([param], param + "_moment", value=0.0)
|
2017-08-31 01:26:41 +00:00
|
|
|
curv_win = param_init_net.ConstantFill(
|
2020-09-10 02:35:22 +00:00
|
|
|
[], param + "_curv_win", shape=[self.curv_win_width], value=0.0
|
2017-08-31 01:26:41 +00:00
|
|
|
)
|
2020-09-10 02:35:22 +00:00
|
|
|
g_avg = param_init_net.ConstantFill([param], param + "_g_avg", value=0.0)
|
|
|
|
|
g2_avg = param_init_net.ConstantFill([param], param + "_g2_avg", value=0.0)
|
2017-08-31 01:26:41 +00:00
|
|
|
lr_avg = param_init_net.ConstantFill(
|
2020-09-10 02:35:22 +00:00
|
|
|
[], param + "_lr_avg", shape=[1], value=self.alpha
|
2017-08-31 01:26:41 +00:00
|
|
|
)
|
|
|
|
|
mu_avg = param_init_net.ConstantFill(
|
2020-09-10 02:35:22 +00:00
|
|
|
[], param + "_mu_avg", shape=[1], value=self.mu
|
2017-08-31 01:26:41 +00:00
|
|
|
)
|
|
|
|
|
scalars_memory = param_init_net.ConstantFill(
|
2020-09-10 02:35:22 +00:00
|
|
|
[], param + "_scalars_memory", shape=[SCALARS_MEMORY_SIZE], value=0.0
|
2017-08-31 01:26:41 +00:00
|
|
|
)
|
|
|
|
|
|
|
|
|
|
assert self.alpha > 0
|
2020-09-10 02:35:22 +00:00
|
|
|
assert not isinstance(
|
|
|
|
|
grad, core.GradientSlice
|
|
|
|
|
), "YellowFin does not support sparse gradients"
|
2017-08-31 01:26:41 +00:00
|
|
|
|
2020-09-10 02:35:22 +00:00
|
|
|
iteration = utils.BuildUniqueMutexIter(param_init_net, net, iter_val=0)
|
2017-08-31 01:26:41 +00:00
|
|
|
|
|
|
|
|
self._aux_params.shared.append(iteration)
|
|
|
|
|
self._aux_params.local.append(moment)
|
|
|
|
|
self._aux_params.local.append(lr_avg)
|
|
|
|
|
self._aux_params.local.append(mu_avg)
|
|
|
|
|
self._aux_params.local.append(curv_win)
|
|
|
|
|
self._aux_params.local.append(g_avg)
|
|
|
|
|
self._aux_params.local.append(g2_avg)
|
|
|
|
|
self._aux_params.local.append(scalars_memory)
|
|
|
|
|
|
|
|
|
|
yf_in_out_args = [
|
|
|
|
|
param,
|
|
|
|
|
moment,
|
|
|
|
|
lr_avg,
|
|
|
|
|
mu_avg,
|
|
|
|
|
curv_win,
|
|
|
|
|
g_avg,
|
|
|
|
|
g2_avg,
|
2020-09-10 02:35:22 +00:00
|
|
|
scalars_memory,
|
2017-08-31 01:26:41 +00:00
|
|
|
]
|
|
|
|
|
|
|
|
|
|
net.YellowFin(
|
|
|
|
|
yf_in_out_args + [grad, iteration],
|
|
|
|
|
yf_in_out_args,
|
|
|
|
|
beta=self.beta,
|
|
|
|
|
epsilon=self.epsilon,
|
|
|
|
|
curv_win_width=self.curv_win_width,
|
2020-09-10 02:35:22 +00:00
|
|
|
zero_debias=self.zero_debias,
|
|
|
|
|
)
|
2017-08-31 01:26:41 +00:00
|
|
|
|
|
|
|
|
def scale_learning_rate(self, scale):
|
|
|
|
|
self.alpha *= scale
|
|
|
|
|
return
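A hedged sketch via `build_yellowfin` (defined later in this module), assuming a prepared `model`; note that sparse (`GradientSlice`) gradients are rejected by the assertion above:
```
from caffe2.python import optimizer

# alpha and mu are only starting points; YellowFin then tunes the
# per-parameter learning rate and momentum on the fly.
optimizer.build_yellowfin(model, base_learning_rate=0.1, zero_debias=True)
```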
|
|
|
|
|
|
|
|
|
|
|
2017-11-09 00:32:19 +00:00
|
|
|
class RmsPropOptimizer(Optimizer):
|
|
|
|
|
def __init__(
|
|
|
|
|
self,
|
|
|
|
|
alpha=0.01,
|
|
|
|
|
decay=0.9,
|
|
|
|
|
momentum=0.0,
|
|
|
|
|
epsilon=1e-5,
|
2020-09-10 02:35:22 +00:00
|
|
|
policy="fixed",
|
|
|
|
|
engine="",
|
2017-11-09 00:32:19 +00:00
|
|
|
**kwargs
|
|
|
|
|
):
|
|
|
|
|
super(RmsPropOptimizer, self).__init__()
|
|
|
|
|
self.alpha = alpha
|
|
|
|
|
self.decay = decay
|
|
|
|
|
self.momentum = momentum
|
|
|
|
|
self.epsilon = epsilon
|
|
|
|
|
self.policy = policy
|
|
|
|
|
self.engine = engine
|
|
|
|
|
self.init_kwargs = kwargs
|
|
|
|
|
|
|
|
|
|
def _run(self, net, param_init_net, param_info):
|
|
|
|
|
param = param_info.blob
|
|
|
|
|
grad = param_info.grad
|
|
|
|
|
|
|
|
|
|
assert self.alpha > 0
|
2020-09-10 02:35:22 +00:00
|
|
|
assert not isinstance(
|
|
|
|
|
grad, core.GradientSlice
|
|
|
|
|
), "RmsPropOptimizer doesn't support sparse gradients"
|
2017-11-09 00:32:19 +00:00
|
|
|
|
|
|
|
|
dev = scope.CurrentDeviceScope()
|
|
|
|
|
if dev is None:
|
|
|
|
|
dev = core.DeviceOption(caffe2_pb2.CPU)
|
|
|
|
|
|
|
|
|
|
ONE = param_init_net.ConstantFill(
|
2020-09-10 02:35:22 +00:00
|
|
|
[], "ONE_{}_{}".format(dev.device_type, dev.device_id), shape=[1], value=1.0
|
2017-11-09 00:32:19 +00:00
|
|
|
)
|
|
|
|
|
|
|
|
|
|
lr, _ = self.build_lr(
|
|
|
|
|
net,
|
|
|
|
|
param_init_net,
|
|
|
|
|
base_learning_rate=-self.alpha,
|
|
|
|
|
policy=self.policy,
|
|
|
|
|
**(self.init_kwargs)
|
|
|
|
|
)
|
|
|
|
|
|
|
|
|
|
grad_o = param_init_net.ConstantFill(
|
2020-09-10 02:35:22 +00:00
|
|
|
[param], str(param) + "_grad_o", values=0.0
|
2017-11-09 00:32:19 +00:00
|
|
|
)
|
|
|
|
|
|
|
|
|
|
ms = param_init_net.ConstantFill(
|
2020-09-10 02:35:22 +00:00
|
|
|
[param], str(param) + "_mean_squares", values=0.0
|
2017-11-09 00:32:19 +00:00
|
|
|
)
|
|
|
|
|
|
2020-09-10 02:35:22 +00:00
|
|
|
mom = param_init_net.ConstantFill([param], str(param) + "_momentum", value=0.0)
|
2017-11-09 00:32:19 +00:00
|
|
|
|
|
|
|
|
self._aux_params.local.append(ms)
|
|
|
|
|
self._aux_params.local.append(mom)
|
|
|
|
|
|
|
|
|
|
net.RmsProp(
|
|
|
|
|
[grad, ms, mom, ONE],
|
|
|
|
|
[grad_o, ms, mom],
|
|
|
|
|
decay=self.decay,
|
|
|
|
|
momentum=self.momentum,
|
|
|
|
|
epsilon=self.epsilon,
|
|
|
|
|
engine=self.engine,
|
|
|
|
|
)
|
|
|
|
|
|
2020-09-10 02:35:22 +00:00
|
|
|
net.MomentumSGDUpdate([grad_o, mom, lr, param], [grad_o, mom, param])
|
2017-11-09 00:32:19 +00:00
|
|
|
|
|
|
|
|
def scale_learning_rate(self, scale):
|
|
|
|
|
self.alpha *= scale
|
|
|
|
|
return
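A hedged sketch via `build_rms_prop` (defined later in this module), assuming a prepared `model`; as the assertion above notes, only dense gradients are supported, and the parameter update itself is applied through `MomentumSGDUpdate`:
```
from caffe2.python import optimizer

optimizer.build_rms_prop(
    model, base_learning_rate=0.01, decay=0.9, momentum=0.0, epsilon=1e-5
)
```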
|
|
|
|
|
|
|
|
|
|
|
2017-06-02 21:15:45 +00:00
|
|
|
def _get_param_to_device(model):
|
2017-05-30 18:54:51 +00:00
|
|
|
# Infer blob devices by going through the net and param_init_net
|
|
|
|
|
# ops and observing the device used to create or use the blob.
|
|
|
|
|
param_to_device = core.InferBlobDevices(model.net)
|
|
|
|
|
param_to_device.update(core.InferBlobDevices(model.param_init_net))
|
2017-06-02 21:15:45 +00:00
|
|
|
return param_to_device
|
|
|
|
|
|
|
|
|
|
|
2017-06-28 04:51:40 +00:00
|
|
|
def get_param_device(param_name, grad, param_to_device=None, default_device=None):
|
|
|
|
|
device = default_device
|
|
|
|
|
param_to_device = param_to_device or {}
|
|
|
|
|
# We first check if parameter's device has been inferred. If not,
|
|
|
|
|
# we check the gradient. This can happen if the parameter is not the output
|
|
|
|
|
# of any op but was created externally (e.g. via a FeedBlob call).
|
|
|
|
|
if param_name in param_to_device:
|
|
|
|
|
device = param_to_device[param_name]
|
|
|
|
|
else:
|
|
|
|
|
if isinstance(grad, core.GradientSlice):
|
|
|
|
|
grad = grad
|
|
|
|
|
if str(grad.values) in param_to_device:
|
|
|
|
|
device = param_to_device[str(grad.values)]
|
|
|
|
|
elif str(grad.indices) in param_to_device:
|
|
|
|
|
device = param_to_device[str(grad.indices)]
|
|
|
|
|
else:
|
|
|
|
|
grad_name = str(grad)
|
|
|
|
|
if grad_name in param_to_device:
|
|
|
|
|
device = param_to_device[grad_name]
|
|
|
|
|
|
2020-09-10 02:35:22 +00:00
|
|
|
assert device is not None, "Cannot infer device for {}: no op creates it".format(
|
|
|
|
|
param_name
|
|
|
|
|
)
|
2017-06-28 04:51:40 +00:00
|
|
|
return device
|
|
|
|
|
|
|
|
|
|
|
2017-08-26 01:53:20 +00:00
|
|
|
def get_lr_injection():
|
|
|
|
|
"""
|
|
|
|
|
Gets current value for lr_injection, a multiplier for all base
|
|
|
|
|
learning rates.
|
|
|
|
|
Must set allow_lr_injection=True when building optimizer, as it
|
|
|
|
|
relies on synchronization over CPU.
|
|
|
|
|
"""
|
|
|
|
|
return workspace.FetchBlob(_LEARNING_RATE_INJECTION)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
def set_lr_injection(lr_injection_value):
|
|
|
|
|
"""
|
|
|
|
|
Sets lr_injection, a multiplier for all base learning rates.
|
|
|
|
|
Must set allow_lr_injection=True when building optimizer, as it
|
|
|
|
|
relies on synchronization over CPU.
|
|
|
|
|
"""
|
|
|
|
|
workspace.FeedBlob(
|
|
|
|
|
_LEARNING_RATE_INJECTION,
|
2020-09-10 02:35:22 +00:00
|
|
|
np.array([float(lr_injection_value)], dtype=np.float32),
|
2017-08-26 01:53:20 +00:00
|
|
|
)
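The injection blob only exists if the optimizer was built with `allow_lr_injection=True` (see `_build` below). A hedged sketch of scaling every learning rate mid-training, assuming a prepared `model`:
```
from caffe2.python import optimizer

# Build with injection enabled so the _LEARNING_RATE_INJECTION blob is created.
optimizer.build_sgd(model, base_learning_rate=0.1, allow_lr_injection=True)

# ... later, between RunNet calls, halve every base learning rate:
optimizer.set_lr_injection(0.5)
print(optimizer.get_lr_injection())  # -> array([0.5], dtype=float32)
```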
|
|
|
|
|
|
|
|
|
|
|
2020-09-10 02:35:22 +00:00
|
|
|
def _calc_norm_ratio(model, params, name_scope, param_to_device, max_gradient_norm):
|
2017-08-24 17:10:58 +00:00
|
|
|
with core.NameScope(name_scope):
|
|
|
|
|
grad_squared_sums = []
|
|
|
|
|
for i, param in enumerate(params):
|
2020-09-10 02:35:22 +00:00
|
|
|
device = get_param_device(str(param.blob), param.grad, param_to_device)
|
2017-08-24 17:10:58 +00:00
|
|
|
|
|
|
|
|
with core.DeviceScope(device):
|
|
|
|
|
grad = (
|
|
|
|
|
param.grad
|
2020-09-10 02:35:22 +00:00
|
|
|
if not isinstance(param.grad, core.GradientSlice)
|
|
|
|
|
else param.grad.values
|
2017-08-24 17:10:58 +00:00
|
|
|
)
|
|
|
|
|
|
2020-09-10 02:35:22 +00:00
|
|
|
grad_squared_sum_name = "grad_{}_squared_sum".format(i)
|
|
|
|
|
grad_squared_sum = model.net.SumSqrElements(grad, grad_squared_sum_name)
|
|
|
|
|
grad_squared_sum_cpu = model.net.EnsureCPUOutput(grad_squared_sum)
|
2017-08-24 17:10:58 +00:00
|
|
|
grad_squared_sums.append(grad_squared_sum_cpu)
|
|
|
|
|
|
|
|
|
|
with core.DeviceScope(core.DeviceOption(caffe2_pb2.CPU)):
|
|
|
|
|
grad_squared_full_sum = model.net.Sum(
|
2020-09-10 02:35:22 +00:00
|
|
|
grad_squared_sums, "grad_squared_full_sum"
|
2017-08-24 17:10:58 +00:00
|
|
|
)
|
|
|
|
|
global_norm = model.net.Pow(
|
2020-09-10 02:35:22 +00:00
|
|
|
grad_squared_full_sum, "global_norm", exponent=0.5
|
2017-08-24 17:10:58 +00:00
|
|
|
)
|
|
|
|
|
clip_norm = model.param_init_net.ConstantFill(
|
2020-09-10 02:35:22 +00:00
|
|
|
[], "clip_norm", shape=[], value=float(max_gradient_norm)
|
2017-08-24 17:10:58 +00:00
|
|
|
)
|
2020-09-10 02:35:22 +00:00
|
|
|
max_norm = model.net.Max([global_norm, clip_norm], "max_norm")
|
|
|
|
|
norm_ratio = model.net.Div([clip_norm, max_norm], "norm_ratio")
|
2017-08-24 17:10:58 +00:00
|
|
|
return norm_ratio
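From user code, the ratio computed above is enabled simply by passing `max_gradient_norm` to any of the `build_*` helpers; `_build` then multiplies it into the learning rate. A hedged sketch, assuming a prepared `model`:
```
from caffe2.python import optimizer

# norm_ratio = clip_norm / max(global_norm, clip_norm): learning rates are
# scaled down whenever the global gradient norm exceeds 1.0.
optimizer.build_sgd(model, base_learning_rate=0.1, max_gradient_norm=1.0)
```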
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
def _build(
|
|
|
|
|
model,
|
|
|
|
|
optimizer,
|
|
|
|
|
weights_only=False,
|
|
|
|
|
use_param_info_optim=True,
|
|
|
|
|
max_gradient_norm=None,
|
2017-08-26 01:53:20 +00:00
|
|
|
allow_lr_injection=False,
|
2017-08-24 17:10:58 +00:00
|
|
|
):
|
2017-06-02 21:15:45 +00:00
|
|
|
param_to_device = _get_param_to_device(model)
|
2017-05-30 18:54:51 +00:00
|
|
|
|
2017-06-01 09:25:21 +00:00
|
|
|
# Validate there are no duplicate params
|
|
|
|
|
model.Validate()
|
|
|
|
|
|
2017-08-24 17:10:58 +00:00
|
|
|
params = []
|
2017-05-30 18:54:51 +00:00
|
|
|
for param_info in model.GetOptimizationParamInfo():
|
2017-08-24 17:10:58 +00:00
|
|
|
if weights_only and param_info.blob not in model.weights:
|
|
|
|
|
continue
|
|
|
|
|
params.append(param_info)
|
|
|
|
|
|
2017-08-26 01:53:20 +00:00
|
|
|
lr_multiplier = None
|
2017-08-24 17:10:58 +00:00
|
|
|
if max_gradient_norm is not None:
|
2017-08-26 01:53:20 +00:00
|
|
|
lr_multiplier = _calc_norm_ratio(
|
|
|
|
|
model,
|
|
|
|
|
params,
|
2020-09-10 02:35:22 +00:00
|
|
|
"norm_clipped_grad_update",
|
2017-08-24 17:10:58 +00:00
|
|
|
param_to_device,
|
|
|
|
|
max_gradient_norm,
|
|
|
|
|
)
|
2017-08-26 01:53:20 +00:00
|
|
|
|
|
|
|
|
if allow_lr_injection:
|
|
|
|
|
if not model.net.BlobIsDefined(_LEARNING_RATE_INJECTION):
|
|
|
|
|
lr_injection = model.param_init_net.ConstantFill(
|
2020-09-10 02:35:22 +00:00
|
|
|
[], _LEARNING_RATE_INJECTION, shape=[1], value=1.0
|
2017-08-26 01:53:20 +00:00
|
|
|
)
|
|
|
|
|
else:
|
|
|
|
|
lr_injection = _LEARNING_RATE_INJECTION
|
|
|
|
|
|
|
|
|
|
if lr_multiplier is None:
|
|
|
|
|
lr_multiplier = lr_injection
|
|
|
|
|
else:
|
|
|
|
|
lr_multiplier = model.net.Mul(
|
2020-09-10 02:35:22 +00:00
|
|
|
[lr_multiplier, lr_injection], "lr_multiplier", broadcast=1
|
2017-08-26 01:53:20 +00:00
|
|
|
)
|
|
|
|
|
optimizer.add_lr_multiplier(lr_multiplier)
|
2017-08-24 17:10:58 +00:00
|
|
|
|
|
|
|
|
for param_info in params:
|
2017-05-30 18:54:51 +00:00
|
|
|
param_name = str(param_info.blob)
|
2017-06-28 04:51:40 +00:00
|
|
|
device = get_param_device(param_name, param_info.grad, param_to_device)
|
2017-05-30 18:54:51 +00:00
|
|
|
with core.DeviceScope(device):
|
2017-07-12 15:32:28 +00:00
|
|
|
if param_info.optimizer and use_param_info_optim:
|
2020-09-10 02:35:22 +00:00
|
|
|
param_info.optimizer(model.net, model.param_init_net, param_info)
|
2017-07-12 15:32:28 +00:00
|
|
|
else:
|
|
|
|
|
optimizer(model.net, model.param_init_net, param_info)
|
2017-05-30 18:54:51 +00:00
|
|
|
return optimizer
|
|
|
|
|
|
|
|
|
|
|
2017-06-02 21:15:45 +00:00
|
|
|
def add_weight_decay(model, weight_decay):
|
|
|
|
|
"""Adds a decay to weights in the model.
|
|
|
|
|
|
|
|
|
|
This is a form of L2 regularization.
|
|
|
|
|
|
|
|
|
|
Args:
|
|
|
|
|
weight_decay: strength of the regularization
|
|
|
|
|
"""
|
|
|
|
|
_build(
|
|
|
|
|
model,
|
|
|
|
|
WeightDecayBuilder(weight_decay=weight_decay),
|
|
|
|
|
weights_only=True,
|
2017-07-12 15:32:28 +00:00
|
|
|
use_param_info_optim=False,
|
2017-06-02 21:15:45 +00:00
|
|
|
)
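Because it is built with `weights_only=True` and `use_param_info_optim=False`, the decay is applied only to blobs in `model.weights` and ignores per-parameter optimizers. A hedged usage sketch alongside a regular optimizer, assuming a prepared `model`:
```
from caffe2.python import optimizer

optimizer.add_weight_decay(model, weight_decay=1e-4)  # L2 on weights only
optimizer.build_sgd(model, base_learning_rate=0.1)    # the actual update
```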
|
|
|
|
|
|
|
|
|
|
|
2017-08-26 01:53:20 +00:00
|
|
|
def build_sgd(
|
|
|
|
|
model,
|
|
|
|
|
base_learning_rate,
|
|
|
|
|
max_gradient_norm=None,
|
|
|
|
|
allow_lr_injection=False,
|
|
|
|
|
**kwargs
|
|
|
|
|
):
|
2017-03-08 02:44:45 +00:00
|
|
|
sgd_optimizer = SgdOptimizer(base_learning_rate, **kwargs)
|
2017-08-26 01:53:20 +00:00
|
|
|
return _build(
|
|
|
|
|
model,
|
|
|
|
|
sgd_optimizer,
|
|
|
|
|
max_gradient_norm=max_gradient_norm,
|
|
|
|
|
allow_lr_injection=allow_lr_injection,
|
|
|
|
|
)
|
2017-03-08 02:44:45 +00:00
|
|
|
|
2017-06-02 21:15:45 +00:00
|
|
|
|
2017-08-24 17:10:58 +00:00
|
|
|
def build_multi_precision_sgd(
|
2017-08-26 01:53:20 +00:00
|
|
|
model,
|
|
|
|
|
base_learning_rate,
|
|
|
|
|
max_gradient_norm=None,
|
|
|
|
|
allow_lr_injection=False,
|
|
|
|
|
**kwargs
|
2017-08-24 17:10:58 +00:00
|
|
|
):
|
2020-09-10 02:35:22 +00:00
|
|
|
multi_prec_sgd_optimizer = MultiPrecisionSgdOptimizer(base_learning_rate, **kwargs)
|
2017-08-24 17:10:58 +00:00
|
|
|
return _build(
|
2017-08-26 01:53:20 +00:00
|
|
|
model,
|
|
|
|
|
multi_prec_sgd_optimizer,
|
|
|
|
|
max_gradient_norm=max_gradient_norm,
|
|
|
|
|
allow_lr_injection=allow_lr_injection,
|
2017-08-24 17:10:58 +00:00
|
|
|
)
|
2017-06-01 15:31:33 +00:00
|
|
|
|
2017-03-08 02:44:45 +00:00
|
|
|
|
2017-10-24 17:22:41 +00:00
|
|
|
def build_fp16_sgd(model, base_learning_rate, **kwargs):
|
2020-09-10 02:35:22 +00:00
|
|
|
fp16_sgd_optimizer = FP16SgdOptimizer(base_learning_rate, **kwargs)
|
2017-10-24 17:22:41 +00:00
|
|
|
return _build(model, fp16_sgd_optimizer)
|
|
|
|
|
|
|
|
|
|
|
2017-03-08 02:44:45 +00:00
|
|
|
def build_ftrl(model, engine="SIMD", **kwargs):
|
|
|
|
|
if engine == "SIMD":
|
2020-09-10 02:35:22 +00:00
|
|
|
assert core.IsOperator("Ftrl_ENGINE_SIMD")
|
|
|
|
|
assert core.IsOperator("SparseFtrl_ENGINE_SIMD")
|
2017-03-08 02:44:45 +00:00
|
|
|
ftrl_optimizer = FtrlOptimizer(engine=engine, **kwargs)
|
2017-05-30 18:54:51 +00:00
|
|
|
return _build(model, ftrl_optimizer)
|
2017-03-08 02:44:45 +00:00
|
|
|
|
|
|
|
|
|
2018-07-06 20:38:36 +00:00
|
|
|
def build_gftrl(model, engine="", **kwargs):
|
2018-07-30 22:24:32 +00:00
|
|
|
if engine == "SIMD":
|
2020-09-10 02:35:22 +00:00
|
|
|
assert core.IsOperator("GFtrl_ENGINE_SIMD")
|
2018-07-06 20:38:36 +00:00
|
|
|
gftrl_optimizer = GFtrlOptimizer(engine=engine, **kwargs)
|
|
|
|
|
return _build(model, gftrl_optimizer)
|
|
|
|
|
|
|
|
|
|
|
2017-08-24 17:10:58 +00:00
|
|
|
def build_adagrad(
|
|
|
|
|
model,
|
|
|
|
|
base_learning_rate,
|
|
|
|
|
parameters=None,
|
|
|
|
|
max_gradient_norm=None,
|
2017-08-26 01:53:20 +00:00
|
|
|
allow_lr_injection=False,
|
2017-08-24 17:10:58 +00:00
|
|
|
**kwargs
|
|
|
|
|
):
|
2017-03-08 02:44:45 +00:00
|
|
|
adagrad_optimizer = AdagradOptimizer(alpha=base_learning_rate, **kwargs)
|
2017-08-26 01:53:20 +00:00
|
|
|
return _build(
|
|
|
|
|
model,
|
|
|
|
|
adagrad_optimizer,
|
|
|
|
|
max_gradient_norm=max_gradient_norm,
|
|
|
|
|
allow_lr_injection=allow_lr_injection,
|
|
|
|
|
)
|
2017-03-08 02:44:45 +00:00
|
|
|
|
|
|
|
|
|
2018-07-14 01:40:56 +00:00
|
|
|
def build_wngrad(
|
|
|
|
|
model,
|
|
|
|
|
base_learning_rate,
|
|
|
|
|
parameters=None,
|
|
|
|
|
max_gradient_norm=None,
|
|
|
|
|
allow_lr_injection=False,
|
|
|
|
|
**kwargs
|
|
|
|
|
):
|
|
|
|
|
wngrad_optimizer = WngradOptimizer(alpha=base_learning_rate, **kwargs)
|
|
|
|
|
return _build(
|
|
|
|
|
model,
|
|
|
|
|
wngrad_optimizer,
|
|
|
|
|
max_gradient_norm=max_gradient_norm,
|
|
|
|
|
allow_lr_injection=allow_lr_injection,
|
|
|
|
|
)
|
|
|
|
|
|
2018-07-25 03:01:20 +00:00
|
|
|
|
2020-04-15 06:01:58 +00:00
|
|
|
def build_storm(
|
|
|
|
|
model,
|
|
|
|
|
base_learning_rate,
|
|
|
|
|
parameters=None,
|
|
|
|
|
max_gradient_norm=None,
|
|
|
|
|
allow_lr_injection=False,
|
|
|
|
|
**kwargs
|
|
|
|
|
):
|
|
|
|
|
storm_optimizer = StormOptimizer(lr=base_learning_rate, **kwargs)
|
|
|
|
|
return _build(
|
|
|
|
|
model,
|
|
|
|
|
storm_optimizer,
|
|
|
|
|
max_gradient_norm=max_gradient_norm,
|
2020-09-10 02:35:22 +00:00
|
|
|
allow_lr_injection=allow_lr_injection,
|
2020-04-15 06:01:58 +00:00
|
|
|
)
|
|
|
|
|
|
|
|
|
|
|
2018-07-25 03:01:20 +00:00
|
|
|
def build_adadelta(
|
|
|
|
|
model,
|
|
|
|
|
base_learning_rate,
|
|
|
|
|
parameters=None,
|
|
|
|
|
max_gradient_norm=None,
|
|
|
|
|
allow_lr_injection=False,
|
|
|
|
|
**kwargs
|
|
|
|
|
):
|
|
|
|
|
adadelta_optimizer = AdadeltaOptimizer(alpha=base_learning_rate, **kwargs)
|
|
|
|
|
return _build(
|
|
|
|
|
model,
|
|
|
|
|
adadelta_optimizer,
|
|
|
|
|
max_gradient_norm=max_gradient_norm,
|
|
|
|
|
allow_lr_injection=allow_lr_injection,
|
|
|
|
|
)
|
|
|
|
|
|
|
|
|
|
|
2017-08-26 01:53:20 +00:00
|
|
|
def build_adam(
|
|
|
|
|
model,
|
|
|
|
|
base_learning_rate,
|
|
|
|
|
max_gradient_norm=None,
|
|
|
|
|
allow_lr_injection=False,
|
|
|
|
|
**kwargs
|
|
|
|
|
):
|
2017-03-08 02:44:45 +00:00
|
|
|
adam_optimizer = AdamOptimizer(alpha=base_learning_rate, **kwargs)
|
2017-08-26 01:53:20 +00:00
|
|
|
return _build(
|
|
|
|
|
model,
|
|
|
|
|
adam_optimizer,
|
|
|
|
|
max_gradient_norm=max_gradient_norm,
|
|
|
|
|
allow_lr_injection=allow_lr_injection,
|
|
|
|
|
)
|
2017-08-31 01:26:41 +00:00
|
|
|
|
|
|
|
|
|
|
|
|
|
def build_yellowfin(model, base_learning_rate=0.1, **kwargs):
|
2020-09-10 02:35:22 +00:00
|
|
|
yellowfin_optimizer = YellowFinOptimizer(alpha=base_learning_rate, **kwargs)
|
2017-08-31 01:26:41 +00:00
|
|
|
return _build(model, yellowfin_optimizer)
|
2017-11-09 00:32:19 +00:00
|
|
|
|
|
|
|
|
|
|
|
|
|
def build_rms_prop(
|
|
|
|
|
model,
|
|
|
|
|
base_learning_rate,
|
|
|
|
|
max_gradient_norm=None,
|
|
|
|
|
allow_lr_injection=False,
|
|
|
|
|
**kwargs
|
|
|
|
|
):
|
|
|
|
|
rms_prop_optimizer = RmsPropOptimizer(alpha=base_learning_rate, **kwargs)
|
|
|
|
|
return _build(
|
|
|
|
|
model,
|
|
|
|
|
rms_prop_optimizer,
|
|
|
|
|
max_gradient_norm=max_gradient_norm,
|
|
|
|
|
allow_lr_injection=allow_lr_injection,
|
|
|
|
|
)
|