### Description
Add a new install_shared_deps.sh script.
### Motivation and Context
AzCopy, Ninja, Node.js and ccache are all needed, but the steps that
install them are currently copied everywhere.
### Description
For compilation in a container, the ADO Cache task doesn't work directly.
The workaround is to mount the cache directory into the container and let
ccache inside the container read/write the cache data. In short, we just
leverage the ADO API to download/upload the cache data.
Post-jobs run in stack mode (LIFO), so the PostBuildCleanUp task must be
defined first; that way it executes last. Otherwise, the Cache task would
fail to upload the cache because the agent directory has already been
cleaned.
### Description
Update protobuf version to 3.18.3 in
tools/ci_build/github/linux/docker/scripts/requirements.txt.
### Motivation and Context
Address component governance alert CVE-2022-1941
## Description
1. Convert some git submodules to cmake external projects
2. Update nsync from
[1.23.0](https://github.com/google/nsync/releases/tag/1.23.0) to
[1.25.0](https://github.com/google/nsync/releases/tag/1.25.0)
3. Update re2 from 2021-06-01 to 2022-06-01
4. Update wil from an old commit to 1.0.220914.1 tag
5. Update gtest to a newer commit so that it can optionally leverage
absl/re2 for parsing command line flags.
The following git submodules are deleted:
1. FP16
2. safeint
3. XNNPACK
4. cxxopts
5. dlpack
6. flatbuffers
7. googlebenchmark
8. json
9. mimalloc
10. mp11
11. pthreadpool
More will come.
## Motivation and Context
There are 3 ways of integrating 3rd-party C/C++ libraries into ONNX
Runtime:
1. Install them to a system location, then use cmake's find_package
module to locate them.
2. Use git submodules.
3. Use cmake's external projects (ExternalProject_Add).
At first, when this project was just started, we considered both option 2
and option 3. We preferred option 2 because:
1. It's easier to handle authentication. At first this project was not
open source, and it had some other non-public dependencies. With git
submodules, ADO handles authentication smoothly. Otherwise we would need
to manually pass tokens around and be very careful not to expose them in
build logs.
2. At that time, cmake fetched dependencies only after it had finished
generating vcprojects/makefiles, so it was very difficult to keep cflags
consistent. Since cmake 3.11 there is a new module, FetchContent, which
fetches dependencies at configure time, just before add_subdirectory, so
the parent project's variables/settings can easily be passed to the
child projects.
As the project went on, we developed some new concerns:
1. As we started to have more and more EPs and build configs, the number
of submodules grew quickly. For most developers, most ORT submodules are
not relevant; they shouldn't need to download all of them.
2. It is impossible to let two different build configs use two different
versions of the same dependency. For example, right now we have protobuf
3.18.3 in the submodules, so every EP must use that version. Whenever we
need to upgrade protobuf, we have to coordinate across the whole team
and many external developers. I can't manage it anymore.
3. Some projects want to manage the dependencies in a different way,
either because of their preference or because of compliance
requirements. For example, some Microsoft teams want to use vcpkg, but
we don't want to force every user of onnxruntime to use vcpkg.
4. Some users want to dynamically link to protobuf, but our build script
only does static linking.
5. It's hard to handle security vulnerabilities. For example, whenever
protobuf has a security patch, we have a lot of things to do. But if we
allowed people to build ORT with a different version of protobuf without
changing ORT's source code, customers who build ORT from source could
act on such things much more quickly; they would not need to wait for an
ORT patch release.
6. Every time we do a release, GitHub also publishes a source zip file
and a source tarball for us. But they are not usable, because they are
missing the submodules.
### New features
After this change, users will be able to:
1. Build the dependencies the way they want, then install them somewhere
(for example, /usr or a temp folder).
2. Or download the dependencies from their official websites using cmake
commands.
3. Similar to the above, but using private mirrors to mitigate supply
chain risks.
4. Use different versions of the dependencies, as long as our source
code is compatible with them. For example, you can't use protobuf 3.20.x
yet, as it would require code changes in ONNX Runtime.
5. Only download the things the current build needs.
6. Avoid building the external dependencies again and again in every
build.
### Breaking change
The onnxruntime_PREFER_SYSTEM_LIB build option is removed; you can think
of it as now being ON by default. If you don't like the new behavior,
you can set FETCHCONTENT_TRY_FIND_PACKAGE_MODE to NEVER.
Besides, for those who relied on the onnxruntime_PREFER_SYSTEM_LIB build
option, please be aware that this PR changes find_package calls from
Module mode to Config mode. For example, in the past, if you had
installed protobuf via apt-get from Ubuntu 20.04's official repo,
find_package could find it and use it. After this PR, it won't. This is
because the protobuf version provided by Ubuntu 20.04 is too old to
support config mode. It can be resolved by getting a newer version of
protobuf from somewhere else.
PyTorch was added to the inference pipelines in PR #8027, but these
pipelines do not actually use PyTorch. PyTorch is huge, and here we need
to install it for 4 different Python versions. If we remove PyTorch, we
will significantly reduce the image size. Also, downloading a PyTorch
package now often takes more than 1 hour; doing it 4 times may take 4
hours.
Valgrind was added by me a long time ago, and it is not used either. Now
we run the Linux tests outside of the docker containers, so when we have
the need, we can install it through apt-get on Ubuntu instead of doing
it in the CentOS container.
### Description
Upgrade the cmake version to 3.24, because I need to use a new feature
that is only provided in that version and later. Starting from cmake
3.24, the
[FetchContent](https://cmake.org/cmake/help/latest/module/FetchContent.html#module:FetchContent)
module and the
[find_package()](https://cmake.org/cmake/help/latest/command/find_package.html#command:find_package)
command are integrated: calls to FetchContent can be implicitly
redirected to find_package, and vice versa, and users can control the
behavior with a cmake variable. So we don't need to provide such a build
option ourselves: we can delete our onnxruntime_PREFER_SYSTEM_LIB build
option and let cmake handle it. It will also be easier for those who
want to use vcpkg.
### Motivation and Context
Provide a unified package management method, and get aligned with the
community. This change is split from #13523 for easier review.
This PR enables ORT to execute graphs captured by TorchDynamo. The major compilation code is in `OrtBackend.compile` in ort_backend.py; `register_backend.py` plugs `OrtBackend` into TorchDynamo as a compiler.
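As an illustration of the plug-in mechanism: a Dynamo backend is just a callable that receives the captured FX graph plus example inputs and returns a callable that runs the graph. Below is a minimal sketch using today's `torch.compile` entry point; `my_ort_backend` is a hypothetical stand-in for illustration, not ORT's actual `OrtBackend`:

```python
import torch

def my_ort_backend(gm: torch.fx.GraphModule, example_inputs):
    # A real backend (like OrtBackend) would export `gm` to ONNX here and
    # return a function that runs it with onnxruntime. This stand-in just
    # prints the captured graph and falls back to eager execution.
    gm.graph.print_tabular()
    return gm.forward

model = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.ReLU())
compiled = torch.compile(model, backend=my_ort_backend)
print(compiled(torch.randn(2, 4)).shape)  # torch.Size([2, 8])
```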
`python setup.py develop` doesn't install PyTorch as a normal package in
site-packages anymore, and the user must stay in PyTorch's root
directory to call `import torch`. This would break the LORT tests,
because they contain `import torch` and are run outside the PyTorch root
directory. To make PyTorch a normal package again, this PR builds
PyTorch with `python setup.py install`.
### Description
This PR fixes the iGPU unit tests and Python tests.
It also adds the packaging pip package to the manylinux Dockerfile
build.
### Motivation and Context
This change is required to make sure the iGPU unit tests and Python
tests with OpenVINO are fixed.
Co-authored-by: shamaksx <shamax.kshirsagar@intel.com>
Co-authored-by: mayavijx <mayax.vijayan@intel.com>
Co-authored-by: pratiksha <pratikshax.bapusaheb.vanse@intel.com>
Co-authored-by: pratiksha <mohsinx.mohammad@intel.com>
Co-authored-by: Sahar Fatima <sfatima.3001@gmail.com>
Co-authored-by: Preetha Veeramalai <preetha.veeramalai@intel.com>
Co-authored-by: nmaajidk <n.maajid.khan@intel.com>
Co-authored-by: Mateusz Tabaka <mateusz.tabaka@intel.com>
1. Update CUDA version from 11.4 to 11.6.
2. Update Manylinux version
3. Upgrade GCC version from 10 to 11 for most x86_64 pipelines. CentOS 7 ARM64 doesn't have GCC 11 yet.
4. Refactor python packaging pipeline:
a. Split Linux GPU build job to two parts, build and test, so that the
build part doesn't need to use a GPU machine
b. Make the Linux GPU build job and Linux CPU build job more similar: share the same bash script and yaml file.
5. Temporarily disable Attention_Mask1D_Fp16_B2_FusedNoPadding because it is causing one of our packaging pipelines to fail. I have created an ADO task for this.
* upgrade cuda version on ci pipelines
* keeping folder name same
* keeping folder name same
* setting manual seed for primitive test case
* resolving comments
* changing atol and rtol only for test case
Co-authored-by: Adam Louly <adamlouly@microsoft.com@orttrainingdev7.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
* moving training pipelines from cuda 11.5 to 11.6 and deprecating cuda 11.3
* change to cuda 11.6.2
* change pytorch's & torchvision's cuda version to 11.6
* specify deps version to 11.6.2
* update pytorch and torch text version
* torch 1.12.1
* change torchvision and torchtext version to be compatible with torch 1.12.1
* change cuda to 11.6 for cuda_home compatibility
Co-authored-by: Adam Louly <adamlouly@microsoft.com@orttrainingdev7.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
* Make ORT a PyTorch JIT backend
LORT likely doesn't work with aten fallback so we only test LORT in its own CI.
* Revert changes to enable external CUDA allocator. Will add it later.
Revert "Revert changes to enable external CUDA allocator. Will add it later."
This reverts commit d5487f2e193014c805505afae8fb577c53667658.
Fix external allocator
* Relax tolerance and remove commented code
* Print more information in CI
* Fix pointer
* Address comments.
1. Reuse ORT-eager mode's environment.
2. Remove unused ctor.
* Use Pytorch master branch as all PRs are merged
Fix
* Refine based on cpplint feedbacks
* Revert changes to allow custom CUDA allocator in public APIs
* Use torch.testing.assert_close
* Use unittest framework
* Switch docker repo
* Rename *.cpp to *.cc
* Address comments
* Add comment
* Use same pipeline file for eager and lort pipelines
* Address comments
* Add yaml comment
* Fix cmake files
* Address comments
* Rename flags, remove printing code, remove dead comment
1. Delete the build scripts that were copied from the manylinux project. Use "git checkout" instead.
2. Update the manylinux version to get Python 3.11. Related issue: Python 3.11 support #12343
3. Change the cuda version of the Linux GPU build job of the nuget packaging pipeline from cuda 11.4 to cuda 11.6 to match the TRT job within the same pipeline. (A lot of other places need to be updated as well, but I'd prefer to put them in another PR.)
4. Make the dockerfile names static. For example, replace tools/ci_build/github/linux/docker/$(DockerFile) with tools/ci_build/github/linux/docker/Dockerfile.manylinux2014_cpu. The former relies on a runtime variable, $(DockerFile), but template parameters are expanded early in the processing of a pipeline run, when most variables are not yet available. It's like C++ macros vs. variables.
* update to 2022
* Update the VS version
* Rolling back to gcc 10
* Rolling back
* Update cuda home
* remove "CMAKE_CUDA_ARCHITECTURES=52"
* update cuda architecture to 70
* Delete cuda 10.2 training pipeline
* rolling back a mistake
* Update win-gpu-reduce-op-ci-pipeline.yml
* Update win-gpu-reduce-op-ci-pipeline.yml
* Update win-gpu-reduce-op-ci-pipeline.yml
* Delete tools/ci_build/github/linux/docker/scripts/training/ortmodule/stage1/requirements_torch1.10.0_cu10.2 directory
* Delete tools/ci_build/github/linux/docker/scripts/training/ortmodule/stage1/requirements_torch1.11.0_cu10.2 directory
* Add tests for all unary aten ops supported in eager mode
* fixing the PR draft
* fixing the merge
* changing eval to be at compile time
* adding requirements for eager
* 1. adding function to {ops}_out; 2. cleaning the code and adding comments
* editing the code according to code review
Co-authored-by: root <root@AHA-LIRONKESE-1>
* aten op for inference
* fix build error
* move some code to training only
* remove domain from operator name
* move aten_op_executor ext out from ortmodule
* add pipeline
* add exec mode
* fix script
* fix ut script
* fix test pipeline
* failure test
* rollback
* bugfix
* resolve comments
* enable aten for python build only
* fix win build
* use target_compile_definitions
* support io binding
* turn off aten by default
* fix ut
Co-authored-by: Vincent Wang <weicwang@microsoft.com>
Co-authored-by: zhijxu <zhijxu@microsoft.com>
* update TVM
* get alignment constant from TVM
* update TVM_VM_SetInputs to upstream with TVM API
* fix CI issue: update TVM EP dependencies
* add sudo
* revert changes needed to install missing package
* add package for TVM EP CI
Co-authored-by: Valery Chernov <valery.chernov@deelvin.com>
Co-authored-by: KJlaccHoeUM9l <wotpricol@mail.ru>
Description:
Add the extra param to match gelu in PyTorch in the contrib symbolic function.
Motivation and Context
The symbolic function in /onnxruntime/python/tools/pytorch_export_contrib_ops.py is missing the recently added parameter `approximate`. We add this parameter and use the exporter-defined gelu if `approximate` is "tanh".
* remove rocm42 CI
* update torch to v1.11.0
Co-authored-by: Ethan Tao <ettao@microsoft.com@orttrainingdev7.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
* Update orttraining release pipelines to use torch 1.11.0
* Change requirements_torch...txt to requirements.txt
* Update cuda cmake architectures and clean up old files
* update to torch 1.10
* update torchvision version
* update torchtext version
* remove deprecated option enable_onnx_checker
* add unit test to test gradient of GatherElements
* add ORTMODULE_ONNX_OPSET_VERSION in a docker file
* add ortmodule and eager mode test
* add ortmodule dependency
* fix eager pipeline
* skip the ortmodule test for windows due to win ci issue
* remove useless win ci change
* add torch
Co-authored-by: Abhishek Jindal <abjindal@microsoft.com>
* updates for picking pnnx commit
* add tests filter to c# tests
* plus test fixes
* fix versioning for contrib ops
* fix tests
* test filter for optional ops
* more versioning related updates
* fix test
* fix layernorm spec
* more updates
* update docs
* add more test filters
* more filters
* update binary size threshold
* update docs
* draft - enable model local function
* enable model local functions in ORT
* update to latest rel onnx commit
* plus tests
* plus more updates
* plus updates
* test updates
* Fix for nested functions + shape inference
* plus bug fix and updates per review
* plus fixes per review
* plus test updates
* plus updates per review
* plus fixes
* fix a test
* updates for picking pnnx commit
* add tests filter to c# tests
* plus test fixes
* fix versioning for contrib ops
* fix tests
* test filter for optional ops
* more versioning related updates
* fix test
* fix layernorm spec
* more updates
* update docs
* add more test filters
* more filters
* update binary size threshold
* update docs
* plus more fixes
* updates per review
* update to release commit
* add filters for optional type tests
* plus updates
1. Update SDLNativeRules from v2 to v3. The new one allows us to set excluded paths.
2. Update TSAUpload from v1 to v2. And add a config file ".gdn/.gdntsa" for it.
3. Fix some parentheses warnings
4. Update cmake to the latest.
5. Remove the "--x86" build option from the pipeline yaml files. Now we can auto-detect the cpu architecture from python, so we don't need to ask users to specify it.
* first attempt share docker image across python and torch versons
* set dependency between jobs
* fix yaml grammar
* remove python version from first stage
* clean deepspeed directory
* split into two images according to torch version
* fix yaml syntax
* invalidate cache
* remove DS to prevent torch 1.9.0 upgrade
ORTModule requires two PyTorch CPP extensions that are currently JIT-compiled. The runtime compilation can cause issues in environments that lack the full build requirements, or in environments with multiple instances of ORTModule running in parallel.
This PR creates a custom command to compile those extensions, which must be manually executed before ORTModule is used for the first time. When users try to use ORTModule before the extensions are compiled, an error with instructions is raised.
PyTorch CPP extensions for ORTModule can be compiled by running:
python -m onnxruntime.training.ortmodule.torch_cpp_extensions.install
A full build environment is needed for this.
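For context on why JIT compilation is fragile: torch's JIT extension loader invokes the host C++ toolchain at first use, which fails without a full build environment and can race when multiple processes trigger it at once. A generic demonstration of that mechanism, not ORT's actual extension sources:

```python
from pathlib import Path

import torch
from torch.utils.cpp_extension import load  # JIT-compiles at call time

# A tiny throwaway extension; compiling it is the step that fails on
# machines that lack compilers and headers.
src = Path("twice_ext.cpp")
src.write_text(r"""
#include <torch/extension.h>
torch::Tensor twice(torch::Tensor t) { return t + t; }
PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { m.def("twice", &twice); }
""")

# `load` invokes the host C++ toolchain right here, at runtime.
ext = load(name="twice_ext", sources=[str(src)], verbose=True)
print(ext.twice(torch.ones(2)))  # tensor([2., 2.])
```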
1. Remove some unused code and simplify tools/ci_build/github/linux/run_dockerbuild.sh.
2. Enable the Nuget CUDA tests. The original design was that we could leverage Directory.Build.props and let cmake generate the required properties (USE_CUDA, ...) there. However, in the nuget packaging pipeline we test the package on a different host that doesn't run the cmake command and doesn't have the auto-generated Directory.Build.props file.
* Register Torch Custom autograd.Function
* Add flag to suppress pybind11 warning
* Avoid unnecessary include in cmake
* Add missing reference
* Add getter for registered functions
* Format for making subsequent changes cleaner
* Fix interop feature build failure
* Forward pass, run PyOP on CPU EP
* clean up the code
* Fix build
* Define new ops
* refactor pyop - extract PyOpLibProxy class
* Hacks to run example
* implement the kernel compute func
* add back PyOP for comparison experiments
* debug info - thread id
* refine the kernels
* Polish code
(cherry picked from commit 4ed606f9a0)
* Fix the Tensor address mismatch on the C++ side
* PythonOpGrad compute
* add distributed test case
* refine test cases
* get dist.get_rank() in Autograd forward pass
* Add CUDA kernels
* Store float, int, and tuple of them as PythonOp's attributes
* Populate local changes
* Fix bugs
* PythonOp/PythonOpGrad CUDA kernels
* Support non-tensor inputs
* Single GPU FP16 Run Pass
(cherry picked from commit e539989e91e18ee997900292d3493b97d3eafa8a)
* Fix segfault
* add basic test cases
* Save progress
* fix gradient builder for an Add op that has the same inputs
* add test cases for auto grad fallback feature
* fix ref cnt issue. add thread id for debugging
* POC: remove interface class
* Remove interface classes
* Clean a bit
* Coarse-grained clean up after rebase master
* reset pyop and language_interop_ops to latest master
* Fix missing part during merge
* re-structure torch related language interop files
* Fix build
* Fix tests and build
* Fix build and basic unit tests
* Fix most of uts
* remove unnecessary import
* clean up and fix build when enabling language_interop_ops
* Fix single-GPU UTs
* Move runner register into ORT package
* Update dist UTs to new style
* Also fix distributed UTs and leaf gradient problem
* Static generation for constant args
* Move arg_positions_ to static field
* Rename some functions
* Move arg creation into a function
* Clean output logic in PythonOp
* Move PythonOp's ctor
* Revise PythonOpGrad
* Fix "ORT only supports contiguous tensor for now" for inputs
* Fix evaluation mode error, add test & clean up
* clean up codes
* Fix issues introduced by recent master change (enabled symbolic shape infer)
* automatically register forward/backward function pointers && clean up
* Fix multi-output case
* Add a test back
* fix build and clean up
* RAII for function params PyObject
* Use new exporter
* Clean full name in new exporter
* Fix UTs
* Format a file
* Add "inplace" back
Remove a legacy comment
* Refine TorchProxy
1. Make TorchProxy a formal singleton class.
2. Remove unused Scope class.
3. Simplify the calls to Forward and Backward. The two functions now
automatically acquire and release the GIL state, so the user doesn't
need any GIL-related calls.
* Format
* Add lock to avoid race condition when registering Python objs
* Fix Python call param ref issues && Add RefcountTracker for debug build && Clean up
* clean up print
* Resolve part of comments && clean up
* Fix a potential bug
* track pyobject consistently
* move kernels to cpu provider as base class
* Refactor - 1. Extract PythonOpBase/PythonOpGradBase 2. Implement CPU kernels 3. Test coverage for CPU kernels
* Refine register code
* Add a missing macro
* Release python call result objects with PythonObjectPtr && Add UnRegisterContext && Track PyObject for Debugging && Clean up
* Fix random segfault issue - releasing a wrong ctx pointer for inplace cases
* put ref count in debug macro
* Move GIL out
* Refine tests
* Fix memory leak issue && forward output lifecycle issue:
1. Unregister the OrtValue PythonObject. Currently, the OrtValue shares the same buffer as PythonOp/PythonOpGrad's outputs, so after those kernels' outputs are released, the "leaked" OrtValue prevents the shared buffer from being released.
2. In PyTorch's forward+backward execution, the forward outputs (e.g. torch tensors) keep the context/saved variables/dirty inputs, etc. alive, which are used for the backward execution, so their lifetime should extend until after the backward runs. This change adds such a dependency of PythonOpGrad on PythonOp.
* Move dlpack->ortvalue into C++ to avoid temp object registration
* Fix the over-released Py_False/Py_True && refine tests
* Clean up unused functions
* Always assume the first forward output is context so we don't need to test unused cases.
* Fix a memory leak
* move-copy unique_ptr & avoid C-style casting
* Use inplace attribute to determine if input tensors are copied
* Move DlpackCapsuleDestructor's to a common place
* Thread-safe TorchProxy
* Use OrtValue instead of OrtValue*
* Only keep checks for Debug build
* Wrap some long line per comment
* onnx_export_type --> kwargs
* Use requires_grads to create PythonOpGrad's inputs
* add missing files during master merge
* Fix build issue after merge
* Address two comments.
1. Internalize DlpackCapsuleDestructor
2. Change "(" to "]" for describing closed interval.
* Address some comments.
1. "override" -> "overwrite" to avoid using reserved keyword.
2. Call DLPack's helper to create OrtValue for avoiding repeated code.
* Address comments.
1. Pass std::mutex to registration helpers so their callers don't
have to lock the mutex explicitly.
2. Rename "func_context_pool_mutex_" to "mutex_". This mutex is the global mutex for OrtTorchFunctionPool.
* Add bridging code to make cuda kernels work with merged master
* put debug macro check within RefCountTracker && use default logger for debug info && remove useless ortvalue_ptr interface && fix typos && revert unnecessary blank line changes
* fix some comments
* Resolve more comments
* Capitalize a word
* use unique_ptr instead of ObjectPointer for PyObject management && add convention
* Support symbolic shape
* Remove unused variable
* fix build
* Enable function registration for training only && rectify ToDlpack/FromDlpack merge with master.
* Don't add context for non-PythonOp operators (for example AtenOp)
* Fix build error
* Polish frontend part.
1. Avoid adding kwargs to ORTModule's ctor
2. Use onnx_export_type rather than kwargs for type safety
3. Fix some build bugs.
* Resolve simpler comments
* Resolve export related comments
* sync master && fix tests && fix non-training build error
* Fix build errors
* add target link lib
* windows build error
* Fix orttraining-linux-ci build
* disable autograd test && clean up
* fix linux orttraining ci build
* try fixing win build error
* Revise append calls in runner
* Enable custom function using a function
* Rename to avoid using reserved keyword
* Use list comprehension
* Set ORT random seed in tests
* Remove print code and fix ctx shape
* [] -> list()
* Move autograd.Function and nn.Module into corresponding functions
* Move test helpers
* Polish dist test a bit. Tried moving helpers to a helper file but it causes a deadlock.
* trying fix undefined reference
* Context is not managed by global pool
* Polish dist test
* Polish dist test
* Add enable_custom_autograd_function
* Remove enable_custom_autograd_function from ctors
* Add doc strings
* Shorter code
* Address comments
* Add one empty line
* revert a minor and not needed change
* Address comments
* Back to reference
* Fix windows builds
* Fix windows debug build fail to find "'python39_d.lib'"
* fix mac build error
* revert _to_contiguous change
* add debugging tag for orttraining-cpu-ci
* Fix the wrong PYTHON_LIBRARIES which is affected by PYTHON_LIBRARY given in build command
* add debugging info
* Fix the build in this case: PYTHON_LIBDIR: /opt/_internal/cpython-3.7.10/lib, PYTHON_EXECUTABLE: /opt/python/cp37-cp37m/bin/python3, PYTHON_MULTIARCH: x86_64-linux-gnu
PYTHON_LIBRARY_PATH python3.7m
* fix build error due to python lib not found
* Fixes
1. Release PyObject's
2. Not using deepcopy, because we assume autograd.Function's
non-tensor inputs are static (constants), so there should
be no side effect after calling any autograd.Function
multiple times.
* Revert dtoc for decreasing refcnt
* add debugging log
* add debugging tag
* Fix a small leak
* Remove ONNX_FALLTHROUGH flag
* debug tag
* debug tag
* fix builds
* remove debug tag
* fix build
* fix builds
* fix build
* install python3 in centos, in case there is no libpython3.xm.so
* build python so for redhat
* add training cpu specific docker, build python so inside
* revert build-cpython change
* try fixing numpy include issue
* install_deps after re-installing cpython
* fix build && remove debug tag
* install openssl before cpython
* let's say: builds pass!
* add build flag for torch interop, only enable it when training+Python is enabled
* skip ComputeBroadcastBackwardAxesDynamic for the shared inputs
* fix build
* add debug info for padgrad test
* Fix builds
* Split dlpack_converter into C++ and Python interfaces respectively. Then different builds use them as needed.
* clean up the changes
* fix addsubgradient builder
* Fix builds
* clean up
* clean up
* Address some comments.
1. Use pointer wrapper to avoid calling Py_DECREF
2. Remove unregister_* functions
3. Allow repeated registration by skipping those with existing keys
4. Unregister context in PythonOpGrad
* Fix over-released Py_Boolean
Co-authored-by: Wei-Sheng Chin <wschin@outlook.com>