* gpt2 training perf
* gpt2 training perf
* debug
* debug
* debug
* fix bug
* minor
* on comments
* dynamic sql
* fix build
* minor
* linked hash
* on comments
* minor
* mem
* minor
Co-authored-by: Ethan Tao <ettao@microsoft.com>
* Initial update of readme
* Readme updates
* Review of consolidated README (#3930)
* Proposed updates for readme (#3953)
Some of the information was duplicated within the doc, so I attempted to streamline it
* Fix links
* More updates
- fix build instructions
- nodejs doc reorganization
- roadmap update
- version fixes
* Update ORT Server build instructions
* More doc cleanup
* fix python dev notes name
* Update nodejs and some links
* sync eigen version back to master
* Minor fixes
* add nodejs to sample table of contents
* Update README.md
* address PR feedback
* address PR feedback
* nodejs build instructions
* Update Java instructions to include gradle
* Roadmap refresh
Reformat some data, fix link, minor rewording
* Clarify Visual C++ runtime req
Co-authored-by: Nat Kershaw (MSFT) <nakersha@microsoft.com>
Co-authored-by: Prasanth Pulavarthi <prasantp@microsoft.com>
Co-authored-by: manashgoswami <magoswam@microsoft.com>
Update Android build instructions to provide more information.
Add info on testing directly on Android
Update build.py to better support using Ninja generator to build Android on Windows.
Update install_deps.sh to use a path relative to the script directory for symbolic_opset10.py, so that install_deps.sh can be called from different working directories.
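The script-relative path fix above can be sketched in Python (a minimal illustration of the same idea; `resolve_sibling` is a hypothetical helper, not code from the repo): anchor the target file at the script's own directory instead of the current working directory.

```python
from pathlib import Path

def resolve_sibling(script_path, relative_path):
    """Return an absolute path to relative_path, anchored at the
    directory containing the script rather than the current working
    directory, so the script works no matter where it is invoked from."""
    return (Path(script_path).resolve().parent / relative_path).resolve()
```

For example, `resolve_sibling("/repo/tools/install_deps.sh", "../python/symbolic_opset10.py")` yields the same file regardless of the caller's working directory.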
* [java] - adding a CUDA-enabled test.
* Adding --build_java to the windows gpu ci pipeline.
* Removing a stray line from the unit tests that always enabled CUDA for Java.
If a symbolic dimension is found, allow the user to provide a value, or default to 1.
`python .\onnxruntime_test.py --symbolic_dims batch=1,seqlen=4 onnxruntime\test\testdata\transform\fusion\fast_gelu_use_graph_input.onnx`
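The `--symbolic_dims` handling above can be sketched as follows (a simplified illustration, not the actual implementation; the function names are hypothetical): parse the `name=value` spec, substitute user-provided values for symbolic dimensions, and fall back to 1 for any symbolic dimension left unspecified.

```python
def parse_symbolic_dims(spec):
    """Parse a spec like 'batch=1,seqlen=4' into {'batch': 1, 'seqlen': 4}."""
    if not spec:
        return {}
    return {name: int(value)
            for name, value in (item.split("=") for item in spec.split(","))}

def concretize_shape(shape, overrides, default=1):
    """Replace symbolic (string) dimensions with user-provided values,
    defaulting any unspecified symbolic dimension to `default`."""
    return [overrides.get(dim, default) if isinstance(dim, str) else dim
            for dim in shape]
```

For example, a model input of shape `["batch", "seqlen", 768]` with `--symbolic_dims batch=2` resolves to `[2, 1, 768]`: `batch` takes the user value and `seqlen` falls back to the default of 1.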
Disable ORT in offline optimization script (ORT could generate some fused ops (like FusedGemm) which cannot be converted to fp16).
Remove some models from benchmark until we have optimizations for them.
Update optimizer for GPT2 models exported from PyTorch 1.5.
Update benchmark to use GPT2 models without Past State inputs/outputs
Update bert_perf_test to allow setting omp_num_threads etc to test only one setting
* Implement QLinearRelu and its unit test.
* Add logic to compute the table during the constructor when all parameters are constant.
* Fix test case rounding results related to the rounding mode.
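The QLinearRelu idea, including the precomputed table, can be sketched in pure Python (a minimal sketch assuming identical input/output scale and zero point; the real kernel handles the general case). For ReLU in the quantized uint8 domain, real values below zero map to quantized values below the zero point, so the op reduces to clamping at the zero point, and a 256-entry lookup table can be built once up front when the quantization parameters are constant.

```python
def build_qlinear_relu_table(zero_point):
    """Precompute the uint8 -> uint8 mapping for a quantized ReLU.

    With identical input/output scale and zero point, ReLU in the real
    domain (max(x, 0)) becomes max(q, zero_point) in the quantized
    domain, so the whole op is a 256-entry table lookup."""
    return [max(q, zero_point) for q in range(256)]

def qlinear_relu(data, table):
    """Apply the precomputed table to a sequence of uint8 values."""
    return [table[q] for q in data]
```

Building the table in the constructor (when the zero point is a constant initializer) moves the per-element max out of the hot loop, leaving only an indexed load per element.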
* Enable running PEP8 checks via flake8 as part of the build if flake8 is installed.
Update scripts in \tools and \onnxruntime\python. Excluding \onnxruntime\python\tools, which needs a lot more work to be PEP8 compliant. Also excluding orttraining\tools for the same reason.
Install flake8 as part of the static_analysis build task in the Win-CPU CI so the checks are run in one CI build.
Update coding standards doc.
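The optional flake8 check can be sketched like this (an illustrative helper, not the build script's actual code): run flake8 over the given paths only when it is installed, so machines without it skip the check instead of failing the build.

```python
import shutil
import subprocess

def run_flake8_if_available(paths):
    """Run flake8 over `paths` if it is installed; return True on pass.

    When flake8 is not on PATH the check is skipped and treated as
    passing, mirroring the build behaviour of making it optional."""
    if shutil.which("flake8") is None:
        print("flake8 not found, skipping PEP8 checks")
        return True
    result = subprocess.run(["flake8", *paths])
    return result.returncode == 0
```

Installing flake8 in one CI image (here, the static_analysis task of the Win-CPU CI) then guarantees the checks actually run in at least one build.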
* Update BUILD doc for ARM64 build for TensorRT support on Jetson device
* minor revision
* JetPack 4.4 is in the developer preview stage, so we suggest using JetPack 4.3