pytorch/caffe2/python/ideep
jgong5 c755616e00 Enable Detectron model inference for CPU and MKL-DNN paths (#10157)
Summary:
1. Support the ops needed for Faster-RCNN/Mask-RCNN inference in Detectron, mostly as direct fallbacks.
2. Use the CPU device to hold 0-dim tensors and integer tensors in both the fallback op and the blob feeder, as required by Detectron models.
3. Ignore 0-dim tensors in the MKL-DNN concat operator.
4. Generate a dynamic library of the Detectron module for the CPU device.

This PR obsoletes #9164.
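Item 3 above (ignoring 0-dim tensors in the concat operator) can be sketched in plain NumPy. This is an illustrative assumption of the behavior, not the actual MKL-DNN operator code; the function name `concat_skip_empty` is hypothetical:

```python
import numpy as np

def concat_skip_empty(tensors, axis=0):
    # Hypothetical sketch: drop zero-element inputs before concatenating,
    # mirroring the idea of ignoring 0-dim tensors in the concat op.
    kept = [t for t in tensors if t.size > 0]
    if not kept:
        # Nothing left to concatenate; return an empty 1-D tensor.
        return np.empty((0,))
    return np.concatenate(kept, axis=axis)

a = np.ones((2, 3))
b = np.empty((0,))  # degenerate input a naive concat would choke on
c = np.ones((1, 3))
out = concat_skip_empty([a, b, c], axis=0)
print(out.shape)  # (3, 3)
```

The point of the filtering step is that Detectron models can legitimately produce empty blobs (e.g. zero detections), so the concat path must tolerate them rather than error out.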
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10157

Differential Revision: D9276837

Pulled By: yinghai

fbshipit-source-id: dc364932ae4a2e7fcefdee70b5fce3c0cee91b6f
2018-08-29 15:11:01 -07:00
concat_split_op_test.py
conv_op_test.py Enable Conv fusion optimizations in optimizeForIdeep (#9255) 2018-07-16 21:28:50 -07:00
convfusion_op_test.py Fold AffineChannel to Conv, the same way as BN (for Detectron models) (#10293) 2018-08-13 22:43:37 -07:00
copy_op_test.py Add fallback to TensorCPU if there are unsupported types for IDEEP Tensor (#9667) 2018-07-23 13:54:57 -07:00
dropout_op_test.py
elementwise_sum_op_test.py
fc_op_test.py
LRN_op_test.py
operator_fallback_op_test.py Enable Detectron model inference for CPU and MKL-DNN paths (#10157) 2018-08-29 15:11:01 -07:00
pool_op_test.py
relu_op_test.py
softmax_op_test.py
spatial_bn_op_test.py
squeeze_op_test.py
test_ideep_net.py
transform_ideep_net.py