mirror of
https://github.com/saymrwulf/pytorch.git
synced 2026-05-15 21:00:47 +00:00
Summary: Mainly renames pthread_create of C2, the only symbol referred to internally in NNPACK that conflicts, to pthread_create_c2. Removes two other conflicting symbols that are not used internally at all. Points XNNPACK to the original repo instead of the fork. Copies the new interface and implementation to caffe2/utils/threadpool, so that internal builds compile against this. When the threadpool is unified this will be removed.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33869
Differential Revision: D20140580
Pulled By: kimishpatel
fbshipit-source-id: de70df0af9c7d6bc065e85ede0e1c4dd6a9e6be3
24 lines
975 B
C++
#pragma once

#include <caffe2/utils/threadpool/pthreadpool.h>

// TODO: Implement a parallel_for version for mobile here and add it to
// ATen/Parallel.h.

namespace caffe2 {

class ThreadPool;

// Returns a singleton instance of caffe2::ThreadPool for ATen/TH multithreading.
ThreadPool* mobile_threadpool();

// NOTE: This interface is temporary and should not be used.
// Please use ATen/Parallel.h for parallel primitives in PyTorch.
// This implementation will be used by PyTorch mobile, specifically
// NNPACK/QNNPACK. For mobile we need to use caffe2::ThreadPool instead of the
// third-party pthreadpool. Future work (TODO):
//  - Implement a mobile version of "at::parallel_for" using caffe2::ThreadPool,
//    so that all ATen/TH multithreading usage is mobile friendly.
//  - Refactor QNNPACK or pthreadpool to explicitly use the "at::parallel_for"
//    primitive to replace pthreadpool_compute_1d for PyTorch.
pthreadpool_t mobile_pthreadpool();

size_t getDefaultNumThreads();

} // namespace caffe2