Caffe2

Caffe2 is a lightweight, modular, and scalable deep learning framework. Building on the original Caffe, Caffe2 is designed with expression, speed, and modularity in mind. Note that Caffe2 has been merged into the PyTorch repository and is maintained there.

Questions and Feedback

Please use GitHub issues (https://github.com/pytorch/pytorch/issues) to ask questions, report bugs, and request new features.

Further Resources on Caffe2.ai