Name | Last commit message | Last commit date
---- | ------------------- | ----------------
autoheuristic/ | Remove unused Python variables in torch/[_-a]* (#133492) | 2024-12-12 17:39:14 +00:00
codegen/ | [BE][CI] bump ruff to 0.8.4 (#143753) | 2024-12-24 12:24:10 +00:00
compile_worker/ | Enable ruff's unused variable checking everywhere in pytorch (#136965) | 2024-12-22 02:33:11 +00:00
fx_passes/ | [micro_pipeline_tp] don't pass return_A to fused_all_gather_scaled_matmul (#143782) | 2024-12-24 07:25:38 +00:00
kernel/ | Enable ruff's unused variable checking everywhere in pytorch (#136965) | 2024-12-22 02:33:11 +00:00
package/ | [aoti package] seek 0 after loading buffer (#142204) | 2024-12-09 21:53:28 +00:00
runtime/ | [BE][Easy] use pathlib.Path instead of dirname / ".." / pardir (#129374) | 2024-12-21 22:08:01 +00:00
__init__.py | Remove unused Python variables in torch/[_-a]* (#133492) | 2024-12-12 17:39:14 +00:00
analyze_preserves_zero_mask.py | Infer whether prologues can be computed without upcasting to fp32 without changing numerics (#142402) | 2024-12-13 23:25:15 +00:00
aoti_eager.py | Remove unused Python variables in torch/[_-a]* (#133492) | 2024-12-12 17:39:14 +00:00
async_compile.py | Use process pool for precompilation of triton templates (#142450) | 2024-12-18 01:48:04 +00:00
autotune_process.py | [Inductor XPU] Support max-autotune on XPU and reuse the corresponding Inductor UT. (#143266) | 2024-12-24 05:42:36 +00:00
bounds.py | remove allow-untyped-defs from _inductor/bounds.py (#141440) | 2024-11-24 16:23:31 +00:00
choices.py | Prologue Fusion (#134532) | 2024-12-13 04:18:25 +00:00
codecache.py | Use absolute path path.resolve() -> path.absolute() (#129409) | 2024-12-24 08:33:08 +00:00
comm_analysis.py | |
comm_lowering.py | [inductor] Replace set by OrderedSet (#138466) | 2024-12-13 16:08:45 +00:00
comms.py | [Inductor] Move peak memory pass and overlap pass to be run at the right place (#142822) | 2024-12-14 06:53:02 +00:00
compile_fx.py | Support tensor subclass unwrapping (#141941) | 2024-12-21 00:29:31 +00:00
compiler_bisector.py | Remove unused Python variables in torch/[_-a]* (#133492) | 2024-12-12 17:39:14 +00:00
config.py | Make Inductor cpp backend enable_floating_point_contract_flag to take string (#143450) | 2024-12-20 16:28:54 +00:00
constant_folding.py | [inductor] Replace set by OrderedSet (#138466) | 2024-12-13 16:08:45 +00:00
cpp_builder.py | [AOTI][reland] Emit a CMakeLists.txt when package_cpp_only (#143680) | 2024-12-21 03:48:40 +00:00
cpu_vec_isa.py | [inductor] Fix an unused variable in cpu_vec_isa.py (#138473) | 2024-12-20 18:50:19 +00:00
cudagraph_trees.py | [inductor] Replace set by OrderedSet (#138466) | 2024-12-13 16:08:45 +00:00
cudagraph_utils.py | [inductor] Replace set by OrderedSet (#138466) | 2024-12-13 16:08:45 +00:00
custom_graph_pass.py | |
debug.py | [provenance_tracking] Dump inductor_triton_kernel_to_post_grad_nodes.json info in debug_trace (#143055) | 2024-12-18 06:51:50 +00:00
decomposition.py | Add support for bfloat16 atomic adds in fbcode (#143629) | 2024-12-20 23:05:13 +00:00
dependencies.py | [inductor] Replace set by OrderedSet (#138466) | 2024-12-13 16:08:45 +00:00
dtype_propagation.py | Infer whether prologues can be computed without upcasting to fp32 without changing numerics (#142402) | 2024-12-13 23:25:15 +00:00
exc.py | |
extern_node_serializer.py | |
freezing.py | type annotations for meta_utils (#140203) | 2024-11-13 20:07:47 +00:00
fx_utils.py | [inductor] Replace set by OrderedSet (#138466) | 2024-12-13 16:08:45 +00:00
graph.py | Add support for bfloat16 atomic adds in fbcode (#143629) | 2024-12-20 23:05:13 +00:00
hooks.py | |
index_propagation.py | |
inductor_prims.py | |
ir.py | [Hierarchical Compile] Update NoneAsConstantBuffer to support graph d… (#143531) | 2024-12-20 09:23:12 +00:00
jagged_lowerings.py | [inductor] Refactor MutableBox to make IRNode typing easier (#140895) | 2024-11-20 19:50:46 +00:00
loop_body.py | [inductor] Fix 3d tiling (#141709) | 2024-12-01 19:47:41 +00:00
lowering.py | [inductor] Make adaptive_max_pool2d error on int64 (#143762) | 2024-12-24 08:33:59 +00:00
memory.py | [inductor] Replace set by OrderedSet (#138466) | 2024-12-13 16:08:45 +00:00
metrics.py | [inductor] Replace set by OrderedSet (#138466) | 2024-12-13 16:08:45 +00:00
mkldnn_ir.py | Remove unused Python variables in torch/[_-a]* (#133492) | 2024-12-12 17:39:14 +00:00
mkldnn_lowerings.py | [inductor][cpp] Add BMM kernel template for autotuning (#129772) | 2024-12-06 04:54:00 +00:00
mock_cache.py | |
ops_handler.py | [inductor] Replace set by OrderedSet (#138466) | 2024-12-13 16:08:45 +00:00
optimize_indexing.py | |
output_code.py | [logging] Log runtime autotuning timing to scuba (#141919) | 2024-12-13 21:22:13 +00:00
pattern_matcher.py | Use absolute path path.resolve() -> path.absolute() (#129409) | 2024-12-24 08:33:08 +00:00
quantized_lowerings.py | [ARM][feat]: Add 4 bit dynamic quantization matmuls & KleidiAI Backend (#134124) | 2024-12-20 19:32:03 +00:00
remote_cache.py | Support remote caching requiring redis auth (#141679) | 2024-12-12 17:07:50 +00:00
scheduler.py | [BE] Update triton repo link (#143429) | 2024-12-22 18:38:35 +00:00
script.ld | |
select_algorithm.py | [Inductor XPU] Support max-autotune on XPU and reuse the corresponding Inductor UT. (#143266) | 2024-12-24 05:42:36 +00:00
sizevars.py | Back out "Fix undesired specialization on slice after split. (#142372)" (#143356) | 2024-12-17 09:17:18 +00:00
subgraph_lowering.py | [inductor] Replace set by OrderedSet (#138466) | 2024-12-13 16:08:45 +00:00
test_case.py | Enable autograd cache on inductor tests (#140890) | 2024-11-27 20:41:43 +00:00
test_operators.py | remove allow-untyped-defs for torch/_inductor/test_operators.py (#143436) | 2024-12-18 12:54:25 +00:00
triton_bundler.py | pytorch/features: Make a feature logger and record triton bundling (#141056) | 2024-11-22 01:31:08 +00:00
utils.py | [Inductor XPU] Support max-autotune on XPU and reuse the corresponding Inductor UT. (#143266) | 2024-12-24 05:42:36 +00:00
virtualized.py | [inductor] Replace set by OrderedSet (#138466) | 2024-12-13 16:08:45 +00:00
wrapper_benchmark.py | [inductor] Replace set by OrderedSet (#138466) | 2024-12-13 16:08:45 +00:00