# 1.5 cherry picks
Up to commit 30e7055ed703744e2b9eaa72f6d31a0e96900cb1.

Release Branch: https://github.com/pytorch/pytorch/pull/36070/

Related tracking issues and PRs:

- https://github.com/pytorch/pytorch/issues/36499
- https://github.com/pytorch/pytorch/issues/36798
- https://github.com/pytorch/pytorch/pull/36927/
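Each pick below lands as its own PR against the release branch (the "1.5 PR" column). A minimal sketch of the usual workflow, assuming an `upstream` remote pointing at pytorch/pytorch and the `release/1.5` branch name; the pick branch and PR number are illustrative, not taken from this page:

```sh
# Branch off the 1.5 release branch (branch name is illustrative).
git fetch upstream
git checkout -b cherry-pick-36688 upstream/release/1.5

# Apply the already-merged master commit; -x records
# "(cherry picked from commit ...)" in the message for traceability.
git cherry-pick -x <master-commit-sha>

# Push and open a PR targeting release/1.5.
git push origin cherry-pick-36688
```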
## In Progress

| Master PR | Description | Decision | 1.5 PR | Status |
| --- | --- | --- | --- | --- |
| 36382 | Fixing SyncBN dgrad | Yes, if new RC | 36688 | Merged |
| 36542 | [CI] fix test_distributed for Python 3.8+ | Yes, if new RC | 36687 | Merged |
| 36378 (issue) | CMake targets wrongly forward unknown options to NVCC (v1.5+) | Yes, if new RC | ? | Waiting for fix |
| 36656 | Add a warning for Single-Process Multi-GPU DDP | Yes, if new RC | 36537 | Merged |
| 36656 | Migrate release CI jobs to CircleCI for Windows | Yes, if new RC | 36658 | Merged |
## Reverted PRs we will still need

| Master PR | Description | Decision | 1.5 PR | Status |
| --- | --- | --- | --- | --- |
| 36114 | Update docs for master to remove Python 2 references | ? | ? | Waiting to land on master |
## ONNX

| Master PR | Description | Decision | 1.5 PR | Status |
| --- | --- | --- | --- | --- |
| 35984 | [ONNX] fix size for opset 11 | Yes | 36185 | Merged |

Current understanding: https://github.com/pytorch/pytorch/issues/34718 is the issue that should be fixed; fixing it needs both 35984 and 35744.
## No Current Plans to Cherry-Pick

| Master PR | Description | Decision | 1.5 PR | Status |
| --- | --- | --- | --- | --- |
| 35339 | lshift and rshift on CUDA should match the behavior on CPU | ? | | Waiting for xuhdev |
| 35869 | [ONNX] Added support for constant folding onnx::Add and onnx::Sub | No, enhancement | | |
| 35280 | [ONNX] Fix for constant folding: Slice; added ReduceL1 and ReduceL2 | No, enhancement | | |
| 35318 | [ONNX] Export torch.inverse op | ? | | Waiting for Lu |
| 35467 | Fixes default dtype value for ONNX hardtanh export (opset 11) | Not worth it | | |
| 35744 | [ONNX] Adding a pass to replace interpolate function with `aten::__interpolate` | No, still not landed | | Out |
| 35506 | Fix grid_sample out of boundary when grid contains large numbers | Yes | 36164 | Waiting for CI |
| Something for https://github.com/pytorch/pytorch/issues/35446 | | | | |
## Core

| Master PR | Description | Decision | 1.5 PR | Status |
| --- | --- | --- | --- | --- |
| 35231 | Fix Tensor `__radd__` type hint issue | Yes | 35405 | Merged |
| 35131 | Add TORCH_CUDA_API to FilterDescriptor | Yes | 35406 | Merged |
| 35053 | torch.cat: disallow inputs on different devices | Yes | 35407 | Merged |
| 35150 | Make sure all tensors in a torch.cat sequence have the same dtype | Yes | 35477 | Merged |
| 35253 | Fix handling of non-finite values in topk | Yes | 35435 | Merged |
| 35102 | Eager autocasting, out-of-place ops only (with MSVC 2017 fix) | No: not needed by Uber or MSFT | 35340 | |
## C++ API

| Master PR | Description | Decision | 1.5 PR | Status |
| --- | --- | --- | --- | --- |
| 35001 | Add xor_convergence test for lbfgs | Yes | 35440 | Merged |
| 35022, 35023, 35024, 35025, 35147 | Fix C++ API torch::nn parity bugs | Yes | 35380 | Merged |
| 34957 | Merged Optimizer and LossClosureOptimizer | Yes | 34957 | Merged |
## Build/CI

| Master PR | Description | Decision | 1.5 PR | Status |
| --- | --- | --- | --- | --- |
| 35057 | PyTorch should always depend on `future` | Yes | 35412 | Merged |
| 35069 | Skip ctc_loss test on Windows | Only if Windows tests fail | | |
| 34940 | Install CUDA manually on Windows CI to avoid flakiness | Only if Windows tests fail | | |
## Distributed

| Master PR | Description | Decision | 1.5 PR | Status |
| --- | --- | --- | --- | --- |
| 34755 | Enforce RRef Python pickling to be in the scope of an RPC call | Yes | 35513 | Merged |
| 34689 | Enforce RRef JIT pickling to be in the scope of RPC calls | Yes | 35514 | Merged |
| 34828 | [rpc] fix test_debug_info for Python 3.5 | No: only if tests are flaky | | |
| 35425 | Fix non-deterministic RNG behavior in dist_optimizer tests | No: only if tests are flaky | | |
| 27637 | [jit] make Future type annotation available in Python | No: TorchScript + RPC is experimental | | |
| 33636 | [1.5 Release][RPC Reliability] RRef idempotency and RPC retry enablement | No: too risky | | |
## Docs

| Master PR | Description | Decision | 1.5 PR | Status |
| --- | --- | --- | --- | --- |
| 35109 | [WIP] Refactored rpc docs | Yes | | Merged |
## XLA

| Master PR | Description | Decision | 1.5 PR | Status |
| --- | --- | --- | --- | --- |
| 35449 | Add warning for a known autograd issue on the XLA backend | Yes | 35450 | Merged |