Support for AMD GPU address sanitizer. #3007
Commit 2ee200e
Open
ROCm Repo Management API / Jenkins failed Feb 27, 2026 in 5h 37m 41s

Tests / Test Inductor / Run pytorch_inductor_null: warning in 'junit' step

Tests / Test PyTorch / Test PyTorch / Run pytorch_test_2 / Shell Script

Error in sh step, with arguments ./test_pytorch_test.sh.

script returned exit code 1
Build log
[2026-02-27T00:16:37.008Z] + ./test_pytorch_test.sh
[2026-02-27T00:16:37.008Z] + [[ -z '' ]]
[2026-02-27T00:16:37.008Z] + DEFAULT_EXECUTION_OPTIONS='-e CUSTOM_TEST_ARTIFACT_BUILD_DIR=build/custom_test_artifacts         -e CUSTOM_TEST_ARTIFACTS_FILE=test_artifacts.zip         --copy-whls'
[2026-02-27T00:16:37.008Z] + source prepare_docker_env.sh -e 'EXTENSION_BUILD_GFXARCH=gfx90a;gfx908;gfx942' -e CI=1 -e TEST_CONFIG=default -e SHARD_NUMBER=2 -e CUSTOM_TEST_ARTIFACT_BUILD_DIR=build/custom_test_artifacts -e CUSTOM_TEST_ARTIFACTS_FILE=test_artifacts.zip --copy-whls
[2026-02-27T00:16:37.008Z] ++ set -ex
[2026-02-27T00:16:37.008Z] ++ set -o pipefail
[2026-02-27T00:16:37.008Z] ++ copy_whls=0
[2026-02-27T00:16:37.008Z] ++ envvars=()
[2026-02-27T00:16:37.008Z] ++ [[ 13 -gt 0 ]]
[2026-02-27T00:16:37.008Z] ++ key=-e
[2026-02-27T00:16:37.008Z] ++ case $key in
[2026-02-27T00:16:37.008Z] ++ envvars+=("-e $2")
[2026-02-27T00:16:37.008Z] ++ shift
[2026-02-27T00:16:37.008Z] ++ shift
[2026-02-27T00:16:37.008Z] ++ [[ 11 -gt 0 ]]
[2026-02-27T00:16:37.008Z] ++ key=-e
[2026-02-27T00:16:37.008Z] ++ case $key in
[2026-02-27T00:16:37.008Z] ++ envvars+=("-e $2")
[2026-02-27T00:16:37.008Z] ++ shift
[2026-02-27T00:16:37.008Z] ++ shift
[2026-02-27T00:16:37.008Z] ++ [[ 9 -gt 0 ]]
[2026-02-27T00:16:37.008Z] ++ key=-e
[2026-02-27T00:16:37.008Z] ++ case $key in
[2026-02-27T00:16:37.008Z] ++ envvars+=("-e $2")
[2026-02-27T00:16:37.008Z] ++ shift
[2026-02-27T00:16:37.008Z] ++ shift
[2026-02-27T00:16:37.008Z] ++ [[ 7 -gt 0 ]]
[2026-02-27T00:16:37.008Z] ++ key=-e
[2026-02-27T00:16:37.008Z] ++ case $key in
[2026-02-27T00:16:37.008Z] ++ envvars+=("-e $2")
[2026-02-27T00:16:37.008Z] ++ shift
[2026-02-27T00:16:37.008Z] ++ shift
[2026-02-27T00:16:37.008Z] ++ [[ 5 -gt 0 ]]
[2026-02-27T00:16:37.008Z] ++ key=-e
[2026-02-27T00:16:37.008Z] ++ case $key in
[2026-02-27T00:16:37.008Z] ++ envvars+=("-e $2")
[2026-02-27T00:16:37.008Z] ++ shift
[2026-02-27T00:16:37.008Z] ++ shift
[2026-02-27T00:16:37.008Z] ++ [[ 3 -gt 0 ]]
[2026-02-27T00:16:37.008Z] ++ key=-e
[2026-02-27T00:16:37.008Z] ++ case $key in
[2026-02-27T00:16:37.008Z] ++ envvars+=("-e $2")
[2026-02-27T00:16:37.008Z] ++ shift
[2026-02-27T00:16:37.008Z] ++ shift
[2026-02-27T00:16:37.008Z] ++ [[ 1 -gt 0 ]]
[2026-02-27T00:16:37.008Z] ++ key=--copy-whls
[2026-02-27T00:16:37.008Z] ++ case $key in
[2026-02-27T00:16:37.008Z] ++ copy_whls=1
[2026-02-27T00:16:37.008Z] ++ shift
[2026-02-27T00:16:37.008Z] ++ [[ 0 -gt 0 ]]
[2026-02-27T00:16:37.008Z] ++ envvars+=("-e PREBUILT=${PREBUILT:-false}")
[2026-02-27T00:16:37.008Z] ++ envvars+=("-e TEST_CORE=${TEST_CORE:-false}")
[2026-02-27T00:16:37.008Z] ++ [[ false == \t\r\u\e ]]
[2026-02-27T00:16:37.008Z] ++ [[ true == \t\r\u\e ]]
[2026-02-27T00:16:37.008Z] ++ envvars+=("-e TEST_CORE=${TEST_CORE}")
[2026-02-27T00:16:37.008Z] ++ docker ps -a
[2026-02-27T00:16:37.008Z] CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
[2026-02-27T00:16:37.008Z] ++ docker rm -f pytorch-ci-container
[2026-02-27T00:16:37.008Z] ++ docker pull rocm/pytorch-ci-private:pytorch-linux-noble-rocm7.2-py3.12-c9f5e18bdf8c876928902b7a9eb9ba92c6a57e9f-gfx908_gfx90a_gfx942
[2026-02-27T00:16:41.095Z] Error response from daemon: pull access denied for rocm/pytorch-ci-private, repository does not exist or may require 'docker login'
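The interleaved `++` lines above are bash xtrace output from the option loop in prepare_docker_env.sh: each `[[ N -gt 0 ]]` check is one pass of a shift-based `while` loop that collects `-e VAR=VAL` pairs into `envvars` and flips `copy_whls` on `--copy-whls`. A minimal sketch of that pattern, reconstructed from the trace (the function name and the sample arguments are assumptions, not the actual script):

```shell
#!/usr/bin/env bash
# Shift-based option parsing, mirroring the xtrace above: each "-e VAR=VAL"
# pair is collected into envvars, and --copy-whls flips a flag.
set -euo pipefail

parse_opts() {
  copy_whls=0
  envvars=()
  while [[ $# -gt 0 ]]; do
    key=$1
    case $key in
      -e)
        # The trace stores the flag and its value as one array element: "-e $2"
        envvars+=("-e $2")
        shift 2
        ;;
      --copy-whls)
        copy_whls=1
        shift
        ;;
      *)
        echo "unknown option: $key" >&2
        return 1
        ;;
    esac
  done
}

parse_opts -e CI=1 -e TEST_CONFIG=default -e SHARD_NUMBER=2 --copy-whls
echo "copy_whls=$copy_whls envvars: ${envvars[*]}"
```

In this run the loop consumed six `-e` pairs plus `--copy-whls` (13 arguments total), which is why the counter in the trace drops 13 → 11 → 9 → 7 → 5 → 3 → 1 → 0.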

Tests / Test PyTorch / Test PyTorch / Run pytorch_test_2 / Error signal

Error in error step, with arguments pytorch_test_2 failed.

pytorch_test_2 failed

Tests / Test PyTorch / Test PyTorch / Run pytorch_test_2 / Archive JUnit-formatted test results

Error in junit step.

No test report files were found. Configuration error?
Build log
[2026-02-27T00:16:41.375Z] Recording test results
[2026-02-27T00:16:43.355Z] No test report files were found. Configuration error?

Tests / Test PyTorch / Test PyTorch / Run pytorch_test_2 / Error signal

Error in error step, with arguments Failed to publish test reports xml files.

Failed to publish test reports xml files

Tests / Test Distributed / Test Distributed / Run pytorch_distributed_2 / Shell Script

Error in sh step, with arguments ./test_pytorch_test_distributed.sh.

script returned exit code 1
Build log
Build log truncated.

[2026-02-27T03:08:08.007Z] 
[2026-02-27T03:08:08.007Z] distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_returns_tensor_with_no_grad <- ../../../../opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/distributed/distributed_test.py I0227 03:04:53.019000 1208147 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 0 with pid 1208234
[2026-02-27T03:08:08.007Z] I0227 03:04:53.020000 1208147 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 1 with pid 1208235
[2026-02-27T03:08:08.007Z] I0227 03:04:53.021000 1208147 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 2 with pid 1208236
[2026-02-27T03:08:08.007Z] I0227 03:04:53.022000 1208147 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 3 with pid 1208237
[2026-02-27T03:08:08.007Z] /opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/nn/parallel/distributed.py:949: UserWarning: You passed find_unused_parameters=true to DistributedDataParallel, `_set_static_graph` will detect unused parameters automatically, so you do not need to set find_unused_parameters=true, just be sure these unused parameters will not change during training loop while calling `_set_static_graph`.
[2026-02-27T03:08:08.007Z]   self._set_static_graph()
[2026-02-27T03:08:08.007Z] /opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/nn/parallel/distributed.py:949: UserWarning: You passed find_unused_parameters=true to DistributedDataParallel, `_set_static_graph` will detect unused parameters automatically, so you do not need to set find_unused_parameters=true, just be sure these unused parameters will not change during training loop while calling `_set_static_graph`.
[2026-02-27T03:08:08.007Z]   self._set_static_graph()
[2026-02-27T03:08:08.007Z] /opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/nn/parallel/distributed.py:949: UserWarning: You passed find_unused_parameters=true to DistributedDataParallel, `_set_static_graph` will detect unused parameters automatically, so you do not need to set find_unused_parameters=true, just be sure these unused parameters will not change during training loop while calling `_set_static_graph`.
[2026-02-27T03:08:08.007Z]   self._set_static_graph()
[2026-02-27T03:08:08.007Z] /opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/nn/parallel/distributed.py:949: UserWarning: You passed find_unused_parameters=true to DistributedDataParallel, `_set_static_graph` will detect unused parameters automatically, so you do not need to set find_unused_parameters=true, just be sure these unused parameters will not change during training loop while calling `_set_static_graph`.
[2026-02-27T03:08:08.007Z]   self._set_static_graph()
[2026-02-27T03:08:08.007Z] [rank3]:[W227 03:05:03.308730952 reducer.cpp:1502] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration,  which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator())
[2026-02-27T03:08:08.007Z] [rank2]:[W227 03:05:03.309281747 reducer.cpp:1502] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration,  which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator())
[2026-02-27T03:08:08.007Z] [rank1]:[W227 03:05:03.309334076 reducer.cpp:1502] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration,  which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator())
[2026-02-27T03:08:08.007Z] [rank0]:[W227 03:05:03.309352581 reducer.cpp:1502] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration,  which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator())
[2026-02-27T03:08:08.007Z] PASSED [12.3273s] [100%]
[2026-02-27T03:08:08.007Z] 
[2026-02-27T03:08:08.007Z] - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/dist-nccl-init-file/distributed.test_distributed_spawn/distributed.test_distributed_spawn-4c4a5974c2fd6e42.xml -
[2026-02-27T03:08:08.007Z] ============================== 1 passed in 12.35s ==============================
[2026-02-27T03:08:08.007Z] Test results will be stored in test-reports/dist-nccl-init-file/distributed.test_distributed_spawn/distributed.test_distributed_spawn-d15826d3f500ed58.xml
[2026-02-27T03:08:08.007Z] ============================= test session starts ==============================
[2026-02-27T03:08:08.007Z] platform linux -- Python 3.12.12, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.12/bin/python
[2026-02-27T03:08:08.007Z] cachedir: .pytest_cache
[2026-02-27T03:08:08.007Z] hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow]
[2026-02-27T03:08:08.007Z] rootdir: /var/lib/jenkins/pytorch
[2026-02-27T03:08:08.007Z] configfile: pytest.ini
[2026-02-27T03:08:08.007Z] plugins: xdist-3.3.1, xdoctest-1.3.0, hypothesis-6.56.4, cpp-2.3.0, rerunfailures-14.0, subtests-0.13.1, flakefinder-1.1.0, typeguard-4.3.0
[2026-02-27T03:08:08.007Z] collecting ... collected 1 item
[2026-02-27T03:08:08.007Z] stepcurrent: previously run test not found, not skipping.
[2026-02-27T03:08:08.007Z] Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_shared_grad_acc_unused_params
[2026-02-27T03:08:08.007Z] 
[2026-02-27T03:08:08.007Z] distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_shared_grad_acc_unused_params <- ../../../../opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/distributed/distributed_test.py I0227 03:05:08.514000 1208788 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 0 with pid 1208992
[2026-02-27T03:08:08.007Z] I0227 03:05:08.515000 1208788 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 1 with pid 1208993
[2026-02-27T03:08:08.007Z] I0227 03:05:08.516000 1208788 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 2 with pid 1208994
[2026-02-27T03:08:08.007Z] I0227 03:05:08.517000 1208788 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 3 with pid 1208995
[2026-02-27T03:08:08.007Z] /opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/nn/parallel/distributed.py:949: UserWarning: You passed find_unused_parameters=true to DistributedDataParallel, `_set_static_graph` will detect unused parameters automatically, so you do not need to set find_unused_parameters=true, just be sure these unused parameters will not change during training loop while calling `_set_static_graph`.
[2026-02-27T03:08:08.007Z]   self._set_static_graph()
[2026-02-27T03:08:08.007Z] /opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/nn/parallel/distributed.py:949: UserWarning: You passed find_unused_parameters=true to DistributedDataParallel, `_set_static_graph` will detect unused parameters automatically, so you do not need to set find_unused_parameters=true, just be sure these unused parameters will not change during training loop while calling `_set_static_graph`.
[2026-02-27T03:08:08.007Z]   self._set_static_graph()
[2026-02-27T03:08:08.007Z] /opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/nn/parallel/distributed.py:949: UserWarning: You passed find_unused_parameters=true to DistributedDataParallel, `_set_static_graph` will detect unused parameters automatically, so you do not need to set find_unused_parameters=true, just be sure these unused parameters will not change during training loop while calling `_set_static_graph`.
[2026-02-27T03:08:08.007Z]   self._set_static_graph()
[2026-02-27T03:08:08.007Z] /opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/nn/parallel/distributed.py:949: UserWarning: You passed find_unused_parameters=true to DistributedDataParallel, `_set_static_graph` will detect unused parameters automatically, so you do not need to set find_unused_parameters=true, just be sure these unused parameters will not change during training loop while calling `_set_static_graph`.
[2026-02-27T03:08:08.007Z]   self._set_static_graph()
[2026-02-27T03:08:08.007Z] PASSED [11.8242s] [100%]
[2026-02-27T03:08:08.007Z] 
[2026-02-27T03:08:08.007Z] - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/dist-nccl-init-file/distributed.test_distributed_spawn/distributed.test_distributed_spawn-d15826d3f500ed58.xml -
[2026-02-27T03:08:08.007Z] ============================== 1 passed in 11.84s ==============================
[2026-02-27T03:08:08.007Z] Test results will be stored in test-reports/dist-nccl-init-file/distributed.test_distributed_spawn/distributed.test_distributed_spawn-6cb9668c65b99640.xml
[2026-02-27T03:08:08.007Z] ============================= test session starts ==============================
[2026-02-27T03:08:08.007Z] platform linux -- Python 3.12.12, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.12/bin/python
[2026-02-27T03:08:08.007Z] cachedir: .pytest_cache
[2026-02-27T03:08:08.007Z] hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow]
[2026-02-27T03:08:08.007Z] rootdir: /var/lib/jenkins/pytorch
[2026-02-27T03:08:08.007Z] configfile: pytest.ini
[2026-02-27T03:08:08.007Z] plugins: xdist-3.3.1, xdoctest-1.3.0, hypothesis-6.56.4, cpp-2.3.0, rerunfailures-14.0, subtests-0.13.1, flakefinder-1.1.0, typeguard-4.3.0
[2026-02-27T03:08:08.007Z] collecting ... collected 1 item
[2026-02-27T03:08:08.007Z] stepcurrent: previously run test not found, not skipping.
[2026-02-27T03:08:08.007Z] Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_uneven_input_join_disable
[2026-02-27T03:08:08.007Z] 
[2026-02-27T03:08:08.007Z] distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_uneven_input_join_disable <- ../../../../opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/distributed/distributed_test.py I0227 03:05:23.474000 1210947 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 0 with pid 1211039
[2026-02-27T03:08:08.007Z] I0227 03:05:23.475000 1210947 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 1 with pid 1211040
[2026-02-27T03:08:08.007Z] I0227 03:05:23.476000 1210947 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 2 with pid 1211041
[2026-02-27T03:08:08.007Z] I0227 03:05:23.477000 1210947 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 3 with pid 1211042
[2026-02-27T03:08:08.007Z] PASSED [11.5253s] [100%]
[2026-02-27T03:08:08.007Z] 
[2026-02-27T03:08:08.007Z] - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/dist-nccl-init-file/distributed.test_distributed_spawn/distributed.test_distributed_spawn-6cb9668c65b99640.xml -
[2026-02-27T03:08:08.007Z] ============================== 1 passed in 11.55s ==============================
[2026-02-27T03:08:08.007Z] Test results will be stored in test-reports/dist-nccl-init-file/distributed.test_distributed_spawn/distributed.test_distributed_spawn-f38703e103963da9.xml
[2026-02-27T03:08:08.007Z] ============================= test session starts ==============================
[2026-02-27T03:08:08.007Z] platform linux -- Python 3.12.12, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.12/bin/python
[2026-02-27T03:08:08.007Z] cachedir: .pytest_cache
[2026-02-27T03:08:08.007Z] hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow]
[2026-02-27T03:08:08.007Z] rootdir: /var/lib/jenkins/pytorch
[2026-02-27T03:08:08.007Z] configfile: pytest.ini
[2026-02-27T03:08:08.007Z] plugins: xdist-3.3.1, xdoctest-1.3.0, hypothesis-6.56.4, cpp-2.3.0, rerunfailures-14.0, subtests-0.13.1, flakefinder-1.1.0, typeguard-4.3.0
[2026-02-27T03:08:08.007Z] collecting ... collected 1 item
[2026-02-27T03:08:08.007Z] stepcurrent: previously run test not found, not skipping.
[2026-02-27T03:08:08.007Z] Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_dump_DDP_relevant_env_vars
[2026-02-27T03:08:08.007Z] 
[2026-02-27T03:08:08.007Z] distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_dump_DDP_relevant_env_vars <- ../../../../opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/distributed/distributed_test.py I0227 03:05:38.169000 1211492 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 0 with pid 1211598
[2026-02-27T03:08:08.007Z] I0227 03:05:38.170000 1211492 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 1 with pid 1211599
[2026-02-27T03:08:08.007Z] I0227 03:05:38.171000 1211492 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 2 with pid 1211600
[2026-02-27T03:08:08.007Z] I0227 03:05:38.171000 1211492 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 3 with pid 1211601
[2026-02-27T03:08:08.007Z] PASSED [4.1097s] [100%]
[2026-02-27T03:08:08.007Z] 
[2026-02-27T03:08:08.007Z] - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/dist-nccl-init-file/distributed.test_distributed_spawn/distributed.test_distributed_spawn-f38703e103963da9.xml -
[2026-02-27T03:08:08.007Z] ============================== 1 passed in 4.13s ===============================
[2026-02-27T03:08:08.007Z] Test results will be stored in test-reports/dist-nccl-init-file/distributed.test_distributed_spawn/distributed.test_distributed_spawn-37c2da9a237d66bf.xml
[2026-02-27T03:08:08.007Z] ============================= test session starts ==============================
[2026-02-27T03:08:08.007Z] platform linux -- Python 3.12.12, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.12/bin/python
[2026-02-27T03:08:08.007Z] cachedir: .pytest_cache
[2026-02-27T03:08:08.007Z] hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow]
[2026-02-27T03:08:08.008Z] rootdir: /var/lib/jenkins/pytorch
[2026-02-27T03:08:08.008Z] configfile: pytest.ini
[2026-02-27T03:08:08.008Z] plugins: xdist-3.3.1, xdoctest-1.3.0, hypothesis-6.56.4, cpp-2.3.0, rerunfailures-14.0, subtests-0.13.1, flakefinder-1.1.0, typeguard-4.3.0
[2026-02-27T03:08:08.008Z] collecting ... collected 1 item
[2026-02-27T03:08:08.008Z] stepcurrent: previously run test not found, not skipping.
[2026-02-27T03:08:08.008Z] Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_gather_full_group
[2026-02-27T03:08:08.008Z] 
[2026-02-27T03:08:08.008Z] distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_gather_full_group <- ../../../../opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/distributed/distributed_test.py SKIPPED [0.0006s] (Nccl does not support CPU tensors) [100%]
[2026-02-27T03:08:08.008Z] 
[2026-02-27T03:08:08.008Z] - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/dist-nccl-init-file/distributed.test_distributed_spawn/distributed.test_distributed_spawn-37c2da9a237d66bf.xml -
[2026-02-27T03:08:08.008Z] ============================== 1 skipped in 0.02s ==============================
[2026-02-27T03:08:08.008Z] Test results will be stored in test-reports/dist-nccl-init-file/distributed.test_distributed_spawn/distributed.test_distributed_spawn-a8e7d5fe016c4dfc.xml
[2026-02-27T03:08:08.008Z] ============================= test session starts ==============================
[2026-02-27T03:08:08.008Z] platform linux -- Python 3.12.12, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.12/bin/python
[2026-02-27T03:08:08.008Z] cachedir: .pytest_cache
[2026-02-27T03:08:08.008Z] hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow]
[2026-02-27T03:08:08.008Z] rootdir: /var/lib/jenkins/pytorch
[2026-02-27T03:08:08.008Z] configfile: pytest.ini
[2026-02-27T03:08:08.008Z] plugins: xdist-3.3.1, xdoctest-1.3.0, hypothesis-6.56.4, cpp-2.3.0, rerunfailures-14.0, subtests-0.13.1, flakefinder-1.1.0, typeguard-4.3.0
[2026-02-27T03:08:08.008Z] collecting ... collected 1 item
[2026-02-27T03:08:08.008Z] stepcurrent: previously run test not found, not skipping.
[2026-02-27T03:08:08.008Z] Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_gather_object
[2026-02-27T03:08:08.008Z] 
[2026-02-27T03:08:08.008Z] distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_gather_object <- ../../../../opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/distributed/distributed_test.py I0227 03:05:48.909000 1212245 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 0 with pid 1212409
[2026-02-27T03:08:08.008Z] I0227 03:05:48.910000 1212245 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 1 with pid 1212410
[2026-02-27T03:08:08.008Z] I0227 03:05:48.911000 1212245 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 2 with pid 1212411
[2026-02-27T03:08:08.008Z] I0227 03:05:48.913000 1212245 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 3 with pid 1212412
[2026-02-27T03:08:08.008Z] [rank2]:W0227 03:05:52.450000 1212411 site-packages/torch/distributed/distributed_c10d.py:3155] _object_to_tensor size: 511 hash value: 10835203771810119675
[2026-02-27T03:08:08.008Z] [rank3]:W0227 03:05:52.484000 1212412 site-packages/torch/distributed/distributed_c10d.py:3155] _object_to_tensor size: 18 hash value: 6002723302436841527
[2026-02-27T03:08:08.008Z] [rank1]:W0227 03:05:52.494000 1212410 site-packages/torch/distributed/distributed_c10d.py:3155] _object_to_tensor size: 97 hash value: 2395169942717765378
[2026-02-27T03:08:08.008Z] [rank0]:W0227 03:05:52.509000 1212409 site-packages/torch/distributed/distributed_c10d.py:3155] _object_to_tensor size: 54 hash value: 15559783960315060411
[2026-02-27T03:08:08.008Z] [rank0]:W0227 03:05:58.911000 1212409 site-packages/torch/distributed/distributed_c10d.py:3170] _tensor_to_object size: 511 hash value: 16535821244606465813
[2026-02-27T03:08:08.008Z] [rank0]:W0227 03:05:58.912000 1212409 site-packages/torch/distributed/distributed_c10d.py:3170] _tensor_to_object size: 511 hash value: 16535821244606465813
[2026-02-27T03:08:08.008Z] [rank0]:W0227 03:05:58.912000 1212409 site-packages/torch/distributed/distributed_c10d.py:3170] _tensor_to_object size: 511 hash value: 16535821244606465813
[2026-02-27T03:08:08.008Z] [rank0]:W0227 03:05:58.914000 1212409 site-packages/torch/distributed/distributed_c10d.py:3170] _tensor_to_object size: 511 hash value: 16535821244606465813
[2026-02-27T03:08:08.008Z] /opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/distributed/c10d_logger.py:83: UserWarning: barrier(): using the device under current context. You can specify `device_id` in `init_process_group` to mute this warning.
[2026-02-27T03:08:08.008Z]   return func(*args, **kwargs)
[2026-02-27T03:08:08.008Z] PASSED [11.4227s] [100%]
[2026-02-27T03:08:08.008Z] 
[2026-02-27T03:08:08.008Z] - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/dist-nccl-init-file/distributed.test_distributed_spawn/distributed.test_distributed_spawn-a8e7d5fe016c4dfc.xml -
[2026-02-27T03:08:08.008Z] ============================== 1 passed in 11.44s ==============================
[2026-02-27T03:08:08.008Z] Test results will be stored in test-reports/dist-nccl-init-file/distributed.test_distributed_spawn/distributed.test_distributed_spawn-d7d9ca2a42ccd63b.xml
[2026-02-27T03:08:08.008Z] ============================= test session starts ==============================
[2026-02-27T03:08:08.008Z] platform linux -- Python 3.12.12, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.12/bin/python
[2026-02-27T03:08:08.008Z] cachedir: .pytest_cache
[2026-02-27T03:08:08.008Z] hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow]
[2026-02-27T03:08:08.008Z] rootdir: /var/lib/jenkins/pytorch
[2026-02-27T03:08:08.008Z] configfile: pytest.ini
[2026-02-27T03:08:08.008Z] plugins: xdist-3.3.1, xdoctest-1.3.0, hypothesis-6.56.4, cpp-2.3.0, rerunfailures-14.0, subtests-0.13.1, flakefinder-1.1.0, typeguard-4.3.0
[2026-02-27T03:08:08.008Z] collecting ... collected 1 item
[2026-02-27T03:08:08.008Z] stepcurrent: previously run test not found, not skipping.
[2026-02-27T03:08:08.008Z] Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_monitored_barrier_gloo
[2026-02-27T03:08:08.008Z] 
[2026-02-27T03:08:08.008Z] distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_monitored_barrier_gloo <- ../../../../opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/distributed/distributed_test.py SKIPPED [0.0005s] (Test requires backend nccl to be one of {'gloo'}) [100%]
[2026-02-27T03:08:08.008Z] 
[2026-02-27T03:08:08.008Z] - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/dist-nccl-init-file/distributed.test_distributed_spawn/distributed.test_distributed_spawn-d7d9ca2a42ccd63b.xml -
[2026-02-27T03:08:08.008Z] ============================== 1 skipped in 0.02s ==============================
[2026-02-27T03:08:08.008Z] Test results will be stored in test-reports/dist-nccl-init-file/distributed.test_distributed_spawn/distributed.test_distributed_spawn-92cad00b2dd7ca5e.xml
[2026-02-27T03:08:08.008Z] ============================= test session starts ==============================
[2026-02-27T03:08:08.008Z] platform linux -- Python 3.12.12, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.12/bin/python
[2026-02-27T03:08:08.008Z] cachedir: .pytest_cache
[2026-02-27T03:08:08.008Z] hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow]
[2026-02-27T03:08:08.008Z] rootdir: /var/lib/jenkins/pytorch
[2026-02-27T03:08:08.008Z] configfile: pytest.ini
[2026-02-27T03:08:08.008Z] plugins: xdist-3.3.1, xdoctest-1.3.0, hypothesis-6.56.4, cpp-2.3.0, rerunfailures-14.0, subtests-0.13.1, flakefinder-1.1.0, typeguard-4.3.0
[2026-02-27T03:08:08.008Z] collecting ... collected 1 item
[2026-02-27T03:08:08.008Z] stepcurrent: previously run test not found, not skipping.
[2026-02-27T03:08:08.008Z] Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_new_subgroups_group_size_exceeds_world_size
[2026-02-27T03:08:08.008Z] 
[2026-02-27T03:08:08.008Z] distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_new_subgroups_group_size_exceeds_world_size <- ../../../../opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/distributed/distributed_test.py I0227 03:06:06.579000 1213143 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 0 with pid 1213230
[2026-02-27T03:08:08.008Z] I0227 03:06:06.580000 1213143 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 1 with pid 1213231
[2026-02-27T03:08:08.008Z] I0227 03:06:06.580000 1213143 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 2 with pid 1213232
[2026-02-27T03:08:08.008Z] I0227 03:06:06.581000 1213143 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 3 with pid 1213233
[2026-02-27T03:08:08.008Z] PASSED [4.0106s] [100%]
[2026-02-27T03:08:08.008Z] 
[2026-02-27T03:08:08.008Z] - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/dist-nccl-init-file/distributed.test_distributed_spawn/distributed.test_distributed_spawn-92cad00b2dd7ca5e.xml -
[2026-02-27T03:08:08.008Z] ============================== 1 passed in 4.03s ===============================
[2026-02-27T03:08:08.008Z] Test results will be stored in test-reports/dist-nccl-init-file/distributed.test_distributed_spawn/distributed.test_distributed_spawn-c90b82359ce18e3a.xml
[2026-02-27T03:08:08.008Z] ============================= test session starts ==============================
[2026-02-27T03:08:08.008Z] platform linux -- Python 3.12.12, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.12/bin/python
[2026-02-27T03:08:08.008Z] cachedir: .pytest_cache
[2026-02-27T03:08:08.008Z] hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow]
[2026-02-27T03:08:08.008Z] rootdir: /var/lib/jenkins/pytorch
[2026-02-27T03:08:08.008Z] configfile: pytest.ini
[2026-02-27T03:08:08.008Z] plugins: xdist-3.3.1, xdoctest-1.3.0, hypothesis-6.56.4, cpp-2.3.0, rerunfailures-14.0, subtests-0.13.1, flakefinder-1.1.0, typeguard-4.3.0
[2026-02-27T03:08:08.008Z] collecting ... collected 1 item
[2026-02-27T03:08:08.008Z] stepcurrent: previously run test not found, not skipping.
[2026-02-27T03:08:08.008Z] Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_output_unused_in_loss_tuple_module
[2026-02-27T03:08:08.008Z] 
[2026-02-27T03:08:08.008Z] distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_output_unused_in_loss_tuple_module <- ../../../../opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/distributed/distributed_test.py I0227 03:06:13.729000 1213571 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 0 with pid 1213677
[2026-02-27T03:08:08.008Z] I0227 03:06:13.731000 1213571 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 1 with pid 1213678
[2026-02-27T03:08:08.008Z] I0227 03:06:13.731000 1213571 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 2 with pid 1213679
[2026-02-27T03:08:08.008Z] I0227 03:06:13.732000 1213571 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 3 with pid 1213680
[2026-02-27T03:08:08.008Z] /opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/distributed/c10d_logger.py:83: UserWarning: barrier(): using the device under current context. You can specify `device_id` in `init_process_group` to mute this warning.
[2026-02-27T03:08:08.008Z]   return func(*args, **kwargs)
[2026-02-27T03:08:08.008Z] PASSED [12.6228s] [100%]
[2026-02-27T03:08:08.008Z] 
[2026-02-27T03:08:08.008Z] - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/dist-nccl-init-file/distributed.test_distributed_spawn/distributed.test_distributed_spawn-c90b82359ce18e3a.xml -
[2026-02-27T03:08:08.008Z] ============================== 1 passed in 12.64s ==============================
[2026-02-27T03:08:08.008Z] Test results will be stored in test-reports/dist-nccl-init-file/distributed.test_distributed_spawn/distributed.test_distributed_spawn-4e17169feceaab9e.xml
[2026-02-27T03:08:08.008Z] ============================= test session starts ==============================
[2026-02-27T03:08:08.008Z] platform linux -- Python 3.12.12, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.12/bin/python
[2026-02-27T03:08:08.008Z] cachedir: .pytest_cache
[2026-02-27T03:08:08.008Z] hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow]
[2026-02-27T03:08:08.008Z] rootdir: /var/lib/jenkins/pytorch
[2026-02-27T03:08:08.008Z] configfile: pytest.ini
[2026-02-27T03:08:08.008Z] plugins: xdist-3.3.1, xdoctest-1.3.0, hypothesis-6.56.4, cpp-2.3.0, rerunfailures-14.0, subtests-0.13.1, flakefinder-1.1.0, typeguard-4.3.0
[2026-02-27T03:08:08.008Z] collecting ... collected 1 item
[2026-02-27T03:08:08.008Z] stepcurrent: previously run test not found, not skipping.
[2026-02-27T03:08:08.008Z] Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_post_localSGD_optimizer_parity_with_hierarchical_sgd_grad_is_view
[2026-02-27T03:08:08.008Z] 
[2026-02-27T03:08:08.008Z] distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_post_localSGD_optimizer_parity_with_hierarchical_sgd_grad_is_view <- ../../../../opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/distributed/distributed_test.py I0227 03:06:29.541000 1214155 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 0 with pid 1214260
[2026-02-27T03:08:08.008Z] I0227 03:06:29.542000 1214155 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 1 with pid 1214261
[2026-02-27T03:08:08.008Z] I0227 03:06:29.543000 1214155 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 2 with pid 1214262
[2026-02-27T03:08:08.008Z] I0227 03:06:29.544000 1214155 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 3 with pid 1214263
[2026-02-27T03:08:08.008Z] PASSED [14.7244s] [100%]
[2026-02-27T03:08:08.008Z] 
[2026-02-27T03:08:08.008Z] - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/dist-nccl-init-file/distributed.test_distributed_spawn/distributed.test_distributed_spawn-4e17169feceaab9e.xml -
[2026-02-27T03:08:08.008Z] ============================== 1 passed in 14.74s ==============================
[2026-02-27T03:08:08.008Z] Test results will be stored in test-reports/dist-nccl-init-file/distributed.test_distributed_spawn/distributed.test_distributed_spawn-db3648c41977466c.xml
[2026-02-27T03:08:08.008Z] ============================= test session starts ==============================
[2026-02-27T03:08:08.008Z] platform linux -- Python 3.12.12, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.12/bin/python
[2026-02-27T03:08:08.008Z] cachedir: .pytest_cache
[2026-02-27T03:08:08.008Z] hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow]
[2026-02-27T03:08:08.008Z] rootdir: /var/lib/jenkins/pytorch
[2026-02-27T03:08:08.008Z] configfile: pytest.ini
[2026-02-27T03:08:08.008Z] plugins: xdist-3.3.1, xdoctest-1.3.0, hypothesis-6.56.4, cpp-2.3.0, rerunfailures-14.0, subtests-0.13.1, flakefinder-1.1.0, typeguard-4.3.0
[2026-02-27T03:08:08.008Z] collecting ... collected 1 item
[2026-02-27T03:08:08.008Z] stepcurrent: previously run test not found, not skipping.
[2026-02-27T03:08:08.008Z] Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_full_group_max
[2026-02-27T03:08:08.008Z] 
[2026-02-27T03:08:08.008Z] distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_full_group_max <- ../../../../opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/distributed/distributed_test.py SKIPPED [0.0006s] (Nccl does not support CPU tensors) [100%]
[2026-02-27T03:08:08.008Z] 
[2026-02-27T03:08:08.008Z] - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/dist-nccl-init-file/distributed.test_distributed_spawn/distributed.test_distributed_spawn-db3648c41977466c.xml -
[2026-02-27T03:08:08.008Z] ============================== 1 skipped in 0.02s ==============================
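The `SKIPPED ... (Nccl does not support CPU tensors)` sessions above come from backend-conditional skip decorators in the test suite: NCCL collectives only operate on GPU tensors, so CPU-tensor variants are skipped when NCCL is the active backend. A minimal sketch of that pattern, assuming a hypothetical `BACKEND` variable standing in for whatever the harness selects (not PyTorch's actual decorator):

```python
import io
import unittest

BACKEND = "nccl"  # assumption: stand-in for the backend chosen by the test harness

def skip_if_backend_lacks_cpu(fn):
    """Skip a test when the active backend cannot handle CPU tensors (NCCL is GPU-only)."""
    return unittest.skipIf(BACKEND == "nccl", "Nccl does not support CPU tensors")(fn)

class ReduceTests(unittest.TestCase):
    @skip_if_backend_lacks_cpu
    def test_reduce_max_cpu(self):
        # Would exercise a CPU-tensor reduce; skipped entirely under NCCL.
        self.assertEqual(max([1, 2, 3]), 3)

if __name__ == "__main__":
    # Running this reports "1 skipped", mirroring the per-test sessions in the log.
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(ReduceTests)
    unittest.TextTestRunner(stream=io.StringIO()).run(suite)
```

This is why each such session finishes in ~0.02s: the skip fires before any process-group spawn happens.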
[2026-02-27T03:08:08.008Z] Test results will be stored in test-reports/dist-nccl-init-file/distributed.test_distributed_spawn/distributed.test_distributed_spawn-d8469afd5b68d234.xml
[2026-02-27T03:08:08.008Z] ============================= test session starts ==============================
[2026-02-27T03:08:08.008Z] platform linux -- Python 3.12.12, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.12/bin/python
[2026-02-27T03:08:08.008Z] cachedir: .pytest_cache
[2026-02-27T03:08:08.008Z] hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow]
[2026-02-27T03:08:08.008Z] rootdir: /var/lib/jenkins/pytorch
[2026-02-27T03:08:08.008Z] configfile: pytest.ini
[2026-02-27T03:08:08.008Z] plugins: xdist-3.3.1, xdoctest-1.3.0, hypothesis-6.56.4, cpp-2.3.0, rerunfailures-14.0, subtests-0.13.1, flakefinder-1.1.0, typeguard-4.3.0
[2026-02-27T03:08:08.008Z] collecting ... collected 1 item
[2026-02-27T03:08:08.008Z] stepcurrent: previously run test not found, not skipping.
[2026-02-27T03:08:08.008Z] Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_max
[2026-02-27T03:08:08.008Z] 
[2026-02-27T03:08:08.008Z] distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_max <- ../../../../opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/distributed/distributed_test.py SKIPPED [0.0005s] (Nccl does not support CPU tensors) [100%]
[2026-02-27T03:08:08.008Z] 
[2026-02-27T03:08:08.008Z] - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/dist-nccl-init-file/distributed.test_distributed_spawn/distributed.test_distributed_spawn-d8469afd5b68d234.xml -
[2026-02-27T03:08:08.008Z] ============================== 1 skipped in 0.02s ==============================
[2026-02-27T03:08:08.008Z] Test results will be stored in test-reports/dist-nccl-init-file/distributed.test_distributed_spawn/distributed.test_distributed_spawn-a94903b2138a6f23.xml
[2026-02-27T03:08:08.008Z] ============================= test session starts ==============================
[2026-02-27T03:08:08.008Z] platform linux -- Python 3.12.12, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.12/bin/python
[2026-02-27T03:08:08.008Z] cachedir: .pytest_cache
[2026-02-27T03:08:08.008Z] hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow]
[2026-02-27T03:08:08.008Z] rootdir: /var/lib/jenkins/pytorch
[2026-02-27T03:08:08.008Z] configfile: pytest.ini
[2026-02-27T03:08:08.008Z] plugins: xdist-3.3.1, xdoctest-1.3.0, hypothesis-6.56.4, cpp-2.3.0, rerunfailures-14.0, subtests-0.13.1, flakefinder-1.1.0, typeguard-4.3.0
[2026-02-27T03:08:08.008Z] collecting ... collected 1 item
[2026-02-27T03:08:08.008Z] stepcurrent: previously run test not found, not skipping.
[2026-02-27T03:08:08.008Z] Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_product
[2026-02-27T03:08:08.008Z] 
[2026-02-27T03:08:08.008Z] distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_product <- ../../../../opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/distributed/distributed_test.py SKIPPED [0.0005s] (Nccl does not support CPU tensors) [100%]
[2026-02-27T03:08:08.008Z] 
[2026-02-27T03:08:08.008Z] - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/dist-nccl-init-file/distributed.test_distributed_spawn/distributed.test_distributed_spawn-a94903b2138a6f23.xml -
[2026-02-27T03:08:08.008Z] ============================== 1 skipped in 0.02s ==============================
[2026-02-27T03:08:08.008Z] Test results will be stored in test-reports/dist-nccl-init-file/distributed.test_distributed_spawn/distributed.test_distributed_spawn-39f018e51b502422.xml
[2026-02-27T03:08:08.008Z] ============================= test session starts ==============================
[2026-02-27T03:08:08.008Z] platform linux -- Python 3.12.12, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.12/bin/python
[2026-02-27T03:08:08.008Z] cachedir: .pytest_cache
[2026-02-27T03:08:08.008Z] hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow]
[2026-02-27T03:08:08.008Z] rootdir: /var/lib/jenkins/pytorch
[2026-02-27T03:08:08.008Z] configfile: pytest.ini
[2026-02-27T03:08:08.008Z] plugins: xdist-3.3.1, xdoctest-1.3.0, hypothesis-6.56.4, cpp-2.3.0, rerunfailures-14.0, subtests-0.13.1, flakefinder-1.1.0, typeguard-4.3.0
[2026-02-27T03:08:08.008Z] collecting ... collected 1 item
[2026-02-27T03:08:08.008Z] stepcurrent: previously run test not found, not skipping.
[2026-02-27T03:08:08.008Z] Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_sum_twice
[2026-02-27T03:08:08.008Z] 
[2026-02-27T03:08:08.008Z] distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_reduce_sum_twice <- ../../../../opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/distributed/distributed_test.py SKIPPED [0.0005s] (Nccl does not support CPU tensors) [100%]
[2026-02-27T03:08:08.008Z] 
[2026-02-27T03:08:08.008Z] - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/dist-nccl-init-file/distributed.test_distributed_spawn/distributed.test_distributed_spawn-39f018e51b502422.xml -
[2026-02-27T03:08:08.008Z] ============================== 1 skipped in 0.02s ==============================
[2026-02-27T03:08:08.008Z] Test results will be stored in test-reports/dist-nccl-init-file/distributed.test_distributed_spawn/distributed.test_distributed_spawn-f89e6468363a413e.xml
[2026-02-27T03:08:08.008Z] ============================= test session starts ==============================
[2026-02-27T03:08:08.008Z] platform linux -- Python 3.12.12, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.12/bin/python
[2026-02-27T03:08:08.008Z] cachedir: .pytest_cache
[2026-02-27T03:08:08.008Z] hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow]
[2026-02-27T03:08:08.008Z] rootdir: /var/lib/jenkins/pytorch
[2026-02-27T03:08:08.008Z] configfile: pytest.ini
[2026-02-27T03:08:08.008Z] plugins: xdist-3.3.1, xdoctest-1.3.0, hypothesis-6.56.4, cpp-2.3.0, rerunfailures-14.0, subtests-0.13.1, flakefinder-1.1.0, typeguard-4.3.0
[2026-02-27T03:08:08.008Z] collecting ... collected 1 item
[2026-02-27T03:08:08.008Z] stepcurrent: previously run test not found, not skipping.
[2026-02-27T03:08:08.008Z] Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_scatter_full_group
[2026-02-27T03:08:08.008Z] 
[2026-02-27T03:08:08.008Z] distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_scatter_full_group <- ../../../../opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/distributed/distributed_test.py SKIPPED [0.0005s] (Nccl does not support CPU tensors) [100%]
[2026-02-27T03:08:08.008Z] 
[2026-02-27T03:08:08.008Z] - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/dist-nccl-init-file/distributed.test_distributed_spawn/distributed.test_distributed_spawn-f89e6468363a413e.xml -
[2026-02-27T03:08:08.008Z] ============================== 1 skipped in 0.02s ==============================
[2026-02-27T03:08:08.008Z] Test results will be stored in test-reports/dist-nccl-init-file/distributed.test_distributed_spawn/distributed.test_distributed_spawn-22c3736a0d2c5885.xml
[2026-02-27T03:08:08.008Z] ============================= test session starts ==============================
[2026-02-27T03:08:08.008Z] platform linux -- Python 3.12.12, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.12/bin/python
[2026-02-27T03:08:08.008Z] cachedir: .pytest_cache
[2026-02-27T03:08:08.008Z] hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow]
[2026-02-27T03:08:08.008Z] rootdir: /var/lib/jenkins/pytorch
[2026-02-27T03:08:08.008Z] configfile: pytest.ini
[2026-02-27T03:08:08.008Z] plugins: xdist-3.3.1, xdoctest-1.3.0, hypothesis-6.56.4, cpp-2.3.0, rerunfailures-14.0, subtests-0.13.1, flakefinder-1.1.0, typeguard-4.3.0
[2026-02-27T03:08:08.008Z] collecting ... collected 1 item
[2026-02-27T03:08:08.008Z] stepcurrent: previously run test not found, not skipping.
[2026-02-27T03:08:08.008Z] Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_send_recv
[2026-02-27T03:08:08.008Z] 
[2026-02-27T03:08:08.008Z] distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_send_recv <- ../../../../opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/distributed/distributed_test.py SKIPPED [0.0005s] (Nccl send/recv tested by test_send_recv_nccl) [100%]
[2026-02-27T03:08:08.008Z] 
[2026-02-27T03:08:08.008Z] - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/dist-nccl-init-file/distributed.test_distributed_spawn/distributed.test_distributed_spawn-22c3736a0d2c5885.xml -
[2026-02-27T03:08:08.008Z] ============================== 1 skipped in 0.02s ==============================
[2026-02-27T03:08:08.008Z] Test results will be stored in test-reports/dist-nccl-init-file/distributed.test_distributed_spawn/distributed.test_distributed_spawn-1656f867943b531d.xml
[2026-02-27T03:08:08.008Z] ============================= test session starts ==============================
[2026-02-27T03:08:08.008Z] platform linux -- Python 3.12.12, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.12/bin/python
[2026-02-27T03:08:08.008Z] cachedir: .pytest_cache
[2026-02-27T03:08:08.008Z] hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow]
[2026-02-27T03:08:08.008Z] rootdir: /var/lib/jenkins/pytorch
[2026-02-27T03:08:08.008Z] configfile: pytest.ini
[2026-02-27T03:08:08.008Z] plugins: xdist-3.3.1, xdoctest-1.3.0, hypothesis-6.56.4, cpp-2.3.0, rerunfailures-14.0, subtests-0.13.1, flakefinder-1.1.0, typeguard-4.3.0
[2026-02-27T03:08:08.008Z] collecting ... collected 1 item
[2026-02-27T03:08:08.008Z] stepcurrent: previously run test not found, not skipping.
[2026-02-27T03:08:08.008Z] Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_send_recv_any_source_autograd_profiler
[2026-02-27T03:08:08.008Z] 
[2026-02-27T03:08:08.008Z] distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_send_recv_any_source_autograd_profiler <- ../../../../opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/distributed/distributed_test.py SKIPPED [0.0005s] (nccl does not support send/recv from any source) [100%]
[2026-02-27T03:08:08.008Z] 
[2026-02-27T03:08:08.008Z] - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/dist-nccl-init-file/distributed.test_distributed_spawn/distributed.test_distributed_spawn-1656f867943b531d.xml -
[2026-02-27T03:08:08.008Z] ============================== 1 skipped in 0.02s ==============================
[2026-02-27T03:08:08.008Z] Test results will be stored in test-reports/dist-nccl-init-file/distributed.test_distributed_spawn/distributed.test_distributed_spawn-e242359f48905377.xml
[2026-02-27T03:08:08.008Z] ============================= test session starts ==============================
[2026-02-27T03:08:08.008Z] platform linux -- Python 3.12.12, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.12/bin/python
[2026-02-27T03:08:08.008Z] cachedir: .pytest_cache
[2026-02-27T03:08:08.008Z] hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow]
[2026-02-27T03:08:08.008Z] rootdir: /var/lib/jenkins/pytorch
[2026-02-27T03:08:08.008Z] configfile: pytest.ini
[2026-02-27T03:08:08.008Z] plugins: xdist-3.3.1, xdoctest-1.3.0, hypothesis-6.56.4, cpp-2.3.0, rerunfailures-14.0, subtests-0.13.1, flakefinder-1.1.0, typeguard-4.3.0
[2026-02-27T03:08:08.008Z] collecting ... collected 1 item
[2026-02-27T03:08:08.008Z] stepcurrent: previously run test not found, not skipping.
[2026-02-27T03:08:08.008Z] Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_send_recv_nccl_torch_profiler
[2026-02-27T03:08:08.008Z] 
[2026-02-27T03:08:08.008Z] distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_send_recv_nccl_torch_profiler <- ../../../../opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/distributed/distributed_test.py I0227 03:07:09.612000 1216026 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 0 with pid 1216118
[2026-02-27T03:08:08.008Z] I0227 03:07:09.613000 1216026 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 1 with pid 1216119
[2026-02-27T03:08:08.008Z] I0227 03:07:09.614000 1216026 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 2 with pid 1216120
[2026-02-27T03:08:08.008Z] I0227 03:07:09.615000 1216026 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 3 with pid 1216121
[2026-02-27T03:08:08.008Z] /opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/profiler/profiler.py:224: UserWarning: Warning: Profiler clears events at the end of each cycle.Only events from the current cycle will be reported.To keep events across cycles, set acc_events=True.
[2026-02-27T03:08:08.008Z]   _warn_once(
[2026-02-27T03:08:08.008Z] [rank3]:[W227 03:07:13.459369218 ProcessGroupNCCL.cpp:4095] Warning: An unbatched P2P op (send/recv) was called on this ProcessGroup with size 4.  In lazy initialization mode, this will result in a new 2-rank NCCL communicator to be created. (function operator())
[2026-02-27T03:08:08.008Z] /opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/profiler/profiler.py:224: UserWarning: Warning: Profiler clears events at the end of each cycle.Only events from the current cycle will be reported.To keep events across cycles, set acc_events=True.
[2026-02-27T03:08:08.008Z]   _warn_once(
[2026-02-27T03:08:08.008Z] [rank2]:[W227 03:07:13.471381412 ProcessGroupNCCL.cpp:4095] Warning: An unbatched P2P op (send/recv) was called on this ProcessGroup with size 4.  In lazy initialization mode, this will result in a new 2-rank NCCL communicator to be created. (function operator())
[2026-02-27T03:08:08.008Z] /opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/profiler/profiler.py:224: UserWarning: Warning: Profiler clears events at the end of each cycle.Only events from the current cycle will be reported.To keep events across cycles, set acc_events=True.
[2026-02-27T03:08:08.008Z]   _warn_once(
[2026-02-27T03:08:08.008Z] [rank0]:[W227 03:07:13.508284859 ProcessGroupNCCL.cpp:4095] Warning: An unbatched P2P op (send/recv) was called on this ProcessGroup with size 4.  In lazy initialization mode, this will result in a new 2-rank NCCL communicator to be created. (function operator())
[2026-02-27T03:08:08.008Z] /opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/profiler/profiler.py:224: UserWarning: Warning: Profiler clears events at the end of each cycle.Only events from the current cycle will be reported.To keep events across cycles, set acc_events=True.
[2026-02-27T03:08:08.008Z]   _warn_once(
[2026-02-27T03:08:08.008Z] [rank1]:[W227 03:07:13.510462258 ProcessGroupNCCL.cpp:4095] Warning: An unbatched P2P op (send/recv) was called on this ProcessGroup with size 4.  In lazy initialization mode, this will result in a new 2-rank NCCL communicator to be created. (function operator())
[2026-02-27T03:08:08.008Z] [rank1]:[W227 03:07:18.366071545 ProcessGroupNCCL.cpp:4095] Warning: An unbatched P2P op (send/recv) was called on this ProcessGroup with size 4.  In lazy initialization mode, this will result in a new 2-rank NCCL communicator to be created. (function operator())
[2026-02-27T03:08:08.008Z] [rank0]:[W227 03:07:27.400732679 ProcessGroupNCCL.cpp:4095] Warning: An unbatched P2P op (send/recv) was called on this ProcessGroup with size 4.  In lazy initialization mode, this will result in a new 2-rank NCCL communicator to be created. (function operator())
[2026-02-27T03:08:08.008Z] [rank2]:[W227 03:07:28.821193118 ProcessGroupNCCL.cpp:4095] Warning: An unbatched P2P op (send/recv) was called on this ProcessGroup with size 4.  In lazy initialization mode, this will result in a new 2-rank NCCL communicator to be created. (function operator())
[2026-02-27T03:08:08.008Z] [rank3]:[W227 03:07:29.084568876 ProcessGroupNCCL.cpp:4095] Warning: An unbatched P2P op (send/recv) was called on this ProcessGroup with size 4.  In lazy initialization mode, this will result in a new 2-rank NCCL communicator to be created. (function operator())
[2026-02-27T03:08:08.008Z] [rank2]:[W227 03:07:29.221287447 collection.cpp:1148] Warning: ROCTracer produced duplicate flow start: 3 (function operator())
[2026-02-27T03:08:08.008Z] [rank1]:[W227 03:07:29.232445080 collection.cpp:1148] Warning: ROCTracer produced duplicate flow start: 3 (function operator())
[2026-02-27T03:08:08.008Z] [rank0]:[W227 03:07:29.311861941 collection.cpp:1148] Warning: ROCTracer produced duplicate flow start: 3 (function operator())
[2026-02-27T03:08:08.008Z] [rank3]:[W227 03:07:29.321678171 collection.cpp:1148] Warning: ROCTracer produced duplicate flow start: 3 (function operator())
[2026-02-27T03:08:08.008Z] PASSED [21.8347s] [100%]
[2026-02-27T03:08:08.008Z] 
[2026-02-27T03:08:08.008Z] - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/dist-nccl-init-file/distributed.test_distributed_spawn/distributed.test_distributed_spawn-e242359f48905377.xml -
[2026-02-27T03:08:08.008Z] ============================== 1 passed in 21.85s ==============================
[2026-02-27T03:08:08.008Z] Test results will be stored in test-reports/dist-nccl-init-file/distributed.test_distributed_spawn/distributed.test_distributed_spawn-23c3b44f6afce5c3.xml
[2026-02-27T03:08:08.008Z] ============================= test session starts ==============================
[2026-02-27T03:08:08.008Z] platform linux -- Python 3.12.12, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.12/bin/python
[2026-02-27T03:08:08.008Z] cachedir: .pytest_cache
[2026-02-27T03:08:08.008Z] hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow]
[2026-02-27T03:08:08.008Z] rootdir: /var/lib/jenkins/pytorch
[2026-02-27T03:08:08.008Z] configfile: pytest.ini
[2026-02-27T03:08:08.008Z] plugins: xdist-3.3.1, xdoctest-1.3.0, hypothesis-6.56.4, cpp-2.3.0, rerunfailures-14.0, subtests-0.13.1, flakefinder-1.1.0, typeguard-4.3.0
[2026-02-27T03:08:08.008Z] collecting ... collected 1 item
[2026-02-27T03:08:08.008Z] stepcurrent: previously run test not found, not skipping.
[2026-02-27T03:08:08.008Z] Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_sparse_all_reduce_sum_cuda
[2026-02-27T03:08:08.008Z] 
[2026-02-27T03:08:08.008Z] distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_sparse_all_reduce_sum_cuda <- ../../../../opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/distributed/distributed_test.py SKIPPED [0.0005s] (Only Gloo backend support sparse all reduce) [100%]
[2026-02-27T03:08:08.008Z] 
[2026-02-27T03:08:08.008Z] - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/dist-nccl-init-file/distributed.test_distributed_spawn/distributed.test_distributed_spawn-23c3b44f6afce5c3.xml -
[2026-02-27T03:08:08.008Z] ============================== 1 skipped in 0.02s ==============================
[2026-02-27T03:08:08.008Z] Test results will be stored in test-reports/dist-nccl-init-file/distributed.test_distributed_spawn/distributed.test_distributed_spawn-7076c7d52408b04a.xml
[2026-02-27T03:08:08.008Z] ============================= test session starts ==============================
[2026-02-27T03:08:08.008Z] platform linux -- Python 3.12.12, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.12/bin/python
[2026-02-27T03:08:08.008Z] cachedir: .pytest_cache
[2026-02-27T03:08:08.008Z] hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow]
[2026-02-27T03:08:08.008Z] rootdir: /var/lib/jenkins/pytorch
[2026-02-27T03:08:08.008Z] configfile: pytest.ini
[2026-02-27T03:08:08.008Z] plugins: xdist-3.3.1, xdoctest-1.3.0, hypothesis-6.56.4, cpp-2.3.0, rerunfailures-14.0, subtests-0.13.1, flakefinder-1.1.0, typeguard-4.3.0
[2026-02-27T03:08:08.008Z] collecting ... collected 1 item
[2026-02-27T03:08:08.008Z] stepcurrent: previously run test not found, not skipping.
[2026-02-27T03:08:08.008Z] Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_stateless_api_with_ddp
[2026-02-27T03:08:08.008Z] 
[2026-02-27T03:08:08.008Z] distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_stateless_api_with_ddp <- ../../../../opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/distributed/distributed_test.py I0227 03:07:37.713000 1216908 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 0 with pid 1217014
[2026-02-27T03:08:08.008Z] I0227 03:07:37.714000 1216908 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 1 with pid 1217015
[2026-02-27T03:08:08.008Z] I0227 03:07:37.715000 1216908 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 2 with pid 1217016
[2026-02-27T03:08:08.008Z] I0227 03:07:37.715000 1216908 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 3 with pid 1217017
[2026-02-27T03:08:08.008Z] PASSED [11.7178s] [100%]
[2026-02-27T03:08:08.008Z] 
[2026-02-27T03:08:08.008Z] - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/dist-nccl-init-file/distributed.test_distributed_spawn/distributed.test_distributed_spawn-7076c7d52408b04a.xml -
[2026-02-27T03:08:08.008Z] ============================== 1 passed in 11.74s ==============================
[2026-02-27T03:08:08.008Z] Test results will be stored in test-reports/dist-nccl-init-file/distributed.test_distributed_spawn/distributed.test_distributed_spawn-a47f3379be6b264a.xml
[2026-02-27T03:08:08.008Z] ============================= test session starts ==============================
[2026-02-27T03:08:08.008Z] platform linux -- Python 3.12.12, pytest-7.3.2, pluggy-1.6.0 -- /opt/conda/envs/py_3.12/bin/python
[2026-02-27T03:08:08.008Z] cachedir: .pytest_cache
[2026-02-27T03:08:08.008Z] hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow]
[2026-02-27T03:08:08.008Z] rootdir: /var/lib/jenkins/pytorch
[2026-02-27T03:08:08.008Z] configfile: pytest.ini
[2026-02-27T03:08:08.008Z] plugins: xdist-3.3.1, xdoctest-1.3.0, hypothesis-6.56.4, cpp-2.3.0, rerunfailures-14.0, subtests-0.13.1, flakefinder-1.1.0, typeguard-4.3.0
[2026-02-27T03:08:08.008Z] collecting ... collected 1 item
[2026-02-27T03:08:08.008Z] stepcurrent: previously run test not found, not skipping.
[2026-02-27T03:08:08.008Z] Running 1 items in this shard: test/distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_verify_model_across_rank_without_logger
[2026-02-27T03:08:08.008Z] 
[2026-02-27T03:08:08.008Z] distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_verify_model_across_rank_without_logger <- ../../../../opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/distributed/distributed_test.py I0227 03:07:52.563000 1217656 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 0 with pid 1217738
[2026-02-27T03:08:08.008Z] I0227 03:07:52.565000 1217656 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 1 with pid 1217739
[2026-02-27T03:08:08.008Z] I0227 03:07:52.565000 1217656 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 2 with pid 1217740
[2026-02-27T03:08:08.008Z] I0227 03:07:52.566000 1217656 site-packages/torch/testing/_internal/common_distributed.py:854] Started process 3 with pid 1217741
[2026-02-27T03:08:08.008Z] PASSED [11.5241s] [100%]
[2026-02-27T03:08:08.008Z] 
[2026-02-27T03:08:08.008Z] - generated xml file: /var/lib/jenkins/pytorch/test/test-reports/dist-nccl-init-file/distributed.test_distributed_spawn/distributed.test_distributed_spawn-a47f3379be6b264a.xml -
[2026-02-27T03:08:08.008Z] ============================== 1 passed in 11.55s ==============================
[2026-02-27T03:08:08.008Z] Traceback (most recent call last):
[2026-02-27T03:08:08.008Z]   File "/var/lib/jenkins/pytorch/test/distributed/test_distributed_spawn.py", line 60, in <module>
[2026-02-27T03:08:08.008Z]     run_tests()
[2026-02-27T03:08:08.008Z]   File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1327, in run_tests
[2026-02-27T03:08:08.008Z]     raise AssertionError(
[2026-02-27T03:08:08.008Z] AssertionError: 1 unit test(s) failed:
[2026-02-27T03:08:08.008Z] 	distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_apply_optim_in_backward_ignored_params
[2026-02-27T03:08:08.008Z] 
[2026-02-27T03:08:08.008Z] FINISHED PRINTING LOG FILE of distributed/test_distributed_spawn 5/7 (test/test-reports/distributed.test_distributed_spawn_5.7_d47b3c4bf13d3ecc_.log)
[2026-02-27T03:08:08.008Z] 
[2026-02-27T03:08:08.008Z] Finished distributed/test_distributed_spawn 5/7 ... [2026-02-27 03:08:05.042767][28722.495681658], took 23.40min
[2026-02-27T03:08:08.008Z] distributed/test_distributed_spawn 5/7 failed!
[2026-02-27T03:08:08.008Z] Emitting td_test_failure_stats_v2
[2026-02-27T03:08:08.008Z] /var/lib/jenkins/pytorch/tools/stats/upload_metrics.py:140: UserWarning: Not emitting metrics for td_test_failure_stats_v2. Missing repo. Please set the GITHUB_REPOSITORY environment variable to pass in this value.
[2026-02-27T03:08:08.008Z]   warn(f"Not emitting metrics for {metric_name}. {e}")
[2026-02-27T03:08:08.008Z] Parsing testcases for test report: /var/lib/jenkins/pytorch/test/test-reports/python-pytest/distributed.test_c10d_nccl/distributed.test_c10d_nccl-9756309ee3f6901e.xml
[2026-02-27T03:08:08.008Z] Found job id: None
[2026-02-27T03:08:08.008Z] Failed to parse and upload json test reports: Unable to locate credentials
[2026-02-27T03:08:08.008Z] GITHUB_RUN_ID, GITHUB_RUN_ATTEMPT, or ARTIFACTS_FILE_SUFFIX not set, not uploading
[2026-02-27T03:08:08.008Z] Uploading artifacts took 0.00 seconds
[2026-02-27T03:08:08.008Z] Traceback (most recent call last):
[2026-02-27T03:08:08.008Z]   File "/var/lib/jenkins/pytorch/test/run_test.py", line 2241, in <module>
[2026-02-27T03:08:08.008Z]     main()
[2026-02-27T03:08:08.008Z]   File "/var/lib/jenkins/pytorch/test/run_test.py", line 2192, in main
[2026-02-27T03:08:08.008Z]     run_tests(
[2026-02-27T03:08:08.008Z]   File "/var/lib/jenkins/pytorch/test/run_test.py", line 2013, in run_tests
[2026-02-27T03:08:08.008Z]     raise RuntimeError(failure.message + keep_going_message)
[2026-02-27T03:08:08.008Z] RuntimeError: distributed/test_distributed_spawn 5/7 failed!
[2026-02-27T03:08:08.008Z] 
[2026-02-27T03:08:08.008Z] Tip: You can keep running tests even on failure by passing --keep-going to run_test.py.
[2026-02-27T03:08:08.008Z] If running on CI, add the 'keep-going' label to your PR and rerun your jobs.
[2026-02-27T03:08:08.008Z] 
[2026-02-27T03:08:08.008Z] real	89m23.156s
[2026-02-27T03:08:08.008Z] user	182m3.053s
[2026-02-27T03:08:08.008Z] sys	307m26.149s
[2026-02-27T03:08:08.008Z] + sccache_epilogue
[2026-02-27T03:08:08.008Z] + echo '::group::Sccache Compilation Log'
[2026-02-27T03:08:08.008Z] + echo '=================== sccache compilation log ==================='
[2026-02-27T03:08:08.008Z] ::group::Sccache Compilation Log
[2026-02-27T03:08:08.008Z] =================== sccache compilation log ===================
Output truncated.

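The failing test IDs appear in the log in a stable shape: an `AssertionError: N unit test(s) failed:` line followed by indented `<file>::<class>::<test>` identifiers (see the `test_ddp_apply_optim_in_backward_ignored_params` failure above). A minimal sketch for scraping those IDs out of a saved log, assuming only the message shapes visible in this excerpt (the timestamp prefix and helper name are not part of any official tooling):

```python
import re


def failed_tests_from_log(log_text: str) -> list[str]:
    """Pull failing test identifiers out of a run_test.py log excerpt.

    Relies only on the format shown above: an
    'AssertionError: N unit test(s) failed:' line followed by
    indented '<file>::<class>::<test>' lines.
    """
    failures: list[str] = []
    lines = log_text.splitlines()
    for i, line in enumerate(lines):
        if re.search(r"AssertionError: \d+ unit test\(s\) failed:", line):
            # Test IDs follow on their own indented lines until a line
            # without a '::' separator ends the list.
            for follow in lines[i + 1:]:
                candidate = follow.split("] ", 1)[-1].strip()  # drop timestamp prefix
                if "::" in candidate:
                    failures.append(candidate)
                else:
                    break
    return failures
```

This is only a log-scraping convenience; the authoritative failure list is the JUnit XML report referenced later in the log.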
Details

  • Kill older PR Builds (1.4 sec)
  • Initialize (1 hr 11 min)
    • Download CI scripts (53 sec)
    • Checkout Pytorch (1 min 58 sec)
    • Check base Docker image existence (23 sec)
    • Pull Docker Image (8 min 56 sec)
    • Build PyTorch (57 min)
  • Tests (4 hr 25 min)
    • Test PyTorch (7 ms)
      • Test PyTorch (1 hr 34 min)
        • Run pytorch_test_1 (1 hr 10 min)
        • Run pytorch_test_2 (23 min)
          Error: script returned exit code 1 - logs
          Error: pytorch_test_2 failed - logs
          Error: No test report files were found. Configuration error? - logs
          Error: Failed to publish test reports xml files - logs
    • Test Distributed (6 ms)
      • Test Distributed (4 hr 25 min)
        • Run pytorch_distributed_1 (2 hr 39 min)
        • Run pytorch_distributed_2 (1 hr 45 min)
          Error: script returned exit code 1 - logs
          Error: pytorch_distributed_2 failed - logs
          Unstable: 2 tests failed - logs
    • Test Inductor (6 ms)
      • Test Inductor (4 hr 19 min)
        • Run pytorch_inductor_null (4 hr 19 min)
          Error: Found 1 failure(s) in pytorch_reports/python-pytest/inductor.test_aot_inductor/inductor.test_aot_inductor-c2ff5398abf38e7d.xml report - logs
          Error: Some tests failed or errored - logs
          Error: pytorch_inductor_null failed - logs
          Unstable: 1 test failed - logs
    • Test PyTorch Slow (7 ms)
      • Test PyTorch Slow (7 sec)
    • Microbenchmark (14 sec)
      • Microbenchmark (7 sec)
  • Post Build (1.2 sec)
  • Declarative: Post Actions (3.6 sec)
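The stage durations in the tree above are printed in a human-readable mix of units ("1 hr 11 min", "8 min 56 sec", "7 ms"). A small sketch for converting them to seconds, e.g. to total where the pipeline spends its time; it assumes only the unit names shown above and is not part of the Jenkins tooling:

```python
import re

# Unit names as they appear in the Details tree above.
_UNIT_SECONDS = {"hr": 3600.0, "min": 60.0, "sec": 1.0, "ms": 0.001}


def duration_seconds(text: str) -> float:
    """Convert a duration like '1 hr 11 min' or '1.4 sec' to seconds."""
    total = 0.0
    for value, unit in re.findall(r"([\d.]+)\s*(hr|min|sec|ms)", text):
        total += float(value) * _UNIT_SECONDS[unit]
    return total
```

For example, the "Initialize" stage's "1 hr 11 min" works out to 4260 seconds, which the "Build PyTorch (57 min)" sub-stage dominates.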