Update references from `iree-org` to `openxla`. (#12304)

Generated by two find/replaces across the repo:
* `iree-org.github -> openxla.github`
* `iree-org/iree([^-]) -> openxla/iree$1` (note: the trailing `[^-]` excludes
  other repos such as `iree-org/iree-llvm-fork`)

Other `iree-org` patterns remain untouched, e.g. `iree-org/projects/` and `iree-org/actions`.
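
For reference, a minimal sketch of how the two replacements above could be reproduced from the repo root, assuming GNU `sed` and `git grep` (an illustrative reconstruction, not necessarily the exact commands used for this PR):

```bash
# Hypothetical reconstruction of the two find/replaces (assumes GNU sed and a
# clean working tree).
git grep -lzF 'iree-org.github' \
  | xargs -0 sed -i 's/iree-org\.github/openxla.github/g'

# The ([^-]) capture keeps repos like iree-org/iree-llvm-fork from matching.
git grep -lzE 'iree-org/iree[^-]' \
  | xargs -0 sed -i -E 's#iree-org/iree([^-])#openxla/iree\1#g'
```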
diff --git a/.github/ISSUE_TEMPLATE/bug_report.yml b/.github/ISSUE_TEMPLATE/bug_report.yml
index 7968b99..7fad702 100644
--- a/.github/ISSUE_TEMPLATE/bug_report.yml
+++ b/.github/ISSUE_TEMPLATE/bug_report.yml
@@ -7,7 +7,7 @@
       value: |
         :star2: Thanks for taking the time to report this issue! :star2:
 
-        Please search through [other recent issues](https://github.com/iree-org/iree/issues) to see if your report overlaps with an existing issue.
+        Please search through [other recent issues](https://github.com/openxla/iree/issues) to see if your report overlaps with an existing issue.
   - type: textarea
     id: what-happened
     attributes:
diff --git a/.github/ISSUE_TEMPLATE/config.yml b/.github/ISSUE_TEMPLATE/config.yml
index 91c3c75..84fc2e5 100644
--- a/.github/ISSUE_TEMPLATE/config.yml
+++ b/.github/ISSUE_TEMPLATE/config.yml
@@ -1,13 +1,13 @@
 blank_issues_enabled: true
 contact_links:
   - name: 📄 Blank Issue
-    url: https://github.com/iree-org/iree/issues/new
+    url: https://github.com/openxla/iree/issues/new
     about: If you know what you're doing (especially collaborators)
   - name: 📧 Send an email to our mailing list
     url: https://groups.google.com/forum/#!forum/iree-discuss
     about: For announcements and asynchronous discussion
   - name: 🗣 Start a discussion on GitHub
-    url: https://github.com/iree-org/iree/discussions
+    url: https://github.com/openxla/iree/discussions
     about: An alternative platform for asynchronous discussion
   - name: 💬 Join us on Discord
     url: https://discord.gg/26P4xW4
diff --git a/.github/workflows/build_package.yml b/.github/workflows/build_package.yml
index fad162b..98bdffd 100644
--- a/.github/workflows/build_package.yml
+++ b/.github/workflows/build_package.yml
@@ -167,7 +167,7 @@
           output_dir: "${{ github.workspace }}/bindist"
         run: |
           # libzstd on GitHub Action bots is not compatible with MacOS universal.
-          # https://github.com/iree-org/iree/issues/9955
+          # https://github.com/openxla/iree/issues/9955
           sudo rm -rf /usr/local/lib/libzstd*
           sudo rm -rf /usr/local/lib/cmake/zstd/*
 
diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index b59ef9b..c065a68 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -1020,7 +1020,7 @@
           # Wildcard pattern to match all execution benchmark results. Empty if
           # execution_benchmarks is skipped, which results in no match.
           EXECUTION_BENCHMARK_RESULTS_PATTERN: ${{ steps.download-execution-results.outputs.execution-benchmark-results-pattern }}
-          IREE_BUILD_URL: https://github.com/iree-org/iree/actions/runs/${{ github.run_id }}/attempts/${{ github.run_attempt }}
+          IREE_BUILD_URL: https://github.com/openxla/iree/actions/runs/${{ github.run_id }}/attempts/${{ github.run_attempt }}
           PR_NUMBER: ${{ github.event.pull_request.number }}
           BENCHMARK_COMMENT_ARTIFACT: benchmark-comment.json
         run: |
diff --git a/CITATION.cff b/CITATION.cff
index 49f4a66..74d0d8f 100644
--- a/CITATION.cff
+++ b/CITATION.cff
@@ -16,8 +16,8 @@
     email: laurenzo@google.com
     affiliation: Google
 license: "Apache-2.0 WITH LLVM-exception"
-url: "https://iree-org.github.io/iree/"
-repository-code: "https://github.com/iree-org/iree"
+url: "https://openxla.github.io/iree/"
+repository-code: "https://github.com/openxla/iree"
 keywords:
   - compiler
   - "machine learning"
diff --git a/CMakeLists.txt b/CMakeLists.txt
index eb66d8c..1d06f31 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -321,7 +321,7 @@
 
 # STREQUAL feels wrong here - we don't care about the exact true-value used,
 # ON or TRUE or something else. But we haven't been able to think of a less bad
-# alternative. https://github.com/iree-org/iree/pull/8474#discussion_r840790062
+# alternative. https://github.com/openxla/iree/pull/8474#discussion_r840790062
 if(NOT IREE_ENABLE_TSAN STREQUAL IREE_BYTECODE_MODULE_ENABLE_TSAN)
   message(SEND_ERROR
       "IREE_ENABLE_TSAN and IREE_BYTECODE_MODULE_ENABLE_TSAN must be "
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index afdd2b0..d9d717b 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -68,7 +68,7 @@
 ## Peculiarities
 
 Our documentation on
-[repository management](https://github.com/iree-org/iree/blob/main/docs/developers/developing_iree/repository_management.md)
+[repository management](https://github.com/openxla/iree/blob/main/docs/developers/developing_iree/repository_management.md)
 has more information on some of the oddities in our repository setup and
 workflows. For the most part, these should be transparent to normal developer
 workflows.
diff --git a/README.md b/README.md
index 46282b1..5477ec6 100644
--- a/README.md
+++ b/README.md
@@ -6,10 +6,10 @@
 that scales up to meet the needs of the datacenter and down to satisfy the
 constraints and special considerations of mobile and edge deployments.
 
-See [our website](https://iree-org.github.io/iree/) for project details, user
+See [our website](https://openxla.github.io/iree/) for project details, user
 guides, and instructions on building from source.
 
-[![CI Status](https://github.com/iree-org/iree/actions/workflows/ci.yml/badge.svg?query=branch%3Amain+event%3Apush)](https://github.com/iree-org/iree/actions/workflows/ci.yml?query=branch%3Amain+event%3Apush)
+[![CI Status](https://github.com/openxla/iree/actions/workflows/ci.yml/badge.svg?query=branch%3Amain+event%3Apush)](https://github.com/openxla/iree/actions/workflows/ci.yml?query=branch%3Amain+event%3Apush)
 
 #### Project Status
 
@@ -21,7 +21,7 @@
 
 ## Communication Channels
 
-*   [GitHub issues](https://github.com/iree-org/iree/issues): Feature requests,
+*   [GitHub issues](https://github.com/openxla/iree/issues): Feature requests,
     bugs, and other work tracking
 *   [IREE Discord server](https://discord.gg/26P4xW4): Daily development
     discussions with the core team and collaborators
@@ -41,7 +41,7 @@
 ![IREE Architecture](docs/website/docs/assets/images/iree_architecture_dark.svg#gh-dark-mode-only)
 ![IREE Architecture](docs/website/docs/assets/images/iree_architecture.svg#gh-light-mode-only)
 
-See [our website](https://iree-org.github.io/iree/) for more information.
+See [our website](https://openxla.github.io/iree/) for more information.
 
 ## Presentations and Talks
 
diff --git a/benchmarks/README.md b/benchmarks/README.md
index c8b3611..a0c2aa0 100644
--- a/benchmarks/README.md
+++ b/benchmarks/README.md
@@ -63,7 +63,7 @@
 
 1. Install `iree-import-tflite`.
    ```
-   $ python -m pip install iree-tools-tflite -f https://iree-org.github.io/iree/pip-release-links.html
+   $ python -m pip install iree-tools-tflite -f https://openxla.github.io/iree/pip-release-links.html
    ```
 
 2. Expose and confirm the binary `iree-import-tflite` is in your path by running
@@ -85,8 +85,8 @@
 
 ### <a name="run-benchmark-locally"></a> Running benchmark suites locally
 
-First you need to have [`iree-import-tflite`](https://iree-org.github.io/iree/getting-started/tflite/),
-[`iree-import-tf`](https://iree-org.github.io/iree/getting-started/tensorflow/),
+First you need to have [`iree-import-tflite`](https://openxla.github.io/iree/getting-started/tflite/),
+[`iree-import-tf`](https://openxla.github.io/iree/getting-started/tensorflow/),
 and `requests` in your python environment. Then you can build the target
 `iree-benchmark-suites` to generate the required files. Note that this target
 requires the `IREE_BUILD_BENCHMARKS` CMake option.
diff --git a/benchmarks/dashboard.md b/benchmarks/dashboard.md
index fca5ca6..4eafd16 100644
--- a/benchmarks/dashboard.md
+++ b/benchmarks/dashboard.md
@@ -81,15 +81,15 @@
 
 This field specifies the IREE HAL driver:
 
-* [`local-task`](https://iree-org.github.io/iree/deployment-configurations/cpu/):
+* [`local-task`](https://openxla.github.io/iree/deployment-configurations/cpu/):
   For CPU via the local task system. Kernels contain CPU native instructions AOT
   compiled using LLVM. This driver issues workloads to the CPU asynchronously
   and supports multithreading.
-* [`local-sync`](https://iree-org.github.io/iree/deployment-configurations/cpu/):
+* [`local-sync`](https://openxla.github.io/iree/deployment-configurations/cpu/):
   For CPU via the local 'sync' device. Kernels contain contain CPU native
   instructions AOT compiled using LLVM. This driver issues workloads to the CPU
   synchronously.
-* [`Vulkan`](https://iree-org.github.io/iree/deployment-configurations/gpu-vulkan/):
+* [`Vulkan`](https://openxla.github.io/iree/deployment-configurations/gpu-vulkan/):
   For GPU via Vulkan. Kernels contain SPIR-V. This driver issues workload to
   the GPU via the Vulkan API.
 
diff --git a/build_tools/benchmarks/comparisons/README.md b/build_tools/benchmarks/comparisons/README.md
index 8913dff..550f851 100644
--- a/build_tools/benchmarks/comparisons/README.md
+++ b/build_tools/benchmarks/comparisons/README.md
@@ -30,7 +30,7 @@
 ### Install Android NDK and ADB
 
 Detailed steps
-[here](https://iree-org.github.io/iree/building-from-source/android/#install-android-ndk-and-adb).
+[here](https://openxla.github.io/iree/building-from-source/android/#install-android-ndk-and-adb).
 
 ### Install the Termux App and the Python Interpreter
 
diff --git a/build_tools/benchmarks/comparisons/setup_desktop.sh b/build_tools/benchmarks/comparisons/setup_desktop.sh
index b81232e..723daa3 100644
--- a/build_tools/benchmarks/comparisons/setup_desktop.sh
+++ b/build_tools/benchmarks/comparisons/setup_desktop.sh
@@ -46,7 +46,7 @@
 mkdir "${SOURCE_DIR}"
 cd "${SOURCE_DIR}"
 
-git clone https://github.com/iree-org/iree.git
+git clone https://github.com/openxla/iree.git
 
 cd iree
 git submodule update --init
diff --git a/build_tools/benchmarks/comparisons/setup_mobile.sh b/build_tools/benchmarks/comparisons/setup_mobile.sh
index ca7b41a..c557a23 100644
--- a/build_tools/benchmarks/comparisons/setup_mobile.sh
+++ b/build_tools/benchmarks/comparisons/setup_mobile.sh
@@ -6,7 +6,7 @@
 
 # Run commands below on the workstation that the phone is attached to.
 # Prerequisites:
-#   Manual installations of the Android NDK and ADB are needed. See https://iree-org.github.io/iree/building-from-source/android/#install-android-ndk-and-adb for instructions.
+#   Manual installations of the Android NDK and ADB are needed. See https://openxla.github.io/iree/building-from-source/android/#install-android-ndk-and-adb for instructions.
 #   Manual installations of the Termux App and python are needed on the Android device. See README.md for instructions.
 
 #!/bin/bash
@@ -54,7 +54,7 @@
 mkdir "${SOURCE_DIR}"
 cd "${SOURCE_DIR}"
 
-git clone https://github.com/iree-org/iree.git
+git clone https://github.com/openxla/iree.git
 
 cd iree
 cp "${SOURCE_DIR}/iree/build_tools/benchmarks/set_adreno_gpu_scaling_policy.sh" "${ROOT_DIR}/setup/"
diff --git a/build_tools/benchmarks/convperf/build_and_run_convperf.sh b/build_tools/benchmarks/convperf/build_and_run_convperf.sh
index a23415a..efc6cd7 100755
--- a/build_tools/benchmarks/convperf/build_and_run_convperf.sh
+++ b/build_tools/benchmarks/convperf/build_and_run_convperf.sh
@@ -39,7 +39,7 @@
 
 # Update IREE.
 pushd external/iree
-git fetch https://github.com/iree-org/iree "${IREE_COMMIT}"
+git fetch https://github.com/openxla/iree "${IREE_COMMIT}"
 git checkout "${IREE_COMMIT}"
 git submodule update --init --jobs 8 --depth 1
 popd # external/iree
@@ -68,6 +68,3 @@
   mv runtimes.json "${RESULTS_DIR}/resnet50_thread$i.json"
   mv convs.png "${RESULTS_DIR}/resnet50_thread$i.png"
 done
-
-
-
diff --git a/build_tools/benchmarks/generate_benchmark_comment.py b/build_tools/benchmarks/generate_benchmark_comment.py
index f009965..cafafd6 100755
--- a/build_tools/benchmarks/generate_benchmark_comment.py
+++ b/build_tools/benchmarks/generate_benchmark_comment.py
@@ -26,7 +26,7 @@
 from common import benchmark_definition, benchmark_presentation, common_arguments
 from reporting import benchmark_comment
 
-GITHUB_IREE_REPO_PREFIX = "https://github.com/iree-org/iree"
+GITHUB_IREE_REPO_PREFIX = "https://github.com/openxla/iree"
 IREE_DASHBOARD_URL = "https://perf.iree.dev/apis/v2"
 IREE_PROJECT_ID = 'IREE'
 # The maximal numbers of trials when querying base commit benchmark results.
diff --git a/build_tools/benchmarks/mmperf/build_mmperf.sh b/build_tools/benchmarks/mmperf/build_mmperf.sh
index 9f32e1c..1039fa5 100755
--- a/build_tools/benchmarks/mmperf/build_mmperf.sh
+++ b/build_tools/benchmarks/mmperf/build_mmperf.sh
@@ -45,7 +45,7 @@
 
 # Update IREE.
 pushd external/iree
-git fetch https://github.com/iree-org/iree "${IREE_SHA}"
+git fetch https://github.com/openxla/iree "${IREE_SHA}"
 git checkout "${IREE_SHA}"
 git submodule update --init --jobs 8 --depth 1
 popd # external/iree
diff --git a/build_tools/benchmarks/post_benchmark_comment.py b/build_tools/benchmarks/post_benchmark_comment.py
index 2aba5c0..9322d85 100755
--- a/build_tools/benchmarks/post_benchmark_comment.py
+++ b/build_tools/benchmarks/post_benchmark_comment.py
@@ -33,7 +33,7 @@
 
 from reporting import benchmark_comment
 
-GITHUB_IREE_API_PREFIX = "https://api.github.com/repos/iree-org/iree"
+GITHUB_IREE_API_PREFIX = "https://api.github.com/repos/openxla/iree"
 GITHUB_GIST_API = "https://api.github.com/gists"
 GITHUB_API_VERSION = "2022-11-28"
 
diff --git a/build_tools/benchmarks/upload_benchmarks_to_dashboard.py b/build_tools/benchmarks/upload_benchmarks_to_dashboard.py
index f3e8676..52fca19 100755
--- a/build_tools/benchmarks/upload_benchmarks_to_dashboard.py
+++ b/build_tools/benchmarks/upload_benchmarks_to_dashboard.py
@@ -28,7 +28,7 @@
 from common.benchmark_thresholds import BENCHMARK_THRESHOLDS
 
 IREE_DASHBOARD_URL = "https://perf.iree.dev"
-IREE_GITHUB_COMMIT_URL_PREFIX = 'https://github.com/iree-org/iree/commit'
+IREE_GITHUB_COMMIT_URL_PREFIX = 'https://github.com/openxla/iree/commit'
 IREE_PROJECT_ID = 'IREE'
 THIS_DIRECTORY = pathlib.Path(__file__).resolve().parent
 
@@ -37,8 +37,8 @@
 For the graph, the x axis is the Git commit index, and the y axis is the
 measured metrics. The unit for the numbers is shown in the "Unit" dropdown.
 <br>
-See <a href="https://github.com/iree-org/iree/tree/main/benchmarks/dashboard.md">
-https://github.com/iree-org/iree/tree/main/benchmarks/dashboard.md
+See <a href="https://github.com/openxla/iree/tree/main/benchmarks/dashboard.md">
+https://github.com/openxla/iree/tree/main/benchmarks/dashboard.md
 </a> for benchmark philosophy, specification, and definitions.
 """
 
diff --git a/build_tools/cmake/build_and_test_tsan.sh b/build_tools/cmake/build_and_test_tsan.sh
index 1356b9d..5b765e9 100755
--- a/build_tools/cmake/build_and_test_tsan.sh
+++ b/build_tools/cmake/build_and_test_tsan.sh
@@ -67,7 +67,7 @@
 # Disable actually running GPU tests. This tends to yield TSan reports that are
 # specific to one's particular GPU driver and therefore hard to reproduce across
 # machines and often un-actionable anyway.
-# See e.g. https://github.com/iree-org/iree/issues/9393
+# See e.g. https://github.com/openxla/iree/issues/9393
 export IREE_VULKAN_DISABLE=1
 export IREE_CUDA_DISABLE=1
 
diff --git a/build_tools/cmake/build_tracing.sh b/build_tools/cmake/build_tracing.sh
index 2d39092..99b7829 100755
--- a/build_tools/cmake/build_tracing.sh
+++ b/build_tools/cmake/build_tracing.sh
@@ -17,7 +17,7 @@
 source build_tools/cmake/setup_build.sh
 source build_tools/cmake/setup_ccache.sh
 
-# Note: https://github.com/iree-org/iree/issues/6404 prevents us from building
+# Note: https://github.com/openxla/iree/issues/6404 prevents us from building
 # tests with these other settings. Many tests invoke the compiler tools with
 # MLIR threading enabled, which crashes with compiler tracing enabled.
 "${CMAKE_BIN?}" -B "${BUILD_DIR}" \
diff --git a/build_tools/cmake/iree_benchmark_suite.cmake b/build_tools/cmake/iree_benchmark_suite.cmake
index 1e2bbc2..b9296f6 100644
--- a/build_tools/cmake/iree_benchmark_suite.cmake
+++ b/build_tools/cmake/iree_benchmark_suite.cmake
@@ -33,7 +33,7 @@
                       " that iree-import-tflite be available "
                       " (either on PATH or via IREE_IMPORT_TFLITE_PATH). "
                       " Install from a release with "
-                      " `python -m pip install iree-tools-tflite -f https://iree-org.github.io/iree/pip-release-links.html`")
+                      " `python -m pip install iree-tools-tflite -f https://openxla.github.io/iree/pip-release-links.html`")
   endif()
 
   if(NOT TARGET "${_RULE_TARGET_NAME}")
@@ -88,7 +88,7 @@
                       " that iree-import-tf be available "
                       " (either on PATH or via IREE_IMPORT_TF_PATH). "
                       " Install from a release with "
-                      " `python -m pip install iree-tools-tf -f https://iree-org.github.io/iree/pip-release-links.html`")
+                      " `python -m pip install iree-tools-tf -f https://openxla.github.io/iree/pip-release-links.html`")
   endif()
 
   if(NOT TARGET "${_RULE_TARGET_NAME}")
diff --git a/build_tools/cmake/iree_copts.cmake b/build_tools/cmake/iree_copts.cmake
index 32c8adf..4e16665 100644
--- a/build_tools/cmake/iree_copts.cmake
+++ b/build_tools/cmake/iree_copts.cmake
@@ -309,7 +309,7 @@
 # compatible solution.
 #
 # See also:
-#   https://github.com/iree-org/iree/issues/4665.
+#   https://github.com/openxla/iree/issues/4665.
 #   https://discourse.cmake.org/t/how-to-fix-build-warning-d9025-overriding-gr-with-gr/878
 #   https://gitlab.kitware.com/cmake/cmake/-/issues/20610
 if(CMAKE_CXX_FLAGS AND "${CMAKE_CXX_COMPILER_ID}" STREQUAL "MSVC")
diff --git a/build_tools/cmake/iree_python.cmake b/build_tools/cmake/iree_python.cmake
index b5b3efd..5aaab3f 100644
--- a/build_tools/cmake/iree_python.cmake
+++ b/build_tools/cmake/iree_python.cmake
@@ -166,7 +166,7 @@
   foreach(_SRC_FILE ${_RULE_SRCS})
     # _SRC_FILE could have other path components in it, so we need to make a
     # directory for it. Ninja does this automatically, but make doesn't. See
-    # https://github.com/iree-org/iree/issues/6801
+    # https://github.com/openxla/iree/issues/6801
     set(_SRC_BIN_PATH "${CMAKE_CURRENT_BINARY_DIR}/${_SRC_FILE}")
     get_filename_component(_SRC_BIN_DIR "${_SRC_BIN_PATH}" DIRECTORY)
     add_custom_command(
diff --git a/build_tools/docker/dockerfiles/manylinux2014_x86_64-release.Dockerfile b/build_tools/docker/dockerfiles/manylinux2014_x86_64-release.Dockerfile
index 6f86bd7..9498d21 100644
--- a/build_tools/docker/dockerfiles/manylinux2014_x86_64-release.Dockerfile
+++ b/build_tools/docker/dockerfiles/manylinux2014_x86_64-release.Dockerfile
@@ -65,7 +65,7 @@
 # Git started enforcing strict user checking, which thwarts version
 # configuration scripts in a docker image where the tree was checked
 # out by the host and mapped in. Disable the check.
-# See: https://github.com/iree-org/iree/issues/12046
+# See: https://github.com/openxla/iree/issues/12046
 # We use the wildcard option to disable the checks. This was added
 # in git 2.35.3
 RUN git config --global --add safe.directory '*'
diff --git a/build_tools/github_actions/runner/README.md b/build_tools/github_actions/runner/README.md
index 6c379b6..d7785ba 100644
--- a/build_tools/github_actions/runner/README.md
+++ b/build_tools/github_actions/runner/README.md
@@ -75,7 +75,7 @@
 
 Using GitHub's [artifact actions](https://github.com/actions/upload-artifact)
 with runners on GCE turns out to be prohibitively slow (see discussion in
-https://github.com/iree-org/iree/issues/9881). Instead we use our own
+https://github.com/openxla/iree/issues/9881). Instead we use our own
 [Google Cloud Storage](https://cloud.google.com/storage) (GCS) buckets to save
 artifacts from jobs and fetch them in subsequent jobs:
 `iree-github-actions-presubmit-artifacts` and
diff --git a/build_tools/github_actions/runner/gcp/create_templates.sh b/build_tools/github_actions/runner/gcp/create_templates.sh
index 9f71850..ea98def 100755
--- a/build_tools/github_actions/runner/gcp/create_templates.sh
+++ b/build_tools/github_actions/runner/gcp/create_templates.sh
@@ -22,7 +22,7 @@
 CPU_IMAGE="${CPU_IMAGE:-github-runner-cpu-2023-01-30-1675109033}"
 CPU_DISK_SIZE_GB="${CPU_DISK_SIZE_GB:-100}"
 
-PROD_TEMPLATE_CONFIG_REPO="${PROD_TEMPLATE_CONFIG_REPO:-iree-org/iree}"
+PROD_TEMPLATE_CONFIG_REPO="${PROD_TEMPLATE_CONFIG_REPO:-openxla/iree}"
 GITHUB_RUNNER_SCOPE="${GITHUB_RUNNER_SCOPE:-iree-org}"
 
 TEMPLATE_CONFIG_REPO="${TEMPLATE_CONFIG_REPO:-${PROD_TEMPLATE_CONFIG_REPO}}"
diff --git a/build_tools/github_actions/runner/gcp/image_setup.sh b/build_tools/github_actions/runner/gcp/image_setup.sh
index 83b2480..57845b8 100644
--- a/build_tools/github_actions/runner/gcp/image_setup.sh
+++ b/build_tools/github_actions/runner/gcp/image_setup.sh
@@ -268,8 +268,8 @@
     nice_curl \
       --remote-name-all \
       --output-dir "${script_dir}" \
-      https://raw.githubusercontent.com/iree-org/iree/main/build_tools/scripts/check_vulkan.sh \
-      https://raw.githubusercontent.com/iree-org/iree/main/build_tools/scripts/check_cuda.sh
+      https://raw.githubusercontent.com/openxla/iree/main/build_tools/scripts/check_vulkan.sh \
+      https://raw.githubusercontent.com/openxla/iree/main/build_tools/scripts/check_cuda.sh
 
     chmod +x "${script_dir}/check_vulkan.sh" "${script_dir}/check_cuda.sh"
 
diff --git a/build_tools/python_deploy/pypi_deploy.sh b/build_tools/python_deploy/pypi_deploy.sh
index 795664a..7b75b30 100755
--- a/build_tools/python_deploy/pypi_deploy.sh
+++ b/build_tools/python_deploy/pypi_deploy.sh
@@ -49,7 +49,7 @@
 function download_wheels() {
   echo ""
   echo "Downloading wheels from '${RELEASE}'"
-  gh release download "${RELEASE}" --repo iree-org/iree --pattern "*.whl"
+  gh release download "${RELEASE}" --repo openxla/iree --pattern "*.whl"
 }
 
 # For some reason auditwheel detects these as not manylinux compliant even
diff --git a/build_tools/scripts/generate_release_index.py b/build_tools/scripts/generate_release_index.py
index 0160362..0e7ea94 100755
--- a/build_tools/scripts/generate_release_index.py
+++ b/build_tools/scripts/generate_release_index.py
@@ -22,7 +22,7 @@
   parser = argparse.ArgumentParser()
   parser.add_argument("--repo",
                       "--repository",
-                      default="iree-org/iree",
+                      default="openxla/iree",
                       help="The GitHub repository to fetch releases from.")
   parser.add_argument(
       "--output",
diff --git a/build_tools/scripts/get_e2e_artifacts.py b/build_tools/scripts/get_e2e_artifacts.py
index bf7cfa3..72f5f20 100755
--- a/build_tools/scripts/get_e2e_artifacts.py
+++ b/build_tools/scripts/get_e2e_artifacts.py
@@ -96,7 +96,7 @@
   """Check that we aren't overwriting files unless we expect to."""
   # Note: We can't use a check that the files have identical contents because
   # tf_input.mlir can have random numbers appended to its function names.
-  # See https://github.com/iree-org/iree/issues/3375
+  # See https://github.com/openxla/iree/issues/3375
 
   expected_collision = any([name in filename for name in EXPECTED_COLLISIONS])
   if filename in written_paths and not expected_collision:
@@ -150,8 +150,8 @@
   print(
       "The bazel integrations build and tests are deprecated. This script "
       "may be reworked in the future. For the time being refer to "
-      "https://iree-org.github.io/iree/building-from-source/python-bindings-and-importers/ "
-      "and https://github.com/iree-org/iree/blob/main/docs/developers/developing_iree/e2e_benchmarking.md "
+      "https://openxla.github.io/iree/building-from-source/python-bindings-and-importers/ "
+      "and https://github.com/openxla/iree/blob/main/docs/developers/developing_iree/e2e_benchmarking.md "
       "for information on how to run TensorFlow benchmarks.")
   exit(1)
 
diff --git a/build_tools/scripts/get_latest_green.sh b/build_tools/scripts/get_latest_green.sh
index 6238a00..979acb2 100755
--- a/build_tools/scripts/get_latest_green.sh
+++ b/build_tools/scripts/get_latest_green.sh
@@ -39,7 +39,7 @@
     for workflow in "${REQUIRED_WORKFLOWS[@]}"; do
       local successful_run_count="$(\
         gh api --jq '.total_count' \
-        "/repos/iree-org/iree/actions/workflows/${workflow}/runs?${query_string}" \
+        "/repos/openxla/iree/actions/workflows/${workflow}/runs?${query_string}" \
       )"
       # Any successful run of the workflow (including reruns) is OK.
       if (( successful_run_count==0 )); then
diff --git a/build_tools/scripts/integrate/README.md b/build_tools/scripts/integrate/README.md
index 6db9af0..399a55f 100644
--- a/build_tools/scripts/integrate/README.md
+++ b/build_tools/scripts/integrate/README.md
@@ -95,7 +95,7 @@
 * https://github.com/iree-org/iree-tf-fork (`master` branch)
 
 Iree repository has an
-action named [Advance Upstream Forks](https://github.com/iree-org/iree/actions/workflows/advance_upstream_forks.yml)
+action named [Advance Upstream Forks](https://github.com/openxla/iree/actions/workflows/advance_upstream_forks.yml)
 to update the forks. Just select `Run Workflow` on that action and give it a
 minute. You should see the fork repository mainline branch move forward. This
 action runs hourly. If needing up to the minute changes, you may need to trigger
@@ -304,7 +304,7 @@
 
 ### Update C-API exported
 
-If a new symbol needs to be export in the C-API run this [script](https://github.com/iree-org/iree/blob/main/compiler/src/iree/compiler/API2/generate_exports.py)
+If a new symbol needs to be export in the C-API run this [script](https://github.com/openxla/iree/blob/main/compiler/src/iree/compiler/API2/generate_exports.py)
 from IREE root directory:
 
 ```
@@ -416,7 +416,7 @@
   ..
 ```
 
-To repro failures in CI `bazel_linux_x86-swiftshader_core`, we can follow the [doc](https://github.com/iree-org/iree/blob/main/docs/developers/get_started/building_with_bazel_linux.md) to build IREE using bazel. E.g.,
+To repro failures in CI `bazel_linux_x86-swiftshader_core`, we can follow the [doc](https://github.com/openxla/iree/blob/main/docs/developers/get_started/building_with_bazel_linux.md) to build IREE using bazel. E.g.,
 
 ```bash
 export CC=clang
diff --git a/build_tools/scripts/integrate/bump_llvm.py b/build_tools/scripts/integrate/bump_llvm.py
index 687d0be..7dd770b 100755
--- a/build_tools/scripts/integrate/bump_llvm.py
+++ b/build_tools/scripts/integrate/bump_llvm.py
@@ -44,95 +44,92 @@
 
 
 def main(args):
-    if not args.disable_setup_remote:
-        iree_utils.git_setup_remote(args.upstream_remote,
-                                    args.upstream_repository)
+  if not args.disable_setup_remote:
+    iree_utils.git_setup_remote(args.upstream_remote, args.upstream_repository)
 
-    iree_utils.git_check_porcelain()
-    print(f"Fetching remote repository: {args.upstream_remote}")
-    iree_utils.git_fetch(repository=args.upstream_remote)
+  iree_utils.git_check_porcelain()
+  print(f"Fetching remote repository: {args.upstream_remote}")
+  iree_utils.git_fetch(repository=args.upstream_remote)
 
-    # If re-using a branch, make sure we are not on that branch.
-    if args.reuse_branch:
-        iree_utils.git_checkout("main")
+  # If re-using a branch, make sure we are not on that branch.
+  if args.reuse_branch:
+    iree_utils.git_checkout("main")
 
-    # Create branch.
-    branch_name = args.branch_name
-    if not branch_name:
-        branch_name = f"bump-llvm-{date.today().strftime('%Y%m%d')}"
-    print(f"Creating branch {branch_name} (override with --branch-name=)")
-    iree_utils.git_create_branch(branch_name,
-                                 checkout=True,
-                                 ref=f"{args.upstream_remote}/main",
-                                 force=args.reuse_branch)
+  # Create branch.
+  branch_name = args.branch_name
+  if not branch_name:
+    branch_name = f"bump-llvm-{date.today().strftime('%Y%m%d')}"
+  print(f"Creating branch {branch_name} (override with --branch-name=)")
+  iree_utils.git_create_branch(branch_name,
+                               checkout=True,
+                               ref=f"{args.upstream_remote}/main",
+                               force=args.reuse_branch)
 
-    # Reset the llvm-project submodule to track upstream.
-    # This will discard any cherrypicks that may have been committed locally,
-    # but the assumption is that if doing a main llvm version bump, the
-    # cherrypicks will be incorporated at the new commit. If not, well, ymmv
-    # and you will find out.
-    iree_utils.git_submodule_set_origin(
-        "third_party/llvm-project",
-        url="https://github.com/iree-org/iree-llvm-fork.git",
-        branch="--default")
+  # Reset the llvm-project submodule to track upstream.
+  # This will discard any cherrypicks that may have been committed locally,
+  # but the assumption is that if doing a main llvm version bump, the
+  # cherrypicks will be incorporated at the new commit. If not, well, ymmv
+  # and you will find out.
+  iree_utils.git_submodule_set_origin(
+      "third_party/llvm-project",
+      url="https://github.com/iree-org/iree-llvm-fork.git",
+      branch="--default")
 
-    # Remove the branch pin file, reverting us to pure upstream.
-    branch_pin_file = os.path.join(
-        iree_utils.get_repo_root(),
-        iree_modules.MODULE_INFOS["llvm-project"].branch_pin_file)
-    if os.path.exists(branch_pin_file):
-        os.remove(branch_pin_file)
+  # Remove the branch pin file, reverting us to pure upstream.
+  branch_pin_file = os.path.join(
+      iree_utils.get_repo_root(),
+      iree_modules.MODULE_INFOS["llvm-project"].branch_pin_file)
+  if os.path.exists(branch_pin_file):
+    os.remove(branch_pin_file)
 
-    # Update the LLVM submodule.
-    llvm_commit = args.llvm_commit
-    print(f"Updating LLVM submodule to {llvm_commit}")
-    llvm_root = iree_utils.get_submodule_root("llvm-project")
-    iree_utils.git_fetch(repository="origin",
-        ref="refs/heads/main", repo_dir=llvm_root)
-    if llvm_commit == "HEAD":
-        llvm_commit = "origin/main"
-    iree_utils.git_reset(llvm_commit, repo_dir=llvm_root)
-    llvm_commit, llvm_summary = iree_utils.git_current_commit(
-        repo_dir=llvm_root)
-    print(f"LLVM submodule reset to:\n  {llvm_summary}\n")
+  # Update the LLVM submodule.
+  llvm_commit = args.llvm_commit
+  print(f"Updating LLVM submodule to {llvm_commit}")
+  llvm_root = iree_utils.get_submodule_root("llvm-project")
+  iree_utils.git_fetch(repository="origin",
+                       ref="refs/heads/main",
+                       repo_dir=llvm_root)
+  if llvm_commit == "HEAD":
+    llvm_commit = "origin/main"
+  iree_utils.git_reset(llvm_commit, repo_dir=llvm_root)
+  llvm_commit, llvm_summary = iree_utils.git_current_commit(repo_dir=llvm_root)
+  print(f"LLVM submodule reset to:\n  {llvm_summary}\n")
 
-    # Create a commit.
-    print("Create commit...")
-    iree_utils.git_create_commit(
-        message=(f"Integrate llvm-project at {llvm_commit}\n\n"
-                 f"* Reset third_party/llvm-project: {llvm_summary}"),
-        add_all=True)
+  # Create a commit.
+  print("Create commit...")
+  iree_utils.git_create_commit(
+      message=(f"Integrate llvm-project at {llvm_commit}\n\n"
+               f"* Reset third_party/llvm-project: {llvm_summary}"),
+      add_all=True)
 
-    # Push.
-    print("Pushing...")
-    iree_utils.git_push_branch(args.upstream_remote, branch_name)
+  # Push.
+  print("Pushing...")
+  iree_utils.git_push_branch(args.upstream_remote, branch_name)
 
 
 def parse_arguments(argv):
-    parser = argparse.ArgumentParser(description="IREE LLVM-bump-inator")
-    parser.add_argument("--upstream-remote",
-                        help="Upstream remote",
-                        default="UPSTREAM_AUTOMATION")
-    parser.add_argument("--upstream-repository",
-                        help="Upstream repository URL",
-                        default="git@github.com:iree-org/iree.git")
-    parser.add_argument("--disable-setup-remote",
-                        help="Disable remote setup",
-                        action="store_true",
-                        default=False)
-    parser.add_argument("--llvm-commit",
-                        help="LLVM commit sha",
-                        default="HEAD")
-    parser.add_argument("--branch-name",
-                        help="Integrate branch to create",
-                        default=None)
-    parser.add_argument("--reuse-branch",
-                        help="Allow re-use of an existing branch",
-                        action="store_true",
-                        default=False)
-    args = parser.parse_args(argv)
-    return args
+  parser = argparse.ArgumentParser(description="IREE LLVM-bump-inator")
+  parser.add_argument("--upstream-remote",
+                      help="Upstream remote",
+                      default="UPSTREAM_AUTOMATION")
+  parser.add_argument("--upstream-repository",
+                      help="Upstream repository URL",
+                      default="git@github.com:openxla/iree.git")
+  parser.add_argument("--disable-setup-remote",
+                      help="Disable remote setup",
+                      action="store_true",
+                      default=False)
+  parser.add_argument("--llvm-commit", help="LLVM commit sha", default="HEAD")
+  parser.add_argument("--branch-name",
+                      help="Integrate branch to create",
+                      default=None)
+  parser.add_argument("--reuse-branch",
+                      help="Allow re-use of an existing branch",
+                      action="store_true",
+                      default=False)
+  args = parser.parse_args(argv)
+  return args
 
 
 if __name__ == "__main__":
-    main(parse_arguments(sys.argv[1:]))
+  main(parse_arguments(sys.argv[1:]))
diff --git a/compiler/src/iree/compiler/API/python/test/tools/compiler_core_test.py b/compiler/src/iree/compiler/API/python/test/tools/compiler_core_test.py
index 4d041d5..162d9ff 100644
--- a/compiler/src/iree/compiler/API/python/test/tools/compiler_core_test.py
+++ b/compiler/src/iree/compiler/API/python/test/tools/compiler_core_test.py
@@ -47,7 +47,7 @@
 
   # Compiling the string form means that the compiler does not have a valid
   # source file name, which can cause issues. Verify specifically.
-  # See: https://github.com/iree-org/iree/issues/4439
+  # See: https://github.com/openxla/iree/issues/4439
   def testCompileStrLLVMCPU(self):
     binary = iree.compiler.tools.compile_str(SIMPLE_MUL_ASM,
                                              target_backends=["llvm-cpu"])
@@ -56,7 +56,7 @@
 
   # Verifies that multiple target_backends are accepted. Which two are not
   # load bearing.
-  # See: https://github.com/iree-org/iree/issues/4436
+  # See: https://github.com/openxla/iree/issues/4436
   def testCompileMultipleBackends(self):
     binary = iree.compiler.tools.compile_str(
         SIMPLE_MUL_ASM, target_backends=["llvm-cpu", "vulkan-spirv"])
diff --git a/compiler/src/iree/compiler/API2/CMakeLists.txt b/compiler/src/iree/compiler/API2/CMakeLists.txt
index ff9671b..14662ef 100644
--- a/compiler/src/iree/compiler/API2/CMakeLists.txt
+++ b/compiler/src/iree/compiler/API2/CMakeLists.txt
@@ -109,7 +109,7 @@
     # TODO: We should really be dumping binaries into bin/ not
     # tools/. This must line up with binaries built this way because
     # DLLs must be in the same directory as the binary.
-    # See: https://github.com/iree-org/iree/issues/11297
+    # See: https://github.com/openxla/iree/issues/11297
     RUNTIME_OUTPUT_DIRECTORY "${PROJECT_BINARY_DIR}/tools"
     ARCHIVE_OUTPUT_DIRECTORY "${PROJECT_BINARY_DIR}/lib"
 )
diff --git a/compiler/src/iree/compiler/API2/Internal/IREEOptToolEntryPoint.cpp b/compiler/src/iree/compiler/API2/Internal/IREEOptToolEntryPoint.cpp
index 2c3a06c..61eb206 100644
--- a/compiler/src/iree/compiler/API2/Internal/IREEOptToolEntryPoint.cpp
+++ b/compiler/src/iree/compiler/API2/Internal/IREEOptToolEntryPoint.cpp
@@ -19,7 +19,7 @@
 
 int ireeOptRunMain(int argc, char **argv) {
   llvm::setBugReportMsg(
-      "Please report issues to https://github.com/iree-org/iree/issues and "
+      "Please report issues to https://github.com/openxla/iree/issues and "
       "include the crash backtrace.\n");
   llvm::InitLLVM y(argc, argv);
 
diff --git a/compiler/src/iree/compiler/API2/Internal/LLDToolEntryPoint.cpp b/compiler/src/iree/compiler/API2/Internal/LLDToolEntryPoint.cpp
index ec63010..13a79a0 100644
--- a/compiler/src/iree/compiler/API2/Internal/LLDToolEntryPoint.cpp
+++ b/compiler/src/iree/compiler/API2/Internal/LLDToolEntryPoint.cpp
@@ -70,7 +70,7 @@
 
 int ireeCompilerRunLldMain(int argc, char **argv) {
   llvm::setBugReportMsg(
-      "Please report issues to https://github.com/iree-org/iree/issues and "
+      "Please report issues to https://github.com/openxla/iree/issues and "
       "include the crash backtrace.\n");
   InitLLVM x(argc, argv);
   sys::Process::UseANSIEscapeCodes(true);
diff --git a/compiler/src/iree/compiler/API2/test/CMakeLists.txt b/compiler/src/iree/compiler/API2/test/CMakeLists.txt
index dd9f75c..f5a5ec9 100644
--- a/compiler/src/iree/compiler/API2/test/CMakeLists.txt
+++ b/compiler/src/iree/compiler/API2/test/CMakeLists.txt
@@ -24,7 +24,7 @@
 ### BAZEL_TO_CMAKE_PRESERVES_ALL_CONTENT_BELOW_THIS_LINE ###
 
 # Move to bin/ directory and systematically name more appropriately.
-# See: https://github.com/iree-org/iree/issues/11297
+# See: https://github.com/openxla/iree/issues/11297
 if(TARGET iree_compiler_API2_test_api-test-binary)
   set_target_properties(iree_compiler_API2_test_api-test-binary
     PROPERTIES
diff --git a/compiler/src/iree/compiler/API2/test/api-test-main.c b/compiler/src/iree/compiler/API2/test/api-test-main.c
index 32bc463..63bd5b4 100644
--- a/compiler/src/iree/compiler/API2/test/api-test-main.c
+++ b/compiler/src/iree/compiler/API2/test/api-test-main.c
@@ -10,7 +10,7 @@
 //
 // Originally contributed due to the work of edubart who figured out how to
 // be the first user of the combined MLIR+IREE CAPI:
-// https://github.com/iree-org/iree/pull/8582
+// https://github.com/openxla/iree/pull/8582
 
 #include <stdio.h>
 
diff --git a/compiler/src/iree/compiler/Codegen/Common/ConvertToDestinationPassingStylePass.cpp b/compiler/src/iree/compiler/Codegen/Common/ConvertToDestinationPassingStylePass.cpp
index 04e36c8..2d80fbc 100644
--- a/compiler/src/iree/compiler/Codegen/Common/ConvertToDestinationPassingStylePass.cpp
+++ b/compiler/src/iree/compiler/Codegen/Common/ConvertToDestinationPassingStylePass.cpp
@@ -417,7 +417,7 @@
 ///    the new use is tied to the result of the user.
 /// This makes the result of the compute op be in the store set, and
 /// bufferizable without using a new stack. See
-/// https://github.com/iree-org/iree/issues/8303.
+/// https://github.com/openxla/iree/issues/8303.
 static LogicalResult adaptComputeConsumerToAvoidStackAllocation(
     func::FuncOp funcOp, bool useWARForCooperativeMatrixCodegen) {
   IRRewriter rewriter(funcOp.getContext());
diff --git a/compiler/src/iree/compiler/Dialect/Flow/Conversion/TensorToFlow/Patterns.cpp b/compiler/src/iree/compiler/Dialect/Flow/Conversion/TensorToFlow/Patterns.cpp
index 0757039..73d1a90 100644
--- a/compiler/src/iree/compiler/Dialect/Flow/Conversion/TensorToFlow/Patterns.cpp
+++ b/compiler/src/iree/compiler/Dialect/Flow/Conversion/TensorToFlow/Patterns.cpp
@@ -144,7 +144,7 @@
   LogicalResult matchAndRewrite(tensor::FromElementsOp op,
                                 PatternRewriter &rewriter) const override {
     // TODO: This pattern was mainly added to iron out some kinks specific to
-    // detensoring (see: https://github.com/iree-org/iree/issues/1159). Do we
+    // detensoring (see: https://github.com/openxla/iree/issues/1159). Do we
     // need to expand this check for other uses?
     if (op->getParentOfType<Flow::DispatchWorkgroupsOp>()) {
       return failure();
diff --git a/compiler/src/iree/compiler/Dialect/HAL/IR/HALBase.td b/compiler/src/iree/compiler/Dialect/HAL/IR/HALBase.td
index e438736..50d70fa 100644
--- a/compiler/src/iree/compiler/Dialect/HAL/IR/HALBase.td
+++ b/compiler/src/iree/compiler/Dialect/HAL/IR/HALBase.td
@@ -864,7 +864,7 @@
   let mnemonic = "affinity.queue";
   let summary = [{specifies a set of allowed queues for an operation}];
   let description = [{
-    WIP; see https://github.com/iree-org/iree/issues/10765.
+    WIP; see https://github.com/openxla/iree/issues/10765.
     This may change in the future to either be a nested attribute on a larger
     affinity struct or be defined by an implementation of the affinity attr
     interface. For now this allows higher levels of the stack to specify
diff --git a/compiler/src/iree/compiler/Dialect/HAL/Target/LLVM/LLVMCPUTarget.cpp b/compiler/src/iree/compiler/Dialect/HAL/Target/LLVM/LLVMCPUTarget.cpp
index de09425..ff058c3 100644
--- a/compiler/src/iree/compiler/Dialect/HAL/Target/LLVM/LLVMCPUTarget.cpp
+++ b/compiler/src/iree/compiler/Dialect/HAL/Target/LLVM/LLVMCPUTarget.cpp
@@ -233,7 +233,7 @@
       // Tracy. In principle this could also be achieved by enabling unwind
       // tables, but we tried that and that didn't work in Tracy (which uses
       // libbacktrace), while enabling frame pointers worked.
-      // https://github.com/iree-org/iree/issues/3957
+      // https://github.com/openxla/iree/issues/3957
       func.addFnAttr("frame-pointer", "all");
 
       // -ffreestanding-like behavior.
diff --git a/compiler/src/iree/compiler/Dialect/Stream/IR/StreamInterfaces.td b/compiler/src/iree/compiler/Dialect/Stream/IR/StreamInterfaces.td
index c9b2e60..a67cb2b 100644
--- a/compiler/src/iree/compiler/Dialect/Stream/IR/StreamInterfaces.td
+++ b/compiler/src/iree/compiler/Dialect/Stream/IR/StreamInterfaces.td
@@ -18,7 +18,7 @@
 
   let summary = [{defines execution context affinity}];
   let description = [{
-    WIP; see https://github.com/iree-org/iree/issues/10765.
+    WIP; see https://github.com/openxla/iree/issues/10765.
 
     TBD. The intent is that this can specify host, device, and queue affinity.
     Scopes can be annotated with an affinity to ensure execution within happens
diff --git a/compiler/src/iree/compiler/Dialect/VM/IR/VMOps.td b/compiler/src/iree/compiler/Dialect/VM/IR/VMOps.td
index 7c0d75a..0928a0a 100644
--- a/compiler/src/iree/compiler/Dialect/VM/IR/VMOps.td
+++ b/compiler/src/iree/compiler/Dialect/VM/IR/VMOps.td
@@ -26,7 +26,7 @@
 // Pure ops do not have any memory effects, do not invoke Undefined Behavior,
 // and are always safe to speculate/hoist.
 //
-// TODO(https://github.com/iree-org/iree/issues/11179): More VM ops should be
+// TODO(https://github.com/openxla/iree/issues/11179): More VM ops should be
 // made pure.
 class VM_PureOp<string mnemonic, list<Trait> traits = []> :
       VM_TrivialOp<mnemonic, !listconcat(traits, [AlwaysSpeculatable])>;
diff --git a/compiler/src/iree/compiler/Tools/init_iree.cc b/compiler/src/iree/compiler/Tools/init_iree.cc
index 91c6eeb..cccd45d 100644
--- a/compiler/src/iree/compiler/Tools/init_iree.cc
+++ b/compiler/src/iree/compiler/Tools/init_iree.cc
@@ -9,7 +9,7 @@
 #include "llvm/Support/CommandLine.h"
 
 static void versionPrinter(llvm::raw_ostream &os) {
-  os << "IREE (https://iree-org.github.io/):\n  ";
+  os << "IREE (https://openxla.github.io/):\n  ";
   std::string version = mlir::iree_compiler::getIreeRevision();
   if (version.empty()) {
     version = "(unknown)";
@@ -37,7 +37,7 @@
 mlir::iree_compiler::InitIree::InitIree(int &argc, char **&argv)
     : init_llvm_(argc, argv) {
   llvm::setBugReportMsg(
-      "Please report issues to https://github.com/iree-org/iree/issues and "
+      "Please report issues to https://github.com/openxla/iree/issues and "
       "include the crash backtrace.\n");
   llvm::cl::SetVersionPrinter(versionPrinter);
 }
diff --git a/docs/api_docs/python/requirements.txt b/docs/api_docs/python/requirements.txt
index ef4970b..a160bc4 100644
--- a/docs/api_docs/python/requirements.txt
+++ b/docs/api_docs/python/requirements.txt
@@ -6,6 +6,6 @@
 sphinx_toolbox==2.15.0
 
 # IREE Python API
--f https://iree-org.github.io/iree/pip-release-links.html
+-f https://openxla.github.io/iree/pip-release-links.html
 iree-compiler
 iree-runtime
diff --git a/docs/developers/best_practices.md b/docs/developers/best_practices.md
index 1c2d9a7..4f0d31d 100644
--- a/docs/developers/best_practices.md
+++ b/docs/developers/best_practices.md
@@ -29,7 +29,7 @@
 arguments.
 
 See the
-[variables and state](https://github.com/iree-org/iree/tree/main/samples/variables_and_state)
+[variables and state](https://github.com/openxla/iree/tree/main/samples/variables_and_state)
 sample for further guidance on tracking and using state.
 
 ### Limit uses of dynamic shapes
@@ -40,7 +40,7 @@
 varying dimensions like the x/y/channel dimensions of images.
 
 See the
-[dynamic shapes](https://github.com/iree-org/iree/tree/main/samples/dynamic_shapes)
+[dynamic shapes](https://github.com/openxla/iree/tree/main/samples/dynamic_shapes)
 sample for further guidance on using dynamic shapes.
 
 ## Practices for compilation settings
@@ -52,7 +52,7 @@
 ### Tuning compilation heuristics
 
 IREE runs its own suite of benchmarks continuously using the definitions at
-https://github.com/iree-org/iree/tree/main/benchmarks. The flags set for these
+https://github.com/openxla/iree/tree/main/benchmarks. The flags set for these
 benchmarks represent the latest manually tuned values for workloads we track
 closely and referencing them may help with your own search for peak performance.
 You can use these flags in your own explorations, but note that as compiler
@@ -66,7 +66,7 @@
 ### Tuning runtime settings
 
 When running on the CPU, the task system flags specified in
-[iree/task/api.c](https://github.com/iree-org/iree/blob/main/iree/task/api.c)
+[iree/task/api.c](https://github.com/openxla/iree/blob/main/iree/task/api.c)
 give control over how worker threads will be created. For example, the
 `--task_topology_group_count=3` flag can be set to explicitly run on three
 workers rather than rely on heuristic selection that defaults to one worker
diff --git a/docs/developers/debugging/integration_correctness_issue_breakdown.md b/docs/developers/debugging/integration_correctness_issue_breakdown.md
index 79bd97f..862a772 100644
--- a/docs/developers/debugging/integration_correctness_issue_breakdown.md
+++ b/docs/developers/debugging/integration_correctness_issue_breakdown.md
@@ -9,7 +9,7 @@
 See [instructions for reproducing failures in TF/TFLite integration tests](https://github.com/hanhanW/iree/blob/main/docs/developers/debugging/tf_integrations_test_repro.md).
 
 For input data, they are not dumped within the flagfile. You can construct the
-function inputs by looking into `log.txt`. There is an [issue](https://github.com/iree-org/iree/issues/8658)
+function inputs by looking into `log.txt`. There is an [issue](https://github.com/openxla/iree/issues/8658)
 for tracking this.
 
 ## iree-samples
@@ -28,14 +28,14 @@
     --save-temp-iree-input=/tmp/iree-samples/tflitehub/tmp/mobilenet_v2_int8_test.py/tosa.mlir
 ```
 
-Unfortunately, the artifacts are not dumped in the runs. There is an [issue](https://github.com/iree-org/iree/issues/8756)
+Unfortunately, the artifacts are not dumped in the runs. There is an [issue](https://github.com/openxla/iree/issues/8756)
 for tracking this. A workaround can be found in the issue.
 
 # Narrow down the repro
 
 The model itself is big. IREE breaks a model into dispatches and launches the
 kernels. The inputs and outputs could be diverged starting from one of
-launches. To get a smaller reproduce, you can use [-iree-flow-trace-dispatch-tensors](https://github.com/iree-org/iree/blob/main/docs/developers/developing_iree/developer_overview.md#iree-flow-trace-dispatch-tensors).
+launches. To get a smaller reproduce, you can use [-iree-flow-trace-dispatch-tensors](https://github.com/openxla/iree/blob/main/docs/developers/developing_iree/developer_overview.md#iree-flow-trace-dispatch-tensors).
 You can compare the logs between builds/backends, and get the idea about which
 dispatch results in wrong outputs. The dumped inputs can be reused in a
 flagfile.
diff --git a/docs/developers/debugging/releases.md b/docs/developers/debugging/releases.md
index ee505bb..4dfbe51 100644
--- a/docs/developers/debugging/releases.md
+++ b/docs/developers/debugging/releases.md
@@ -93,16 +93,16 @@
 branch.
 
 To run
-[`schedule_snapshot_release.yml`](https://github.com/iree-org/iree/blob/main/.github/workflows/schedule_snapshot_release.yml),
+[`schedule_snapshot_release.yml`](https://github.com/openxla/iree/blob/main/.github/workflows/schedule_snapshot_release.yml),
 comment out
-[this line](https://github.com/iree-org/iree/blob/392449e986493bf710e3da637ebf807715da9ffe/.github/workflows/schedule_snapshot_release.yml#L14):
+[this line](https://github.com/openxla/iree/blob/392449e986493bf710e3da637ebf807715da9ffe/.github/workflows/schedule_snapshot_release.yml#L14):
 ```yaml
 # Don't run this in everyone's forks.
 if: github.repository == 'iree-org/iree'
 ```
 
 And change the branch from 'main' to the branch you are developing on
-[here](https://github.com/iree-org/iree/blob/392449e986493bf710e3da637ebf807715da9ffe/.github/workflows/schedule_snapshot_release.yml#L37):
+[here](https://github.com/openxla/iree/blob/392449e986493bf710e3da637ebf807715da9ffe/.github/workflows/schedule_snapshot_release.yml#L37):
 ```yaml
 - name: Pushing changes
   uses: ad-m/github-push-action@40bf560936a8022e68a3c00e7d2abefaf01305a6  # v0.6.0
@@ -113,21 +113,21 @@
 ```
 
 To speed up
-[`build_package.yml`](https://github.com/iree-org/iree/blob/main/.github/workflows/build_package.yml),
+[`build_package.yml`](https://github.com/openxla/iree/blob/main/.github/workflows/build_package.yml),
 you may want to comment out some of the builds
-[here](https://github.com/iree-org/iree/blob/392449e986493bf710e3da637ebf807715da9ffe/.github/workflows/build_package.yml#L34-L87).
+[here](https://github.com/openxla/iree/blob/392449e986493bf710e3da637ebf807715da9ffe/.github/workflows/build_package.yml#L34-L87).
 The
-[`py-pure-pkgs`](https://github.com/iree-org/iree/blob/392449e986493bf710e3da637ebf807715da9ffe/.github/workflows/build_package.yml#L52)
+[`py-pure-pkgs`](https://github.com/openxla/iree/blob/392449e986493bf710e3da637ebf807715da9ffe/.github/workflows/build_package.yml#L52)
 build takes only ~2 minutes and the
-[`py-runtime-pkg`](https://github.com/iree-org/iree/blob/392449e986493bf710e3da637ebf807715da9ffe/.github/workflows/build_package.yml#L39)
+[`py-runtime-pkg`](https://github.com/openxla/iree/blob/392449e986493bf710e3da637ebf807715da9ffe/.github/workflows/build_package.yml#L39)
 build takes ~5, while the others can take several hours.
 
 From your development branch, you can manually run the
-[Schedule Snapshot Release](https://github.com/iree-org/iree/actions/workflows/schedule_snapshot_release.yml)
+[Schedule Snapshot Release](https://github.com/openxla/iree/actions/workflows/schedule_snapshot_release.yml)
 action, which invokes the
-[Build Native Release Packages](https://github.com/iree-org/iree/actions/workflows/build_package.yml)
+[Build Native Release Packages](https://github.com/openxla/iree/actions/workflows/build_package.yml)
 action, which finally invokes the
-[Validate and Publish Release](https://github.com/iree-org/iree/actions/workflows/validate_and_publish_release.yml)
+[Validate and Publish Release](https://github.com/openxla/iree/actions/workflows/validate_and_publish_release.yml)
 action.  If you already have a draft release and know the release id, package
 version, and run ID from a previous Build Native Release Packages run, you can
 also manually run just the Validate and Publish Release action.
diff --git a/docs/developers/debugging/tf_integrations_test_repro.md b/docs/developers/debugging/tf_integrations_test_repro.md
index 569eabd..0c2e989 100644
--- a/docs/developers/debugging/tf_integrations_test_repro.md
+++ b/docs/developers/debugging/tf_integrations_test_repro.md
@@ -3,7 +3,7 @@
 These are steps to reproduce/address failures in TF/TFLite integration tests. All steps here
 assume starting from the IREE root directory.
 
-1. First setup the python environment as described [here](https://iree-org.github.io/iree/building-from-source/python-bindings-and-importers/#environment-setup).
+1. First setup the python environment as described [here](https://openxla.github.io/iree/building-from-source/python-bindings-and-importers/#environment-setup).
 
 ```
 python -m venv iree.venv
@@ -13,7 +13,7 @@
 2. Install latest IREE release binaries. The importers are not expected to change much, so using the release binaries should work for most cases
 
 ```
-python -m pip install iree-compiler iree-runtime iree-tools-tf iree-tools-tflite --find-links https://iree-org.github.io/iree/pip-release-links.html
+python -m pip install iree-compiler iree-runtime iree-tools-tf iree-tools-tflite --find-links https://openxla.github.io/iree/pip-release-links.html
 ```
 
 3. Install TF nightly
diff --git a/docs/developers/design_docs/codegen_passes.md b/docs/developers/design_docs/codegen_passes.md
index c52daa2..4f69024 100644
--- a/docs/developers/design_docs/codegen_passes.md
+++ b/docs/developers/design_docs/codegen_passes.md
@@ -639,8 +639,8 @@
 Once applied the resulting IR is in SPIR-V dialect that can be serialized to a
 SPIR-V binary.
 
-[ConvertToGPU]: https://github.com/iree-org/iree/blob/main/iree/compiler/Conversion/LinalgToSPIRV/ConvertToGPUPass.cpp
-[ConvertToSPIRV]: https://github.com/iree-org/iree/blob/main/iree/compiler/Conversion/LinalgToSPIRV/ConvertToSPIRVPass.cpp
+[ConvertToGPU]: https://github.com/openxla/iree/blob/main/iree/compiler/Conversion/LinalgToSPIRV/ConvertToGPUPass.cpp
+[ConvertToSPIRV]: https://github.com/openxla/iree/blob/main/iree/compiler/Conversion/LinalgToSPIRV/ConvertToSPIRVPass.cpp
 [DotAfterAll]: https://gist.github.com/MaheshRavishankar/9e2d406296f469515c4a79bf1e7eef44
 [GPUToSPIRV]: https://github.com/llvm/llvm-project/blob/master/mlir/include/mlir/Conversion/GPUToSPIRV/ConvertGPUToSPIRV.h
 [HLOToLinalgPass]: https://github.com/tensorflow/tensorflow/blob/75c40f6bff2faa3d90a375dfa4025b2e6e2d7a3d/tensorflow/compiler/mlir/xla/transforms/passes.h#L67
@@ -649,7 +649,7 @@
 [LinalgFusionOfTensorOps]: https://github.com/llvm/llvm-project/blob/80cb25cbd555f9634836b766c86aead435b60eaa/mlir/include/mlir/Dialect/Linalg/Passes.td#L30
 [LinalgPromotionPatterns]: https://github.com/llvm/llvm-project/blob/303a7f7a26e2aae1cb85f49dccbc0b5d14e0b2e0/mlir/include/mlir/Dialect/Linalg/Transforms/Transforms.h#L358
 [LinalgRationale]: https://mlir.llvm.org/docs/Rationale/RationaleLinalgDialect/
-[LinalgTileAndFuse]: https://github.com/iree-org/iree/blob/main/iree/compiler/Conversion/LinalgToSPIRV/LinalgTileAndFusePass.cpp
+[LinalgTileAndFuse]: https://github.com/openxla/iree/blob/main/iree/compiler/Conversion/LinalgToSPIRV/LinalgTileAndFusePass.cpp
 [LinalgTiling]: https://mlir.llvm.org/docs/Dialects/Linalg/#set-of-key-transformationsa-namekey_transformationsa
 [LinalgTilingPatterns]: https://github.com/llvm/llvm-project/blob/master/mlir/include/mlir/Dialect/Linalg/Transforms/Transforms.h
 [NVVMAddressSpace]: https://docs.nvidia.com/cuda/nvvm-ir-spec/index.html#address-space
diff --git a/docs/developers/design_docs/cuda_backend.md b/docs/developers/design_docs/cuda_backend.md
index 6ff3848..c0bffeb 100644
--- a/docs/developers/design_docs/cuda_backend.md
+++ b/docs/developers/design_docs/cuda_backend.md
@@ -102,10 +102,10 @@
 4xf32=3 4 5 6
 ```
 
-[iree-cuda]: https://github.com/iree-org/iree/tree/main/iree/hal/drivers/cuda/
-[cuda-symbols]: https://github.com/iree-org/iree/blob/main/iree/hal/drivers/cuda/dynamic_symbols_tables.h
+[iree-cuda]: https://github.com/openxla/iree/tree/main/iree/hal/drivers/cuda/
+[cuda-symbols]: https://github.com/openxla/iree/blob/main/iree/hal/drivers/cuda/dynamic_symbols_tables.h
 [cuda-driver]: https://docs.nvidia.com/cuda/cuda-driver-api/index.html
 [cuda-graph]: https://developer.nvidia.com/blog/cuda-graphs/
 [vulkan-semaphore]: https://www.khronos.org/blog/vulkan-timeline-semaphores
-[semaphore-issue]: https://github.com/iree-org/iree/issues/4727
-[codegen-passes]: https://github.com/iree-org/iree/blob/main/docs/design_docs/codegen_passes.md
+[semaphore-issue]: https://github.com/openxla/iree/issues/4727
+[codegen-passes]: https://github.com/openxla/iree/blob/main/docs/design_docs/codegen_passes.md
diff --git a/docs/developers/design_docs/function_abi.md b/docs/developers/design_docs/function_abi.md
index 457ed58..baa1154 100644
--- a/docs/developers/design_docs/function_abi.md
+++ b/docs/developers/design_docs/function_abi.md
@@ -50,26 +50,26 @@
 -   ValueType:
 
     -   Runtime:
-        [`iree_vm_value`](https://github.com/iree-org/iree/blob/main/iree/vm/value.h)
+        [`iree_vm_value`](https://github.com/openxla/iree/blob/main/iree/vm/value.h)
     -   Compile Time: primitive MLIR integer/floating point types
 
 -   Simple ND-Array Buffer:
 
     -   Runtime:
-        [`iree_hal_buffer_view`](https://github.com/iree-org/iree/blob/main/iree/hal/buffer_view.h)
+        [`iree_hal_buffer_view`](https://github.com/openxla/iree/blob/main/iree/hal/buffer_view.h)
     -   Compile Time: `tensor<>`
 
 -   String:
 
     -   Runtime:
-        [`iree_vm_list`](https://github.com/iree-org/iree/blob/main/iree/vm/list.h)
+        [`iree_vm_list`](https://github.com/openxla/iree/blob/main/iree/vm/list.h)
         containing `i8`
     -   Compile Time: `!util.list<i8>`
 
 -   Tuple:
 
     -   Runtime:
-        [`iree_vm_list`](https://github.com/iree-org/iree/blob/main/iree/vm/list.h)
+        [`iree_vm_list`](https://github.com/openxla/iree/blob/main/iree/vm/list.h)
         of variant
     -   Compile Time: `!util.list<?>`
     -   Note that these are statically type erased at the boundary.
@@ -77,7 +77,7 @@
 -   TypedList (homogeneous):
 
     -   Runtime:
-        [`iree_vm_list`](https://github.com/iree-org/iree/blob/main/iree/vm/list.h)
+        [`iree_vm_list`](https://github.com/openxla/iree/blob/main/iree/vm/list.h)
         of `T`
     -   Compile Time: `!util.list<T>`
 
diff --git a/docs/developers/design_docs/hal_driver_features.md b/docs/developers/design_docs/hal_driver_features.md
index f5dcbdc..2a423e9 100644
--- a/docs/developers/design_docs/hal_driver_features.md
+++ b/docs/developers/design_docs/hal_driver_features.md
@@ -101,20 +101,20 @@
    `VK_CapabilitiesAttr` to the attribute added to `SPV_ResourceLimitsAttr`.
 
 [d89364]: https://reviews.llvm.org/D89364
-[iree-hal]: https://github.com/iree-org/iree/tree/main/iree/hal
-[iree-hal-c-api]: https://github.com/iree-org/iree/blob/main/iree/hal/api.h
-[iree-hal-dialect]: https://github.com/iree-org/iree/tree/main/iree/compiler/Dialect/HAL
-[iree-vulkan-dialect]: https://github.com/iree-org/iree/tree/main/iree/compiler/Dialect/Vulkan
-[iree-vulkan-base-td]: https://github.com/iree-org/iree/blob/main/iree/compiler/Dialect/Vulkan/IR/VulkanBase.td
-[iree-vulkan-cap-td]: https://github.com/iree-org/iree/blob/main/iree/compiler/Dialect/Vulkan/IR/VulkanAttributes.td
-[iree-vulkan-target-env]: https://github.com/iree-org/iree/blob/b4739d704de15029cd671e53e7d7e743f4ca2e35/iree/compiler/Dialect/HAL/Target/VulkanSPIRV/VulkanSPIRVTarget.cpp#L66-L70
-[iree-vulkan-target-triple]: https://github.com/iree-org/iree/blob/main/iree/compiler/Dialect/Vulkan/Utils/TargetEnvUtils.cpp
-[iree-vulkan-target-conv]: https://github.com/iree-org/iree/blob/b4739d704de15029cd671e53e7d7e743f4ca2e35/iree/compiler/Dialect/Vulkan/Utils/TargetEnvUtils.h#L29-L42
-[iree-spirv-target-attach]: https://github.com/iree-org/iree/blob/b4739d704de15029cd671e53e7d7e743f4ca2e35/iree/compiler/Dialect/HAL/Target/VulkanSPIRV/VulkanSPIRVTarget.cpp#L228-L240
+[iree-hal]: https://github.com/openxla/iree/tree/main/iree/hal
+[iree-hal-c-api]: https://github.com/openxla/iree/blob/main/iree/hal/api.h
+[iree-hal-dialect]: https://github.com/openxla/iree/tree/main/iree/compiler/Dialect/HAL
+[iree-vulkan-dialect]: https://github.com/openxla/iree/tree/main/iree/compiler/Dialect/Vulkan
+[iree-vulkan-base-td]: https://github.com/openxla/iree/blob/main/iree/compiler/Dialect/Vulkan/IR/VulkanBase.td
+[iree-vulkan-cap-td]: https://github.com/openxla/iree/blob/main/iree/compiler/Dialect/Vulkan/IR/VulkanAttributes.td
+[iree-vulkan-target-env]: https://github.com/openxla/iree/blob/b4739d704de15029cd671e53e7d7e743f4ca2e35/iree/compiler/Dialect/HAL/Target/VulkanSPIRV/VulkanSPIRVTarget.cpp#L66-L70
+[iree-vulkan-target-triple]: https://github.com/openxla/iree/blob/main/iree/compiler/Dialect/Vulkan/Utils/TargetEnvUtils.cpp
+[iree-vulkan-target-conv]: https://github.com/openxla/iree/blob/b4739d704de15029cd671e53e7d7e743f4ca2e35/iree/compiler/Dialect/Vulkan/Utils/TargetEnvUtils.h#L29-L42
+[iree-spirv-target-attach]: https://github.com/openxla/iree/blob/b4739d704de15029cd671e53e7d7e743f4ca2e35/iree/compiler/Dialect/HAL/Target/VulkanSPIRV/VulkanSPIRVTarget.cpp#L228-L240
 [mlir-spirv-extensions-attr]: https://github.com/llvm/llvm-project/blob/076305568cd6c7c02ceb9cfc35e1543153406d19/mlir/include/mlir/Dialect/SPIRV/SPIRVBase.td#L314
 [mlir-spirv-target]: https://mlir.llvm.org/docs/Dialects/SPIR-V/#target-environment
 [mlir-spirv-attr]: https://github.com/llvm/llvm-project/blob/076305568cd6c7c02ceb9cfc35e1543153406d19/mlir/include/mlir/Dialect/SPIRV/SPIRVAttributes.h
 [mlir-spirv-target-td]: https://github.com/llvm/llvm-project/blob/076305568cd6c7c02ceb9cfc35e1543153406d19/mlir/include/mlir/Dialect/SPIRV/TargetAndABI.td
-[pr-3469]: https://github.com/iree-org/iree/pull/3469
+[pr-3469]: https://github.com/openxla/iree/pull/3469
 [vk-coop-mat-ext]: https://www.khronos.org/registry/vulkan/specs/1.2-extensions/man/html/VK_NV_cooperative_matrix.html
 [vulkaninfo]: https://vulkan.lunarg.com/doc/view/latest/linux/vulkaninfo.html
diff --git a/docs/developers/developing_iree/benchmarking.md b/docs/developers/developing_iree/benchmarking.md
index 3d7c9c2..9c6fdea 100644
--- a/docs/developers/developing_iree/benchmarking.md
+++ b/docs/developers/developing_iree/benchmarking.md
@@ -156,7 +156,7 @@
 introduce as little overhead as possible and have several benchmark binaries
 dedicated for evaluating the VM's performance. These benchmark binaries are
 named as `*_benchmark` in the
-[`iree/vm/`](https://github.com/iree-org/iree/tree/main/iree/vm) directory. They
+[`iree/vm/`](https://github.com/openxla/iree/tree/main/iree/vm) directory. They
 also use the Google Benchmark library as the above.
 
 ## CPU Configuration
diff --git a/docs/developers/developing_iree/contributor_tips.md b/docs/developers/developing_iree/contributor_tips.md
index 20f179b..0118d07 100644
--- a/docs/developers/developing_iree/contributor_tips.md
+++ b/docs/developers/developing_iree/contributor_tips.md
@@ -13,7 +13,7 @@
 We tend to use the "triangular" or "forking" workflow. Develop primarily on a
 clone of the repository on your development machine. Any local branches named
 the same as persistent branches from the
-[main repository](https://github.com/iree-org/iree) (currently `main`, `google`,
+[main repository](https://github.com/openxla/iree) (currently `main`, `google`,
 and `stable`) are pristine (though potentially stale) copies. You only
 fast-forward these to match upstream and otherwise do development on other
 branches. When sending PRs, you push to a different branch on your public fork
@@ -42,7 +42,7 @@
     # From whatever directory under which you want to nest your repo
     $ git clone git@github.com:<github_username>/iree.git
     $ cd iree
-    $ git remote add upstream git@github.com:iree-org/iree.git
+    $ git remote add upstream git@github.com:openxla/iree.git
     ```
 
     This is especially important for maintainers who have write access (so can
@@ -55,7 +55,7 @@
     URL explicitly before pushing.
 
 3.  Use a script like
-    [git_update.sh](https://github.com/iree-org/iree/blob/main/scripts/git/git_update.sh)
+    [git_update.sh](https://github.com/openxla/iree/blob/main/scripts/git/git_update.sh)
     to easily synchronize `main` with `upstream`. Submodules make this a
     little trickier than it should be. You can also add this as a git alias.
 
diff --git a/docs/developers/developing_iree/developer_overview.md b/docs/developers/developing_iree/developer_overview.md
index f9ccfee..cbe89cf 100644
--- a/docs/developers/developing_iree/developer_overview.md
+++ b/docs/developers/developing_iree/developer_overview.md
@@ -4,59 +4,59 @@
 developers.
 
 ** Note: project layout is evolving at the moment, see
-   https://github.com/iree-org/iree/issues/8955 **
+   https://github.com/openxla/iree/issues/8955 **
 
 ## Project Code Layout
 
-[iree/](https://github.com/iree-org/iree/blob/main/iree/)
+[iree/](https://github.com/openxla/iree/blob/main/iree/)
 
 *   Core IREE project
 
-[integrations/](https://github.com/iree-org/iree/blob/main/integrations/)
+[integrations/](https://github.com/openxla/iree/blob/main/integrations/)
 
 *   Integrations between IREE and other frameworks, such as TensorFlow
 
-[runtime/](https://github.com/iree-org/iree/tree/main/runtime/)
+[runtime/](https://github.com/openxla/iree/tree/main/runtime/)
 
 *   IREE runtime code, with no dependencies on the compiler
 
-[bindings/](https://github.com/iree-org/iree/blob/main/bindings/)
+[bindings/](https://github.com/openxla/iree/blob/main/bindings/)
 
 *   Language and platform bindings, such as Python
-*   Also see [runtime/bindings/](https://github.com/iree-org/iree/tree/main/runtime/bindings)
+*   Also see [runtime/bindings/](https://github.com/openxla/iree/tree/main/runtime/bindings)
 
-[samples/](https://github.com/iree-org/iree/blob/main/samples/)
+[samples/](https://github.com/openxla/iree/blob/main/samples/)
 
 *   Samples built using IREE's runtime and compiler
 *   Also see the separate https://github.com/iree-org/iree-samples repository
 
 ## IREE Compiler Code Layout
 
-[iree/compiler/](https://github.com/iree-org/iree/blob/main/iree/compiler/)
+[iree/compiler/](https://github.com/openxla/iree/blob/main/iree/compiler/)
 
 *   IREE's MLIR dialects, LLVM compiler passes, module translation code, etc.
 
 ## IREE Runtime Code Layout
 
-[iree/base/](https://github.com/iree-org/iree/blob/main/runtime/src/iree/base/)
+[iree/base/](https://github.com/openxla/iree/blob/main/runtime/src/iree/base/)
 
 *   Common types and utilities used throughout the runtime
 
-[iree/hal/](https://github.com/iree-org/iree/blob/main/runtime/src/iree/hal/)
+[iree/hal/](https://github.com/openxla/iree/blob/main/runtime/src/iree/hal/)
 
 *   **H**ardware **A**bstraction **L**ayer for IREE's runtime, with
     implementations for hardware and software backends
 
-[iree/schemas/](https://github.com/iree-org/iree/blob/main/runtime/src/iree/schemas/)
+[iree/schemas/](https://github.com/openxla/iree/blob/main/runtime/src/iree/schemas/)
 
 *   Shared data storage format definitions, primarily using
     [FlatBuffers](https://google.github.io/flatbuffers/)
 
-[tools/](https://github.com/iree-org/iree/blob/main/tools/)
+[tools/](https://github.com/openxla/iree/blob/main/tools/)
 
 *   Assorted tools used to optimize, translate, and evaluate IREE
 
-[iree/vm/](https://github.com/iree-org/iree/blob/main/runtime/src/iree/vm/)
+[iree/vm/](https://github.com/openxla/iree/blob/main/runtime/src/iree/vm/)
 
 *   Bytecode **V**irtual **M**achine used to work with IREE modules and invoke
     IREE functions
@@ -87,7 +87,7 @@
 `FileCheck` should be used to test the generated output.
 
 Here's an example of a small compiler pass running on a
-[test file](https://github.com/iree-org/iree/blob/main/iree/compiler/Dialect/Util/Transforms/test/drop_compiler_hints.mlir):
+[test file](https://github.com/openxla/iree/blob/main/iree/compiler/Dialect/Util/Transforms/test/drop_compiler_hints.mlir):
 
 ```shell
 $ ../iree-build/tools/iree-opt \
@@ -99,7 +99,7 @@
 
 For a more complex example, here's how to run IREE's complete transformation
 pipeline targeting the VMVX backend on the
-[fullyconnected.mlir](https://github.com/iree-org/iree/blob/main/tests/e2e/models/fullyconnected.mlir)
+[fullyconnected.mlir](https://github.com/openxla/iree/blob/main/tests/e2e/models/fullyconnected.mlir)
 model file:
 
 ```shell
@@ -110,7 +110,7 @@
 ```
 
 Custom passes may also be layered on top of `iree-opt`, see
-[samples/custom_modules/dialect](https://github.com/iree-org/iree/blob/main/samples/custom_modules/dialect)
+[samples/custom_modules/dialect](https://github.com/openxla/iree/blob/main/samples/custom_modules/dialect)
 for a sample.
 
 ### iree-compile
@@ -151,7 +151,7 @@
 and executes it as a series of
 [googletest](https://github.com/google/googletest) tests. This is the test
 runner for the IREE
-[check framework](https://github.com/iree-org/iree/tree/main/docs/developing_iree/testing_guide.md#end-to-end-tests).
+[check framework](https://github.com/openxla/iree/tree/main/docs/developing_iree/testing_guide.md#end-to-end-tests).
 
 ```shell
 $ ../iree-build/tools/iree-compile \
@@ -177,7 +177,7 @@
 function as exported by default and running all of them.
 
 For example, to execute the contents of
-[samples/models/simple_abs.mlir](https://github.com/iree-org/iree/blob/main/samples/models/simple_abs.mlir):
+[samples/models/simple_abs.mlir](https://github.com/openxla/iree/blob/main/samples/models/simple_abs.mlir):
 
 ```shell
 # iree-run-mlir <compiler flags> [input.mlir] <runtime flags>
@@ -227,4 +227,4 @@
 ### Useful Vulkan driver flags
 
 For IREE's Vulkan runtime driver, there are a few useful flags defined in
-[driver_module.cc](https://github.com/iree-org/iree/blob/main/iree/hal/drivers/vulkan/registration/driver_module.cc):
+[driver_module.cc](https://github.com/openxla/iree/blob/main/iree/hal/drivers/vulkan/registration/driver_module.cc):
diff --git a/docs/developers/developing_iree/e2e_benchmarking.md b/docs/developers/developing_iree/e2e_benchmarking.md
index 44431de..cb18a5e 100644
--- a/docs/developers/developing_iree/e2e_benchmarking.md
+++ b/docs/developers/developing_iree/e2e_benchmarking.md
@@ -5,7 +5,7 @@
 > Note:<br>
 > &nbsp;&nbsp;&nbsp;&nbsp;The TensorFlow integrations are currently being
   refactored. The `bazel` build is deprecated. Refer to
-  https://iree-org.github.io/iree/get-started/getting-started-python for a general
+  https://openxla.github.io/iree/get-started/getting-started-python for a general
   overview of how to build and execute the e2e tests.
 
 We use our end-to-end TensorFlow integration tests to test compilation and
@@ -14,7 +14,7 @@
 to, and to run them using valid inputs for each model.
 
 This guide assumes that you can run the tensorflow integration tests. See
-[this doc](https://iree-org.github.io/iree/building-from-source/python-bindings-and-importers/)
+[this doc](https://openxla.github.io/iree/building-from-source/python-bindings-and-importers/)
 for more information. That doc also covers writing new tests, which you'll need
 to do if you'd like to benchmark a new TensorFlow model.
 
@@ -187,7 +187,7 @@
 
 IREE only supports compiling to Android with CMake. Documentation on setting up
 your environment to cross-compile to Android can be found
-[here](https://iree-org.github.io/iree/building-from-source/android/).
+[here](https://openxla.github.io/iree/building-from-source/android/).
 
 ```shell
 # After following the instructions above up to 'Build all targets', the
diff --git a/docs/developers/developing_iree/profiling_vulkan_gpu.md b/docs/developers/developing_iree/profiling_vulkan_gpu.md
index 2248860..cffdb96 100644
--- a/docs/developers/developing_iree/profiling_vulkan_gpu.md
+++ b/docs/developers/developing_iree/profiling_vulkan_gpu.md
@@ -48,7 +48,7 @@
 app. In IREE we have a simple Android native app wrapper to help package
 IREE core libraries together with a specific VM bytecode invocation into an
 Android app. The wrapper and its documentation are placed at
-[`tools/android/run_module_app/`](https://github.com/iree-org/iree/tree/main/tools/android/run_module_app).
+[`tools/android/run_module_app/`](https://github.com/openxla/iree/tree/main/tools/android/run_module_app).
 
 For example, to package a module compiled from the following `mhlo-dot.mlir` as
 an Android app:
diff --git a/docs/developers/developing_iree/profiling_with_tracy.md b/docs/developers/developing_iree/profiling_with_tracy.md
index 047e8a2..1d2f141 100644
--- a/docs/developers/developing_iree/profiling_with_tracy.md
+++ b/docs/developers/developing_iree/profiling_with_tracy.md
@@ -391,6 +391,6 @@
 ## Configuring Tracy instrumentation
 
 Set IREE's `IREE_TRACING_MODE` value (defined in
-[iree/base/tracing.h](https://github.com/iree-org/iree/blob/main/iree/base/tracing.h))
+[iree/base/tracing.h](https://github.com/openxla/iree/blob/main/iree/base/tracing.h))
 to adjust which tracing features, such as allocation tracking and callstacks,
 are enabled.
diff --git a/docs/developers/developing_iree/sanitizers.md b/docs/developers/developing_iree/sanitizers.md
index 2004fca..810a519 100644
--- a/docs/developers/developing_iree/sanitizers.md
+++ b/docs/developers/developing_iree/sanitizers.md
@@ -59,7 +59,7 @@
 etc. (anything that internally uses the CMake `iree_bytecode_module` macro).
 
 The CMake option `IREE_BUILD_SAMPLES=OFF` is needed because samples [currently
-assume](https://github.com/iree-org/iree/pull/8893) that the embedded linker is
+assume](https://github.com/openxla/iree/pull/8893) that the embedded linker is
 used, so they are incompatible with
 `IREE_BYTECODE_MODULE_FORCE_LLVM_SYSTEM_LINKER=ON`.
 
@@ -70,7 +70,7 @@
 If you know what you're doing (i.e. if you are not building targets that
 internally involve an LLVM/CPU `iree_bytecode_module`), feel free to locally comment out
 the CMake error and only set `IREE_ENABLE_TSAN`. Also see a
-[past attempt]((https://github.com/iree-org/iree/pull/8966) to relax that CMake
+[past attempt](https://github.com/openxla/iree/pull/8966) to relax that CMake
 validation.
 
 ### MSan (MemorySanitizer)
diff --git a/docs/developers/developing_iree/testing_guide.md b/docs/developers/developing_iree/testing_guide.md
index d20b98a..9c82f1c 100644
--- a/docs/developers/developing_iree/testing_guide.md
+++ b/docs/developers/developing_iree/testing_guide.md
@@ -31,7 +31,7 @@
 ### Running a Test
 
 For the test
-https://github.com/iree-org/iree/blob/main/iree/compiler/Dialect/VM/Conversion/MathToVM/test/arithmetic_ops.mlir
+https://github.com/openxla/iree/blob/main/iree/compiler/Dialect/VM/Conversion/MathToVM/test/arithmetic_ops.mlir
 
 With CMake, run this from the build directory:
 
@@ -76,7 +76,7 @@
 ```
 
 There is a corresponding CMake function, calls to which will be generated by our
-[Bazel to CMake Converter](https://github.com/iree-org/iree/tree/main/build_tools/bazel_to_cmake/bazel_to_cmake.py).
+[Bazel to CMake Converter](https://github.com/openxla/iree/tree/main/build_tools/bazel_to_cmake/bazel_to_cmake.py).
 
 ```cmake
 iree_lit_test_suite(
@@ -174,7 +174,7 @@
 
 We have created a corresponding CMake function `iree_cc_test` that mirrors the
 Bazel rule's behavior. Our
-[Bazel to CMake converter](https://github.com/iree-org/iree/tree/main/build_tools/bazel_to_cmake/bazel_to_cmake.py)
+[Bazel to CMake converter](https://github.com/openxla/iree/tree/main/build_tools/bazel_to_cmake/bazel_to_cmake.py)
 should generally derive the `CMakeLists.txt` file from the BUILD file:
 
 ```cmake
@@ -230,7 +230,7 @@
 ### Running a Test
 
 For the test
-https://github.com/iree-org/iree/tree/main/tests/e2e/xla_ops/floor.mlir
+https://github.com/openxla/iree/tree/main/tests/e2e/xla_ops/floor.mlir
 compiled for the VMVX target backend and running on the VMVX driver (here they
 match exactly, but in principle there's a many-to-many mapping from backends to
 drivers).
@@ -294,7 +294,7 @@
 Next we use this input constant to exercise the runtime feature under test (in
 this case, just a single floor operation). Finally, we use a check dialect
 operation to make an assertion about the output. There are a few different
-[assertion operations](https://github.com/iree-org/iree/tree/main/iree/compiler/Modules/Check).
+[assertion operations](https://github.com/openxla/iree/tree/main/iree/compiler/Modules/Check).
 Here we use the `expect_almost_eq_const` op: *almost* because we are comparing
 floats and want to allow for floating-point imprecision, and *const* because we
 want to compare it to a constant value. This last part is just syntactic sugar
@@ -391,7 +391,7 @@
 
 The CMake functions follow a similar pattern. The calls to them are generated in
 our `CMakeLists.txt` file by
-[bazel_to_cmake](https://github.com/iree-org/iree/tree/main/build_tools/bazel_to_cmake/bazel_to_cmake.py).
+[bazel_to_cmake](https://github.com/openxla/iree/tree/main/build_tools/bazel_to_cmake/bazel_to_cmake.py).
 
 There are other test targets that generate tests based on template configuration
 and platform detection, such as `iree_static_linker_test`. Those targets are
diff --git a/docs/developers/get_started/README.md b/docs/developers/get_started/README.md
index c5b160a..5cc21bf 100644
--- a/docs/developers/get_started/README.md
+++ b/docs/developers/get_started/README.md
@@ -3,7 +3,7 @@
 ---
 
 The primary guides are located at
-https://iree-org.github.io/iree/building-from-source/ (source in
+https://openxla.github.io/iree/building-from-source/ (source in
 [the website/ folder](../../website/docs/building-from-source/) )
 
 ---
diff --git a/docs/developers/get_started/building_with_bazel_linux.md b/docs/developers/get_started/building_with_bazel_linux.md
index e03e6ef..dfcc597 100644
--- a/docs/developers/get_started/building_with_bazel_linux.md
+++ b/docs/developers/get_started/building_with_bazel_linux.md
@@ -12,7 +12,7 @@
 ### Install Bazel
 
 Install Bazel, matching IREE's
-[`.bazelversion`](https://github.com/iree-org/iree/blob/main/.bazelversion) by
+[`.bazelversion`](https://github.com/openxla/iree/blob/main/.bazelversion) by
 following the
 [official docs](https://docs.bazel.build/versions/master/install.html).
 
@@ -44,7 +44,7 @@
 Clone the repository, initialize its submodules and configure:
 
 ```shell
-$ git clone https://github.com/iree-org/iree.git
+$ git clone https://github.com/openxla/iree.git
 $ cd iree
 $ git submodule update --init
 $ python3 configure_bazel.py
@@ -105,7 +105,7 @@
 ```
 
 Translate a
-[MLIR file](https://github.com/iree-org/iree/blob/main/samples/models/simple_abs.mlir)
+[MLIR file](https://github.com/openxla/iree/blob/main/samples/models/simple_abs.mlir)
 and execute a function in the compiled module:
 
 ```shell
diff --git a/docs/developers/get_started/building_with_bazel_macos.md b/docs/developers/get_started/building_with_bazel_macos.md
index c6a2222..199e1ce 100644
--- a/docs/developers/get_started/building_with_bazel_macos.md
+++ b/docs/developers/get_started/building_with_bazel_macos.md
@@ -45,7 +45,7 @@
 Clone the repository, initialize its submodules and configure:
 
 ```shell
-$ git clone https://github.com/iree-org/iree.git
+$ git clone https://github.com/openxla/iree.git
 $ cd iree
 $ git submodule update --init
 $ python3 configure_bazel.py
@@ -108,7 +108,7 @@
 ```
 
 Translate a
-[MLIR file](https://github.com/iree-org/iree/blob/main/samples/models/simple_abs.mlir)
+[MLIR file](https://github.com/openxla/iree/blob/main/samples/models/simple_abs.mlir)
 and execute a function in the compiled module:
 
 ```shell
diff --git a/docs/developers/get_started/building_with_bazel_windows.md b/docs/developers/get_started/building_with_bazel_windows.md
index cb2de28..74bf054 100644
--- a/docs/developers/get_started/building_with_bazel_windows.md
+++ b/docs/developers/get_started/building_with_bazel_windows.md
@@ -18,7 +18,7 @@
 ### Install Bazel
 
 Install Bazel version > 2.0.0 (see
-[`.bazelversion`](https://github.com/iree-org/iree/blob/main/.bazelversion) for
+[`.bazelversion`](https://github.com/openxla/iree/blob/main/.bazelversion) for
 the specific version IREE uses) by following the
 [official docs](https://docs.bazel.build/versions/master/install-windows.html).
 
@@ -52,7 +52,7 @@
 clone the repository, initialize its submodules, and configure:
 
 ```powershell
-> git clone https://github.com/iree-org/iree.git
+> git clone https://github.com/openxla/iree.git
 > cd iree
 > git submodule update --init
 > python configure_bazel.py
@@ -111,7 +111,7 @@
 ```
 
 Translate a
-[MLIR file](https://github.com/iree-org/iree/blob/main/samples/models/simple_abs.mlir)
+[MLIR file](https://github.com/openxla/iree/blob/main/samples/models/simple_abs.mlir)
 and execute a function in the compiled module:
 
 ```powershell
diff --git a/docs/developers/objectives.md b/docs/developers/objectives.md
index cb07621..d9bebca 100644
--- a/docs/developers/objectives.md
+++ b/docs/developers/objectives.md
@@ -87,14 +87,14 @@
 
 +   P1 KR: Able to perform IREE profiling using Tracy
 
-    + See [https://github.com/iree-org/iree/issues/1886](https://github.com/iree-org/iree/issues/1886), [https://github.com/wolfpld/tracy](https://github.com/wolfpld/tracy)
+    + See [https://github.com/openxla/iree/issues/1886](https://github.com/openxla/iree/issues/1886), [https://github.com/wolfpld/tracy](https://github.com/wolfpld/tracy)
 
 +   P1 KR: Able to map time spent in execution back to source using Tracy
-    +   See https://github.com/iree-org/iree/issues/1199
+    +   See https://github.com/openxla/iree/issues/1199
     +   Source layer (source Python, HLO, HAL, etc.) is configurable at compile time.
 
 +   P1 KR: Able to track compile-time performance-related statistics
-    + See [https://github.com/iree-org/iree/issues/1409](https://github.com/iree-org/iree/issues/1409)
+    + See [https://github.com/openxla/iree/issues/1409](https://github.com/openxla/iree/issues/1409)
     + Initial stats to track: number of executables, the serialized size of constant data, the serialized size of the executables, the number of host readbacks (flow.tensor.load), backend specific stats like the number of split dispatches in the SPIR-V backend, dynamic shape info like the number of tensors with dynamic shapes that survive after shape propagation
 
 +   P1 KR: Internal and external contributors able to confidently assess performance impact of a change.
diff --git a/docs/developers/tensorflow_coverage/language_and_speech_coverage.md b/docs/developers/tensorflow_coverage/language_and_speech_coverage.md
index 2c6ab58..d156424 100644
--- a/docs/developers/tensorflow_coverage/language_and_speech_coverage.md
+++ b/docs/developers/tensorflow_coverage/language_and_speech_coverage.md
@@ -3,7 +3,7 @@
 Tests of MobileBert and streamable Keyword Spotting models.
 
 IREE has three main backend
-[targets](https://github.com/iree-org/iree/tree/main/iree/compiler/Dialect/HAL/Target):
+[targets](https://github.com/openxla/iree/tree/main/iree/compiler/Dialect/HAL/Target):
 `vmvx`, `llvm`, and `vulkan-spirv`. We also test TFLite in our infrastructure
 for benchmarking purposes.
 
diff --git a/docs/website/docs/bindings/c-api.md b/docs/website/docs/bindings/c-api.md
index 2497d3d..9dbe9fa 100644
--- a/docs/website/docs/bindings/c-api.md
+++ b/docs/website/docs/bindings/c-api.md
@@ -7,11 +7,11 @@
 
 | Component header file                                                       | Overview                                                                  |
 |-----------------------------------------------------------------------------|---------------------------------------------------------------------------|
-| [iree/base/api.h](https://github.com/iree-org/iree/blob/main/runtime/src/iree/base/api.h) | Core API, type definitions, ownership policies, utilities                 |
-| [iree/vm/api.h](https://github.com/iree-org/iree/blob/main/runtime/src/iree/vm/api.h)     | VM APIs: loading modules, I/O, calling functions                          |
-| [iree/hal/api.h](https://github.com/iree-org/iree/blob/main/runtime/src/iree/hal/api.h)   | HAL APIs: device management, synchronization, accessing hardware features |
+| [iree/base/api.h](https://github.com/openxla/iree/blob/main/runtime/src/iree/base/api.h) | Core API, type definitions, ownership policies, utilities                 |
+| [iree/vm/api.h](https://github.com/openxla/iree/blob/main/runtime/src/iree/vm/api.h)     | VM APIs: loading modules, I/O, calling functions                          |
+| [iree/hal/api.h](https://github.com/openxla/iree/blob/main/runtime/src/iree/hal/api.h)   | HAL APIs: device management, synchronization, accessing hardware features |
 
-The [samples/](https://github.com/iree-org/iree/tree/main/samples)
+The [samples/](https://github.com/openxla/iree/tree/main/samples)
 directory demonstrates several ways to use IREE's C API.
 
 ## Prerequisites
@@ -147,7 +147,7 @@
 
 !!! note
     Many IREE samples use
-    [`c_embed_data`](https://github.com/iree-org/iree/tree/main/build_tools/embed_data)
+    [`c_embed_data`](https://github.com/openxla/iree/tree/main/build_tools/embed_data)
     to embed vmfb files as C code to avoid file I/O and ease portability.
     Applications should use what makes sense for their platforms and deployment
     configurations.
@@ -211,11 +211,11 @@
 
 [^1]:
   We are exploring adding a C API for IREE's compiler, see
-  [this GitHub issue](https://github.com/iree-org/iree/issues/3817)
+  [this GitHub issue](https://github.com/openxla/iree/issues/3817)
 
 [^2]:
   We plan on deploying via [vcpkg](https://github.com/microsoft/vcpkg) in the
   future too, see
-  [this GitHub project](https://github.com/iree-org/iree/projects/18)
+  [this GitHub project](https://github.com/openxla/iree/projects/18)
 
 *[vmfb]: VM FlatBuffer
diff --git a/docs/website/docs/bindings/python.md b/docs/website/docs/bindings/python.md
index 61208f7..c458fc4 100644
--- a/docs/website/docs/bindings/python.md
+++ b/docs/website/docs/bindings/python.md
@@ -25,7 +25,7 @@
 !!! Caution
     The TensorFlow, TensorFlow Lite, and XLA packages are currently only
     available on Linux and macOS. They are not available on Windows yet (see
-    [this issue](https://github.com/iree-org/iree/issues/6417)).
+    [this issue](https://github.com/openxla/iree/issues/6417)).
 
 ## Prerequisites
 
@@ -104,11 +104,11 @@
 !!! Tip
 
     Nightly packages are also published on
-    [GitHub releases](https://github.com/iree-org/iree/releases). To use these,
+    [GitHub releases](https://github.com/openxla/iree/releases). To use these,
     run `pip install` with this extra option:
 
     ```
-    --find-links https://iree-org.github.io/iree/pip-release-links.html
+    --find-links https://openxla.github.io/iree/pip-release-links.html
     ```
 
 ### Building from source
@@ -122,7 +122,7 @@
 [readthedocs](https://iree-python-api.readthedocs.io/en/latest/).
 
 Check out the samples in IREE's
-[samples/colab/ directory](https://github.com/iree-org/iree/tree/main/samples/colab)
+[samples/colab/ directory](https://github.com/openxla/iree/tree/main/samples/colab)
 and the [iree-samples repository](https://github.com/iree-org/iree-samples) for
 examples using the Python APIs.
 
diff --git a/docs/website/docs/bindings/tensorflow-lite.md b/docs/website/docs/bindings/tensorflow-lite.md
index cd309d5..947c557 100644
--- a/docs/website/docs/bindings/tensorflow-lite.md
+++ b/docs/website/docs/bindings/tensorflow-lite.md
@@ -1,7 +1,7 @@
 # TensorFlow Lite bindings
 
 !!! todo
-    [Issue#5462](https://github.com/iree-org/iree/issues/5462): write this documentation
+    [Issue#5462](https://github.com/openxla/iree/issues/5462): write this documentation
 
 <!-- TODO(??): overview, advantages/disadvantages to using TFLite bindings -->
 
diff --git a/docs/website/docs/blog/2021-07-19-tflite-tosa.md b/docs/website/docs/blog/2021-07-19-tflite-tosa.md
index 3e260d8..c8e1616 100644
--- a/docs/website/docs/blog/2021-07-19-tflite-tosa.md
+++ b/docs/website/docs/blog/2021-07-19-tflite-tosa.md
@@ -34,7 +34,7 @@
 ## Examples
 
 TFLite with IREE is available in Python and Java.  We have a
-[colab notebook](https://colab.research.google.com/github/iree-org/iree/blob/main/samples/colab/tflite_text_classification.ipynb)
+[colab notebook](https://colab.research.google.com/github/openxla/iree/blob/main/samples/colab/tflite_text_classification.ipynb)
 that shows how to use IREE’s python bindings and TFLite compiler tools to
 compile a pre-trained TFLite model from a FlatBuffer and run using IREE.  We
 also have an
diff --git a/docs/website/docs/blog/2021-10-15-cuda-backend.md b/docs/website/docs/blog/2021-10-15-cuda-backend.md
index a1e47a1..25ef911 100644
--- a/docs/website/docs/blog/2021-10-15-cuda-backend.md
+++ b/docs/website/docs/blog/2021-10-15-cuda-backend.md
@@ -18,7 +18,7 @@
 
 ### HAL support
 
-IREE has a [HAL API](https://github.com/iree-org/iree/blob/main/docs/developers/design_roadmap.md#hal-hardware-abstraction-layer-and-multi-architecture-executables)
+IREE has a [HAL API](https://github.com/openxla/iree/blob/main/docs/developers/design_roadmap.md#hal-hardware-abstraction-layer-and-multi-architecture-executables)
 that abstracts all the targets behind a common interface. The first step to
 supporting a CUDA target was to map the HAL API onto CUDA. We use the CUDA
 driver API to reduce dependencies and be closer to the hardware. The HAL API is
@@ -31,7 +31,7 @@
 
 HAL exposes an API that can be tested independently: even if we are not able to
 create CUDA kernels yet, we can test a large portion of the CUDA driver using
-[CTS tests](https://github.com/iree-org/iree/blob/main/iree/hal/cts/README.md).
+[CTS tests](https://github.com/openxla/iree/blob/main/iree/hal/cts/README.md).
 Those can be run to make sure a system has the required CUDA support.
 
  ![Compilation flow](./2021-10-15-cuda-compiler-flow.png){ align=left }
@@ -86,7 +86,7 @@
 ![Compilation diagram](./2021-10-15-cuda-bring_up.png)
 
 The steps to reproduce running a simple op end to end through the CUDA backend are
-described [here](https://github.com/iree-org/iree/blob/main/docs/developers/design_docs/cuda_backend.md#example).
+described [here](https://github.com/openxla/iree/blob/main/docs/developers/design_docs/cuda_backend.md#example).
 
 ## Performance
 
diff --git a/docs/website/docs/building-from-source/getting-started.md b/docs/website/docs/building-from-source/getting-started.md
index ca08dc9..78cedfa 100644
--- a/docs/website/docs/building-from-source/getting-started.md
+++ b/docs/website/docs/building-from-source/getting-started.md
@@ -65,7 +65,7 @@
 submodules:
 
 ``` shell
-git clone https://github.com/iree-org/iree.git
+git clone https://github.com/openxla/iree.git
 cd iree
 git submodule update --init
 ```
@@ -136,7 +136,7 @@
     -DCMAKE_CXX_COMPILER_LAUNCHER=ccache
     ```
 
-    See also our [developer documentation for ccache](https://github.com/iree-org/iree/blob/main/docs/developers/developing_iree/ccache.md).
+    See also our [developer documentation for ccache](https://github.com/openxla/iree/blob/main/docs/developers/developing_iree/ccache.md).
 
 ## What's next?
 
diff --git a/docs/website/docs/building-from-source/index.md b/docs/website/docs/building-from-source/index.md
index dd1be66..2528f57 100644
--- a/docs/website/docs/building-from-source/index.md
+++ b/docs/website/docs/building-from-source/index.md
@@ -1,7 +1,7 @@
 # Building IREE from source
 
 While IREE does offer
-[binary distributions](https://github.com/iree-org/iree/releases) for its
+[binary distributions](https://github.com/openxla/iree/releases) for its
 compiler tools and [Python bindings](../bindings/python.md), building from
 source is still useful when using IREE's runtime or when making changes to the
 compiler or import tools themselves.
diff --git a/docs/website/docs/building-from-source/python-bindings-and-importers.md b/docs/website/docs/building-from-source/python-bindings-and-importers.md
index 0026169..ca881f9 100644
--- a/docs/website/docs/building-from-source/python-bindings-and-importers.md
+++ b/docs/website/docs/building-from-source/python-bindings-and-importers.md
@@ -202,7 +202,7 @@
 !!! Caution
 
     This section is under construction. Refer to the
-    [source documentation](https://github.com/iree-org/iree/tree/main/integrations/tensorflow#readme)
+    [source documentation](https://github.com/openxla/iree/tree/main/integrations/tensorflow#readme)
     for the latest building from source instructions.
 
 ???+ Note
diff --git a/docs/website/docs/building-from-source/riscv.md b/docs/website/docs/building-from-source/riscv.md
index 11a67cd..5457325 100644
--- a/docs/website/docs/building-from-source/riscv.md
+++ b/docs/website/docs/building-from-source/riscv.md
@@ -68,7 +68,7 @@
 
 The following instructions show how to build for a RISC-V 64-bit Linux machine.
 For other RISC-V targets, please refer to
-[riscv.toolchain.cmake](https://github.com/iree-org/iree/blob/main/build_tools/cmake/riscv.toolchain.cmake)
+[riscv.toolchain.cmake](https://github.com/openxla/iree/blob/main/build_tools/cmake/riscv.toolchain.cmake)
 as a reference of how to set up the cmake configuration.
 
 #### RISC-V 64-bit Linux target
@@ -90,7 +90,7 @@
 !!! note
     The following instructions are meant for the RISC-V 64-bit Linux
     target. For the bare-metal target, please refer to
-    [simple_embedding](https://github.com/iree-org/iree/blob/main/samples/simple_embedding)
+    [simple_embedding](https://github.com/openxla/iree/blob/main/samples/simple_embedding)
     to see how to build an ML workload for a bare-metal machine.
 
 Set the path to qemu-riscv64 Linux emulator binary in the `QEMU_BIN` environment
diff --git a/docs/website/docs/deployment-configurations/bare-metal.md b/docs/website/docs/deployment-configurations/bare-metal.md
index 6a8a4a6..b970cb1 100644
--- a/docs/website/docs/deployment-configurations/bare-metal.md
+++ b/docs/website/docs/deployment-configurations/bare-metal.md
@@ -45,10 +45,10 @@
 * `iree-hal-target-backends=llvm-cpu`: Compile using the LLVM CPU target
 * `iree-llvm-target-triple`: Use the `<arch>-pc-linux-elf` LLVM target triple
     so the artifact has a fixed ABI to be rendered by the
-    [elf_module library](https://github.com/iree-org/iree/tree/main/iree/hal/local/elf)
+    [elf_module library](https://github.com/openxla/iree/tree/main/iree/hal/local/elf)
 * `iree-llvm-debug-symbols=false`: To reduce the artifact size
 
-See [generate.sh](https://github.com/iree-org/iree/blob/main/iree/hal/local/elf/testdata/generate.sh)
+See [generate.sh](https://github.com/openxla/iree/blob/main/iree/hal/local/elf/testdata/generate.sh)
 for example command-line instructions for some common architectures.
 
 You can replace the MLIR file with the other MLIR model files, following the
@@ -56,7 +56,7 @@
 
 ### Compiling the bare-metal model for static-library support
 
-See the [static_library](https://github.com/iree-org/iree/tree/main/samples/static_library)
+See the [static_library](https://github.com/openxla/iree/tree/main/samples/static_library)
 demo sample for an example and instructions on running a model with IREE's
 `static_library_loader`.
 
@@ -89,11 +89,11 @@
 * `set(IREE_BUILD_TESTS OFF)`: Disable tests until IREE supports running them
   on bare-metal platforms
 * `set(IREE_BUILD_SAMPLES ON)`: Build
-  [simple_embedding](https://github.com/iree-org/iree/tree/main/samples/simple_embedding)
+  [simple_embedding](https://github.com/openxla/iree/tree/main/samples/simple_embedding)
   example
 
 !!! todo
-    Clean the list up after [#6353](https://github.com/iree-org/iree/issues/6353)
+    Clean the list up after [#6353](https://github.com/openxla/iree/issues/6353)
     is fixed.
 
 Also, set the toolchain-specific cmake file to match the tool path, target
@@ -115,13 +115,13 @@
 
 Examples of how to set up the CMakeLists.txt and .cmake file:
 
-* [IREE RISC-V toolchain cmake](https://github.com/iree-org/iree/blob/main/build_tools/cmake/riscv.toolchain.cmake)
+* [IREE RISC-V toolchain cmake](https://github.com/openxla/iree/blob/main/build_tools/cmake/riscv.toolchain.cmake)
 * [IREE Bare-Metal Arm Sample](https://github.com/iml130/iree-bare-metal-arm)
 * [IREE Bare-Metal RV32 Sample](https://github.com/AmbiML/iree-rv32-springbok)
 
 ## Bare-metal execution example
 
 See
-[simple_embedding for generic platform](https://github.com/iree-org/iree/blob/main/samples/simple_embedding/README.md#generic-platform-support)
+[simple_embedding for generic platform](https://github.com/openxla/iree/blob/main/samples/simple_embedding/README.md#generic-platform-support)
 to see how to use the IREE runtime library to build/run the IREE model for the
 bare-metal target.
diff --git a/docs/website/docs/deployment-configurations/index.md b/docs/website/docs/deployment-configurations/index.md
index 8754fc2..10a4cfe 100644
--- a/docs/website/docs/deployment-configurations/index.md
+++ b/docs/website/docs/deployment-configurations/index.md
@@ -37,7 +37,7 @@
 | `cuda`         | NVIDIA GPU support via PTX for CUDA | `cuda` |
 | `rocm`         | **Experimental** <br> AMD GPU support via HSACO for ROCm | `rocm` |
 | `webgpu-wgsl`  | **Experimental** <br> GPU support on the Web via WGSL for WebGPU | `webgpu` |
-| `metal-spirv`  | **Stale - see [Issue#4370](https://github.com/iree-org/iree/issues/4370)** <br> GPU support on Apple platforms via MSL for Metal | `metal` |
+| `metal-spirv`  | **Stale - see [Issue#4370](https://github.com/openxla/iree/issues/4370)** <br> GPU support on Apple platforms via MSL for Metal | `metal` |
 
 !!! tip
     The list of available compiler target backends can be queried with
@@ -59,7 +59,7 @@
 | `cuda`       | NVIDIA GPU execution using CUDA |
 | `rocm`       | **Experimental** <br> AMD GPU execution using ROCm |
 | `webgpu`     | **Experimental** <br> GPU execution on the web using WebGPU |
-| `metal`      | **Stale - see [Issue#4370](https://github.com/iree-org/iree/issues/4370)** <br> GPU execution on Apple platforms using Metal |
+| `metal`      | **Stale - see [Issue#4370](https://github.com/openxla/iree/issues/4370)** <br> GPU execution on Apple platforms using Metal |
 
 !!! tip
     The list of available runtime HAL devices can be queried with
diff --git a/docs/website/docs/extensions/index.md b/docs/website/docs/extensions/index.md
index f0d9d46..b87d9de 100644
--- a/docs/website/docs/extensions/index.md
+++ b/docs/website/docs/extensions/index.md
@@ -210,8 +210,8 @@
 cross-module calls and users must be aware that the compiler cannot optimize
 across the call boundaries.
 
-See the [synchronous tensor I/O](https://github.com/iree-org/iree/tree/main/samples/custom_module/sync/)
-and [asynchronous tensor I/O](https://github.com/iree-org/iree/tree/main/samples/custom_module/async/)
+See the [synchronous tensor I/O](https://github.com/openxla/iree/tree/main/samples/custom_module/sync/)
+and [asynchronous tensor I/O](https://github.com/openxla/iree/tree/main/samples/custom_module/async/)
 samples.
 
 ### Pros
@@ -243,18 +243,18 @@
 
 The runtime portion requires that the code be exported to the VM system by way
 of an `iree_vm_module_t` interface. A low-level native interface exists with
-minimal overhead and is used for example [by the IREE HAL itself](https://github.com/iree-org/iree/tree/main/iree/modules/hal).
+minimal overhead and is used for example [by the IREE HAL itself](https://github.com/openxla/iree/tree/main/iree/modules/hal).
 There is also a C++ wrapper that is significantly easier to work with; however, it
 needs some performance improvements.
 
-Full end-to-end examples can be found under [`samples/custom_modules/`](https://github.com/iree-org/iree/tree/main/samples/custom_modules):
+Full end-to-end examples can be found under [`samples/custom_modules/`](https://github.com/openxla/iree/tree/main/samples/custom_modules):
 
-* The [basic](https://github.com/iree-org/iree/tree/main/samples/custom_module/basic/)
+* The [basic](https://github.com/openxla/iree/tree/main/samples/custom_module/basic/)
 sample shows how to add VM modules with custom types and take advantage of ABI
 features like fallback functions and optional imports.
-* The [synchronous tensor I/O](https://github.com/iree-org/iree/tree/main/samples/custom_module/sync/)
+* The [synchronous tensor I/O](https://github.com/openxla/iree/tree/main/samples/custom_module/sync/)
 sample shows a call taking and returning a tensor and performing blocking work.
-* The [asynchronous tensor I/O](https://github.com/iree-org/iree/tree/main/samples/custom_module/async/)
+* The [asynchronous tensor I/O](https://github.com/openxla/iree/tree/main/samples/custom_module/async/)
 sample shows the same thing but with fences for asynchronous scheduling.
 
 ## 3. Extend target-specific device conversion patterns
@@ -498,6 +498,6 @@
 marshal arguments and results.
 
 The compiler side needs some additional work, but an example is included here:
-[Issue 7504](https://github.com/iree-org/iree/issues/7504).
+[Issue 7504](https://github.com/openxla/iree/issues/7504).
 The runtime side is complete, and resolution is performed by a user-supplied
 `iree_hal_executable_import_provider_t`.
diff --git a/docs/website/docs/getting-started/index.md b/docs/website/docs/getting-started/index.md
index f90b63a..f60e637 100644
--- a/docs/website/docs/getting-started/index.md
+++ b/docs/website/docs/getting-started/index.md
@@ -30,7 +30,7 @@
 ## Samples
 
 Check out the samples in IREE's
-[samples/colab/ directory](https://github.com/iree-org/iree/tree/main/colab),
+[samples/colab/ directory](https://github.com/openxla/iree/tree/main/samples/colab),
 as well as the [iree-samples repository](https://github.com/iree-org/iree-samples),
 which contains workflow comparisons across frameworks.
 
diff --git a/docs/website/docs/getting-started/pytorch.md b/docs/website/docs/getting-started/pytorch.md
index 200868d..f5f1ea7 100644
--- a/docs/website/docs/getting-started/pytorch.md
+++ b/docs/website/docs/getting-started/pytorch.md
@@ -72,7 +72,7 @@
 ```
 
 Here we have a choice of backend we want to target. See the
-[Deployment Configurations](https://iree-org.github.io/iree/deployment-configurations/)
+[Deployment Configurations](https://openxla.github.io/iree/deployment-configurations/)
 section of this site for a full list of targets and configurations.
 
 The generated flatbuffer can now be serialized and stored for another time or
diff --git a/docs/website/docs/getting-started/tensorflow.md b/docs/website/docs/getting-started/tensorflow.md
index d9dd376..beea783 100644
--- a/docs/website/docs/getting-started/tensorflow.md
+++ b/docs/website/docs/getting-started/tensorflow.md
@@ -29,7 +29,7 @@
 !!! Caution
     The TensorFlow package is currently only available on Linux and macOS. It
     is not available on Windows yet (see
-    [this issue](https://github.com/iree-org/iree/issues/6417)).
+    [this issue](https://github.com/openxla/iree/issues/6417)).
 
 ## Importing models
 
@@ -97,13 +97,13 @@
 
 | Colab notebooks |  |
 | -- | -- |
-Training an MNIST digits classifier | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/iree-org/iree/blob/main/samples/colab/mnist_training.ipynb)
-Edge detection module | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/iree-org/iree/blob/main/samples/colab/edge_detection.ipynb)
-Pretrained ResNet50 inference | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/iree-org/iree/blob/main/samples/colab/resnet.ipynb)
-TensorFlow Hub Import | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/iree-org/iree/blob/main/samples/colab/tensorflow_hub_import.ipynb)
+Training an MNIST digits classifier | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/openxla/iree/blob/main/samples/colab/mnist_training.ipynb)
+Edge detection module | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/openxla/iree/blob/main/samples/colab/edge_detection.ipynb)
+Pretrained ResNet50 inference | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/openxla/iree/blob/main/samples/colab/resnet.ipynb)
+TensorFlow Hub Import | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/openxla/iree/blob/main/samples/colab/tensorflow_hub_import.ipynb)
 
 End-to-end execution tests can be found in IREE's
-[integrations/tensorflow/e2e/](https://github.com/iree-org/iree/tree/main/integrations/tensorflow/e2e)
+[integrations/tensorflow/e2e/](https://github.com/openxla/iree/tree/main/integrations/tensorflow/e2e)
 directory.
 
 ## Troubleshooting
diff --git a/docs/website/docs/getting-started/tflite.md b/docs/website/docs/getting-started/tflite.md
index 049c894..bb35995 100644
--- a/docs/website/docs/getting-started/tflite.md
+++ b/docs/website/docs/getting-started/tflite.md
@@ -138,7 +138,7 @@
 TensorFlow Lite's operations to TOSA, the intermediate representation used by
 IREE. Many TensorFlow Lite operations are not fully supported, particularly
 those that use dynamic shapes. File an issue in IREE's TFLite model support
-[project](https://github.com/iree-org/iree/projects/42).
+[project](https://github.com/openxla/iree/projects/42).
 
 ## Additional Samples
 
@@ -149,18 +149,18 @@
 models sourced from [TensorFlow Hub](https://tfhub.dev/).
 
 * An example smoke test of the
-[TensorFlow Lite C API](https://github.com/iree-org/iree/tree/main/runtime/bindings/tflite)
+[TensorFlow Lite C API](https://github.com/openxla/iree/tree/main/runtime/bindings/tflite)
 is available
-[here](https://github.com/iree-org/iree/blob/main/runtime/bindings/tflite/smoke_test.cc).
+[here](https://github.com/openxla/iree/blob/main/runtime/bindings/tflite/smoke_test.cc).
 
 | Colab notebooks |  |
 | -- | -- |
-Text classification with TFLite and IREE | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/iree-org/iree/blob/main/samples/colab/tflite_text_classification.ipynb)
+Text classification with TFLite and IREE | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/openxla/iree/blob/main/samples/colab/tflite_text_classification.ipynb)
 
 !!! todo
 
-    [Issue#3954](https://github.com/iree-org/iree/issues/3954): Add documentation
+    [Issue#3954](https://github.com/openxla/iree/issues/3954): Add documentation
     for an Android demo using the
-    [Java TFLite bindings](https://github.com/iree-org/iree/tree/main/runtime/bindings/tflite/java),
+    [Java TFLite bindings](https://github.com/openxla/iree/tree/main/runtime/bindings/tflite/java),
     once it is complete at
     [not-jenni/iree-android-tflite-demo](https://github.com/not-jenni/iree-android-tflite-demo).
diff --git a/docs/website/docs/index.md b/docs/website/docs/index.md
index 61ed485..25d4fbc 100644
--- a/docs/website/docs/index.md
+++ b/docs/website/docs/index.md
@@ -138,7 +138,7 @@
 ## Communication channels
 
 * :fontawesome-brands-github:
-  [GitHub issues](https://github.com/iree-org/iree/issues): Feature requests,
+  [GitHub issues](https://github.com/openxla/iree/issues): Feature requests,
   bugs, and other work tracking
 * :fontawesome-brands-discord:
   [IREE Discord server](https://discord.gg/26P4xW4): Daily development
@@ -150,8 +150,8 @@
 
 IREE is in the early stages of development and is not yet ready for broad
 adoption. We use both
-[GitHub Projects](https://github.com/iree-org/iree/projects) and
-[GitHub Milestones](https://github.com/iree-org/iree/milestones) to track
+[GitHub Projects](https://github.com/openxla/iree/projects) and
+[GitHub Milestones](https://github.com/openxla/iree/milestones) to track
 progress.
 
 [^1]:
diff --git a/docs/website/mkdocs.yml b/docs/website/mkdocs.yml
index fcdc27b..c73ac56 100644
--- a/docs/website/mkdocs.yml
+++ b/docs/website/mkdocs.yml
@@ -1,7 +1,7 @@
 site_name: IREE
-site_url: https://iree-org.github.io/iree/
-repo_url: https://github.com/iree-org/iree
-repo_name: iree-org/iree
+site_url: https://openxla.github.io/iree/
+repo_url: https://github.com/openxla/iree
+repo_name: openxla/iree
 
 theme:
   name: material
@@ -60,7 +60,7 @@
 
   social:
     - icon: fontawesome/brands/github
-      link: https://github.com/iree-org/iree
+      link: https://github.com/openxla/iree
       name: IREE on GitHub
     - icon: fontawesome/brands/discord
       link: https://discord.gg/26P4xW4
diff --git a/docs/website/overrides/404.html b/docs/website/overrides/404.html
index 49caf6c..64dbee5 100644
--- a/docs/website/overrides/404.html
+++ b/docs/website/overrides/404.html
@@ -6,7 +6,7 @@
 
 <p>Sorry, we couldn't find that page.</p>
 
-<p>The <a href="https://github.com/iree-org/iree/tree/main/docs/developers"><code>docs/developers/</code></a> directory on GitHub might be helpful.
+<p>The <a href="https://github.com/openxla/iree/tree/main/docs/developers"><code>docs/developers/</code></a> directory on GitHub might be helpful.
 
 <p>Click <a href="{{ config.site_url }}">here</a> to go back to the home page.</p>
 
diff --git a/experimental/web/generate_web_metrics.sh b/experimental/web/generate_web_metrics.sh
index 6ac2daf..45af069 100644
--- a/experimental/web/generate_web_metrics.sh
+++ b/experimental/web/generate_web_metrics.sh
@@ -66,7 +66,7 @@
 # specific version when iterating on metrics is useful, and fetching is slow.
 
 python -m pip install --upgrade \
-  --find-links https://iree-org.github.io/iree/pip-release-links.html \
+  --find-links https://openxla.github.io/iree/pip-release-links.html \
   iree-compiler iree-tools-tflite iree-tools-xla
 
 ###############################################################################
diff --git a/experimental/web/sample_dynamic/index.html b/experimental/web/sample_dynamic/index.html
index 871d5c7..078aed4 100644
--- a/experimental/web/sample_dynamic/index.html
+++ b/experimental/web/sample_dynamic/index.html
@@ -52,8 +52,8 @@
 
     <p>
       This tool works similarly to
-      <a href="https://github.com/iree-org/iree/blob/main/tools/iree-run-module-main.cc"><code>iree-run-module</code></a>
-      (<a href="https://github.com/iree-org/iree/blob/main/docs/developers/developing_iree/developer_overview.md#iree-run-module">docs</a>).
+      <a href="https://github.com/openxla/iree/blob/main/tools/iree-run-module-main.cc"><code>iree-run-module</code></a>
+      (<a href="https://github.com/openxla/iree/blob/main/docs/developers/developing_iree/developer_overview.md#iree-run-module">docs</a>).
       <br>It loads a compiled IREE program, then lets you call exported functions.
       <br><b>Note:</b> Some outputs are logged to the console.
     </p>
@@ -122,7 +122,7 @@
       <div class="row" style="padding:4px">
         <div class="col-sm">
           simple_abs
-          (<a href="https://github.com/iree-org/iree/blob/main/iree/samples/models/simple_abs.mlir">source</a>)
+          (<a href="https://github.com/openxla/iree/blob/main/iree/samples/models/simple_abs.mlir">source</a>)
         </div>
         <div class="col-sm-auto">
           <button class="btn btn-secondary" onclick="loadSample('simple_abs')">Load sample</button>
@@ -131,7 +131,7 @@
       <div class="row" style="padding:4px">
         <div class="col-sm">
           fullyconnected
-          (<a href="https://github.com/iree-org/iree/blob/main/tests/e2e/models/fullyconnected.mlir">source</a>)
+          (<a href="https://github.com/openxla/iree/blob/main/tests/e2e/models/fullyconnected.mlir">source</a>)
         </div>
         <div class="col-sm-auto">
           <button class="btn btn-secondary" onclick="loadSample('fullyconnected')">Load sample</button>
@@ -140,7 +140,7 @@
       <div class="row" style="padding:4px">
         <div class="col-sm">
           collatz
-          (<a href="https://github.com/iree-org/iree/blob/main/tests/e2e/models/collatz.mlir">source</a>)
+          (<a href="https://github.com/openxla/iree/blob/main/tests/e2e/models/collatz.mlir">source</a>)
         </div>
         <div class="col-sm-auto">
           <button class="btn btn-secondary" onclick="loadSample('collatz')">Load sample</button>
diff --git a/integrations/tensorflow/iree_tf_compiler/TFL/Passes.cpp b/integrations/tensorflow/iree_tf_compiler/TFL/Passes.cpp
index c0b7c7f..bcbb29a 100644
--- a/integrations/tensorflow/iree_tf_compiler/TFL/Passes.cpp
+++ b/integrations/tensorflow/iree_tf_compiler/TFL/Passes.cpp
@@ -55,7 +55,7 @@
   pm.addPass(createLowerGlobalTensorsPass());
 
   mlir::tosa::TOSATFTFLLegalizationPipelineOptions tosaOptions;
-  // Temporary work-around for https://github.com/iree-org/iree/issues/8974
+  // Temporary work-around for https://github.com/openxla/iree/issues/8974
   tosaOptions.dequantize_tfl_softmax = true;
   mlir::tosa::createTFTFLtoTOSALegalizationPipeline(pm, tosaOptions);
 
diff --git a/integrations/tensorflow/iree_tf_compiler/TFL/test/flex_ops.mlir b/integrations/tensorflow/iree_tf_compiler/TFL/test/flex_ops.mlir
index 0320ba4..5d403d7 100644
--- a/integrations/tensorflow/iree_tf_compiler/TFL/test/flex_ops.mlir
+++ b/integrations/tensorflow/iree_tf_compiler/TFL/test/flex_ops.mlir
@@ -1,5 +1,5 @@
 // RUN: iree-opt-tflite --split-input-file --iree-tflite-import-pipeline %s | FileCheck %s
-// Disabled as part of 2022-08-12 integrate: https://github.com/iree-org/iree/issues/10091
+// Disabled as part of 2022-08-12 integrate: https://github.com/openxla/iree/issues/10091
 
 // This test was generated by importing a TFLite model that contained flex ops.
 // The opaque data is a serialized tf-node proto and is not easily handwritten.
diff --git a/integrations/tensorflow/iree_tf_compiler/iree-import-tf-main.cpp b/integrations/tensorflow/iree_tf_compiler/iree-import-tf-main.cpp
index 30ac749..532dce1 100644
--- a/integrations/tensorflow/iree_tf_compiler/iree-import-tf-main.cpp
+++ b/integrations/tensorflow/iree_tf_compiler/iree-import-tf-main.cpp
@@ -130,7 +130,7 @@
 int main(int argc, char **argv) {
   tensorflow::InitMlir y(&argc, &argv);
   llvm::setBugReportMsg(
-      "Please report issues to https://github.com/iree-org/iree/issues and "
+      "Please report issues to https://github.com/openxla/iree/issues and "
       "include the crash backtrace.\n");
 
   static cl::opt<std::string> inputPath(
diff --git a/integrations/tensorflow/iree_tf_compiler/iree-import-tflite-main.cpp b/integrations/tensorflow/iree_tf_compiler/iree-import-tflite-main.cpp
index 2d0b6a3..bf7564b 100644
--- a/integrations/tensorflow/iree_tf_compiler/iree-import-tflite-main.cpp
+++ b/integrations/tensorflow/iree_tf_compiler/iree-import-tflite-main.cpp
@@ -37,7 +37,7 @@
 
 int main(int argc, char **argv) {
   llvm::setBugReportMsg(
-      "Please report issues to https://github.com/iree-org/iree/issues and "
+      "Please report issues to https://github.com/openxla/iree/issues and "
       "include the crash backtrace.\n");
   llvm::InitLLVM y(argc, argv);
 
diff --git a/integrations/tensorflow/iree_tf_compiler/iree-import-xla-main.cpp b/integrations/tensorflow/iree_tf_compiler/iree-import-xla-main.cpp
index d020532..a886824 100644
--- a/integrations/tensorflow/iree_tf_compiler/iree-import-xla-main.cpp
+++ b/integrations/tensorflow/iree_tf_compiler/iree-import-xla-main.cpp
@@ -104,7 +104,7 @@
 
 int main(int argc, char **argv) {
   llvm::setBugReportMsg(
-      "Please report issues to https://github.com/iree-org/iree/issues and "
+      "Please report issues to https://github.com/openxla/iree/issues and "
       "include the crash backtrace.\n");
   llvm::InitLLVM y(argc, argv);
 
diff --git a/integrations/tensorflow/iree_tf_compiler/iree-opt-tflite-main.cpp b/integrations/tensorflow/iree_tf_compiler/iree-opt-tflite-main.cpp
index d1248b4..3e89baa 100644
--- a/integrations/tensorflow/iree_tf_compiler/iree-opt-tflite-main.cpp
+++ b/integrations/tensorflow/iree_tf_compiler/iree-opt-tflite-main.cpp
@@ -24,7 +24,7 @@
 
 int main(int argc, char **argv) {
   llvm::setBugReportMsg(
-      "Please report issues to https://github.com/iree-org/iree/issues and "
+      "Please report issues to https://github.com/openxla/iree/issues and "
       "include the crash backtrace.\n");
   llvm::InitLLVM y(argc, argv);
 
diff --git a/integrations/tensorflow/iree_tf_compiler/iree-tf-opt-main.cpp b/integrations/tensorflow/iree_tf_compiler/iree-tf-opt-main.cpp
index 5da1227..f6c82c5 100644
--- a/integrations/tensorflow/iree_tf_compiler/iree-tf-opt-main.cpp
+++ b/integrations/tensorflow/iree_tf_compiler/iree-tf-opt-main.cpp
@@ -28,7 +28,7 @@
 
 int main(int argc, char **argv) {
   llvm::setBugReportMsg(
-      "Please report issues to https://github.com/iree-org/iree/issues and "
+      "Please report issues to https://github.com/openxla/iree/issues and "
       "include the crash backtrace.\n");
   llvm::InitLLVM y(argc, argv);
 
diff --git a/integrations/tensorflow/python_projects/iree_tf/setup.py b/integrations/tensorflow/python_projects/iree_tf/setup.py
index 103bf0d..f785da7 100644
--- a/integrations/tensorflow/python_projects/iree_tf/setup.py
+++ b/integrations/tensorflow/python_projects/iree_tf/setup.py
@@ -89,7 +89,7 @@
     description="IREE TensorFlow Compiler Tools",
     long_description=README,
     long_description_content_type="text/markdown",
-    url="https://github.com/iree-org/iree",
+    url="https://github.com/openxla/iree",
     classifiers=[
         "Development Status :: 3 - Alpha",
         "License :: OSI Approved :: Apache Software License",
diff --git a/integrations/tensorflow/python_projects/iree_tflite/setup.py b/integrations/tensorflow/python_projects/iree_tflite/setup.py
index 19d0f14..ed1d4c4 100644
--- a/integrations/tensorflow/python_projects/iree_tflite/setup.py
+++ b/integrations/tensorflow/python_projects/iree_tflite/setup.py
@@ -89,7 +89,7 @@
     description="IREE TFLite Compiler Tools",
     long_description=README,
     long_description_content_type="text/markdown",
-    url="https://github.com/iree-org/iree",
+    url="https://github.com/openxla/iree",
     classifiers=[
         "Development Status :: 3 - Alpha",
         "License :: OSI Approved :: Apache Software License",
diff --git a/integrations/tensorflow/python_projects/iree_xla/setup.py b/integrations/tensorflow/python_projects/iree_xla/setup.py
index 61e1892..5c24efc 100644
--- a/integrations/tensorflow/python_projects/iree_xla/setup.py
+++ b/integrations/tensorflow/python_projects/iree_xla/setup.py
@@ -89,7 +89,7 @@
     description="IREE XLA Compiler Tools",
     long_description=README,
     long_description_content_type="text/markdown",
-    url="https://github.com/iree-org/iree",
+    url="https://github.com/openxla/iree",
     classifiers=[
         "Development Status :: 3 - Alpha",
         "License :: OSI Approved :: Apache Software License",
diff --git a/integrations/tensorflow/test/iree_tf_tests/uncategorized/vulkan__fill.run b/integrations/tensorflow/test/iree_tf_tests/uncategorized/vulkan__fill.run
index 644ef45..f0592a3 100644
--- a/integrations/tensorflow/test/iree_tf_tests/uncategorized/vulkan__fill.run
+++ b/integrations/tensorflow/test/iree_tf_tests/uncategorized/vulkan__fill.run
@@ -1,3 +1,3 @@
 # REQUIRES: bugfix
-# FIXME(https://github.com/iree-org/iree/issues/11277): Re-enable this test after resolving the issue.
+# FIXME(https://github.com/openxla/iree/issues/11277): Re-enable this test after resolving the issue.
 # RUN: %PYTHON -m iree_tf_tests.uncategorized.fill_test --target_backends=iree_vulkan --artifacts_dir=%t
diff --git a/integrations/tensorflow/test/python/iree_tfl_tests/mobilebert_tf2_quant_test.py b/integrations/tensorflow/test/python/iree_tfl_tests/mobilebert_tf2_quant_test.py
index a8876ad..924537d 100644
--- a/integrations/tensorflow/test/python/iree_tfl_tests/mobilebert_tf2_quant_test.py
+++ b/integrations/tensorflow/test/python/iree_tfl_tests/mobilebert_tf2_quant_test.py
@@ -37,7 +37,7 @@
                                                 details)
     # We have confirmed in large scale accuracy tests that differences as large
     # as 5.0 are acceptable. We later further relaxed from 5.0 to 7.0 in
-    # https://github.com/iree-org/iree/pull/9337 when quantized Softmax got
+    # https://github.com/openxla/iree/pull/9337 when quantized Softmax got
     # de-quantized, which should be numerically correct albeit not bit-exact.
     # The actual observed max error was ~ 6.36. The value 7.0 is that rounded up
     # to the next integer.
diff --git a/runtime/bindings/python/CMakeLists.txt b/runtime/bindings/python/CMakeLists.txt
index 1759eaf..b1e4b83 100644
--- a/runtime/bindings/python/CMakeLists.txt
+++ b/runtime/bindings/python/CMakeLists.txt
@@ -179,7 +179,7 @@
 )
 
 # TODO: Enable this once the CI bots are updated to install the python3-venv
-# apt package. https://github.com/iree-org/iree/issues/9080
+# apt package. https://github.com/openxla/iree/issues/9080
 # iree_py_test(
 #   NAME
 #     package_test
diff --git a/runtime/bindings/tflite/README.md b/runtime/bindings/tflite/README.md
index 917ad41..c30000c 100644
--- a/runtime/bindings/tflite/README.md
+++ b/runtime/bindings/tflite/README.md
@@ -1,7 +1,7 @@
 # IREE TFLite C API Compatibility Shim
 
 **EXPERIMENTAL**: we are working towards making this a stable API but it has a
-ways to go still. Progress is being tracked in https://github.com/iree-org/iree/projects/17.
+ways to go still. Progress is being tracked in https://github.com/openxla/iree/projects/17.
 
 Provides a (mostly) tflite-compatible API that allows loading compiled IREE
 modules, managing tensors, and invoking functions with the same conventions as
@@ -174,7 +174,7 @@
 
 Custom ops in tflite map to functions imported into compiled IREE modules.
 The IREE tflite API shim could provide a wrapper implemented as an
-[iree_vm_module_t](https://github.com/iree-org/iree/blob/main/iree/vm/module.h)
+[iree_vm_module_t](https://github.com/openxla/iree/blob/main/iree/vm/module.h)
 that resolves and executes the functions as they are called by the VM. Having
 real IREE modules, though, provides significant benefits in representation
 such as the ability to have asynchronous custom behavior that interacts well
diff --git a/runtime/setup.py b/runtime/setup.py
index 2ce90d6..d9fc32f 100644
--- a/runtime/setup.py
+++ b/runtime/setup.py
@@ -379,7 +379,7 @@
         "Programming Language :: Python :: 3.10",
         "Programming Language :: Python :: 3.11",
     ],
-    url="https://github.com/iree-org/iree",
+    url="https://github.com/openxla/iree",
     python_requires=">=3.7",
     ext_modules=[
         CMakeExtension("iree._runtime"),
diff --git a/runtime/src/iree/hal/drivers/vulkan/descriptor_set_arena.cc b/runtime/src/iree/hal/drivers/vulkan/descriptor_set_arena.cc
index f0e36ce..381b7af 100644
--- a/runtime/src/iree/hal/drivers/vulkan/descriptor_set_arena.cc
+++ b/runtime/src/iree/hal/drivers/vulkan/descriptor_set_arena.cc
@@ -62,7 +62,7 @@
       // to match the ABI and provide the buffer as 32-bit aligned, otherwise
       // the whole read by the shader is considered as out of bounds per the
       // Vulkan spec. See
-      // https://github.com/iree-org/iree/issues/2022#issuecomment-640617234 for
+      // https://github.com/openxla/iree/issues/2022#issuecomment-640617234 for
       // more details.
       buffer_info.range = iree_device_align(
           std::min(binding.length, iree_hal_buffer_byte_length(binding.buffer) -
diff --git a/runtime/src/iree/task/pool.c b/runtime/src/iree/task/pool.c
index 80f926e..d02b3b5 100644
--- a/runtime/src/iree/task/pool.c
+++ b/runtime/src/iree/task/pool.c
@@ -93,7 +93,7 @@
 
   // Work around a loop vectorizer bug that causes memory corruption in this
   // loop. Only Android NDK r25 is known to be affected. See
-  // https://github.com/iree-org/iree/issues/9953 for details.
+  // https://github.com/openxla/iree/issues/9953 for details.
 #if defined(__NDK_MAJOR__) && __NDK_MAJOR__ == 25
 #pragma clang loop unroll(disable) vectorize(disable)
 #endif
diff --git a/runtime/src/iree/tooling/vm_util.h b/runtime/src/iree/tooling/vm_util.h
index 518eb6b..b72d4ed 100644
--- a/runtime/src/iree/tooling/vm_util.h
+++ b/runtime/src/iree/tooling/vm_util.h
@@ -47,7 +47,7 @@
 // Prints buffers in the IREE standard shaped buffer format:
 //   [shape]xtype=[value]
 // described in
-// https://github.com/iree-org/iree/tree/main/iree/hal/api.h
+// https://github.com/openxla/iree/tree/main/iree/hal/api.h
 iree_status_t iree_tooling_append_variant_list_lines(
     iree_vm_list_t* list, iree_host_size_t max_element_count,
     iree_string_builder_t* builder);
diff --git a/samples/colab/README.md b/samples/colab/README.md
index 7e7d7a7..33a8127 100644
--- a/samples/colab/README.md
+++ b/samples/colab/README.md
@@ -7,19 +7,19 @@
 Constructs a TF module for performing image edge detection and runs it using
 IREE
 
-[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/iree-org/iree/blob/main/samples/colab/edge_detection.ipynb)
+[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/openxla/iree/blob/main/samples/colab/edge_detection.ipynb)
 
 ### [low_level_invoke_function\.ipynb](low_level_invoke_function.ipynb)
 
 Shows off some concepts of the low level IREE python bindings
 
-[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/iree-org/iree/blob/main/samples/colab/low_level_invoke_function.ipynb)
+[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/openxla/iree/blob/main/samples/colab/low_level_invoke_function.ipynb)
 
 ### [mnist_training\.ipynb](mnist_training.ipynb)
 
 Compile, train and execute a TensorFlow Keras neural network with IREE
 
-[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/iree-org/iree/blob/main/samples/colab/mnist_training.ipynb)
+[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/openxla/iree/blob/main/samples/colab/mnist_training.ipynb)
 
 ### [resnet\.ipynb](resnet.ipynb)
 
@@ -27,7 +27,7 @@
 [ResNet50](https://www.tensorflow.org/api_docs/python/tf/keras/applications/ResNet50)
 model and runs it using IREE
 
-[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/iree-org/iree/blob/main/samples/colab/resnet.ipynb)
+[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/openxla/iree/blob/main/samples/colab/resnet.ipynb)
 
 ### [tensorflow_hub_import\.ipynb](tensorflow_hub_import.ipynb)
 
@@ -35,7 +35,7 @@
 [MobileNet V2](https://tfhub.dev/google/tf2-preview/mobilenet_v2/classification)
 model, pre-processes it for import, then compiles it using IREE
 
-[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/iree-org/iree/blob/main/samples/colab/tensorflow_hub_import.ipynb)
+[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/openxla/iree/blob/main/samples/colab/tensorflow_hub_import.ipynb)
 
 ### [tflite_text_classification\.ipynb](tflite_text_classification.ipynb)
 
@@ -43,7 +43,7 @@
 [TFLite text classification](https://www.tensorflow.org/lite/examples/text_classification/overview)
 model, and runs it using TFLite and IREE
 
-[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/iree-org/iree/blob/main/samples/colab/tflite_text_classification.ipynb)
+[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/openxla/iree/blob/main/samples/colab/tflite_text_classification.ipynb)
 
 ## Working with GitHub
 
diff --git a/samples/colab/edge_detection.ipynb b/samples/colab/edge_detection.ipynb
index dd87c29..45e3c0d 100644
--- a/samples/colab/edge_detection.ipynb
+++ b/samples/colab/edge_detection.ipynb
@@ -66,7 +66,7 @@
         "outputId": "3ab6a4c6-46c2-45d2-9721-266b56d1d627"
       },
       "source": [
-        "!python -m pip install iree-compiler iree-runtime iree-tools-tf -f https://iree-org.github.io/iree/pip-release-links.html"
+        "!python -m pip install iree-compiler iree-runtime iree-tools-tf -f https://openxla.github.io/iree/pip-release-links.html"
       ],
       "execution_count": 2,
       "outputs": [
@@ -75,15 +75,15 @@
           "name": "stdout",
           "text": [
             "Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/\n",
-            "Looking in links: https://iree-org.github.io/iree/pip-release-links.html\n",
+            "Looking in links: https://openxla.github.io/iree/pip-release-links.html\n",
             "Collecting iree-compiler\n",
-            "  Downloading https://github.com/iree-org/iree/releases/download/candidate-20220929.281/iree_compiler-20220929.281-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (49.7 MB)\n",
+            "  Downloading https://github.com/openxla/iree/releases/download/candidate-20220929.281/iree_compiler-20220929.281-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (49.7 MB)\n",
             "\u001b[K     |████████████████████████████████| 49.7 MB 3.3 MB/s \n",
             "\u001b[?25hCollecting iree-runtime\n",
-            "  Downloading https://github.com/iree-org/iree/releases/download/candidate-20220929.281/iree_runtime-20220929.281-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.3 MB)\n",
+            "  Downloading https://github.com/openxla/iree/releases/download/candidate-20220929.281/iree_runtime-20220929.281-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.3 MB)\n",
             "\u001b[K     |████████████████████████████████| 2.3 MB 34.2 MB/s \n",
             "\u001b[?25hCollecting iree-tools-tf\n",
-            "  Downloading https://github.com/iree-org/iree/releases/download/candidate-20220929.281/iree_tools_tf-20220929.281-py3-none-linux_x86_64.whl (58.7 MB)\n",
+            "  Downloading https://github.com/openxla/iree/releases/download/candidate-20220929.281/iree_tools_tf-20220929.281-py3-none-linux_x86_64.whl (58.7 MB)\n",
             "\u001b[K     |████████████████████████████████| 58.7 MB 4.8 kB/s \n",
             "\u001b[?25hRequirement already satisfied: PyYAML in /usr/local/lib/python3.7/dist-packages (from iree-compiler) (6.0)\n",
             "Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from iree-compiler) (1.21.6)\n",
@@ -428,4 +428,4 @@
       ]
     }
   ]
-}
\ No newline at end of file
+}
diff --git a/samples/colab/low_level_invoke_function.ipynb b/samples/colab/low_level_invoke_function.ipynb
index 1c94892..5e6d4b6 100644
--- a/samples/colab/low_level_invoke_function.ipynb
+++ b/samples/colab/low_level_invoke_function.ipynb
@@ -66,7 +66,7 @@
         "outputId": "0339165a-a35f-4b46-9cf8-f22adc69a7fe"
       },
       "source": [
-        "!python -m pip install iree-compiler iree-runtime -f https://iree-org.github.io/iree/pip-release-links.html"
+        "!python -m pip install iree-compiler iree-runtime -f https://openxla.github.io/iree/pip-release-links.html"
       ],
       "execution_count": 2,
       "outputs": [
@@ -75,12 +75,12 @@
           "name": "stdout",
           "text": [
             "Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/\n",
-            "Looking in links: https://iree-org.github.io/iree/pip-release-links.html\n",
+            "Looking in links: https://openxla.github.io/iree/pip-release-links.html\n",
             "Collecting iree-compiler\n",
-            "  Downloading https://github.com/iree-org/iree/releases/download/candidate-20220929.281/iree_compiler-20220929.281-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (49.7 MB)\n",
+            "  Downloading https://github.com/openxla/iree/releases/download/candidate-20220929.281/iree_compiler-20220929.281-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (49.7 MB)\n",
             "\u001b[K     |████████████████████████████████| 49.7 MB 99 kB/s \n",
             "\u001b[?25hCollecting iree-runtime\n",
-            "  Downloading https://github.com/iree-org/iree/releases/download/candidate-20220929.281/iree_runtime-20220929.281-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.3 MB)\n",
+            "  Downloading https://github.com/openxla/iree/releases/download/candidate-20220929.281/iree_runtime-20220929.281-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.3 MB)\n",
             "\u001b[K     |████████████████████████████████| 2.3 MB 40.7 MB/s \n",
             "\u001b[?25hRequirement already satisfied: PyYAML in /usr/local/lib/python3.7/dist-packages (from iree-compiler) (6.0)\n",
             "Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from iree-compiler) (1.21.6)\n",
@@ -164,4 +164,4 @@
       ]
     }
   ]
-}
\ No newline at end of file
+}
diff --git a/samples/colab/mnist_training.ipynb b/samples/colab/mnist_training.ipynb
index 6f37d9f..69abaf7 100644
--- a/samples/colab/mnist_training.ipynb
+++ b/samples/colab/mnist_training.ipynb
@@ -64,7 +64,7 @@
       },
       "source": [
         "%%capture\n",
-        "!python -m pip install iree-compiler iree-runtime iree-tools-tf -f https://iree-org.github.io/iree/pip-release-links.html"
+        "!python -m pip install iree-compiler iree-runtime iree-tools-tf -f https://openxla.github.io/iree/pip-release-links.html"
       ],
       "execution_count": 1,
       "outputs": []
@@ -358,7 +358,7 @@
       },
       "source": [
         "# Compile the TrainableDNN module\n",
-        "# Note: extra flags are needed to i64 demotion, see https://github.com/iree-org/iree/issues/8644\n",
+        "# Note: extra flags are needed to i64 demotion, see https://github.com/openxla/iree/issues/8644\n",
         "vm_flatbuffer = iree.compiler.tf.compile_module(\n",
         "    TrainableDNN(),\n",
         "    target_backends=[backend_choice],\n",
@@ -622,4 +622,4 @@
       ]
     }
   ]
-}
\ No newline at end of file
+}
diff --git a/samples/colab/resnet.ipynb b/samples/colab/resnet.ipynb
index 3a9995c..1cd7b09 100644
--- a/samples/colab/resnet.ipynb
+++ b/samples/colab/resnet.ipynb
@@ -65,7 +65,7 @@
         "outputId": "e0dab7a1-0ce7-4e57-a68f-083580a0ce4f"
       },
       "source": [
-        "!python -m pip install iree-compiler iree-runtime iree-tools-tf -f https://iree-org.github.io/iree/pip-release-links.html"
+        "!python -m pip install iree-compiler iree-runtime iree-tools-tf -f https://openxla.github.io/iree/pip-release-links.html"
       ],
       "execution_count": 2,
       "outputs": [
@@ -74,15 +74,15 @@
           "name": "stdout",
           "text": [
             "Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/\n",
-            "Looking in links: https://iree-org.github.io/iree/pip-release-links.html\n",
+            "Looking in links: https://openxla.github.io/iree/pip-release-links.html\n",
             "Collecting iree-compiler\n",
-            "  Downloading https://github.com/iree-org/iree/releases/download/candidate-20220929.281/iree_compiler-20220929.281-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (49.7 MB)\n",
+            "  Downloading https://github.com/openxla/iree/releases/download/candidate-20220929.281/iree_compiler-20220929.281-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (49.7 MB)\n",
             "\u001b[K     |████████████████████████████████| 49.7 MB 80 kB/s \n",
             "\u001b[?25hCollecting iree-runtime\n",
-            "  Downloading https://github.com/iree-org/iree/releases/download/candidate-20220929.281/iree_runtime-20220929.281-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.3 MB)\n",
+            "  Downloading https://github.com/openxla/iree/releases/download/candidate-20220929.281/iree_runtime-20220929.281-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.3 MB)\n",
             "\u001b[K     |████████████████████████████████| 2.3 MB 41.1 MB/s \n",
             "\u001b[?25hCollecting iree-tools-tf\n",
-            "  Downloading https://github.com/iree-org/iree/releases/download/candidate-20220929.281/iree_tools_tf-20220929.281-py3-none-linux_x86_64.whl (58.7 MB)\n",
+            "  Downloading https://github.com/openxla/iree/releases/download/candidate-20220929.281/iree_tools_tf-20220929.281-py3-none-linux_x86_64.whl (58.7 MB)\n",
             "\u001b[K     |████████████████████████████████| 58.7 MB 4.6 kB/s \n",
             "\u001b[?25hRequirement already satisfied: PyYAML in /usr/local/lib/python3.7/dist-packages (from iree-compiler) (6.0)\n",
             "Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from iree-compiler) (1.21.6)\n",
@@ -327,4 +327,4 @@
       ]
     }
   ]
-}
\ No newline at end of file
+}
diff --git a/samples/colab/tensorflow_hub_import.ipynb b/samples/colab/tensorflow_hub_import.ipynb
index 047b0bd..aede463 100644
--- a/samples/colab/tensorflow_hub_import.ipynb
+++ b/samples/colab/tensorflow_hub_import.ipynb
@@ -58,7 +58,7 @@
         "\n",
         "At the end of the notebook, the compilation artifacts are compressed into a .zip file for you to download and use in an application.\n",
         "\n",
-        "See also https://iree-org.github.io/iree/ml-frameworks/tensorflow/."
+        "See also https://openxla.github.io/iree/ml-frameworks/tensorflow/."
       ]
     },
     {
@@ -77,7 +77,7 @@
       },
       "source": [
         "%%capture\n",
-        "!python -m pip install iree-compiler iree-runtime iree-tools-tf -f https://iree-org.github.io/iree/pip-release-links.html"
+        "!python -m pip install iree-compiler iree-runtime iree-tools-tf -f https://openxla.github.io/iree/pip-release-links.html"
       ],
       "execution_count": 2,
       "outputs": []
@@ -507,4 +507,4 @@
       ]
     }
   ]
-}
\ No newline at end of file
+}
diff --git a/samples/colab/tflite_text_classification.ipynb b/samples/colab/tflite_text_classification.ipynb
index f19ac62..54bac49 100644
--- a/samples/colab/tflite_text_classification.ipynb
+++ b/samples/colab/tflite_text_classification.ipynb
@@ -44,7 +44,7 @@
       "outputs": [],
       "source": [
         "%%capture\n",
-        "!python -m pip install iree-compiler iree-runtime iree-tools-tflite -f https://iree-org.github.io/iree/pip-release-links.html\n",
+        "!python -m pip install iree-compiler iree-runtime iree-tools-tflite -f https://openxla.github.io/iree/pip-release-links.html\n",
         "!pip3 install --extra-index-url https://google-coral.github.io/py-repo/ tflite_runtime"
       ]
     },
@@ -487,4 +487,4 @@
   },
   "nbformat": 4,
   "nbformat_minor": 0
-}
\ No newline at end of file
+}
diff --git a/samples/custom_dispatch/cpu/embedded/README.md b/samples/custom_dispatch/cpu/embedded/README.md
index 216f61b..46f0414 100644
--- a/samples/custom_dispatch/cpu/embedded/README.md
+++ b/samples/custom_dispatch/cpu/embedded/README.md
@@ -101,7 +101,7 @@
 ## Instructions
 
 This presumes that `iree-compile` and `iree-run-module` have been installed or
-built. [See here](https://iree-org.github.io/iree/building-from-source/getting-started/)
+built. [See here](https://openxla.github.io/iree/building-from-source/getting-started/)
 for instructions for CMake setup and building from source.
 
 0. Ensure that `clang` is on your PATH:
diff --git a/samples/custom_dispatch/cuda/kernels/README.md b/samples/custom_dispatch/cuda/kernels/README.md
index c2146bd..396ee84 100644
--- a/samples/custom_dispatch/cuda/kernels/README.md
+++ b/samples/custom_dispatch/cuda/kernels/README.md
@@ -101,7 +101,7 @@
 ## Instructions
 
 This presumes that `iree-compile` and `iree-run-module` have been installed or
-built. [See here](https://iree-org.github.io/iree/building-from-source/getting-started/)
+built. [See here](https://openxla.github.io/iree/building-from-source/getting-started/)
 for instructions for CMake setup and building from source.
 
 0. Ensure that the [CUDA SDK](https://developer.nvidia.com/cuda-downloads) and `nvcc` is on your PATH:
diff --git a/samples/custom_dispatch/vulkan/shaders/README.md b/samples/custom_dispatch/vulkan/shaders/README.md
index 4a2c9aa..556b754 100644
--- a/samples/custom_dispatch/vulkan/shaders/README.md
+++ b/samples/custom_dispatch/vulkan/shaders/README.md
@@ -105,7 +105,7 @@
 ## Instructions
 
 This presumes that `iree-compile` and `iree-run-module` have been installed or
-built. [See here](https://iree-org.github.io/iree/building-from-source/getting-started/)
+built. [See here](https://openxla.github.io/iree/building-from-source/getting-started/)
 for instructions for CMake setup and building from source.
 
 0. Ensure that `glslc` is on your PATH (comes with the [Vulkan SDK](https://vulkan.lunarg.com/sdk/home)):
diff --git a/samples/custom_module/async/README.md b/samples/custom_module/async/README.md
index dcd63ce..c93f462 100644
--- a/samples/custom_module/async/README.md
+++ b/samples/custom_module/async/README.md
@@ -34,7 +34,7 @@
     ```
     (here we force runtime execution tracing for demonstration purposes)
 
-    [See here](https://iree-org.github.io/iree/building-from-source/getting-started/)
+    [See here](https://openxla.github.io/iree/building-from-source/getting-started/)
     for general instructions on building using CMake.
 
 3. Run the example program to call the main function:
diff --git a/samples/custom_module/basic/README.md b/samples/custom_module/basic/README.md
index 77d6c28..fc1a49b 100644
--- a/samples/custom_module/basic/README.md
+++ b/samples/custom_module/basic/README.md
@@ -13,8 +13,8 @@
 [`main.c`](./main.c).
 
 This document uses terminology that can be found in the documentation of
-[IREE's execution model](https://github.com/iree-org/iree/blob/main/docs/developers/design_docs/execution_model.md).
-See [IREE's extensibility mechanisms](https://iree-org.github.io/iree/extensions/)
+[IREE's execution model](https://github.com/openxla/iree/blob/main/docs/developers/design_docs/execution_model.md).
+See [IREE's extensibility mechanisms](https://openxla.github.io/iree/extensions/)
 documentation for more information specific to extending IREE and
 alternative approaches to doing so.
 
@@ -36,7 +36,7 @@
     python -m pip install iree-compiler
     ```
 
-    [See here](https://iree-org.github.io/iree/getting-started/)
+    [See here](https://openxla.github.io/iree/getting-started/)
     for general instructions on installing the compiler.
 
 3. Compile the [example module](./test/example.mlir) to a .vmfb file:
@@ -56,7 +56,7 @@
     ```
     (here we force runtime execution tracing for demonstration purposes)
 
-    [See here](https://iree-org.github.io/iree/building-from-source/getting-started/)
+    [See here](https://openxla.github.io/iree/building-from-source/getting-started/)
     for general instructions on building using CMake.
 
 4. Run the example program to call the main function:
diff --git a/samples/custom_module/sync/README.md b/samples/custom_module/sync/README.md
index 96f8616..c86dc8b 100644
--- a/samples/custom_module/sync/README.md
+++ b/samples/custom_module/sync/README.md
@@ -34,7 +34,7 @@
     ```
     (here we force runtime execution tracing for demonstration purposes)
 
-    [See here](https://iree-org.github.io/iree/building-from-source/getting-started/)
+    [See here](https://openxla.github.io/iree/building-from-source/getting-started/)
     for general instructions on building using CMake.
 
 3. Run the example program to call the main function:
diff --git a/samples/dynamic_shapes/README.md b/samples/dynamic_shapes/README.md
index e661aa7..2478603 100644
--- a/samples/dynamic_shapes/README.md
+++ b/samples/dynamic_shapes/README.md
@@ -13,7 +13,7 @@
 [`dynamic_shapes.ipynb`](./dynamic_shapes.ipynb)
 [Colab](https://research.google.com/colaboratory/) notebook:
 
-[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/iree-org/iree/blob/main/samples/dynamic_shapes/dynamic_shapes.ipynb)
+[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/openxla/iree/blob/main/samples/dynamic_shapes/dynamic_shapes.ipynb)
 
 Step 3 should be performed on your development host machine
 
@@ -68,7 +68,7 @@
     generates
 
 2. Build the `iree-compile` tool (see
-    [here](https://iree-org.github.io/iree/building-from-source/getting-started/)
+    [here](https://openxla.github.io/iree/building-from-source/getting-started/)
     for general instructions on building using CMake)
 
     ```
@@ -77,7 +77,7 @@
     ```
 
 3. Compile the `dynamic_shapes.mlir` file using `iree-compile`. The
-    [CPU configuration](https://iree-org.github.io/iree/deployment-configurations/cpu/)
+    [CPU configuration](https://openxla.github.io/iree/deployment-configurations/cpu/)
     has the best support for dynamic shapes:
 
     ```
diff --git a/samples/dynamic_shapes/dynamic_shapes.ipynb b/samples/dynamic_shapes/dynamic_shapes.ipynb
index b548bd8..30c44f4 100644
--- a/samples/dynamic_shapes/dynamic_shapes.ipynb
+++ b/samples/dynamic_shapes/dynamic_shapes.ipynb
@@ -141,7 +141,7 @@
       },
       "source": [
         "%%capture\n",
-        "!python -m pip install iree-compiler iree-tools-tf -f https://iree-org.github.io/iree/pip-release-links.html"
+        "!python -m pip install iree-compiler iree-tools-tf -f https://openxla.github.io/iree/pip-release-links.html"
       ],
       "execution_count": 4,
       "outputs": []
@@ -230,7 +230,7 @@
         "\n",
         "_Note: you can stop after each step and use intermediate outputs with other tools outside of Colab._\n",
         "\n",
-        "_See the [README](https://github.com/iree-org/iree/tree/main/iree/samples/dynamic_shapes#instructions) for more details and example command line instructions._\n",
+        "_See the [README](https://github.com/openxla/iree/tree/main/iree/samples/dynamic_shapes#instructions) for more details and example command line instructions._\n",
         "\n",
         "* _The \"imported MLIR\" can be used by IREE's generic compiler tools_\n",
         "* _The \"flatbuffer blob\" can be saved and used by runtime applications_\n",
@@ -245,7 +245,7 @@
       },
       "source": [
         "%%capture\n",
-        "!python -m pip install iree-compiler -f https://iree-org.github.io/iree/pip-release-links.html"
+        "!python -m pip install iree-compiler -f https://openxla.github.io/iree/pip-release-links.html"
       ],
       "execution_count": 6,
       "outputs": []
@@ -293,7 +293,7 @@
       },
       "source": [
         "%%capture\n",
-        "!python -m pip install iree-runtime -f https://iree-org.github.io/iree/pip-release-links.html"
+        "!python -m pip install iree-runtime -f https://openxla.github.io/iree/pip-release-links.html"
       ],
       "execution_count": 8,
       "outputs": []
@@ -462,4 +462,4 @@
       ]
     }
   ]
-}
\ No newline at end of file
+}
diff --git a/samples/models/mnist.mlir b/samples/models/mnist.mlir
index 0da4f4c..01953ac 100644
--- a/samples/models/mnist.mlir
+++ b/samples/models/mnist.mlir
@@ -1,5 +1,5 @@
 // Trained MNIST model generated by
-// https://github.com/iree-org/iree/blob/main/samples/colab/mnist_training.ipynb.
+// https://github.com/openxla/iree/blob/main/samples/colab/mnist_training.ipynb.
 //
 // Model structure is from tf.keras:
 //
diff --git a/samples/static_library/README.md b/samples/static_library/README.md
index 855913a..1f3e5e8 100644
--- a/samples/static_library/README.md
+++ b/samples/static_library/README.md
@@ -33,7 +33,7 @@
 1. Configure CMake for building the static library and then the demo. You'll need to set
 the flags for building samples, the compiler, the `llvm-cpu`
 compiler target backend, and the `local-sync` runtime HAL driver (see
-[the getting started guide](https://iree-org.github.io/iree/building-from-source/getting-started/)
+[the getting started guide](https://openxla.github.io/iree/building-from-source/getting-started/)
 for general instructions on building using CMake):
 
   ```shell
@@ -73,7 +73,7 @@
 compile the library and demo with different options.
 
 For example, see
-[this documentation](https://iree-org.github.io/iree/building-from-source/android/)
+[this documentation](https://openxla.github.io/iree/building-from-source/android/)
 on cross compiling on Android.
 
 Note: separating the target from the host will require modifying dependencies in
diff --git a/samples/variables_and_state/README.md b/samples/variables_and_state/README.md
index 92b3651..4a6d777 100644
--- a/samples/variables_and_state/README.md
+++ b/samples/variables_and_state/README.md
@@ -13,7 +13,7 @@
 [`variables_and_state.ipynb`](./variables_and_state.ipynb)
 [Colab](https://research.google.com/colaboratory/) notebook:
 
-[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/iree-org/iree/blob/main/samples/variables_and_state/variables_and_state.ipynb)
+[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/openxla/iree/blob/main/samples/variables_and_state/variables_and_state.ipynb)
 
 Steps 4-5 are in [`main.c`](./main.c)
 
@@ -62,7 +62,7 @@
    `counter_vmvx.vmfb` files it generates
 
 2. Build the `iree_samples_variables_and_state` CMake target (see
-    [here](https://iree-org.github.io/iree/building-from-source/getting-started/)
+    [here](https://openxla.github.io/iree/building-from-source/getting-started/)
     for general instructions on building using CMake)
 
     ```
@@ -87,7 +87,7 @@
 
 For example, to use IREE's `cpu` target, which is optimized for CPU execution
 using LLVM, refer to the
-[documentation](https://iree-org.github.io/iree/deployment-configurations/cpu/)
+[documentation](https://openxla.github.io/iree/deployment-configurations/cpu/)
 and compile the imported `counter.mlir` file using `iree-compile`:
 
 ```
diff --git a/samples/variables_and_state/variables_and_state.ipynb b/samples/variables_and_state/variables_and_state.ipynb
index 9ec7236..0b6a091 100644
--- a/samples/variables_and_state/variables_and_state.ipynb
+++ b/samples/variables_and_state/variables_and_state.ipynb
@@ -146,7 +146,7 @@
       },
       "source": [
         "%%capture\n",
-        "!python -m pip install iree-compiler iree-tools-tf -f https://iree-org.github.io/iree/pip-release-links.html"
+        "!python -m pip install iree-compiler iree-tools-tf -f https://openxla.github.io/iree/pip-release-links.html"
       ],
       "execution_count": 4,
       "outputs": []
@@ -250,7 +250,7 @@
         "\n",
         "_Note: you can stop after each step and use intermediate outputs with other tools outside of Colab._\n",
         "\n",
-        "_See the [README](https://github.com/iree-org/iree/tree/main/iree/samples/variables_and_state#changing-compilation-options) for more details and example command line instructions._\n",
+        "_See the [README](https://github.com/openxla/iree/tree/main/iree/samples/variables_and_state#changing-compilation-options) for more details and example command line instructions._\n",
         "\n",
         "* _The \"imported MLIR\" can be used by IREE's generic compiler tools_\n",
         "* _The \"flatbuffer blob\" can be saved and used by runtime applications_\n",
@@ -265,7 +265,7 @@
       },
       "source": [
         "%%capture\n",
-        "!python -m pip install iree-compiler -f https://iree-org.github.io/iree/pip-release-links.html"
+        "!python -m pip install iree-compiler -f https://openxla.github.io/iree/pip-release-links.html"
       ],
       "execution_count": 6,
       "outputs": []
@@ -310,7 +310,7 @@
       },
       "source": [
         "%%capture\n",
-        "!python -m pip install iree-runtime -f https://iree-org.github.io/iree/pip-release-links.html"
+        "!python -m pip install iree-runtime -f https://openxla.github.io/iree/pip-release-links.html"
       ],
       "execution_count": 8,
       "outputs": []
@@ -485,4 +485,4 @@
       ]
     }
   ]
-}
\ No newline at end of file
+}
diff --git a/tests/e2e/linalg/BUILD b/tests/e2e/linalg/BUILD
index 74b44df..72e4559 100644
--- a/tests/e2e/linalg/BUILD
+++ b/tests/e2e/linalg/BUILD
@@ -8,7 +8,7 @@
 # Each test file should have a name matching the corresponding TOSA op and test only the
 # functionality of that op (though may make use of other ops where necessary). Tests should be
 # written using the IREE Check framework.
-# See https://github.com/iree-org/iree/blob/main/docs/developers/developing_iree/testing_guide.md#iree-core-end-to-end-tests.
+# See https://github.com/openxla/iree/blob/main/docs/developers/developing_iree/testing_guide.md#iree-core-end-to-end-tests.
 
 load("//build_tools/bazel:enforce_glob.bzl", "enforce_glob")
 load("//build_tools/bazel:iree_check_test.bzl", "iree_check_single_backend_test_suite")
diff --git a/tests/e2e/linalg_ext_ops/BUILD b/tests/e2e/linalg_ext_ops/BUILD
index 9966915..cf26d49 100644
--- a/tests/e2e/linalg_ext_ops/BUILD
+++ b/tests/e2e/linalg_ext_ops/BUILD
@@ -84,7 +84,7 @@
 
 iree_cmake_extra_content(
     content = """
-# Failing on Emscripten: https://github.com/iree-org/iree/issues/12129
+# Failing on Emscripten: https://github.com/openxla/iree/issues/12129
 if(NOT EMSCRIPTEN)
 """,
     inline = True,
diff --git a/tests/e2e/linalg_ext_ops/CMakeLists.txt b/tests/e2e/linalg_ext_ops/CMakeLists.txt
index 46a8b04..2de94c9 100644
--- a/tests/e2e/linalg_ext_ops/CMakeLists.txt
+++ b/tests/e2e/linalg_ext_ops/CMakeLists.txt
@@ -71,7 +71,7 @@
     "requires-gpu-nvidia"
 )
 
-# Failing on Emscripten: https://github.com/iree-org/iree/issues/12129
+# Failing on Emscripten: https://github.com/openxla/iree/issues/12129
 if(NOT EMSCRIPTEN)
 
 iree_check_single_backend_test_suite(
diff --git a/tests/e2e/models/edge_detection.mlir b/tests/e2e/models/edge_detection.mlir
index 9b1b948..0e8411d 100644
--- a/tests/e2e/models/edge_detection.mlir
+++ b/tests/e2e/models/edge_detection.mlir
@@ -3,7 +3,7 @@
 // RUN: [[ $IREE_VULKAN_DISABLE == 1 ]] || (iree-run-mlir --iree-input-type=mhlo --iree-hal-target-backends=vulkan-spirv %s --input=1x128x128x1xf32 | FileCheck %s)
 
 // Image edge detection module generated by
-// https://github.com/iree-org/iree/blob/main/samples/colab/edge_detection.ipynb.
+// https://github.com/openxla/iree/blob/main/samples/colab/edge_detection.ipynb.
 //
 // Input : a single 128x128 pixel image as a tensor<1x128x128x1xf32>, with pixels in [0.0, 1.0]
 // Output: a single image in the same format after running edge detection
diff --git a/tests/e2e/regression/dynamic_matmuls_on_same_accumulator_issue_12060.mlir b/tests/e2e/regression/dynamic_matmuls_on_same_accumulator_issue_12060.mlir
index 547b310..606e0b1 100644
--- a/tests/e2e/regression/dynamic_matmuls_on_same_accumulator_issue_12060.mlir
+++ b/tests/e2e/regression/dynamic_matmuls_on_same_accumulator_issue_12060.mlir
@@ -1,4 +1,4 @@
-// Regression testcase from https://github.com/iree-org/iree/issues/12060
+// Regression testcase from https://github.com/openxla/iree/issues/12060
 
 func.func @matmul_i8(%lhs: tensor<?x?xi8>, %rhs: tensor<?x?xi8>, %acc: tensor<?x?xi32>) -> tensor<?x?xi32> {
   %result1 = linalg.matmul ins(%lhs, %rhs: tensor<?x?xi8>, tensor<?x?xi8>) outs(%acc: tensor<?x?xi32>) -> tensor<?x?xi32>
diff --git a/tests/e2e/regression/dynamic_tosa_quantized_fully_connected_issue_10859.mlir b/tests/e2e/regression/dynamic_tosa_quantized_fully_connected_issue_10859.mlir
index 4fcf764..9396113 100644
--- a/tests/e2e/regression/dynamic_tosa_quantized_fully_connected_issue_10859.mlir
+++ b/tests/e2e/regression/dynamic_tosa_quantized_fully_connected_issue_10859.mlir
@@ -1,4 +1,4 @@
-// Regression testcase from https://github.com/iree-org/iree/issues/10859
+// Regression testcase from https://github.com/openxla/iree/issues/10859
 
 func.func @main(%arg0: tensor<256xi8>, %arg1: tensor<2xi32>, %arg2: tensor<2x32xi8>, %arg3: tensor<32xi32>, %arg4: tensor<32x32xi8>, %arg5: tensor<32xi32>, %arg6: tensor<32x3360xi8>, %arg7: tensor<?x3360xi8>) -> (tensor<?x2xi8>) {
   %0 = "tosa.fully_connected"(%arg7, %arg6, %arg5) {quantization_info = #tosa.conv_quant<input_zp = -128, weight_zp = 0>} : (tensor<?x3360xi8>, tensor<32x3360xi8>, tensor<32xi32>) -> tensor<?x32xi32>
diff --git a/tests/e2e/regression/libm_linking.mlir b/tests/e2e/regression/libm_linking.mlir
index c266b79..acc4f79 100644
--- a/tests/e2e/regression/libm_linking.mlir
+++ b/tests/e2e/regression/libm_linking.mlir
@@ -13,7 +13,7 @@
 // This test checks that the LLVM lowerings for certain operations are
 // correctly covered by our linker configurations.
 //
-// See https://github.com/iree-org/iree/issues/4717 for more details.
+// See https://github.com/openxla/iree/issues/4717 for more details.
 
 // CHECK: vm.func private @tanh
 func.func @tanh(%input : tensor<f32>) -> (tensor<f32>) {
diff --git a/tests/e2e/tosa_ops/BUILD b/tests/e2e/tosa_ops/BUILD
index acdefb4..56f5a6f 100644
--- a/tests/e2e/tosa_ops/BUILD
+++ b/tests/e2e/tosa_ops/BUILD
@@ -8,7 +8,7 @@
 # Each test file should have a name matching the corresponding TOSA op and test only the
 # functionality of that op (though may make use of other ops where necessary). Tests should be
 # written using the IREE Check framework.
-# See https://github.com/iree-org/iree/blob/main/docs/developers/developing_iree/testing_guide.md#iree-core-end-to-end-tests.
+# See https://github.com/openxla/iree/blob/main/docs/developers/developing_iree/testing_guide.md#iree-core-end-to-end-tests.
 
 load("//build_tools/bazel:enforce_glob.bzl", "enforce_glob")
 load("//build_tools/bazel:iree_check_test.bzl", "iree_check_single_backend_test_suite")
@@ -118,7 +118,7 @@
     ],
     include = ["*.mlir"],
     exclude = [
-        "reduce.mlir",  # Currently flakey https://github.com/iree-org/iree/issues/5885
+        "reduce.mlir",  # Currently flakey https://github.com/openxla/iree/issues/5885
     ],
 )
 
diff --git a/tests/e2e/xla_ops/BUILD b/tests/e2e/xla_ops/BUILD
index 3c90766..373bbd0 100644
--- a/tests/e2e/xla_ops/BUILD
+++ b/tests/e2e/xla_ops/BUILD
@@ -8,7 +8,7 @@
 # Each test file should have a name matching the corresponding XLA HLO op and test only the
 # functionality of that op (though may make use of other ops where necessary). Tests should be
 # written using the IREE Check framework and should always pass on the reference VMVX backend.
-# See https://github.com/iree-org/iree/blob/main/docs/developers/developing_iree/testing_guide.md#iree-core-end-to-end-tests.
+# See https://github.com/openxla/iree/blob/main/docs/developers/developing_iree/testing_guide.md#iree-core-end-to-end-tests.
 
 load("//build_tools/bazel:enforce_glob.bzl", "enforce_glob")
 load("//build_tools/bazel:iree_check_test.bzl", "iree_check_single_backend_test_suite")
diff --git a/tests/microbenchmarks/mhlo_dot_general.mlir b/tests/microbenchmarks/mhlo_dot_general.mlir
index 38dbfd0..62bc5b3 100644
--- a/tests/microbenchmarks/mhlo_dot_general.mlir
+++ b/tests/microbenchmarks/mhlo_dot_general.mlir
@@ -1,5 +1,5 @@
 // The following ops are sampled from mobile_bert
-// https://github.com/iree-org/iree/blob/main/integrations/tensorflow/e2e/mobile_bert_squad_test.py
+// https://github.com/openxla/iree/blob/main/integrations/tensorflow/e2e/mobile_bert_squad_test.py
 
 func.func @dot_general_4x384x32x384() -> tensor<4x384x384xf32> {
     %lhs = util.unfoldable_constant dense<1.0> : tensor<4x384x32xf32>
diff --git a/tools/iree-run-mlir-main.cc b/tools/iree-run-mlir-main.cc
index 2097c29..96374ee 100644
--- a/tools/iree-run-mlir-main.cc
+++ b/tools/iree-run-mlir-main.cc
@@ -596,7 +596,7 @@
   // On Windows InitLLVM re-queries the command line from Windows directly and
   // totally messes up the array.
   llvm::setBugReportMsg(
-      "Please report issues to https://github.com/iree-org/iree/issues and "
+      "Please report issues to https://github.com/openxla/iree/issues and "
       "include the crash backtrace.\n");
   llvm::InitLLVM init_llvm(argc_llvm, argv_llvm);
   llvm::cl::ParseCommandLineOptions(argc_llvm, argv_llvm);