Add CUDA and ROCM docs on website (#8172)

diff --git a/docs/website/docs/deployment-configurations/gpu-cuda-rocm.md b/docs/website/docs/deployment-configurations/gpu-cuda-rocm.md
new file mode 100644
index 0000000..127c113
--- /dev/null
+++ b/docs/website/docs/deployment-configurations/gpu-cuda-rocm.md
@@ -0,0 +1,203 @@
+# CUDA and ROCm GPU HAL Driver
+
+IREE can accelerate model execution on NVIDIA GPUs using CUDA and on AMD GPUs
+using ROCm. Due to the similarity of the CUDA and ROCm APIs and infrastructure,
+the CUDA and ROCm backends share much of their implementation in IREE. The IREE
+compiler uses a similar GPU code generation pipeline for each, but generates
+PTX for CUDA and hsaco for ROCm. The IREE runtime HAL driver for ROCm mirrors
+the one for CUDA, except for command buffers: CUDA has "direct", "stream", and
+"graph" command buffers, while ROCm has only "direct" command buffers.
+
+## Prerequisites
+
+To use CUDA or ROCm to drive the GPU, you need a functional CUDA or ROCm
+environment. It can be verified by the following steps:
+
+=== "Nvidia/CUDA"
+
+    Run the following command in a shell:
+
+    ``` shell
+    nvidia-smi | grep CUDA
+    ```
+
+    If `nvidia-smi` does not exist, you will need to
+    [install the latest CUDA Toolkit SDK][cuda-toolkit].
+
+=== "AMD/ROCm"
+
+    Run the following command in a shell:
+
+    ``` shell
+    rocm-smi | grep rocm
+    ```
+
+    If `rocm-smi` does not exist, you will need to
+    [install the latest ROCm SDK][rocm-toolkit].
+
+## Get runtime and compiler
+
+### Get the IREE runtime with the CUDA or ROCm HAL driver
+
+Next you will need an IREE runtime that supports the CUDA HAL driver, to
+execute models on NVIDIA GPUs, or the ROCm HAL driver, to execute models on
+AMD GPUs.
+
+#### Build runtime from source
+
+Please make sure you have followed the [Getting started][get-started] page
+to build IREE for Linux/Windows.
+
+=== "Nvidia/CUDA"
+
+    The CUDA HAL driver is compiled in by default on non-Apple platforms.
+
+    Ensure that the `IREE_HAL_DRIVER_CUDA` CMake option is `ON` when
+    configuring for the target.
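+
+    For example, a minimal sketch of a configure-and-build sequence, assuming
+    the checkout and build directory layout from the Getting started page:
+
+    ``` shell
+    # Configure with the CUDA HAL driver enabled, then build the runtime tools.
+    cmake -GNinja -B ../iree-build/ -DIREE_HAL_DRIVER_CUDA=ON .
+    cmake --build ../iree-build/
+    ```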
+
+=== "AMD/ROCm"
+
+    Support for ROCm/AMD hardware is currently still experimental. To enable
+    it, add
+
+    ``` shell
+    -DIREE_HAL_DRIVER_EXPERIMENTAL_ROCM=ON
+    ```
+
+    to the CMake configure command.
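+
+    Likewise, a sketch of a configure-and-build sequence, with the same
+    directory layout assumptions as above:
+
+    ``` shell
+    # Configure with the experimental ROCm HAL driver enabled, then build.
+    cmake -GNinja -B ../iree-build/ -DIREE_HAL_DRIVER_EXPERIMENTAL_ROCM=ON .
+    cmake --build ../iree-build/
+    ```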
+
+#### Download as Python package
+
+=== "Nvidia/CUDA"
+
+    Python packages for various IREE functionalities are regularly published
+    to [PyPI][pypi]. See the [Python Bindings][python-bindings] page for more
+    details. The core `iree-compiler` package includes the CUDA compiler:
+
+    ``` shell
+    python -m pip install iree-compiler
+    ```
+
+    !!! tip
+        `iree-translate` is installed as `/path/to/python/site-packages/iree/tools/core/iree-translate`.
+        You can find out the full path to the `site-packages` directory via the
+        `python -m site` command.
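+
+    For example, a quick way to check where the tools landed (a sketch
+    assuming a standard CPython `site-packages` layout):
+
+    ``` shell
+    ls "$(python -c 'import site; print(site.getsitepackages()[0])')/iree/tools/core"
+    ```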
+
+=== "AMD/ROCm"
+
+    Currently ROCm is **NOT supported** for the Python interface.
+
+#### Build compiler from source
+
+Please make sure you have followed the [Getting started][get-started] page
+to build IREE for Linux/Windows. The CUDA and ROCm compiler backends are
+compiled in by default on all platforms.
+
+=== "Nvidia/CUDA"
+
+    Ensure that the `IREE_TARGET_BACKEND_CUDA` CMake option is `ON` when
+    configuring for the host.
+
+=== "AMD/ROCM"
+
+Ensure that the `IREE_TARGET_BACKEND_ROCM` CMake option is `ON` when
+configuring for the host.
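+
+In either case, a minimal sketch of a host configure command that explicitly
+enables both GPU compiler backends (they default to `ON`), again assuming the
+layout from the Getting started page:
+
+``` shell
+cmake -GNinja -B ../iree-build/ \
+    -DIREE_TARGET_BACKEND_CUDA=ON \
+    -DIREE_TARGET_BACKEND_ROCM=ON .
+```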
+
+## Compile and run the model
+
+With the compiler and runtime for CUDA or ROCm ready, we can now compile a
+model and run it on the GPU.
+
+### Compile the model
+
+IREE compilers transform a model into its final deployable format in several
+sequential steps. A model authored with Python in an ML framework should
+first be converted by the corresponding framework's import tool into the
+format expected by the main IREE compilers (i.e., [MLIR][mlir]).
+
+Using MobileNet v2 as an example, you can download the SavedModel with trained
+weights from [TensorFlow Hub][tf-hub-mobilenetv2] and convert it using IREE's
+[TensorFlow importer][tf-import], as sketched below.
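+
+As a sketch only (the exact tool name and flags here are illustrative; see the
+[TensorFlow importer][tf-import] page for the authoritative invocation), the
+import might look like:
+
+``` shell
+# Hypothetical import of the downloaded SavedModel into MLIR.
+iree-import-tf /path/to/mobilenet_v2_savedmodel -o iree_input.mlir
+```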
+
+#### Compile using the command-line
+
+Let `iree_input.mlir` be the model's initial MLIR representation generated by
+IREE's TensorFlow importer. We can now compile it for each GPU by running the
+following command:
+
+=== "Nvidia/CUDA"
+
+    ``` shell hl_lines="3 5"
+    iree/tools/iree-translate \
+        -iree-mlir-to-vm-bytecode-module \
+        -iree-hal-target-backends=cuda \
+        -iree-cuda-llvm-target-arch=<...> \
+        -iree-hal-cuda-disable-loop-nounroll-wa \
+        iree_input.mlir -o mobilenet-cuda.vmfb
+    ```
+
+    Note that a CUDA target architecture (`iree-cuda-llvm-target-arch`) of the
+    form `sm_<arch_number>` is needed to compile for each GPU architecture. If
+    no architecture is specified, the compiler defaults to `sm_35`. Here is a
+    table of commonly used architectures:
+
+    CUDA GPU    | Target Architecture
+    :---------: | :------------------:
+    Nvidia K80  | `sm_35`
+    Nvidia P100 | `sm_60`
+    Nvidia V100 | `sm_70`
+    Nvidia A100 | `sm_80`
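+
+    For example, to compile for an Nvidia A100 you would pass
+    `-iree-cuda-llvm-target-arch=sm_80`.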
+
+
+=== "AMD/ROCM"
+
+    ``` shell hl_lines="3 6"
+    iree/tools/iree-translate \
+        -iree-mlir-to-vm-bytecode-module \
+        -iree-hal-target-backends=rocm \
+        -iree-rocm-target-chip=<...> \
+        -iree-rocm-link-bc=true \
+        -iree-rocm-bc-dir=<...> \
+        iree_input.mlir -o mobilenet-rocm.vmfb
+    ```
+
+    Note that the ROCm bitcode directory (`iree-rocm-bc-dir`) is required. If
+    ROCm is installed on the system you are compiling IREE on, the default
+    value of `/opt/rocm/amdgcn/bitcode` will usually suffice. If you intend to
+    build for ROCm on a system without ROCm installed, set `iree-rocm-bc-dir`
+    to the absolute path where you have saved the amdgcn bitcode.
+
+    Note that a ROCm target chip (`iree-rocm-target-chip`) of the form
+    `gfx<arch_number>` is needed to compile for each GPU architecture. If no
+    architecture is specified, the compiler defaults to `gfx908`. Here is a
+    table of commonly used architectures:
+
+    AMD GPU   | Target Chip
+    :-------: | :----------:
+    AMD MI25  | `gfx900`
+    AMD MI50  | `gfx906`
+    AMD MI60  | `gfx906`
+    AMD MI100 | `gfx908`
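+
+    For example, to compile for an AMD MI100 you would pass
+    `-iree-rocm-target-chip=gfx908`.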
+
+### Run the model
+
+#### Run using the command-line
+
+In the build directory, run the following command:
+
+=== "Nvidia/CUDA"
+
+    ``` shell hl_lines="2"
+    iree/tools/iree-run-module \
+        --driver=cuda \
+        --module_file=mobilenet-cuda.vmfb \
+        --entry_function=predict \
+        --function_input="1x224x224x3xf32=0"
+    ```
+
+=== "AMD/ROCM"
+
+    ``` shell hl_lines="2"
+    iree/tools/iree-run-module \
+        --driver=rocm \
+        --module_file=mobilenet-rocm.vmfb \
+        --entry_function=predict \
+        --function_input="1x224x224x3xf32=0"
+    ```
+
+The above assumes the exported function in the model is named `predict` and
+that it expects one 224x224 RGB image. We are feeding in an image with all 0
+values here for brevity; see `iree-run-module --help` for the format to
+specify concrete values.
+
+[get-started]: ../building-from-source/getting-started.md
+[mlir]: https://mlir.llvm.org/
+[pypi]: https://pypi.org/user/google-iree-pypi-deploy/
+[python-bindings]: ../bindings/python.md
+[tf-hub-mobilenetv2]: https://tfhub.dev/google/tf2-preview/mobilenet_v2/classification
+[tf-import]: ../ml-frameworks/tensorflow.md
+[cuda-toolkit]: https://developer.nvidia.com/cuda-downloads
+[rocm-toolkit]: https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation_new.html
diff --git a/docs/website/docs/deployment-configurations/index.md b/docs/website/docs/deployment-configurations/index.md
index 3f3ba5e..1b203a2 100644
--- a/docs/website/docs/deployment-configurations/index.md
+++ b/docs/website/docs/deployment-configurations/index.md
@@ -10,6 +10,7 @@
 * [CPU - Dylib](./cpu-dylib.md)
 * [CPU - Bare-Metal](./bare-metal.md) with minimal platform dependencies
 * [GPU - Vulkan](./gpu-vulkan.md)
+* [GPU - CUDA/ROCm](./gpu-cuda-rocm.md)
 
 These are just the most stable configurations IREE supports. Feel free to reach
 out on any of IREE's
diff --git a/docs/website/mkdocs.yml b/docs/website/mkdocs.yml
index a75eead..3c9e1fc 100644
--- a/docs/website/mkdocs.yml
+++ b/docs/website/mkdocs.yml
@@ -104,6 +104,7 @@
       - CPU - Dylib: 'deployment-configurations/cpu-dylib.md'
       - CPU - Bare-Metal: 'deployment-configurations/bare-metal.md'
       - GPU - Vulkan: 'deployment-configurations/gpu-vulkan.md'
+      - GPU - CUDA/ROCm: 'deployment-configurations/gpu-cuda-rocm.md'
   - 'Building from source':
       - 'building-from-source/index.md'
       - 'building-from-source/getting-started.md'