Switch to getProcessTriple when querying default target options (#11694)

This results in returning the triple that is appropriate for generating
code that can be loaded into the current process, rather than the
default target LLVM was configured with at build time. The default
configured at build time corresponds to the host in most cases, so this
is a no-op in the common case.
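
For illustration, a minimal standalone sketch of the two calls (a
sketch, assuming the llvm/Support/Host.h layout of this LLVM era;
newer releases moved these declarations to llvm/TargetParser/Host.h):

    // print_triples.cpp: compare the build-time default triple with
    // the triple of the running process. Link against LLVMSupport.
    #include "llvm/Support/Host.h"
    #include <iostream>

    int main() {
      // Triple LLVM was configured with at build time
      // (LLVM_DEFAULT_TARGET_TRIPLE).
      std::cout << "default: " << llvm::sys::getDefaultTargetTriple()
                << "\n";
      // Triple describing the process actually running; for a
      // universal binary this reflects the slice that was loaded.
      std::cout << "process: " << llvm::sys::getProcessTriple() << "\n";
      return 0;
    }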

The biggest difference (once a related upstream change lands) is for
cases like universal binaries on macOS, where the compiler's
compile-time target often differs from the architecture it actually
runs on at runtime on the same machine.
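
As a concrete (hypothetical) illustration: on an Apple Silicon machine
running the x86_64 slice of a universal binary under Rosetta, the two
calls could disagree (exact darwin version suffix varies by machine):

    // Hypothetical values; arm64 Mac executing an x86_64 slice.
    llvm::sys::getDefaultTargetTriple(); // "arm64-apple-darwin22.1.0"
    llvm::sys::getProcessTriple();       // "x86_64-apple-darwin22.1.0"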

I considered excluding the case where, say, IREE_DEFAULT_TARGET_TRIPLE
is set, but in the cases where folks are generating code for
architectures other than their own, they are probably setting the
triple explicitly when running iree-compile and/or targeting multiple
architectures, so the added complexity didn't seem warranted.
diff --git a/compiler/src/iree/compiler/Dialect/HAL/Target/LLVM/LLVMTargetOptions.cpp b/compiler/src/iree/compiler/Dialect/HAL/Target/LLVM/LLVMTargetOptions.cpp
index d33237f..2272208 100644
--- a/compiler/src/iree/compiler/Dialect/HAL/Target/LLVM/LLVMTargetOptions.cpp
+++ b/compiler/src/iree/compiler/Dialect/HAL/Target/LLVM/LLVMTargetOptions.cpp
@@ -25,8 +25,8 @@
   static LLVMTargetOptions targetOptions;
   static std::once_flag onceFlag;
   std::call_once(onceFlag, [&]() {
-    // Host target triple.
-    targetOptions.target.triple = llvm::sys::getDefaultTargetTriple();
+    // Get process target triple along with host CPU name and features.
+    targetOptions.target.triple = llvm::sys::getProcessTriple();
     targetOptions.target.cpu = llvm::sys::getHostCPUName().str();
     {
       llvm::SubtargetFeatures features;
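
For context, the features block truncated at the end of the hunk
follows the usual LLVM host-feature pattern; a self-contained sketch
(not the exact IREE code) of that pattern:

    // Sketch: collect host CPU features into an LLVM feature string,
    // in the style of the truncated block above (LLVM 15-era headers).
    #include "llvm/ADT/StringMap.h"
    #include "llvm/MC/SubtargetFeature.h"
    #include "llvm/Support/Host.h"
    #include <string>

    std::string getHostFeatureString() {
      llvm::SubtargetFeatures features;
      llvm::StringMap<bool> hostFeatures;
      // getHostCPUFeatures returns false if feature detection is
      // unsupported on this host.
      if (llvm::sys::getHostCPUFeatures(hostFeatures)) {
        for (auto &feature : hostFeatures)
          features.AddFeature(feature.first(), feature.second);
      }
      return features.getString(); // e.g. "+sse2,+avx,..."
    }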