🔄 Why 23H2 Is Different
If you deployed Azure Stack HCI 22H2 or earlier, you experienced a platform that was Arc-capable. With 23H2, the platform is Arc-native. That distinction matters enormously for how you plan, deploy, and operate the cluster long-term.
The management model has changed at a fundamental level: there is no longer a standalone HCI management experience. Your cluster is an Azure resource. It has an ARM resource ID, it appears in your subscription, and it is governed by the same RBAC, Policy, and Cost Management tooling as any cloud resource you own.
For IT managers: This means CapEx infrastructure now appears alongside OpEx cloud spend in your Azure cost center. Showback and chargeback for on-prem workloads becomes much more straightforward — a significant operational benefit for organizations with mature FinOps practices.
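Because the cluster is an ordinary ARM resource, standard Azure tooling works against it directly. A minimal sketch with the Azure CLI — the resource group, cluster name, and assignee below are illustrative placeholders, not values from your environment:

```shell
# Inspect the cluster resource itself -- it has an ARM ID like any Azure resource.
az resource show \
  --resource-group "rg-hybrid-prod" \
  --name "azlocal-cluster-01" \
  --resource-type "Microsoft.AzureStackHCI/clusters" \
  --query "{name:name, id:id, location:location}"

# Scope standard Azure RBAC to the cluster, exactly as for a cloud resource.
az role assignment create \
  --assignee "ops-team@contoso.com" \
  --role "Reader" \
  --scope "/subscriptions/<sub-id>/resourceGroups/rg-hybrid-prod/providers/Microsoft.AzureStackHCI/clusters/azlocal-cluster-01"
```

The same scope string works for Azure Policy assignments and Cost Management filters, which is what makes the showback scenario above practical.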
🌟 New Control Plane Architecture
The 23H2 control plane consists of three components that work together to bridge on-premises hardware and Azure management:
- Arc Resource Bridge (ARB): A lightweight VM deployed on the cluster that proxies ARM API calls to local Kubernetes-based infrastructure. This is the core of Arc VM management.
- Custom Locations: An abstraction layer that lets you target the cluster as an Azure deployment target — the same way you'd target a specific Azure region.
- Cluster Deployed Services (CDS): Extension mechanism for deploying AKS, Arc VMs, Azure Virtual Desktop, and Azure Monitor at the cluster level through a unified interface.
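Of these three, the custom location is the one you reference most often in day-to-day work, because its resource ID is the deployment target for everything you place on the cluster. A quick sketch, assuming the `customlocation` CLI extension is installed (names are placeholders):

```shell
# A custom location is itself an ARM resource; names below are placeholders.
az customlocation show \
  --resource-group "rg-azlocal" \
  --name "azlocal-cl" \
  --query "{name:name, id:id}"

# The returned "id" is the value you pass as --custom-location when
# creating Arc VMs or AKS clusters on this Azure Local instance.
```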
🔁 Lifecycle Manager
The Lifecycle Manager (LCM) is the biggest operational change for existing HCI administrators. Updates — including cumulative OS updates, solution extensions, hardware firmware, and drivers — are now orchestrated centrally through Azure.
The LCM introduces solution updates as a new update unit. A solution update bundles OS + firmware + driver + extension updates into a validated, tested package. This eliminates the common 22H2 scenario where an OS patch broke a driver or vice versa.
Breaking change: You cannot apply individual Windows Updates directly to cluster nodes in 23H2. All updates must go through the LCM. Attempting to install updates directly via WSUS, Windows Update, or SCCM on individual nodes will fail cluster validation checks.
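Because solution updates are orchestrated through Azure, their state can also be read with generic ARM tooling. The sketch below is an assumption-heavy illustration: it assumes update objects surface as child resources of the cluster under the `Microsoft.AzureStackHCI` provider, so confirm the exact resource type against the current provider API before relying on it. (On the cluster itself, the documented `Get-SolutionUpdate` and `Start-SolutionUpdate` PowerShell cmdlets drive the same orchestrator.)

```shell
# Enumerate solution updates via generic ARM tooling. Resource group is a
# placeholder; the child resource type is an assumption to verify against
# the AzureStackHCI provider's current API version.
az resource list \
  --resource-group "rg-hybrid-prod" \
  --resource-type "Microsoft.AzureStackHCI/clusters/updates" \
  --query "[].{name:name, state:properties.state}"
```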
💻 Arc Virtual Machines
Arc VMs in 23H2 are provisioned through ARM, just like Azure VMs. The VM resource lives in your Azure subscription and can be managed with familiar tools: Azure portal, Azure CLI, ARM templates, Bicep, and Terraform.
```shell
az stack-hci-vm create \
  --name "MyVM01" \
  --resource-group "rg-hybrid-prod" \
  --custom-location "/subscriptions/<sub-id>/resourcegroups/rg-azlocal/providers/microsoft.extendedlocation/customlocations/azlocal-cl" \
  --image "win2022datacenter" \
  --admin-username "localadmin" \
  --memory-mb 8192 \
  --nics nic01 \
  --os-disk-size 128 \
  --size Standard_D4s_v3
```
⎈ AKS Arc Improvements
AKS Arc (formerly AKS-HCI) in 23H2 gains automatic node provisioning and tighter integration with Azure Container Registry, Azure Monitor Container Insights, and Microsoft Defender for Containers. The manual VM sizing for AKS node pools is replaced by declarative node class definitions managed through ARM.
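As a sketch of what the ARM-driven flow looks like with the `aksarc` CLI extension — cluster name, logical network ID, and the exact flag set are assumptions to verify against your extension version:

```shell
# Provision an AKS Arc cluster against the custom location.
# All names and the logical network ID are illustrative placeholders.
az aksarc create \
  --resource-group "rg-hybrid-prod" \
  --name "aks-edge-01" \
  --custom-location "azlocal-cl" \
  --vnet-ids "/subscriptions/<sub-id>/resourceGroups/rg-azlocal/providers/Microsoft.AzureStackHCI/logicalNetworks/lnet-01" \
  --node-count 3

# Retrieve kubeconfig the same way as for cloud AKS.
az aksarc get-credentials --resource-group "rg-hybrid-prod" --name "aks-edge-01"
```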
📊 Built-in Observability
23H2 ships with Insights for Azure Local — a curated Azure Monitor workbook that surfaces cluster health, storage utilization, VM performance, and network throughput without any manual agent configuration or custom query writing.
Key metrics surfaced out of the box:
- Cluster CPU, memory, and storage utilization with trending
- Storage Spaces Direct (S2D) pool health and capacity (per tier: NVMe cache, SSD, HDD)
- VM performance counters (CPU Ready, disk latency, network throughput)
- Hyper-V health indicators and live migration events
- Security posture scores from Defender for Cloud
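The workbook requires no query writing, but the underlying Log Analytics data remains queryable when you want custom views or alerting. A hedged sketch — the workspace GUID is a placeholder, and the table and counter object names are assumptions to adapt to what your workspace actually collects:

```shell
# Query cluster performance data directly from the Log Analytics workspace
# the cluster is onboarded to. Workspace GUID, table, and ObjectName are
# illustrative -- inspect your workspace schema first.
az monitor log-analytics query \
  --workspace "<workspace-guid>" \
  --analytics-query "Perf
    | where ObjectName == 'Cluster CSV File System'
    | summarize avg(CounterValue) by CounterName, bin(TimeGenerated, 1h)" \
  --timespan "P1D"
```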
🏆 What Was Announced at Microsoft Ignite 2025 (BRK147)
Microsoft Ignite November 2025, Session BRK147 "What is New in Azure Local" — presented by Dean Prone (Product Management Lead, Edge Infrastructure) and Nina Gowder — covered a significant set of new capabilities. Here is a structured summary of every major announcement from that session.
Azure Local as Microsoft's Adaptive Cloud Platform
The session positioned Azure Local clearly: it is Microsoft's first-party managed infrastructure solution that runs in your own datacenter, metered by the core, on qualified OEM hardware. Azure Arc is the broader technology family with four pillars: centralizing operations, scaling Kubernetes development across boundaries, unifying data and AI across the estate, and providing global infrastructure. Azure Local sits at the top of that fourth pillar as the full Microsoft-managed private infrastructure option.
Multi-Rack Scale Deployment (Preview)
Announced at Ignite as public preview: Azure Local can now scale to hundreds of nodes in a single cluster instance. Multi-rack deployments come as pre-configured integrated racks of compute, storage, and networking with a dedicated aggregation layer. Compute servers are added incrementally. Everything is managed from the same Azure portal experience — same APIs, same ARM templates — regardless of cluster scale.
External SAN Support (Public Preview)
A significant architectural expansion: Azure Local now supports external SAN connectivity alongside Storage Spaces Direct. This allows organizations with existing Fibre Channel storage investments from Pure Storage, NetApp, Dell, Lenovo, HP, and Hitachi to bring that storage forward into their Azure Local environment without forklift replacement. Fibre Channel is the first supported protocol, with additional protocols on the roadmap.
M365 Local on Azure Local (GA — Connected)
Exchange Server, SharePoint Server, and Skype for Business can now run on Azure Local through the M365 Local deployment model — generally available in connected mode at Ignite. The disconnected version ships alongside Azure Local Disconnected Operations in early 2026. This is particularly relevant for EMEA organizations with data residency requirements that prevent running M365 workloads in the public cloud.
NVIDIA RTX PRO 6000 Blackwell GPU Support
Support for the NVIDIA RTX PRO 6000 Blackwell Server Edition GPU on Azure Local was announced, targeting generative AI inference at the edge — large language models, multimodal AI, video analysis, and 3D/digital twin rendering. Validated hardware partners shipping with this GPU: Dell AX-770, Lenovo Agile M650, and HP ProLiant DL380 Gen12. Availability early 2026.
The session demo showed a two-node Azure Local cluster running the gpt-oss 120B model locally via Ollama, analyzing factory floor telemetry and generating architecture documentation from ~6,000 lines of code — all running on-premises without cloud connectivity for inference.
Azure Local Disconnected Operations (Preview → GA Early 2026)
Azure Local now supports a fully air-gapped, forever-disconnected mode with a local control plane. In disconnected mode: same Azure portal experience, same APIs and ARM templates, but running locally. Updates are applied via hand-carry — download bits and license manifests from an Azure region, bring to the datacenter, sideload. License renewals require no internet connectivity. GA expected early 2026.
Rack-Aware Cluster (Public Preview)
A new resiliency option: deploy a single logical Azure Local cluster across two physical racks in two separate computer rooms, each with fully redundant power and networking. Workloads can be pinned to specific racks via availability groups to meet compliance requirements. Built on Storage Spaces Direct replication — if one rack goes down, workloads recover on the other rack automatically.
Network Security Groups (GA)
NSG support is now generally available for Azure Local VMs: full 5-tuple inbound/outbound allow/deny rules at the VM network interface level. Rules can be assigned per-VM or per-network for flexible workload segmentation and threat surface reduction.
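The rule model mirrors cloud NSGs. The sketch below uses the generic `az network nsg` commands to illustrate a 5-tuple rule; the Azure Local-specific command group may differ, so treat this as illustrative rather than the exact syntax for your cluster:

```shell
# Illustrative 5-tuple rule using the generic Azure NSG commands.
# Resource group, NSG name, and address ranges are placeholders.
az network nsg create --resource-group "rg-hybrid-prod" --name "nsg-web-tier"

az network nsg rule create \
  --resource-group "rg-hybrid-prod" \
  --nsg-name "nsg-web-tier" \
  --name "allow-https-inbound" \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes "10.10.0.0/24" \
  --source-port-ranges "*" \
  --destination-address-prefixes "*" \
  --destination-port-ranges 443
```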
Local Identity Without Active Directory (Preview)
A significant shift from the traditional on-premises model: Azure Local now supports identity management without Active Directory domain join, using Azure Key Vault as the secure store for keys, certificates, and credentials — automatically backed up to the cloud. Nodes do not need to be domain-joined before or after deployment. GA is expected soon.
Azure Migrate (GA)
Lift-and-shift migration from VMware to Azure Local is now generally available through Azure Migrate. The agent-based migration path keeps data on-premises while managing the replication and cutover process entirely through the Azure portal — no re-architecture of applications required.
Edge RAG Refreshed Preview
Azure Local's Edge RAG capability received major improvements: 100× faster ingestion of live-streamed images, 5× faster hybrid search query execution, multi-modal search, improved document/table/image parsing via OCR, and SharePoint integration. Relevant for manufacturing, retail, and healthcare AI scenarios that process high volumes of unstructured local data.
GSK Customer Story
Mo Khalid from GlaxoSmithKline (70,000 employees) presented their Azure Local deployment across manufacturing sites and R&D labs globally. Key outcomes: same Azure security policies (300+) applied on-premises identically to cloud, same CI/CD pipelines, self-service model for scientists and developers, smart manufacturing digital twin architecture (factory integration layer → cloud for modeling → downstream inference), and AKS cluster deployment in hours instead of months. GSK's takeaway: "Azure Local extended the management plane — every site became a subscription with resource groups, and we didn't have to change any of our pipelines."
Session recording: BRK147 "What is New in Azure Local" from Microsoft Ignite November 18–21, 2025. Presented by Dean Prone, Nina Gowder, Kim Lam, and Mohamed Khalid (GSK).
🚨 Migrating from 22H2
There is no in-place upgrade path from Azure Stack HCI 22H2 to Azure Local 23H2. Migration requires a fresh deployment, followed by workload migration. The recommended approach for live production clusters:
- Deploy a new 23H2 cluster alongside the existing 22H2 cluster (parallel infrastructure)
- Migrate VMs using Storage Migration Service or live migration over a dedicated network
- Validate workloads on the new cluster for a defined period
- Decommission the 22H2 cluster and re-use hardware for cluster expansion or refresh
MCT insight: In every 23H2 migration workshop I've run across EMEA, the biggest blocker is documentation debt — organizations don't have accurate records of their 22H2 VM configurations, network settings, or custom scripts. Start your migration project with a full configuration export before touching any infrastructure.