
Azure Local 23H2: Complete Architecture Guide for Enterprise Deployments

👤 Mohamed Dyabi | 📅 March 2025 | ⏰ 15 min read

☁️ What Is Azure Local 23H2?

Azure Local (formerly Azure Stack HCI) 23H2 represents a fundamental architectural shift. Where earlier releases treated on-premises HCI as a separately managed platform, 23H2 moves the entire control plane into Azure Arc — meaning your cluster is registered, configured, and orchestrated entirely through Azure Resource Manager from day one.

This is not a minor update. The deployment experience, update cadence, lifecycle management, and integration surface have all been redesigned around the principle that your on-premises cluster is simply a remote Azure resource that happens to run hardware you own.

💡

Key insight: In 23H2, there is no standalone HCI management console. Everything — cluster creation, updates, VM placement, monitoring — is driven through Azure portal, ARM templates, or Azure CLI. If you're used to Windows Admin Center as your primary management tool, the workflow has changed significantly.

🔄 Key Changes from 22H2

Understanding what changed helps you avoid the most common 23H2 deployment failures in the field:

  • Deployment via Azure Arc: The cluster bootstrap process now uses a deploymentSettings ARM resource. Manual cluster creation through WAC or PowerShell-only workflows is deprecated.
  • New Lifecycle Manager: Updates — OS, firmware, drivers — are orchestrated through the cloud-driven Lifecycle Manager. You no longer patch nodes independently.
  • Built-in observability: Azure Monitor, Insights for Azure Local, and Log Analytics are now first-class, built-in — not optional add-ons requiring manual agent configuration.
  • Stretched cluster changes: Stretched cluster support has been removed in 23H2. For cross-site DR scenarios, Microsoft's guidance is to replicate workloads with Azure Site Recovery instead.
  • Network intent-based configuration: Network ATC is mandatory for production deployments, replacing manual NIC teaming and vSwitch configuration.
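To make the Lifecycle Manager change concrete, updates in 23H2 are driven by the solution-update cmdlets on the cluster itself rather than per-node patching. A minimal sketch, run from a cluster node with the 23H2 update tooling in place (output columns follow Microsoft's documented examples):

```powershell
# List updates the Lifecycle Manager has discovered for this cluster:
Get-SolutionUpdate | Format-Table DisplayName, Version, State -AutoSize

# Check overall update-environment readiness before starting a run:
Get-SolutionUpdateEnvironment | Format-Table State, HealthState -AutoSize

# Kick off the full update train (OS, firmware, and drivers together):
Get-SolutionUpdate | Where-Object State -eq "Ready" | Start-SolutionUpdate
```

The key design point is that a single update run covers the whole cluster; the orchestrator drains, patches, and resumes nodes one at a time.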

🏗️ Architecture Deep Dive

A production Azure Local 23H2 cluster consists of several interconnected planes that must be designed together rather than independently:

Network Design Patterns

23H2 enforces Network ATC intents — declarative network configurations that the OS validates and applies consistently across all nodes. You define intents, not individual NIC configurations.

The three primary intent patterns for production deployments:

| Intent Pattern          | NICs Required   | Best For                   | Bandwidth                   |
|-------------------------|-----------------|----------------------------|-----------------------------|
| Fully Converged         | 2 × 25GbE RDMA  | Small clusters (2–3 nodes) | Shared across all traffic   |
| Storage + Compute/Mgmt  | 4 × 25GbE       | Most production scenarios  | Dedicated storage bandwidth |
| Fully Disaggregated     | 6 × 25GbE       | High-throughput workloads  | Full isolation per intent   |
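As an illustration, the four-NIC "Storage + Compute/Mgmt" pattern is expressed as two intents. A sketch using placeholder adapter names (NIC1–NIC4 — match your own NIC naming):

```powershell
# Management and compute share one pair of adapters:
Add-NetIntent -Name ComputeMgmt -Management -Compute -AdapterName "NIC1","NIC2"

# Storage gets its own dedicated RDMA pair:
Add-NetIntent -Name Storage -Storage -AdapterName "NIC3","NIC4"

# Confirm both intents reach the Provisioned state on every node:
Get-NetIntent
Get-NetIntentStatus
```

Network ATC then derives vSwitch, VLAN, QoS, and RDMA settings from the intents and enforces them identically across all nodes.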
⚠️

RDMA requirement: Storage traffic in 23H2 requires RDMA-capable NICs (iWARP or RoCE v2). Standard Ethernet NICs will fail validation. For switchless deployments (2–3 nodes), direct-connect RDMA cross-cables between nodes eliminate the need for a ToR switch for storage traffic.
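Before running validation, it is worth confirming the OS actually sees the adapters as RDMA-capable; a quick check (output will vary by NIC vendor and driver):

```powershell
# RDMA capability and enablement as the OS reports it:
Get-NetAdapterRdma | Format-Table Name, InterfaceDescription, Enabled -AutoSize

# SMB's view — storage traffic only uses RDMA where RdmaCapable is True:
Get-SmbClientNetworkInterface |
    Format-Table FriendlyName, RdmaCapable, Speed -AutoSize
```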

Storage Configuration

Storage Spaces Direct (S2D) in 23H2 defaults to 3-way mirror resiliency for clusters of 3 or more nodes, providing the best balance of performance, availability, and capacity for most enterprise workloads.

PowerShell — Verify S2D pool health post-deployment
Get-StoragePool -FriendlyName "S2D on ClusterName" | 
  Get-StorageTier | 
  Format-Table FriendlyName, ResiliencySettingName, 
    NumberOfDataCopies, PhysicalDiskRedundancy -AutoSize
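To create a volume that matches this three-way mirror default explicitly, a sketch (pool, volume name, and size are placeholders for your environment):

```powershell
# PhysicalDiskRedundancy 2 = three data copies = tolerate two simultaneous
# disk (or node) failures. CSVFS_ReFS is the recommended file system for S2D.
New-Volume -StoragePoolFriendlyName "S2D on ClusterName" `
    -FriendlyName "Volume01" -FileSystem CSVFS_ReFS `
    -ResiliencySettingName Mirror -PhysicalDiskRedundancy 2 -Size 2TB
```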

🔗 Azure Arc Integration

Azure Arc is no longer optional in 23H2 — it is the only supported management plane. Every node in the cluster must be Arc-connected before deployment begins, and the Arc registration must occur using a service principal with specific RBAC permissions.
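A sketch of creating that service principal and granting the required roles with Az PowerShell (display name and resource group are placeholders; the role names match the pre-deployment checklist later in this article):

```powershell
# Create the deployment service principal (name is a placeholder):
$sp = New-AzADServicePrincipal -DisplayName "azlocal-deploy-sp"

# Scope the role assignments to the target resource group:
$scope = (Get-AzResourceGroup -Name "rg-azlocal").ResourceId

foreach ($role in "Contributor",
                  "Azure Connected Machine Onboarding",
                  "Azure Connected Machine Resource Administrator") {
    New-AzRoleAssignment -ApplicationId $sp.AppId `
        -RoleDefinitionName $role -Scope $scope
}
```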

The Arc integration provides immediate access to:

  • Azure Policy: Apply compliance policies to VMs running on-premises identically to cloud VMs
  • Microsoft Defender for Servers: Unified threat protection across the hybrid estate
  • Azure Monitor / VM Insights: Performance counters, dependency maps, and alerting
  • Azure Update Manager: Unified patching for OS, extensions, and application updates
  • Azure Backup integration: MABS-free backup to Recovery Services Vault
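Once registered, the cluster and its nodes are ordinary ARM resources. A quick way to confirm from a management workstation, assuming the Az and Az.ConnectedMachine modules and a placeholder resource-group name:

```powershell
# Azure Local clusters visible in the current subscription:
Get-AzResource -ResourceType "Microsoft.AzureStackHCI/clusters" |
    Format-Table Name, ResourceGroupName, Location -AutoSize

# Arc-connected machines (the cluster nodes) and their agent status:
Get-AzConnectedMachine -ResourceGroupName "rg-azlocal" |
    Format-Table Name, Status, AgentVersion -AutoSize
```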

🚀 Deployment Workflow

The 23H2 deployment follows a strict five-phase sequence. Skipping or partially completing any phase results in a failed deployment state that can be difficult to recover cleanly:

  1. Hardware validation: Run the Environment Checker validation (the 23H2 equivalent of Test-Cluster) from the Azure portal before touching any configuration.
  2. Network configuration: Apply Network ATC intents on all nodes. Validate with Get-NetIntent and confirm all intents reach Provisioned state.
  3. Azure Arc registration: Register all nodes using Invoke-AzStackHciArcInitialization with the correct subscription, resource group, and service principal credentials.
  4. Cluster creation: Submit the deploymentSettings ARM resource via Azure portal or Bicep template. Monitor progress via Azure portal — this phase takes 60–90 minutes.
  5. Validation and workload onboarding: Verify cluster health, S2D pool status, and Arc connectivity before deploying any Arc VM or AKS workloads.
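For reference, the Arc registration in phase 3 looks roughly like this on each node; all IDs are placeholders, and the parameter names follow Microsoft's deployment documentation:

```powershell
# Run on every node before submitting deploymentSettings.
$sub      = "00000000-0000-0000-0000-000000000000"   # subscription ID
$tenant   = "00000000-0000-0000-0000-000000000000"   # tenant ID
$armToken = (Get-AzAccessToken).Token
$account  = (Get-AzContext).Account.Id

Invoke-AzStackHciArcInitialization -SubscriptionID $sub `
    -ResourceGroup "rg-azlocal" -TenantID $tenant `
    -Region "westeurope" -Cloud "AzureCloud" `
    -ArmAccessToken $armToken -AccountID $account
```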
💡

Field tip: Always run the Azure Local Environment Checker (Invoke-AzStackHciEnvironmentChecker) at least 48 hours before your deployment window. It validates NTP sync, DNS resolution, hardware compatibility, firewall rules, and proxy settings, which cover the most common deployment blockers in EMEA environments.

🧮 Validated Hardware

Azure Local 23H2 requires hardware from the Azure Local Solutions Catalog. Choosing non-validated hardware is the single biggest cause of deployment failures and support escalations.

For Dell-based deployments, the validated options at the time of writing:

| Platform                 | Nodes | CPU                  | Best For                              |
|--------------------------|-------|----------------------|---------------------------------------|
| Dell APEX Cloud Platform | 2–16  | Intel / AMD          | Enterprise, full lifecycle management |
| Dell PowerEdge R750      | 2–8   | Xeon Ice Lake        | Mid-range general workloads           |
| Dell PowerEdge R760      | 2–16  | Xeon Sapphire Rapids | High-density, AI-ready workloads      |

📄 Deploying via Azure Portal (Step-by-Step)

Microsoft's official deployment path for Azure Local is the Azure portal wizard, documented at Deploy Azure Local using the Azure portal. The wizard walks through a series of tabs — each must be completed before moving to the next.

Basics Tab

Select your Subscription and Resource Group, enter an Instance name, and choose the Region where Azure resources will be stored. Under Cluster options choose Standard or Rack aware (for two-rack HA deployments). Select your Identity provider — either Active Directory or the newer Local Identity with Azure Key Vault option (preview). Then use + Add machines to select your Arc-registered nodes.

Once machines are added, the portal automatically installs Arc extensions on each node — wait for all to reach Ready status before proceeding. Run Validate selected machines to confirm OS version consistency, extension versions, and symmetric NIC layout across all nodes. Finally, create or select an Azure Key Vault for storing BitLocker keys, local admin credentials, and cluster secrets.

⚠️

Key requirement: Machines must not be joined to Active Directory before deployment. Pre-joining to AD is a common mistake that blocks the deployment wizard from completing.

Configuration Tab

Choose New configuration to specify all settings manually, or load from a Template spec stored in your subscription. Template specs are valuable for repeatable multi-site deployments — define once, deploy consistently across all locations.
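Publishing an exported deployment template as a template spec is a one-liner with Az PowerShell; a sketch with placeholder names and paths:

```powershell
# Store the deployment template as a versioned, reusable template spec:
New-AzTemplateSpec -Name "azlocal-site-template" -Version "1.0" `
    -ResourceGroupName "rg-templates" -Location "westeurope" `
    -TemplateFile ".\azlocal-deployment.json"
```

Subsequent sites then load version 1.0 from the Configuration tab instead of re-entering every setting by hand.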

Networking Tab

For multi-node systems, choose whether storage traffic uses a switch (Network switch for storage traffic) or direct cross-connect cables (No switch for storage — switchless, for 2–3 node clusters). Then select your network traffic grouping pattern:

| Pattern                 | Intents Created              | Best For                           |
|-------------------------|------------------------------|------------------------------------|
| Group all traffic       | 1 converged intent           | Small clusters, cost-optimised     |
| Group mgmt + compute    | Mgmt+Compute / Storage       | Most production deployments        |
| Group compute + storage | Management / Compute+Storage | High-performance storage focus     |
| Custom configuration    | Per-traffic intents          | Specific NIC isolation requirements|

Storage, Management & Review Tabs

The Storage tab configures S2D cache and capacity drive assignments. The Management tab sets the cluster witness, drift control settings, and Lifecycle Manager schedule. Finally, Review + Create validates all settings and submits the deploymentSettings ARM resource — deployment typically takes 45–90 minutes to complete.
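A minimal post-deployment health pass from a cluster node might look like the following (all cmdlets are available on the 23H2 OS):

```powershell
# All nodes joined and up?
Get-ClusterNode | Format-Table Name, State -AutoSize

# S2D pool created and healthy?
Get-StoragePool -IsPrimordial $false |
    Format-Table FriendlyName, HealthStatus, OperationalStatus -AutoSize

# Azure registration and Arc connectivity status:
Get-AzureStackHCI
```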

💡

Reference: Full step-by-step documentation with screenshots is available at learn.microsoft.com — Deploy Azure Local using the Azure portal.

✅ Pre-Deployment Checklist

  • ✅ All nodes running the same build of the Azure Stack HCI OS, version 23H2 (the dedicated Azure Local operating system, not a standard Windows Server installation)
  • ✅ RDMA-capable NICs installed and firmware updated to validated version
  • ✅ NTP synchronized to the same source on all nodes (max 1s drift)
  • ✅ DNS resolution working for all Azure endpoints (run Environment Checker)
  • ✅ Outbound HTTPS (port 443) open to required Azure endpoints — see firewall requirements
  • ✅ Service principal created with: Contributor + Azure Connected Machine Onboarding + Azure Connected Machine Resource Administrator on target resource group
  • ✅ Active Directory: dedicated OU created with Group Policy inheritance blocked, and the deployment user account pre-staged
  • ✅ Validated hardware from Azure Local Solutions Catalog
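Several of these checks can be scripted ahead of the Environment Checker run; a sketch using only built-in tooling:

```powershell
# NTP: confirm source and offset (all nodes should agree within ~1 second):
w32tm /query /status

# DNS: core Azure endpoints must resolve from every node:
Resolve-DnsName login.microsoftonline.com
Resolve-DnsName management.azure.com

# Outbound HTTPS (443) reachable?
Test-NetConnection login.microsoftonline.com -Port 443 -InformationLevel Quiet
```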
Tags: Azure Local · 23H2 · Azure Arc · S2D · Network ATC · RDMA · HCI · Dell PowerEdge