🛡️
Resiliency Requirements
High availability and storage fault tolerance
Number of nodes that can fail while the cluster keeps running.
Number of simultaneous storage failures the cluster can survive without data loss.
Reserve headroom on all resources for future growth.
📊 Node count is auto-calculated from your workloads, growth buffer & HA policy. Result shown in Step 4.
💾
Storage Preferences
Storage media type and per-node drive configuration
Excluding NVMe cache drives.
⚙️
Additional Options
Reserve extra compute for Kubernetes node pools.
Software Defined Networking — required for micro-segmentation.
💡
Sizing Reference
Storage Fault Tolerance
1 — Survives 1 simultaneous storage failure. For 2-node clusters the sizer recommends a 2-way mirror.
2 (recommended) — Survives 2 simultaneous storage failures. For 3+ node clusters the sizer recommends a 3-way mirror.
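A minimal sketch of what these mirror levels mean for raw capacity, assuming standard Storage Spaces Direct mirroring: a 2-way mirror keeps 2 copies of every extent (survives 1 failure), a 3-way mirror keeps 3 copies (survives 2). The function name is illustrative, not part of the sizer.

```python
# Hypothetical helper: raw capacity required for a given usable capacity
# under mirror resiliency. copies=2 for 2-way mirror, copies=3 for 3-way.
def raw_tib_required(usable_tib: float, copies: int) -> float:
    return usable_tib * copies

# 10 TiB usable under a 3-way mirror needs 30 TiB raw (33% efficiency).
print(raw_tib_required(10, 3))  # 30.0
```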
OS & Hyper-V Overhead
Reserve 4 GB RAM per node for OS + Hyper-V + S2D. Reserve an additional 4 GB per TiB of NVMe cache.
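The overhead rule above can be sketched as a small calculation; the function name and structure are illustrative, assuming the reserve scales linearly with NVMe cache size as stated.

```python
# Hypothetical per-node memory overhead calculation:
# 4 GB base (OS + Hyper-V + S2D) plus 4 GB per TiB of NVMe cache.
def node_memory_overhead_gb(nvme_cache_tib: float) -> float:
    BASE_GB = 4            # fixed OS + Hyper-V + S2D reserve
    PER_CACHE_TIB_GB = 4   # extra reserve per TiB of NVMe cache
    return BASE_GB + PER_CACHE_TIB_GB * nvme_cache_tib

# A node with 2 TiB of NVMe cache reserves 4 + 4*2 = 12 GB.
print(node_memory_overhead_gb(2.0))  # 12.0
```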
V:P Ratio Reference
General: 6:1
SQL/SAP: 2–3:1
AVD Light: 6:1
AVD Heavy: 3:1
AKS / Critical: 1:1
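A minimal sketch of how a V:P (virtual-to-physical CPU) ratio converts vCPU demand into physical cores, using the ratios from the table above. The dictionary keys and function name are assumptions for illustration; the SQL/SAP entry uses the midpoint of the 2–3:1 range.

```python
import math

# Ratios from the V:P reference table (vCPUs per physical core).
VP_RATIOS = {
    "general": 6,       # General: 6:1
    "sql_sap": 2.5,     # SQL/SAP: 2–3:1 (midpoint)
    "avd_light": 6,     # AVD Light: 6:1
    "avd_heavy": 3,     # AVD Heavy: 3:1
    "aks_critical": 1,  # AKS / Critical: 1:1
}

def physical_cores_needed(vcpus: int, workload: str) -> int:
    # Round up: a fractional core still occupies a whole physical core.
    return math.ceil(vcpus / VP_RATIOS[workload])

print(physical_cores_needed(48, "general"))       # 8
print(physical_cores_needed(48, "aks_critical"))  # 48
```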
💻
Workloads
Enter the details of your on-premises workloads to generate a sizing recommendation.
💻
Get started by creating your workload
🌐
Network Design
Choose the topology for your Azure Local cluster
Topology
✓
🔗
Switchless (Direct-Connect)
2–3 nodes · Direct RDMA cables between nodes, no ToR switch required for storage traffic.
- Storage RDMA via direct cross-cables between nodes
- Management/compute uses an external switch
- Lowest cost — ideal for 2–3 node deployments
- Not scalable beyond 3 nodes without rewiring
2–3 Nodes · No Storage ToR Switch · Cost Optimized
🔀
Scalable (ToR Switch)
2–16 nodes · Redundant Top-of-Rack switches handle all traffic types.
- All traffic via two redundant ToR switches (no SPOF)
- Storage RDMA, management, and compute through switches
- Full scale-out: add nodes without cabling changes
- Required for 4+ nodes, SDN, and enterprise deployments
2–16 Nodes · Dual ToR Switches · iWARP / RoCE v2 · Recommended 4+
NIC Configuration
💡
Network Tips
When to use Switchless
Best for 2–3 node cost-sensitive deployments. If you plan to grow beyond 3 nodes, start with a ToR switch design.
iWARP vs RoCE v2
iWARP: RDMA over TCP — no special switch config needed.
RoCE v2: Lower latency but requires Priority Flow Control (PFC) and ECN on every switch.
East-West Rule
Storage (East-West) bandwidth must equal or exceed compute (North-South) bandwidth. Never pair 10 GbE storage with 25 GbE compute.
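The East-West rule above is a simple comparison; a minimal sketch, with an illustrative function name, could validate a proposed link-speed pairing.

```python
# Hypothetical check of the East-West rule: per-node storage (East-West)
# link speed must be >= compute (North-South) link speed, in GbE.
def link_speeds_valid(storage_gbe: float, compute_gbe: float) -> bool:
    return storage_gbe >= compute_gbe

print(link_speeds_valid(25, 25))  # True  (matched 25 GbE links)
print(link_speeds_valid(10, 25))  # False (10 GbE storage, 25 GbE compute)
```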