MuleSoft Deployment Models: CloudHub vs. Runtime Fabric

December 5, 2025 | Insights

In the era of hybrid multi-cloud and stringent regulatory oversight, the decision of where to run your MuleSoft integration workloads is no longer just an operational choice; it is a strategic architectural commitment that affects resilience, latency, compliance, cost, and team velocity for years.

As of November 2025, MuleSoft Anypoint Platform offers two production-hardened, enterprise-grade deployment models:

  1. CloudHub (1.0 and the now-dominant CloudHub 2.0) – Salesforce-operated, fully managed iPaaS
  2. Anypoint Runtime Fabric (RTF) – Kubernetes-native, customer-controlled runtime for on-premises, private cloud, or any public cloud

This expanded technical deep-dive compares both models across the critical dimensions that enterprise architects, platform engineering teams, and CISOs actually evaluate during governance reviews.

1. Architectural Foundations Under the Hood

CloudHub 2.0: Container-Native but Fully Managed

CloudHub 2.0 (generally available since May 2022 and now carrying >90 % of new workloads) runs on isolated Amazon EKS clusters per geographic region, managed entirely by Salesforce. Key technical facts:

  • Mule applications are packaged as OCI-compliant Docker images (mule-app:4.x.x) with embedded Mule runtime and OpenJDK 17/21
  • Each replica is a Kubernetes Deployment with exact vCore-to-millicores mapping (1 vCore ≈ 1,000 mCPU + 4 Gi memory guaranteed)
  • Control plane (API, Visualizer, Runtime Manager) remains multi-tenant but data plane is single-tenant per customer environment
  • Regional control plane high-availability via three AZs with etcd replication

Runtime Fabric: True Infrastructure Agnosticism

RTF consists of four core controllers deployed via a single Helm chart:

  • rtf-controller-manager – reconciles Mule deployments and API Gateway instances
  • rtf-agent – runs on every worker node, phones home to Anypoint Control Plane over mutual TLS
  • mule-clusterip-service – provides stable internal DNS for inter-app communication
  • Optional istio-operator for zero-trust service mesh

Supported substrates (officially certified as of Nov 2025):

  • Red Hat OpenShift 4.12–4.16 (on-premises, ROSA, ARO, OCP on AWS)
  • VMware Tanzu Kubernetes Grid 2.5+
  • Amazon EKS 1.28–1.31 (with IRAP, GovCloud, Outposts variants)
  • Azure AKS 1.28–1.30 and Azure Red Hat OpenShift
  • Google Kubernetes Engine and Anthos clusters
  • Bare-metal via KubeSpray + Longhorn CSI (for air-gapped)

2. Scaling Granularity and Performance Predictability

CloudHub 2.0 Scaling Mechanics

  • Minimum replica size: 0.25 vCore (250 mCPU / 1 Gi)
  • Maximum per application: 80 replicas × 10 vCore = 800 vCore equivalent
  • Autoscaling options:
    1. Horizontal Pod Autoscaler (standard CPU/memory)
    2. KEDA 2.12+ scalers for Kafka, RabbitMQ, Azure Service Bus, AWS SQS, Prometheus metrics, etc.
    3. Scheduled scaling policies (e.g., Black Friday ramp-up)
  • Pre-warmed pool maintains ~15 % spare capacity per region to reduce cold-start latency
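The replica-sizing rules above can be made concrete with a small sketch. The following Python helper is purely illustrative (it is not part of any Anypoint tooling); it validates a requested CloudHub 2.0 deployment against the documented limits of 0.25 vCore per replica minimum, 10 vCore per replica maximum, and 80 replicas per application:

```python
# Illustrative validator for CloudHub 2.0 replica sizing limits
# (0.25 vCore minimum, 10 vCore max per replica, 80 replicas max per app).

MIN_VCORE = 0.25
MAX_VCORE_PER_REPLICA = 10
MAX_REPLICAS = 80

def validate_deployment(vcore_per_replica: float, replicas: int) -> float:
    """Return total vCore footprint, or raise ValueError if a limit is exceeded."""
    if vcore_per_replica < MIN_VCORE:
        raise ValueError(f"Replica size below {MIN_VCORE} vCore minimum")
    if vcore_per_replica > MAX_VCORE_PER_REPLICA:
        raise ValueError(f"Replica size above {MAX_VCORE_PER_REPLICA} vCore maximum")
    if replicas > MAX_REPLICAS:
        raise ValueError(f"Replica count above {MAX_REPLICAS} maximum")
    return vcore_per_replica * replicas

# Maximum per-application footprint: 80 replicas x 10 vCore
print(validate_deployment(10, 80))  # 800
```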

Runtime Fabric Scaling Mechanics

  • Resource profiles allow sub-vCore granularity (e.g., 0.1 vCore for lightweight Flex Gateway instances)
  • Vertical pod autoscaling (VPA) supported alongside HPA
  • Cluster Autoscaler + Karpenter (AWS) or Cluster API can add nodes in <90 seconds
  • Observed p99 latency on dedicated Cinder/Portworx volumes: 4–8 ms for intra-cluster calls vs. 18–35 ms on CloudHub across AZs

Real-world benchmark (Oct 2025): A Tier-1 Australian bank processing 180,000 ISO 20022 messages/second achieved 7 ms p99 end-to-end latency using RTF on dedicated OpenShift with Cisco ACI CNI and Multus for separate data/control networks.
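For readers reproducing latency comparisons like the ones above, p99 figures can be computed from raw samples with nothing beyond the Python standard library. The sample data here is synthetic; in practice the samples would come from your APM tooling or Prometheus histograms:

```python
import random
import statistics

# Synthetic intra-cluster latency samples in milliseconds (illustrative only).
random.seed(42)
samples = [random.gauss(mu=5.5, sigma=1.0) for _ in range(10_000)]

# statistics.quantiles with n=100 returns the 1st..99th percentile cut points;
# index 98 is the 99th percentile (p99).
p99 = statistics.quantiles(samples, n=100)[98]
print(f"p99 latency: {p99:.1f} ms")
```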

Ready to Choose the Optimal MuleSoft Deployment Model for Your Enterprise?

Selecting between CloudHub and Runtime Fabric demands a nuanced understanding of your workload demands, regulatory constraints, scalability needs, and operational maturity—whether prioritizing managed simplicity for rapid API innovation or sovereign control for regulated, low-latency integrations. Sama Integrations has deployed over 120 CloudHub and 65 RTF environments since 2020, guiding clients through hybrid strategies that balance velocity and security. Let’s evaluate your architecture, map migration paths, and implement a deployment model that drives performance, compliance, and cost efficiency.

3. Networking and Hybrid Connectivity Patterns

CloudHub 2.0 Connectivity Matrix (2025)

| Connectivity Type | Mechanism | Static IP | Latency Impact |
| --- | --- | --- | --- |
| Inbound API traffic | Dedicated Load Balancer (DLB) | Yes | +2–5 ms |
| Outbound to on-prem | Anypoint VPN or AWS Transit Gateway | Yes (NAT) | +15–40 ms |
| Private SaaS (Salesforce, SAP) | AWS PrivateLink / Azure Private Link | N/A | Direct VPC |
| VPC peering to customer AWS | Supported | Yes | Low |

Runtime Fabric – Full Network Sovereignty

Enterprises can implement advanced patterns impossible on CloudHub:

  • Direct SR-IOV / DPDK passthrough for sub-millisecond CICS or IMS Connect
  • Calico eBPF or Cilium with Kubernetes NetworkPolicy at pod granularity
  • Istio Ambient Mesh (zero-sidecar mode, GA in RTF 1.16, Sept 2025)
  • Equinix Fabric / Megaport direct interconnect to 300+ data centers with <1 ms latency

4. Security, Encryption, and Compliance Deep Dive

| Requirement | CloudHub 2.0 | Runtime Fabric |
| --- | --- | --- |
| Customer-managed KMS | Only via AWS/Azure integrated secrets manager | Full BYOK with HashiCorp Vault, Azure Key Vault, GCP KMS, Thales, etc. |
| FIPS 140-2/3 validated runtime | Yes, in GovCloud regions | Yes, on RHEL 9 FIPS nodes + FIPS-enabled Mule images |
| Tokenization / data masking at rest | Limited | Full, via Voltage, Protegrity, or custom connectors |
| Air-gapped / dark site | Not possible | Fully supported (offline image registry) |
| Zero-trust mTLS everywhere | Planned for 2026 | Available today via Istio or Linkerd |

Result: Every APRA CPS 234, MAS TRM, DORA, and Schrems II-bound client we work with in 2025 runs RTF in private subnets with customer-managed keys.

5. High Availability and Disaster Recovery

CloudHub 2.0 HA/DR

  • Multi-AZ by default
  • Region failover requires redeployment (automated via ARM or Terraform)
  • RPO ≈ 5–15 min, RTO ≈ 10–30 min depending on replica count

Runtime Fabric HA/DR

  • Active-active across two geographic sites using Longhorn synchronous replication + Velero + Crossplane
  • Observed RPO < 30 seconds and RTO < 3 minutes in production drills (European investment bank, 2025)

6. Day-2 Operations and Team Skill Impact

| Operational Task | CloudHub 2.0 Effort | Runtime Fabric Effort |
| --- | --- | --- |
| OS patching & CVE remediation | Zero | Customer (or managed service) |
| Mule runtime patching | Automatic | Pull new image + rolling update |
| Certificate rotation | Automatic | Cert-manager + Venafi integration |
| Backup & restore | Point-in-time via UI | Velero + MinIO/Vault |
| Custom Prometheus/Grafana | Limited | Full freedom |

Many Fortune-500 clients mitigate RTF operational overhead by leveraging our Managed Integration Services, which include 24×7 MuleSoft-certified SRE coverage, automated patching pipelines, and chaos-engineering testing.

7. Cost Modeling – Real Numbers from 2025 Engagements

| Workload Profile | CloudHub 2.0 Annual Cost | RTF Annual Cost (EKS + license) | TCO Delta |
| --- | --- | --- | --- |
| 40 vCores steady-state (banking) | $1.32 M | $840 k | –36 % |
| 10 vCores bursty (retail) | $480 k | $720 k | +50 % |
| 200 vCores high-throughput (telco) | $5.9 M | $3.1 M | –47 % |

Note: RTF becomes dramatically cheaper above ~25 vCores steady utilization or when leveraging committed-use discounts and your existing Kubernetes contracts.
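The break-even dynamic above can be sketched numerically. The per-vCore rates and fixed cluster cost below are assumptions reverse-engineered from the sample figures to make the crossover visible; they are not MuleSoft list prices:

```python
# Illustrative TCO model with assumed rates (not published pricing):
# CloudHub modeled as purely per-vCore; RTF carries a fixed platform/ops
# baseline plus a lower per-vCore rate on your own Kubernetes capacity.
CLOUDHUB_PER_VCORE = 33_000   # USD/year, assumed
RTF_PER_VCORE = 13_000        # USD/year, assumed
RTF_FIXED = 500_000           # USD/year cluster + ops baseline, assumed

def cloudhub_cost(vcores: float) -> float:
    return CLOUDHUB_PER_VCORE * vcores

def rtf_cost(vcores: float) -> float:
    return RTF_FIXED + RTF_PER_VCORE * vcores

# Break-even utilization is where the two cost lines cross.
break_even = RTF_FIXED / (CLOUDHUB_PER_VCORE - RTF_PER_VCORE)
print(f"Break-even: ~{break_even:.0f} vCores")  # ~25 vCores
```

Under these assumed rates, RTF is more expensive at 10 vCores but markedly cheaper at 200, matching the pattern in the table above.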

8. Migration Paths and Real-World Journeys

We have executed four primary migration patterns in 2024–2025:

  1. CloudHub → RTF (lift-and-shift): 18 clients, average 6 weeks
  2. CloudHub 1.0 → CloudHub 2.0 (replatform): 31 clients, average 72 hours per app
  3. RTF → CloudHub 2.0 (cloud exit): 3 clients (usually post-acquisition)
  4. Hybrid model (critical workloads on RTF, digital channels on CloudHub): 9 clients

All migrations are performed non-disruptively using Anypoint VPC peering + Flex Gateway in dual-run mode.

Decision Framework – 2025 Edition

| Decision Factor | Favor CloudHub 2.0 | Favor Runtime Fabric |
| --- | --- | --- |
| Time-to-first-API | < 2 hours | 2–10 days (cluster provisioning) |
| Team Kubernetes maturity | Low | Medium–High |
| Data residency / sovereignty | Flexible regions only | Full control |
| Expected steady-state utilization | < 30 % | > 50 % |
| Regulatory (PCI, FedRAMP High, DORA) | Limited | Required |
| Need for < 10 ms p99 latency | Difficult | Achievable |
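One way to operationalize a framework like this is a simple weighted score. The factor names, weights, and thresholds below are illustrative assumptions, not a MuleSoft or vendor standard; score each factor from -2 (strongly favors CloudHub 2.0) to +2 (strongly favors Runtime Fabric):

```python
# Hypothetical decision helper: weight each factor, sum the signed scores,
# and map the total to a recommendation. Weights are illustrative.
FACTORS = {
    "time_to_first_api":   1.0,
    "k8s_maturity":        1.5,
    "data_sovereignty":    2.0,
    "steady_utilization":  1.5,
    "regulatory_pressure": 2.0,
    "latency_under_10ms":  1.0,
}

def recommend(scores: dict) -> str:
    """scores: factor -> value in [-2, +2]; positive favors Runtime Fabric."""
    total = sum(FACTORS[k] * v for k, v in scores.items())
    if total > 2:
        return "Runtime Fabric"
    if total < -2:
        return "CloudHub 2.0"
    return "Hybrid (split workloads)"

# Example: regulated bank with a strong platform team and low-latency needs.
print(recommend({
    "time_to_first_api": -1, "k8s_maturity": 2, "data_sovereignty": 2,
    "steady_utilization": 1, "regulatory_pressure": 2, "latency_under_10ms": 2,
}))  # Runtime Fabric
```

A scorer like this is only a conversation starter; the governance review itself still has to weigh the factors qualitatively.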

Conclusion: There Is No Default Winner

The correct answer in 2025 is increasingly “both”.

Leading enterprises we work with adopt a deliberate hybrid strategy:

  • Public-facing APIs, event-driven workloads, and innovation teams → CloudHub 2.0
  • Core banking, payment processing, mainframe connectivity, and regulated data → Runtime Fabric on dedicated OpenShift/EKS clusters

This topology delivers the best of both worlds: velocity where you need it, control where you must have it.

Whether you are designing a greenfield integration platform, rationalizing an existing MuleSoft footprint, or preparing for a 2026 audit, the decision between CloudHub and Runtime Fabric deserves rigorous architectural review.

Our MuleSoft practice has completed over 120 CloudHub and 65 RTF deployments since 2020, including several of the largest regulated workloads in APAC and EMEA.

If you would like an independent evaluation of your current or planned deployment model, our principal architects offer complimentary 90-minute architecture workshops.

Visit samaintegrations.com today to begin the conversation.
