How VMware is Shaping the Future of Multi-Cloud with Project Monterey & AI at Explore 2025

As enterprises continue to adopt multi-cloud strategies, the need for advanced, secure, and performance-optimized infrastructure has never been greater. At VMware Explore 2025 in Las Vegas, VMware unveiled bold steps toward this future with Project Monterey and the deep integration of AI/ML workloads into the core of the modern datacenter. This evolution reflects a shift not just in technology but in how organizations design, operate, and scale infrastructure across clouds.

What is Project Monterey?

Initially announced as a next-generation architecture for modern applications, Project Monterey redefines infrastructure by offloading key functions from the CPU to data processing units (DPUs) on SmartNICs. These DPUs bring distributed compute capabilities directly to the network interface layer, enabling faster, more secure data flows and freeing CPU cycles for business-critical applications.

In 2025, VMware has taken this a step further by aligning Project Monterey with its AI & ML strategy, allowing organizations to run GPU-intensive, latency-sensitive, and security-enforced workloads closer to the edge or within distributed cloud environments.

AI, ML & the DPU Revolution

At VMware Explore 2025, sessions and keynotes revealed the company’s increased focus on AI-first architectures, with three core pillars:

  1. AI-Ready Infrastructure with vSphere & DPUs
    The combination of vSphere 8, DPUs, and Project Monterey enables the low-latency, high-throughput environments needed for AI inference and training. VMware showcased partnerships with NVIDIA (BlueField), AMD (Pensando), and Intel (IPU), underlining its commitment to hardware acceleration.
  2. Data Sovereignty & Federated Learning
    By embedding confidential-computing capabilities into DPUs, VMware supports federated AI use cases in which raw data never leaves the local datacenter and only model updates cross the boundary. This is crucial for industries like finance, healthcare, and government.
  3. Multi-Cloud AI Lifecycle with Aria & Tanzu
    Through VMware Aria Operations for Applications and Tanzu Application Platform, AI models can be trained, deployed, and monitored consistently across AWS, Azure, Google Cloud, and on-prem vSphere. This ensures observability, security, and cost optimization for AI across clouds.
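The lifecycle described in the third pillar can be sketched as a thin, cloud-agnostic abstraction. Everything below (class names, methods, the set of target clouds) is a hypothetical illustration for this article, not an actual Aria or Tanzu API:

```python
from dataclasses import dataclass

# Hypothetical sketch of a consistent AI-model lifecycle across clouds.
# NOT a real VMware Aria or Tanzu API; all names here are invented.

@dataclass
class ModelDeployment:
    model_name: str
    cloud: str          # e.g. "aws", "azure", "gcp", "on-prem"
    gpu_enabled: bool

class MultiCloudLifecycle:
    """One code path and one governance model, many target clouds."""

    SUPPORTED_CLOUDS = {"aws", "azure", "gcp", "on-prem"}

    def __init__(self) -> None:
        self.deployments: list[ModelDeployment] = []

    def deploy(self, model_name: str, cloud: str,
               gpu_enabled: bool = True) -> ModelDeployment:
        # Same call whether the target is a hyperscaler or on-prem vSphere.
        if cloud not in self.SUPPORTED_CLOUDS:
            raise ValueError(f"unsupported cloud: {cloud}")
        deployment = ModelDeployment(model_name, cloud, gpu_enabled)
        self.deployments.append(deployment)
        return deployment

    def observe(self) -> dict[str, int]:
        # Uniform observability: deployment count per cloud.
        counts: dict[str, int] = {}
        for d in self.deployments:
            counts[d.cloud] = counts.get(d.cloud, 0) + 1
        return counts
```

The point of the sketch is the single code path: the same deploy and observe calls apply whether the target is AWS, Azure, Google Cloud, or on-prem vSphere, which is what "consistent APIs and governance models" means in practice.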

Multi-Cloud, Redefined

In a multi-cloud world, enterprises face three major challenges: complexity, cost, and control. VMware’s 2025 approach, with Project Monterey at the core, addresses these challenges in several ways:

  • Distributed Security Policies enforced directly at the NIC level, reducing east-west threat vectors.
  • Improved Resource Efficiency by moving traditional hypervisor functions (e.g., NSX, storage I/O, telemetry) off the CPU.
  • Unified AI Fabric, where data, compute, and model orchestration work across cloud boundaries using consistent APIs and governance models.

Lab Tested: Monterey + AI + Tanzu = Performance Leap

During Explore 2025, VMware engineers demonstrated real-world benchmarks comparing traditional infrastructure with Project Monterey-enabled environments:

  • 20% improvement in GPU inference speed due to lower CPU contention.
  • Up to 40% reduction in latency for microservices deployed via Tanzu and served via DPU-enhanced NSX stacks.
  • 50% faster provisioning of AI pipelines across hybrid clouds.
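Taken at face value, the percentages above are easy to sanity-check. In the sketch below the baselines (100 ms request latency, 60-minute pipeline provisioning) are assumed purely for illustration, and "50% faster" is read as 1.5× throughput:

```python
# Back-of-the-envelope math on the Explore 2025 benchmark claims.
# Only the percentages come from the demo; the baseline figures
# (100 ms latency, 60 min provisioning) are assumed for illustration.

baseline_latency_ms = 100.0
baseline_provision_min = 60.0

# "Up to 40% reduction in latency"
latency_after_ms = baseline_latency_ms * (1 - 0.40)

# "50% faster provisioning" read as 1.5x throughput, i.e. time / 1.5
provision_after_min = baseline_provision_min / 1.5

print(f"latency: {latency_after_ms:.0f} ms, "
      f"provisioning: {provision_after_min:.0f} min")
```

Under those assumed baselines, a 100 ms request drops to 60 ms, and a 60-minute provisioning run drops to 40 minutes.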

Final Thoughts

VMware is no longer just a virtualization company; it is an enabler of intelligent, sovereign, and multi-cloud-native enterprise computing. By combining Project Monterey with its expanding AI ecosystem, VMware is giving organizations the performance, security, and scalability they need to lead in an AI-driven future.

As we move deeper into the AI era, one thing is clear: VMware is not just adapting, it is architecting what comes next.
