Cyxtera offers AI/ML Compute as a Service, powered by NVIDIA DGX A100 systems

Yael Davidowitz-Neu • April 15, 2021 • 3 minute read

AI/ML, Hybrid IT in Data Centers



Modern analytics workloads are highly bandwidth-intensive, requiring substantial computational power to support large data sets, complex training algorithms, and real-time inputs. This adds a new layer of complexity to infrastructure planning: how can you quickly scale up projects while maintaining the high levels of performance, speed, and reliability that successful artificial intelligence (AI) and machine learning (ML) initiatives require?

There are multiple options for hosting your AI workloads: public cloud, on-premises, traditional colocation, and AI/ML Compute as a Service.

Public Cloud

For enterprises looking to quickly deliver an AI or ML proof of concept, public cloud solutions can be an excellent option, offering plug-and-play tools to get you started. Beyond rapid time-to-solution, with little upfront investment needed for implementation and nominal maintenance and management once things are up and running, the public cloud also gives businesses starting AI initiatives significant operational and financial flexibility.

However, despite its many benefits, leveraging the public cloud for AI projects can require considerable trade-offs. From latency issues to the risk of network outages, public cloud environments do not always offer the performance and predictability of more traditional on-premises solutions. In addition, as initiatives grow in scope, the flexible cost structure of the cloud may become a liability. Some firms will face “cloud bill shock” as fees for storing, downloading, and processing data continue to expand alongside data volume and project complexity.

On-premises

To avoid the security, performance, and financial challenges associated with the cloud, one alternative is for businesses to buy their own servers and host AI workloads in-house. Dedicated hardware gives firms the control they need to manage their infrastructure and avoids the security and performance risks associated with the public cloud and the public internet.

Yet, on-premises solutions come with their own set of challenges. Building and managing an in-house data center is expensive, requiring a large capital investment upfront and ongoing costs associated with management and maintenance. Capacity planning can also be a challenge; with significant lead time required to get up and running in a new location, on-premises solutions can limit a business’s agility and slow the pace of innovation.

Traditional Colocation

In addition to the public cloud and on-premises models, some enterprises choose a colocation solution to manage their AI projects. This model provides a managed facility, solving the maintenance and management challenges of hosting on-premises, while still requiring businesses to bring in their own gear.

While traditional colocation is great for reducing operational costs and ensuring reliability and performance, it doesn’t spare businesses the upfront expenses associated with large hardware purchases or enable them to quickly get up and running in new markets.

Fortunately, for enterprises who don’t want to compromise, there is another option available that offers the best of all worlds.

AI/ML Compute as a Service

Cyxtera’s AI/ML Compute as a Service, powered by NVIDIA DGX A100 systems, is delivered directly within our data centers. Combining the best features of cloud and on-premises offerings, Cyxtera and NVIDIA have collaborated to give businesses the opportunity to build the nimble architecture AI workloads require.

Benefits include:

  • Exceptional performance at massive scale: Massive GPU-accelerated compute combined with state-of-the-art networking hardware and software optimizations enables bandwidth-intensive initiatives, such as conversational AI and large-scale image classification.
  • Faster Time-to-Solution: Consolidate training, inference, and analytics into a single unified AI infrastructure that can be deployed in a single business day, speeding up the time-to-solution for new AI/ML workloads.
  • Predictable Costs and Accelerated ROI: Stop testing the economics of cloud scaling as you ramp up your AI/ML programs. Manage budgets and upfront costs by shifting from a CapEx to a predictable OpEx model you control.
  • Maximum Control: Retain full control from hardware up through applications. Provisioning via Cyxtera offers the security and control provided by your own dedicated infrastructure, combined with a cloud-like buying experience.


Views and opinions expressed in our blog posts are those of the employees who made them and do not necessarily reflect the views of the Company. A reader should not unduly rely on any statements made therein.



How to Scale AI Initiatives as You Grow

Learn more about maximizing the performance of your AI/ML initiatives with Cyxtera

Yael Davidowitz-Neu

Director of Product Marketing, Cyxtera