The AI Paradox: Why are AI Projects Easy to Start, but Difficult to Scale?

Yael Davidowitz-Neu • January 27, 2021 • 3 minute read

AI/ML in Business



It is easy to take your first steps in Artificial Intelligence (AI). Popular frameworks and pretrained models provide a foundation to build on, and your existing hardware, running general-purpose processors, can often deliver the performance you need for early experiments and initial deployments.

But, once your ambitions grow and the scope and complexity of your AI projects inevitably expand, how does this impact your infrastructure?

If you’ve already ventured into AI, you’ll know that training deep neural networks demands far more processing power than legacy hardware can deliver at scale. Expanding your AI projects on existing infrastructure can be inflexible, costly and cumbersome.

AI Puts Unprecedented Demand on the Data Center

AI puts unprecedented compute demand on the data center. As AI adoption accelerates, Gartner predicts that the computational resources used in AI will increase fivefold between 2018 and 2023.

With AI deployments growing in scale and sophistication, many companies are finding that they are quickly outgrowing the capabilities of their general-purpose infrastructure. As a result, NVIDIA accelerated computing platforms have become the de facto standard for AI development, with mature enterprises leveraging multi-GPU systems that are interconnected to enable AI at scale.

GPUs are designed to carry out many calculations in parallel, which accelerates AI processing, and they offer the high memory bandwidth needed to keep up with incoming data. By replacing CPU servers with purpose-built systems designed to handle AI workloads at significantly lower power requirements, businesses can make projects substantially less resource intensive while improving processing speed and the end-user experience.

Can Businesses Achieve Both Agility and Performance in AI?

Traditional solutions for scaling AI require tradeoffs between the convenience and flexibility of cloud-based services and the performance and control afforded by on-premise infrastructure.

While both cloud-based and on-premise AI infrastructure typically work well for a proof of concept, layering in complex algorithms and ever-expanding data sets is often more than these systems can handle.

According to Accenture, nine out of ten C-suite executives in the UK believe they must leverage artificial intelligence (AI) to achieve their growth objectives, yet 87% report that they struggle with how to scale it.

The complexity and tradeoffs are often a daunting mountain for business leaders to climb. And the options for scaling infrastructure typically come with compromises.

Option 1: Public Cloud-based AI Infrastructure

For businesses looking to quickly deliver an AI proof-of-concept, cloud providers offer plug-and-play AI tools to get you started. But, as with all cloud-based services, costs, security and performance issues can become increasingly complicated as you scale.

Option 2: Private AI Servers in a Data Center or On-Prem

Building your own workstations or buying pre-built deep learning servers in the data center or on-prem can resolve some of these headaches, but greater control and performance come at a cost: higher upfront investment (in both time and money), less flexibility, and additional operational hurdles.

The Best of Both Worlds, Without Compromise...

Option 3: Cyxtera’s AI/ML Compute as a Service

At Cyxtera, we’ve raised the bar as the first global data center operator to deliver subscription access to A100 systems through our landmark NVIDIA DGX-based compute-as-a-service (CaaS) offering. Cyxtera’s solution gives enterprise customers greater agility and rapid deployment for their AI workloads.

Drive AI/ML success across your organization with a simplified buying experience, rock-solid security and high-performance compute power.



Views and opinions expressed in our blog posts are those of the employees who made them and do not necessarily reflect the views of the Company. A reader should not unduly rely on any statements made therein.




Yael Davidowitz-Neu

Director of Product Marketing, Cyxtera