The Case for a Compute-as-a-Service Approach
A few decades ago, it wasn’t hard to see why people thought of Artificial Intelligence/Machine Learning (AI/ML) as the next bright, shiny thing. Its use cases are myriad, and the rate at which it can crunch data is staggering. Fast forward to the present day, and AI/ML is now seen by many leading-edge organizations as a critical, if not mandatory, part of their digital transformation. In fact, a recent study by 451 Research found that 95 percent of companies reported AI as being important to their digital transformation efforts.
Move out of the shadows
Of course, wanting something and actually getting it are two different things. For most businesses, the challenges surrounding AI are significant. First, there’s the hurdle of storing and accessing the vast amounts of data needed to build and train AI models. Then there’s the effort and cost involved in moving a model from proof of concept (POC) to deployment at scale – and that doesn’t begin to touch on the struggle to get a project approved in the first place. No wonder many AI projects are dead before they begin. The 451 Research Pathfinder Report also found that 39 percent of ML projects in the POC stage never make it to production.
Rather than wait for the go-ahead, which may take months, some data scientists take it upon themselves to launch their own “shadow” AI projects without the approval of IT and DevOps. While easy to launch (all you need is a credit card and a public cloud account), most of these shadow projects will fail as they lack, among other things, the proper infrastructure, operational processes, rigor, and platform necessary to make it into production – and all with the added “bonus” of further exposing the company to security risks. This model debt – the amount of time spent on projects that never come to fruition – is considerable given that AI/ML projects are costly and involve some of the company’s most valuable employees – data scientists and machine-learning engineers.
Why can’t I just do AI in the cloud?
Public cloud remains a popular option for many enterprises, especially those just embarking on their digital transformation. Start-up costs are low, you can scale on-demand, and IT infrastructure costs can be paid out of operating rather than capital expenses. Even so, the use of public cloud for AI/ML projects is a bit of a honey trap.
The lure of transferring data to the cloud at no cost can’t be denied, but good luck extricating that data once it has scaled. The difficulty of moving and storing ever-larger data sets makes the initial cost savings less attractive. Other downsides of relying too heavily on public cloud for AI include the lack of control over how data is used and where it’s stored.
For companies that have relied on public cloud for their AI/ML projects and then decide to pull their data out, egress fees can come with considerable sticker shock. These fees escalate quickly and, when coupled with the cost of storage and compute, can make it financially impossible for some companies to further their AI journey. Nor is it guaranteed that public cloud can provide the infrastructure needed to handle application changes that occur between development and production.
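To make the egress problem concrete, here is a minimal back-of-the-envelope sketch. The per-gigabyte rate and the data volumes are illustrative assumptions only, not quotes from any provider – real rates vary by provider, region, and tier:

```python
# Back-of-the-envelope egress cost estimate.
# The rate and dataset sizes below are hypothetical, for illustration only.

EGRESS_RATE_PER_GB = 0.09  # assumed per-GB egress fee in USD (not a real quote)

def egress_cost(dataset_gb: float, rate_per_gb: float = EGRESS_RATE_PER_GB) -> float:
    """Estimated cost in USD to move a dataset of the given size out of the cloud."""
    return dataset_gb * rate_per_gb

# A training dataset that grows 10x as a POC scales to production:
poc_cost = egress_cost(500)        # 500 GB proof-of-concept dataset
prod_cost = egress_cost(500 * 10)  # 5 TB production dataset

print(f"POC egress:        ${poc_cost:,.2f}")
print(f"Production egress: ${prod_cost:,.2f}")
```

The point of the sketch is the scaling behavior: egress cost grows linearly with data volume, so a fee that looks negligible at the POC stage becomes a tenfold-larger bill once the dataset reaches production size.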
The proverbial bloom may be off the cloud rose, as nearly half (48%) of the companies we surveyed indicated that they had repatriated their AI workloads from public cloud back to other venues.
Helping hands carry the burden
Obviously, not every company has its own data center, and among those that do, not everyone has a data center optimized to run AI workloads. Even assuming that your data center is optimized, owning and running one isn’t without its challenges. First and foremost, there’s the issue of cost. Building a data center is expensive and running it is even more so – there’s the cost of power and cooling, staffing, and maintenance, plus the need to refresh your hardware every three to five years. IT staff also need to be trained, as do business end users. Additionally, there can be costs related to integrating disparate systems with the new AI platform. And on top of everything, organizations are responsible for in-house security and disaster-recovery systems.
It’s a lot, and not for the faint of heart. Unsurprisingly, given the challenges, hybrid infrastructure – where AI workloads are spread across both public cloud and data centers – is increasingly seen as an attractive choice. In fact, research indicates that 80 percent of companies currently use, or plan to use, a hybrid approach for their AI infrastructure.
Here’s where colocation facilities factor in. Colocation takes much of that burden off companies. Not only do enterprises share in the cost of power and cooling, but colocation providers such as Cyxtera offer value-added services that deliver high-performance compute, storage, and networking in a prescriptive design optimized for AI workloads. With colocation and colocation-based managed services, enterprises can take advantage of optimized facilities that accommodate tooling to streamline workflows. What’s more, because colo providers strategically place their data centers close to major public cloud providers and connect to them via interconnects or SDN fabrics, latency on data transfers is low.
Rx for Success
When deciding how best to tackle AI at scale, CIOs need to consider a variety of factors, the first being whether on-premises, public cloud, or colocation is the model best suited to their needs. The right choice dramatically reduces the risk of shadow IT by giving data scientists and ML engineers infrastructure that lets them scale AI apps quickly and efficiently.
Remember, too, that there’s no need to reinvent the wheel. DIY-ing your platform or applications can lead to missteps at best and project failure at worst. Proven architectures and application models abound and can serve as the foundation for your own models and apps, and taking a prescriptive approach to architecture puts you in the driver’s seat.
If you want to combine the simplicity of cloud with the determinism of owned infrastructure, Compute-as-a-Service (CaaS) is worth a second look. To learn more about Cyxtera’s colocation data centers and managed services, contact firstname.lastname@example.org.
Views and opinions expressed in our blog posts are those of the employees who made them and do not necessarily reflect the views of the Company. A reader should not unduly rely on any statements made therein.
To ensure your AI infrastructure is both affordable and on the mark, download the report to help you better understand which infrastructure option will work best for your company while keeping costs under control.

Download 451 Research’s Pathfinder Report on AI/ML Trends