8 WAYS THE CLOUD IS MORE COMPLEX THAN YOU THINK

For a growing number of organizations, it’s not a question of whether they should move applications and development platforms to the cloud, but when.

The cloud has become so well entrenched in corporate IT that it’s increasingly difficult to imagine business without it. Still, the move to cloud services is not without its share of hardships, some of which can be totally unexpected.

A recent report by professional services and consulting firm Accenture notes that two-thirds of large enterprises are not realizing the full benefits of their cloud migration journeys, with the complexity of business and operational change among the key barriers.

Of the 200 senior IT professionals from large businesses surveyed, 55 percent cited business complexity and organizational change as a barrier to realizing the benefits of the cloud. Only security and compliance risk was cited more frequently.

While the promise of the cloud is automated scale at the push of a button, capturing the benefits of the cloud takes time, and there is a learning curve that’s influenced by many variables, Accenture says.

Here are some of the unexpected ways the cloud is more complex than it might seem.

PROVISIONING IT SERVICES

With on-premises IT, companies typically own and manage their business applications software and the internal IT group manages the software environments.

In this scenario, the IT department ensures that implementation projects have the environments needed to support them — whether for development, testing, training, or production — and that users have the infrastructure required.

“This is not the case for hosted cloud solutions,” says Chris Lilley, technology solutions principal at business advisory firm Grant Thornton. “In the hosted cloud world, the cloud provider is responsible for provisioning the environments required to support the implementation.”

In addition, the decisions around the environments, such as the number of users supported, when applications will be made available, and other factors, are generally made at the time the software is licensed, Lilley says.

“Given the changing nature of large, complex IT implementation projects, the need for and the timing of environments may change,” Lilley says. Also, “the cloud provider is working with hundreds of clients, so its ability to support the specific timing of your needs may not align with the project schedule,” he says.

As a company’s needs change, the cloud provider will require additional time to adjust and adapt environments to those needs. “Organizations should develop an environment strategy and then collaborate with the cloud provider to ensure it is aware of your needs — and you are aware of any limitations or constraints,” Lilley says.
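
As a rough illustration of such an environment strategy, the sketch below captures it as a machine-readable document that both the project team and the cloud provider can review. All environment names, sizes, and dates are hypothetical placeholders, not anything Lilley prescribes.

```python
# Hypothetical environment strategy: a single source of truth the project
# team can share with the cloud provider. All names, sizes, and dates
# are illustrative placeholders.
from dataclasses import dataclass
from datetime import date

@dataclass
class Environment:
    name: str           # e.g., "dev", "test", "training", "prod"
    max_users: int      # sizing agreed with the provider at licensing time
    needed_by: date     # when the provider must have it provisioned
    retire_after: date  # when it can be torn down to stop billing

STRATEGY = [
    Environment("dev",      max_users=25,   needed_by=date(2024, 1, 15), retire_after=date(2024, 12, 31)),
    Environment("test",     max_users=50,   needed_by=date(2024, 3, 1),  retire_after=date(2024, 12, 31)),
    Environment("training", max_users=200,  needed_by=date(2024, 6, 1),  retire_after=date(2024, 9, 1)),
    Environment("prod",     max_users=5000, needed_by=date(2024, 7, 1),  retire_after=date(2099, 1, 1)),
]

def environments_due_within(days: int, today: date) -> list[Environment]:
    """Flag environments the provider must deliver soon, so the person
    accountable for the provider relationship can confirm timing before
    the project schedule slips."""
    return [e for e in STRATEGY if 0 <= (e.needed_by - today).days <= days]
```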

Someone from the organization should be accountable for ongoing collaboration with the cloud provider to ensure that project and production environment needs are addressed, Lilley adds.

HANDLING GOVERNANCE AND COST CONTROL

“The ease of use is wonderful but also quickly leads to challenges around governance and cost,” says Mark Nunnikhoven, vice president of cloud research at security provider Trend Micro. “Striking the right balance for your teams that allows them to deliver quickly without creating additional risk takes a few tries to get things right.”

Trend Micro delivers several services to its customers directly from the cloud. These include Deep Security as a Service, which uses a number of cloud services, from virtual machines to managed databases to serverless workflows for operational tasks.

“As our teams develop and test our products, the cloud is an invaluable tool that helps us automate and accelerate building for multiple environments,” Nunnikhoven says. “This use case in particular has not only helped accelerate our development efforts but also reduced their cost at the same time.”

Where the governance issue pops up most prominently for Trend Micro is in testing environments. With a single template, any team member could replicate hundreds of different environments concurrently. “The team’s first instinct was to test almost any change in every environment, since it didn’t slow the development process down and spotted any issues extremely quickly,” Nunnikhoven says.

But this tactic was costly. To address this, the team eventually settled on a tiered approach for testing in multiple environments. This new strategy balanced the impact of the changes with the cost of concurrent testing at scale.

“Fortunately, rich metrics around code quality, cost of changes at different stages of the build pipeline, and a solid understanding of cloud billing made implementing this strategy straightforward,” Nunnikhoven says.
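
To make the tiered idea concrete, here is a minimal sketch of how such a policy might map a change’s measured impact to the number of test environments it fans out to. The tier names, thresholds, and environment counts are invented for illustration; Trend Micro has not published its actual policy.

```python
# Hypothetical tiered test policy: run a change against more environments
# only when its blast radius justifies the concurrent-testing cost.
# Tier names, thresholds, and environment counts are illustrative only.

TIERS = {
    # impact score ceiling -> how many environments to exercise
    "low":    {"max_impact": 3,  "environments": 5},    # smoke test only
    "medium": {"max_impact": 7,  "environments": 50},   # representative sample
    "high":   {"max_impact": 10, "environments": 500},  # full matrix
}

def environments_to_test(impact_score: int) -> int:
    """Map a change's impact score (e.g., derived from code-quality metrics
    and the stage of the build pipeline) to a test-environment count."""
    for tier in ("low", "medium", "high"):
        if impact_score <= TIERS[tier]["max_impact"]:
            return TIERS[tier]["environments"]
    return TIERS["high"]["environments"]

# A config-only change scoring 2 tests in 5 environments, while a
# change to a core component scoring 9 fans out to all 500.
print(environments_to_test(2))  # -> 5
print(environments_to_test(9))  # -> 500
```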

ADDRESSING THE NEED FOR REGRESSION TESTING

Another critical change that companies have to deal with when moving to the cloud is the fact that vendors release new versions of cloud software on a regular basis.

“Some vendors are doing monthly releases, but most are moving toward either quarterly or twice per year software releases,” Lilley says. “While organizations are not necessarily required to ‘activate’ all new features [and] functions, they are required to accept the new release within a certain timeframe.”

This means they must accept the change and then regression-test their environment to ensure that the changes don’t adversely impact the current production environment.

“Given the frequency of releases and the fact that organizations are required to accept the new release into their environment, clients must create a robust process of working with the software provider to understand the release schedule and the impact to current workload,” Lilley says. They also need a robust process for testing and validating that the new releases will not affect any aspects of the existing production environment, he says.
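
As a minimal sketch of what such a regression process might automate, the pytest-style checks below could run against a non-production tenant as soon as the vendor pushes a release. The base URL, endpoints, credentials, and fields are entirely hypothetical.

```python
# Minimal regression smoke tests to run against a staging tenant after a
# vendor release. URL, endpoints, and expected fields are placeholders.
import requests

STAGING = "https://staging.example-vendor.com/api"  # hypothetical tenant

def test_login_still_works():
    # Catch an authentication regression before the release reaches prod.
    resp = requests.post(f"{STAGING}/login",
                         json={"user": "qa-bot", "password": "placeholder"})
    assert resp.status_code == 200

def test_critical_report_schema_unchanged():
    # Fail fast if the vendor's release renamed or dropped fields we consume.
    resp = requests.get(f"{STAGING}/reports/monthly", timeout=30)
    resp.raise_for_status()
    assert {"period", "total", "line_items"} <= set(resp.json().keys())
```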

DEALING WITH THE MAGNITUDE OF A MOVE TO THE CLOUD

Last year, the IT team at networking and cybersecurity company Juniper Networks shut down the company’s final physical corporate data center, officially wrapping up a seven-year transition from 18 data centers to the cloud. The company uses a collection of cloud services, leveraging public cloud providers for infrastructure and private clouds to host applications, says CIO Bob Worrall.

One of the biggest challenges has been what Worrall calls “cleaning the garage,” the process of moving everything to the cloud. Seven years might sound like a long time to move IT functions fully to the cloud; Juniper didn’t expect it to take that long either.

“But think about it, we had a cluttered garage with 20 years’ worth of legacy applications, infrastructure, and more sitting around,” Worrall says. “When you move servers and applications, it’s a delicate dance. The machinery doesn’t respond well to change, nor do people.”

The company began with a basic inventory of its entire IT infrastructure. “When we found hundreds of applications running in data centers around the world, we could only find owners for 80 to 85 percent of applications. No one knew what they did, who owned them, how they were configured.”

IT “took care of the easy stuff first, and then hit a multi-year delay because of the old stuff,” Worrall says. “We had to find owners, modernize the applications, and such. We invested a lot of time cleaning the garage, transforming the applications to be more cloud native and in turn, taking advantage of the cloud-native capabilities.”

BANDWIDTH LIMITATIONS

The movement of large data sets from on-premises systems to the cloud can be limited by network bandwidth, notes Brad Powell, Design and Evaluation Team lead in the CERT Division of the Carnegie Mellon University Software Engineering Institute.

“In one example, it took four days to move one day’s worth of data over an existing pipe,” Powell says. “This can cause data to back up and be stale, which is not ideal for time-sensitive analytic workloads or for ongoing and real-time feeds.”

The options to improve the speed either come at a higher cost or additional complexity, Powell says. “You could pay for increased bandwidth from your ISP, but only up to certain limits specified by the cloud provider and the service,” he says.

Some cloud providers offer accelerated transfers at an additional cost, but these are limited by the region into which data is being transferred, Powell says. Another option is to use parallel or multipart uploads, which rely on configuring APIs or other on-premises tools to initiate the transfers.
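
As one illustration of the parallel-upload option, here is a minimal sketch using AWS’s boto3 SDK, whose TransferConfig splits a large object into parts uploaded concurrently. The bucket, key, and file path are placeholders, not anything Powell recommends specifically.

```python
# Parallel (multipart) upload with AWS's boto3 SDK: the file is split into
# chunks uploaded concurrently, which can better saturate the available pipe.
# Bucket, object key, and local file path are placeholders.
import boto3
from boto3.s3.transfer import TransferConfig

config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,  # use multipart above 64 MB
    multipart_chunksize=64 * 1024 * 1024,  # 64 MB parts
    max_concurrency=16,                    # 16 parallel part uploads
)

s3 = boto3.client("s3")
s3.upload_file("export/daily-feed.parquet", "my-data-bucket",
               "landing/daily-feed.parquet", Config=config)
```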

Yet another alternative is to have the cloud provider ship physical disks so data can be copied locally and transported back to the vendor. For 100 terabytes of data, a network transfer would take more than 100 days over a dedicated 100Mbps connection, Powell says.
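
The back-of-the-envelope math behind that figure, assuming the link is the only bottleneck (real-world protocol overhead and contention push the 100Mbps case past 100 days):

```python
# Rough transfer-time math: how long to push N terabytes through a
# dedicated link? Ignores protocol overhead and link contention.
def transfer_days(terabytes: float, link_mbps: float) -> float:
    bits = terabytes * 1e12 * 8          # decimal TB -> bits
    seconds = bits / (link_mbps * 1e6)   # Mbps -> bits per second
    return seconds / 86_400              # seconds -> days

print(f"{transfer_days(100, 100):.0f} days at 100 Mbps")  # ~93 days
print(f"{transfer_days(100, 1000):.1f} days at 1 Gbps")   # ~9.3 days
```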

VENDOR MANAGEMENT AND LOCK-IN

There are various issues related to selecting a vendor, Powell says. “First, the services you’re using on-premises may not be readily available in the cloud,” he says.

You can get these services working using an infrastructure-as-a-service (IaaS) offering, but this comes with the same configuration and maintenance challenges as on-premises hardware and does not deliver on the cloud’s promise of simplified infrastructure.

“Cloud providers may have alternative or competing services available to meet your needs, but these come with new terminology and a learning curve,” Powell says.

These services might also be limited by geography, as not all services are available in all regions offered by the provider. “The regions are not always in sync, especially if you are looking to take advantage of regulated cloud offerings in the healthcare, financial services, or government sectors,” Powell says.

If you’re trying to avoid vendor lock-in by using services across multiple clouds, you will need resources that not only connect the clouds but also translate terminology and services across the vendors, Powell says. “This will either require more advanced skills, or duplicate resources dedicated to each vendor,” he says.
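
One common way to absorb that translation work is a thin adapter layer that hides each vendor’s terminology behind a single interface. The sketch below is deliberately simplified; a real implementation would wrap each vendor’s SDK rather than print.

```python
# Thin multi-cloud adapter: one interface, one translation layer per vendor.
# Deliberately simplified; real code would call each vendor's SDK.
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Vendor-neutral interface for 'a place to put blobs of data'."""
    @abstractmethod
    def put(self, container: str, name: str, data: bytes) -> None: ...

class AwsStore(ObjectStore):
    def put(self, container, name, data):
        # AWS calls the container a "bucket" and the object a "key".
        print(f"s3: PutObject bucket={container} key={name}")

class AzureStore(ObjectStore):
    def put(self, container, name, data):
        # Azure calls the same thing a "container" holding a "blob".
        print(f"azure: upload_blob container={container} blob={name}")

def archive(store: ObjectStore, payload: bytes) -> None:
    # Callers stay vendor-agnostic; only the adapter knows the vendor.
    store.put("backups", "nightly.tar.gz", payload)
```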

Since there is already a skills gap, finding these resources will add to the cost and complexity. “These vendors are in a heated competition of adding services as quickly as possible to stay ahead of each other,” Powell says. “You will need to be continuously learning to keep up with the latest offerings from each.”

MAINTAINING STRONG SECURITY

Many of the cloud service providers tout the strong security of their infrastructures. That doesn’t mean use of the cloud is without data protection issues.

“The cloud introduced a number of challenges related to security and access control,” says Mike Novak, global vice president of IT and CIO at hospitality company Hakkasan. “As we acquired new companies and expanded the number of remote employees who don’t sit in [Microsoft’s] Active Directory, we needed another system to manage employee permissions in an efficient way.”

Merging Active Directory instances with varying customizations is time consuming, Novak says, so Hakkasan used a platform from Egnyte to quickly add users to permission groups, ensuring it could still control and secure data for all employees and all endpoints.

“A byproduct of shifting to the cloud was the explosion of point solutions that needed to be secured,” Novak says. “It also introduced technology overlap, so we worked at reducing the overall number of solutions in order to maintain high security standards.”

The company examined the security of data being shared through email. “We restricted the use of attachments in emails, since email is not encrypted [and] required users to send file links so we could better control data that was being shared externally.”

BUILDING THE CLOUD MINDSET

Even though cloud services have been around for years, the concept is still relatively new for many workers as well as executives, so change management is a challenge.

Family history site Ancestry.com runs its operational and backend processes, including the company website, historical records, and DNA science workloads, on the Amazon Web Services (AWS) public cloud. Managing the paradigm shift to the cloud was a challenge.

“Resource choices were plentiful, with multiple instance sizes, services, and storage types to choose from,” says Darek Gajewski, principal infrastructure analyst at Ancestry. But moving to the cloud introduced architectural changes. Provisioning and pipelines were different, and security practices needed to be updated.

“Everyone in the organization changed their mentality to align new choices with needed changes,” Gajewski says. “Where it made sense, we enabled our teams to manage these challenges directly, while we manage enterprise-wide issues centrally. Building tools that exposed [cloud services] costs directly to teams allowed them to make changes and see the benefits immediately to their budgets.”
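
On AWS, for instance, the kind of per-team cost visibility Gajewski describes can be built on the Cost Explorer API by grouping spend on a team tag. A minimal sketch, assuming resources carry a "team" cost-allocation tag; this is illustrative, not Ancestry’s actual tooling.

```python
# Minimal per-team cost report using the AWS Cost Explorer API via boto3.
# Assumes resources are tagged with a "team" tag; dates are placeholders.
import boto3

ce = boto3.client("ce")
resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "team"}],
)

# Print each team's share of the month's spend so teams can see the
# budget impact of their own resource choices.
for group in resp["ResultsByTime"][0]["Groups"]:
    team = group["Keys"][0]  # e.g., "team$web-platform"
    cost = group["Metrics"]["UnblendedCost"]["Amount"]
    print(f"{team}: ${float(cost):,.2f}")
```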
