The term ‘cloud’, taken in isolation from context, has often had its meaning imaginatively exploited, whether by those with a vested interest in doing so, through lack of understanding, or both. It appears ‘cloud fatigue’ has now entered the lexicon!
In the previous post, the intersection of technology drivers was noted as creating transformative new opportunities for businesses. The mobile device has become the primary form factor and information distribution point, and is becoming wearable and embedded (Pervasive Mobility). There is an upheaval in the social and marketing interactions of connected communities, and in the concept of the digital persona, which can be marketed and sold to by those willing to exploit algorithms that mine our interactions (Social). Add to this the explosion of data and the emerging analytics tools that provide the opportunity to tie relevance and context to user information (Information). The public cloud, as the fourth pillar, offers the promise of mass-scale, elastic delivery of IT resources, consumed on demand. Cloud enables the delivery of information, and allows the other drivers to evolve.
Cloud, and specifically the software that controls it, disintermediates users from the underlying technology. Open Source tools and software frameworks have had a democratizing effect on both producers and consumers of technology. Disruption is occurring on both the supply and demand sides: the value chain is shifting, and new business models are emerging, resulting in value creation and value destruction.
Using servers as an example, the data centre has already been commoditized. The hyper-scale web providers bypassed the leading traditional vendors in favour of ‘white boxes’ - low-cost, stripped-down hardware, often with directly sourced processors, customizable to match their own customer workloads, and orchestrated and managed with their own proprietary software. Open Compute plans to take this a stage further by standardizing specifications for ‘vanity free’ hardware based on interchangeable components, with the aim of further reducing cost and waste at the hardware layer and creating a foundation for customers and businesses to build custom, modular servers. Many of those traditional vendors have been hit hard, with value and share shifting to the thriving ODMs (original design manufacturers).
Open Source technology has also evolved and matured over recent years, and is increasingly used as a tool to drive disintermediation and alternative business models. If we look at Open Source clouds, storage or networks, for example, we see the development of a “mixed source” ecosystem that maximises product distribution and ecosystem development on the back of the primary ‘for free’ software. Primary contributors hope for increased commercial activity in adjacent areas, for example servers, hypervisors and support services. Other models include dual licensing, open and closed source offerings with full support, or discrete commercial packages available as-a-Service for white-label distribution by a Service Provider. What we are seeing is a maturing of business models within and around Open Source, and, importantly, increased comfort within the developer community, where the current generation is provided with (and expects) excellent tools, documentation, tutorials and support.
If we extend these concepts to the latest buzzwords – Software Defined Networking (SDN) and the Software Defined Data Center – further opportunity exists to abstract management and control from the storage and network infrastructure layers. Networks have traditionally been overprovisioned and hard to manage from an enterprise and service provider perspective, and there has been no standard way of managing network devices remotely. Extending the principles of server virtualization to network switches, firewalls, devices and, potentially, higher-level services running on virtual machines offers huge potential benefits: provider automation, cost reduction, economies of scale, and greater feature velocity and innovation.
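To make that idea concrete, here is a minimal sketch of what programmatic network management looks like: a forwarding rule expressed as plain data and pushed to an SDN controller over a northbound REST API. The controller URL, endpoint and payload shape are assumptions for illustration, not any particular vendor’s or controller’s actual API.

```python
import requests

# Hypothetical northbound REST endpoint of an SDN controller; real
# controllers each expose their own API shape, so treat this as a stand-in.
CONTROLLER = "http://controller.example.com:8080"

# A flow rule expressed as data: match web traffic (TCP port 80) arriving
# on switch port 1 and forward it out of port 2. In OpenFlow terms this
# corresponds to a flow entry with a match and an action list.
flow_rule = {
    "switch_id": "00:00:00:00:00:00:00:01",
    "priority": 100,
    "match": {"in_port": 1, "tcp_dst": 80},
    "actions": [{"type": "OUTPUT", "port": 2}],
}

# Push the rule to the controller, which programs the switch on our behalf.
response = requests.post(f"{CONTROLLER}/flows", json=flow_rule, timeout=5)
response.raise_for_status()
print("Flow installed:", response.json())
```

The point is that the rule is configuration data rather than a box-by-box CLI session: the same call works for one switch or a thousand, which is where the automation and economies of scale come from.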
There is complexity, however, and it is still early days, in particular for the enterprise. Today, the major networking vendors provide a complete stack of networking hardware and software, whose products interoperate at the network packet level, but management and provisioning of their devices, plus certain services, remain proprietary, enabling higher margins and product stickiness. True, sustainable openness, interoperability and programmability – in other words, whether OpenFlow will truly be adopted as an open industry standard – are theoretically viable, but sceptics point out that the outcome is likely to be determined by the tactics of incumbent vendors, whether those looking to protect their installed bases of legacy hardware or those seeking greater customer lock-in (hence the sky-high valuations and acquisitions of SDN start-ups in 2012).
Despite this, there is already a thriving and evolving SDN ecosystem at various levels of the SDN ‘stack’ and above, including virtual controllers, switches, routers, and overlay management and orchestration services. Technology democratization has already arrived in networking.
So while all of this is very interesting from a macro market and competitive analysis standpoint, how is it relevant to Enterprise IT? Many enterprises have already reaped cost and efficiency benefits from server virtualization, and some are extending virtualization into private cloud domains (often locked into a management platform), but in many cases they have yet to exploit the potential of the public cloud, at mass scale and efficiency, to deliver ‘IT as a Service’.
The main ‘issues’ with public cloud infrastructure tend to be a combination of: a) SLAs for reliable performance on the provider platform; b) availability, predictability and compliance; c) true configurability, enabling developer autonomy and self-service; d) price, but not at the expense of a), b) and c).
If we look at provider approaches to delivering infrastructure as a service, they are generally split across two models:
1) Web-Scale Providers: massive scale, using clustered servers and cheap components designed and used for their own web businesses. Advantages include developer-friendly tools and aggressive pricing, sustained through an almost entirely self-service model (see the provisioning sketch after this list). These architectures present significant challenges for enterprises looking to build the public cloud into business process design, in terms of predictable performance, availability and security.
2) Cloud Service Providers (including Verizon Terremark): scale depending on global footprint, with a focus on automation of infrastructure. Advantages include high performance and reliability, embedded security, SLAs for 100% uptime (Verizon Terremark), and managed and unmanaged options. Limitations include less configurability across server hardware, fewer developer tools, and pressure to maintain the platform at an acceptable cost/price ratio.
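For context, the ‘self-service model’ referenced above reduces, in practice, to an API call: declare the resource you want and let the platform’s automation deliver it on demand. The sketch below is illustrative only; the base URL, payload fields and token are hypothetical stand-ins, not any real provider’s API.

```python
import requests

# Hypothetical self-service provisioning request. Every provider exposes
# its own API, but the pattern is the same: describe the server you want
# and the platform delivers it on demand, with no ticket and no human.
API = "https://api.provider.example.com/v1"  # assumed base URL
TOKEN = "my-api-token"                       # assumed credential

server_spec = {
    "name": "web-01",
    "cpus": 4,
    "memory_gb": 16,
    "image": "ubuntu-lts",
    "region": "us-east",
}

resp = requests.post(
    f"{API}/servers",
    json=server_spec,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
print("Server provisioned:", resp.json().get("id"))
```

The trade-off between the two models is visible right here: the web-scale providers built their platforms around that call, while the service providers layer it on top of managed, SLA-backed infrastructure.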
A single platform does not yet exist that blends “the best of both worlds”: high availability and performance; hybrid-ready and configurable, enabling workload flexibility combined with quality-of-service selection and SLAs; embedded security and integrated networking; AND cost-competitive with the web-scale providers on true performance. That is when cloud can truly enable the delivery of information at mass scale, and it is what enterprises and their users require. You will be hearing a great deal more about this evolution throughout 2013.