The lack of innovation from service providers is a constant and mournful refrain echoing around the industry. This plaintive dirge reminds me of Sisyphus, who was cursed to endlessly roll a boulder up a hill, only to watch it tumble back down, never achieving satisfaction. The unending efforts of service providers to deliver innovation to their customers seem similarly futile, resulting in the same frustrating lack of satisfaction for provider and customer alike.

Why are these efforts doomed?

Service providers and their customers have different goals. Providers invest in initiatives that drive growth or improve profitability for the provider. Customers want lower cost, increased productivity and more functionality. These goals seldom align and the parties often work at cross purposes.

What do we do about it? 

The answer is that the customer must take the responsibility for defining the innovation agenda. The customer must outline what will be impactful and make a difference in its business and then share that agenda with the service provider. Whatever the issue — reducing receivables, preventing retail stock-outs, raising productivity, shortening time to market — the customer must illuminate and define the target for the provider.

What if the provider is reluctant to pursue the innovation agenda?

Our experience is that providers often are willing to fund innovation and work on the customer’s agenda when it’s clear that it will make a difference to the customer. In these situations, the exercise in innovation leads to higher customer satisfaction, and often to an extended contract or a relationship restructured in ways that benefit both parties.

But a provider may be reluctant to pursue some aspects of an innovation agenda. An example is driving increased productivity in the provider’s organization. In a world of Price (P) x Quantity (Q) = Revenue, the provider wants to keep Q as high as possible, and productivity gains bring Q down.
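To make the arithmetic concrete, here is a minimal sketch, with invented numbers rather than figures from any real engagement, of what a productivity gain does to provider revenue under time-and-materials pricing:

    # Hypothetical time-and-materials engagement: Revenue = P x Q
    rate_per_hour = 150.0     # P: blended hourly rate, dollars
    hours_per_month = 10000   # Q: billable hours across the account

    baseline_revenue = rate_per_hour * hours_per_month    # $1,500,000

    # A 10% productivity gain means the same work takes 10% fewer hours,
    # so Q falls, and with it revenue, unless P or scope changes.
    new_hours = hours_per_month * (1 - 0.10)
    new_revenue = rate_per_hour * new_hours                # $1,350,000

    print(f"monthly revenue lost: ${baseline_revenue - new_revenue:,.0f}")

Unless the contract lets the provider share in those gains, every hour saved is revenue lost, which is why this category of innovation needs the customer in the driving seat.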

From the provider’s perspective, there are two categories of innovation:

  1. Those that the provider wants to pursue and naturally aligns with (initiatives that create new revenue opportunities or better industry insight)
  2. Those that the provider likely won’t want to pursue (initiatives that negatively affect its commercial environment, especially its productivity).

To avoid continuously pushing your innovation boulder uphill, keep the provider’s perspective in mind. If your innovation agenda focuses on category #1, you can expect a rewarding discussion around the areas where you and the provider are aligned. But you will need to take a much more active role in driving the category #2 initiatives that are not aligned with the provider’s interests.


  1. Bureaucratic hierarchies are replaced by networks — engendered by low-cost, ubiquitous communication
  2. Unbundling — specialization and mass customization
    • Banking
    • Education & Scientific Research
    • Entertainment
  3. Mobile devices — we are all nodes on a network
    • Transportation
    • Delivery
    • Banking & Payments
    • Healthcare & Wellness

Following on from the bold assertion that 20% of IT departments would have no need for physical assets by 2012, Gartner has now turned its attention to the future of the Personal Computer. According to its research, “the Personal Cloud Will Replace the Personal Computer as the Center of Users’ Digital Lives by 2014”.

There is plenty of evidence of this already: the adoption of Dropbox, Google Docs, iTunes, iCloud, and a myriad of mobile-centered iPhone apps has demonstrated the huge demand for always-on, always-connected mobile services.

Consumers, who also happen to be bosses and employees, are increasingly tech-savvy, and their expectations are moving beyond what corporate IT departments can meet with existing infrastructure and skill sets. Infrastructure technologies such as virtualization have improved IT’s operational agility and gone some way toward meeting users’ expectations of rapid change, but the growing availability of consumer-centric mobile apps has created a disconnect between what’s available in the consumer space and what companies provision for their staff. The lack of application development skills, or of a general understanding of the wider ramifications of mobile, is becoming a visible weakness for many organizations.

Not surprisingly, employees take what’s available in the consumer world and bend it to their needs, often working around the compliance and security requirements of their employers and the command-and-control mindset of enterprise IT.

Writing in CIO magazine, Bernard Golden outlines some of the concepts that need to be understood when performing an OpEx versus CapEx calculation for IT infrastructure. For example, no-commitment OpEx (such as the classic Amazon AWS pricing model) should always cost more per service hour, given that there is a cost to a no-commitment relationship [1] that must be borne by the service provider. He uses the car-rental business as an example—which may not be the best analogy, but it makes the point.
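As a back-of-the-envelope illustration of that tipping-point calculation (the figures below are invented for the sketch, not Golden’s), the comparison comes down to the cost per useful hour of an owned asset at a given utilization versus the no-commitment hourly rate:

    # Hypothetical CapEx vs. OpEx comparison for a single server.
    on_demand_per_hour = 0.68   # assumed no-commitment rate, dollars/hour
    purchase_price = 6000.0     # CapEx, amortized straight-line over 3 years
    ops_per_year = 1200.0       # assumed power, cooling and admin costs
    hours_per_year = 8760

    owned_per_year = purchase_price / 3 + ops_per_year    # $3,200/year

    # Owned hardware costs the same whether busy or idle, so its cost
    # per *useful* hour climbs as utilization falls.
    for utilization in (1.0, 0.5, 0.25):
        cost = owned_per_year / (hours_per_year * utilization)
        print(f"{utilization:.0%} utilized: ${cost:.2f} per useful hour")

    # 100% -> $0.37, 50% -> $0.73, 25% -> $1.46. With these numbers the
    # on-demand rate wins anywhere below roughly 54% utilization.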

Another question is that of utilization. Forrester analyst James Staten coined the term “Down and Off”, an idea somewhat analogous to switching off the lights in an empty room. Prior to the cloud, the argument goes, “Down and Off” was a) too hard to do, and b) of little economic benefit, as the cost of computing was wrapped up in CapEx that had already been accounted for.

The difficulty in making use of Down and Off is what economists call “friction”, and one of the benefits of a highly automated cloud computing model is the elimination of barriers to reducing unwanted operational overhead.
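In code terms, “Down and Off” is little more than a scheduler plus a price sheet. A minimal sketch of the saving for a development environment needed only during business hours (the price and the hours are assumptions, not quoted figures):

    # Hypothetical "Down and Off" saving for a weekday-only workload.
    hourly_rate = 0.68       # assumed on-demand price per instance hour
    always_on = 24 * 7       # hours billed if left running all week
    down_and_off = 10 * 5    # 10 hours a day, weekdays only

    weekly_always_on = hourly_rate * always_on         # $114.24
    weekly_down_and_off = hourly_rate * down_and_off   # $34.00
    print(f"saving: {1 - down_and_off / always_on:.0%}")  # ~70%

Pre-cloud, that 70% was locked away inside hardware that had already been paid for; a metered, API-driven platform is what makes the switch worth flipping.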

As such costs change in response to technical innovation, Golden points out that

… input assumptions to financial analyses will change as IT organizations begin to re-evaluate application resource consumption models. Many application designs will move toward a continuous operation of a certain base level of resource, with additional resources added and subtracted in response to changing usage. The end result will be that the tipping point calculation is likely to shift toward an asset operation model rather than an asset ownership one.


[1] Both Amazon and IBM, amongst others, offer reduced hourly rates for customers that sign up for a fixed-length commitment period.

Richard Fichera at Forrester reports a fascinating development in the server marketplace and its big new customer: the large-scale cloud computing environments that are now the biggest single purchasers of server hardware.

HP is creating a new “hyperscale business unit” to exploit the very low-power, ARM-based server designs being developed by Calxeda. According to Fichera, HP’s move is

“…based on the premise that very high-volume data centers will continue to proliferate, driven by massive continued increases in demand for web and cloud-based applications handling massive amounts of data, and that the trajectory of current systems technology with respect to power, cooling and density may be inadequate for emerging requirements.”

This all becomes particularly interesting given that Microsoft and NVIDIA demonstrated Windows 7 running on the NVIDIA Tegra (dual-core ARM @1.3 GHz) at CES 2011.

Ever since the original version of NT, Microsoft has ensured that Windows is portable across architectures; in the past it has targeted MIPS, DEC Alpha and Itanium as well as the ubiquitous x86 and x86-64. The benefits of that commitment are now becoming apparent.

Preintegrated and, in many cases, simplified platforms for the development of general-purpose business applications will become a serious alternative for developing custom applications; ISVs will also find them a highly attractive option for delivering software-as-a-service (SaaS) applications.


The recent Amazon outage has created some heated discussion as to whether Amazon’s services are enterprise-ready or not. Much of the discussion seems to miss the point. For example, saying that Amazon is not enterprise-class is like saying an IBM System x server is not enterprise-class. Not very helpful, and not very meaningful.

Amazon is a provider of compute and storage, like the aforementioned server. Give that server direct-attached RAID storage or dual-homing to a SAN, power from two UPSs, and a mirror image of itself in another data center, and you can perform synchronization between the two. Lo and behold, enterprise-class computing!

This can all be achieved with Amazon by using different ‘Availability Zones’ in more than one Region, plus the appropriate software. And of course there is an associated price.
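As a sketch of what that looks like in practice, using the boto library (the AMI IDs, zones and instance type below are placeholders, and the synchronization software is the part Amazon does not supply):

    # Sketch: redundant instances across Availability Zones in two Regions.
    import boto.ec2

    placements = [
        ("us-east-1", "us-east-1a", "ami-primary"),   # placeholder AMI IDs
        ("us-west-1", "us-west-1b", "ami-replica"),
    ]

    for region, zone, ami in placements:
        conn = boto.ec2.connect_to_region(region)
        conn.run_instances(ami, instance_type="m1.large", placement=zone)

    # Keeping the two copies in sync (database replication, shared state,
    # failover DNS) is the "appropriate software", and it is where most
    # of the associated price lives.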

The reality is that the majority of Amazon’s clients are startups (many in the social networking space) that are willing to take the risk (or don’t comprehend it) in return for scalability, agility and above all the right price. Another significant group of clients are enterprises in search of cheap, agile compute for problems requiring mass horizontal scalability, but not persistence.

The really fascinating question behind this outage is the economic one: what level of risk/cost trade-off are companies willing to tolerate for their information technology?

Countless small enterprises that make heavy use of IT don’t have diesel backup and rely on their electrical utility to provide adequate uptime… sans SLA, I might add. This is exactly the calculation that anyone using Amazon and its ilk is making, whether they are aware of it or not.
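A back-of-the-envelope version of that calculation, with invented figures, looks like this:

    # Hypothetical risk/cost trade-off: is a mirrored setup worth it?
    outage_hours_per_year = 8.0      # assumed downtime, single region only
    cost_per_outage_hour = 5000.0    # assumed revenue lost per hour down
    redundancy_per_year = 60000.0    # assumed cost of a second, synced site

    expected_loss = outage_hours_per_year * cost_per_outage_hour  # $40,000
    if expected_loss < redundancy_per_year:
        print("tolerating the outage risk is the cheaper bet")
    else:
        print("pay for the redundancy")

With these numbers, accepting the occasional outage is the rational choice, which is precisely the bet most of Amazon’s startup customers are making, knowingly or not.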

The cloud is all about economics—as are public electrical utilities—and we are in an important phase in the ongoing maturation of Information Technology: a field whose economics have long been cloudy (pun intended), to say the least.

Randy Bias at CloudScaling has put together some interesting metrics on Amazon’s release cycle for the EC2 platform. The implication is that Amazon is investing in EC2 at a rate intended to extend its already significant leadership position.

Based on prior years’ activity, Randy estimates 66 feature releases this year, or more than one per week.

[Chart: EC2 feature release rate]

He has also published the source data in a Google Doc and it makes fascinating reading.

Clearly Amazon wants to stay on the crest of the cloud computing wave, and it recognizes that providing superior functionality is going to be critical to defend against a growing array of competitors—all of whom are presently tiny in comparison but still have the potential to compete on feature set, particularly in the enterprise IT and public/private cloud space.

From IBM developerWorks:

A new developerWorks global survey of 2,000 IT professionals indicates cloud computing and mobile application development are hot topics today, and are expected to emerge as the most in-demand platforms for software development over the next five years.

Although we don’t know exactly who the respondents are, the majority of the developerWorks readership is engaged in development of one sort or another, so the survey is a clear indication of how much developer mind-share is supportive of cloud computing in general—irrespective of particular platforms and technologies.

In fact,

Nine out of 10 respondents to the survey, which reached more than 2,000 IT professionals worldwide, anticipate cloud computing overtaking on-premise computing by 2015 as the primary way organizations acquire IT.

With this level of enthusiastic support, developers are way ahead of their colleagues in Network Administration, Security and the CIO’s office. Of course, this is nothing new: IT management has traditionally resisted many of computing’s major paradigm shifts, including minicomputers, PCs and PC networks, and mobile devices. It has often been up to business users or their departmental development staff (often external consultants or ‘power’ business users), armed with corporate credit cards, to pave the way for each successive revolution.

The big disruptions caused by minicomputers in the ’70s and IBM PCs in the ’80s are now over 25 years in the past, so we are well overdue for a disruptive new development and deployment paradigm.

So far the deployment of SaaS products such as Salesforce.com has acted as the initial Trojan horse, bringing in the cloud and proving its business and cost benefits. The rest of the story still has a long way to play out.