Most people have never seen a mainframe. They think of it as a huge iron cabinet filling a large clean room, with green-screen terminals attached, pouring out ASCII text: a remnant of the past. Yet mainframes have been around since the 1960s and are alive and kicking today. Many predicted they would soon be extinct (Stewart Alsop of IDG’s InfoWorld predicted in 1991 that the last mainframe would be unplugged in 1996), but they have been proven wrong time and time again. As we enter the era of the cloud, we see the mainframe concept return in the performance, availability and scalability requirements of cloud computing. Notwithstanding the financial and economic crisis, IBM introduced the Z10 EC and the Z10 BC mainframes in 2008. They may not look as awesome as they once did (roughly a 4 m² footprint and 2 meters tall), but their capabilities are phenomenal. But do we really need them when workload data is becoming increasingly unstructured due to the explosion of social media?
Well, they are expensive, so you won’t buy them by the dozen. Research firm ITCandor estimates that in 2010 IBM shipped 640 Z10s and some 5.6 billion MIPS. The Dutch mainframe market hardly exists; it has shown a strong decline over the last 15 years. There are probably fewer than 40 mainframes still around in the Netherlands, so the upgrade opportunity for IBM is limited. For IBM the big irons are a cornerstone of its private/hybrid cloud strategy, and IBM is trying hard to position the mainframe as the reliable backbone of the datacenter. Companies such as Computer Associates (CA) have staked a substantial part of their software portfolio on the IBM platform. And, as Marcel Den Hartog, CA’s EMEA mainframe evangelist, pointed out at the CA Technologies Open Day in Groenekan, the Z10 in a private cloud environment could seriously lower total cost of ownership by reducing space, power, and cooling requirements. With cloud computing as the next big thing, banks, airlines and governments that opt for a private cloud will need reliability, performance and scalability for their transaction processing and other mission-critical applications, combined with agility and a low TCO.
There is no question that the first set of requirements is met by the mainframe. But the other two, cost and agility, spark heated debate. As far as cost is concerned, the debate is about utilization and price. For the IT department, utilization is the measure of all things. With mainframe utilization usually over 80% and sometimes at 95%, it can point out that the mainframe architecture is clearly superior to scale-out strategies, in which, even with all kinds of virtualization software and workload management tools, utilization still hardly exceeds 30%. The CFO, however, would rather look at the investment associated with a mainframe. The Z10 EC starts at a cool $1 million and the Z10 BC at $100,000. No small beer compared to blades that start at $1,000. But it is not the upfront hardware investment that determines TCO. A good long-term TCO measurement has to include software licensing, space rent, power costs for running and cooling the systems, and people. On top of that, it would be wise to look at security and management costs as well. But the TCO discussion is a difficult one to win and depends very much on which cost variables are taken into account.
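The utilization argument can be made concrete with a toy cost model. The sketch below compares cost per *utilized* unit of capacity for a scale-up box versus a scale-out farm; every figure in it is an illustrative assumption made up for this example, not vendor pricing or data from the article:

```python
# Toy TCO comparison: scale-up (mainframe-style) vs. scale-out (blade farm).
# ALL figures are hypothetical placeholders, not real vendor pricing.

def tco(hardware, annual_software, annual_space_power, annual_staff, years=5):
    """Total cost of ownership: upfront hardware plus recurring annual costs."""
    return hardware + years * (annual_software + annual_space_power + annual_staff)

def cost_per_utilized_unit(total_cost, capacity_units, utilization):
    """Cost per unit of capacity that is actually used, not merely installed."""
    return total_cost / (capacity_units * utilization)

# Hypothetical scale-up system: high upfront price, high utilization.
scale_up = tco(hardware=1_000_000, annual_software=300_000,
               annual_space_power=50_000, annual_staff=200_000)

# Hypothetical scale-out farm with the same nominal capacity: cheap boxes,
# but more space, power, cooling and admins, and far lower utilization.
scale_out = tco(hardware=200_000, annual_software=150_000,
                annual_space_power=150_000, annual_staff=400_000)

print(cost_per_utilized_unit(scale_up, capacity_units=100, utilization=0.85))
print(cost_per_utilized_unit(scale_out, capacity_units=100, utilization=0.30))
```

With these made-up numbers the five-year totals come out almost equal, yet the scale-up system is far cheaper per utilized unit because it runs at 85% instead of 30%. Change the assumed staff or software costs and the conclusion flips, which is exactly why the TCO debate is so hard to settle.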
Judging from ITCandor’s estimates of IBM’s mainframe financials, the argument for the mainframe seems compelling. With space, power and cooling problems in existing datacenters increasingly becoming a hot issue, CIOs are starting to rethink their server strategy. Distributed computing in private clouds may never come close to the utilization rate of a mainframe, they reason. With an increasing variety of workloads able to run on the mainframe, coupled with the ability to cover legacy systems, it also better meets the agility criteria that the enterprise requires. A combined scale-up/scale-out strategy could be a potential winner for large private cloud environments.
Is there a downside to the mainframe apart from the high initial investment and endless TCO discussions? Maybe not from a technical perspective, but there is one other major drawback to running mainframes tomorrow: the declining pool of people with mainframe knowledge and experience. Given the critical role the mainframe still plays in transaction processing, this is something to worry about. It is for this very reason that a lot of time at the CA conference was spent on CA’s Chorus, a product designed to harness the knowledge and routines of the mainframe and to increase the productivity of mainframe staff. It is typically aimed at mission-critical mainframe DB2 applications and is flexible and configurable.
Mainframe knowledge resides within a small group of hardcore ICT people, most of whom will retire over the next 10 years. Sure, mainframe knowledge is still available through companies like IBM, but we haven’t moved to Linux just to be locked in again by IBM on the skills issue. CA’s Chorus may just help to prevent that. As it was only released last year, it is still unclear whether it can really deliver on its promise. The METISfiles will keep track of it and keep you posted on its progress. If you want to add to this discussion, let us know.