When the Government Accountability Office (GAO) issued a report on the federal government’s information technology (IT) systems last year, it caused a lot of noise, much of it warranted. Committee-room props insinuated that the Department of Defense (DoD) should question its reliance on 8-inch floppy disks for critical systems. No one would disagree with that example.
At the House Oversight and Government Reform hearing on this issue last May, DoD chief information officer Terry Halvorsen thoughtfully responded that the department uses what works within its available budget. Still, some government technology leaders exploit these highly publicized examples to advocate a “rip, rewrite, and replace” strategy for federal IT. The idea is fueled by the misconception that a “cloud first” path is the Holy Grail of modernization.
As the chief executive officer of a U.S.-based, non-outsourced software company, and with nearly 30 years of experience at major IT companies, I’ve seen how blanket replacement strategies and unproven new technologies have fared. They often end in disaster.
No one likes to air their dirty laundry, especially when it comes to mismanagement of taxpayer dollars. But what we rarely hear is that many of these “rip, rewrite, and replace” projects never come to fruition, run over budget, or even revert to the original platform. If you don’t believe me, ask taxpayers in Pennsylvania.
So as we start down the path of modernizing federal IT systems, it’s worth reviewing what has worked and what hasn’t in the real world. In many cases, the public sector has much to learn from the ingenuity of the private sector.
Cloud mania
Last month there was a major outage in Amazon Web Services’ S3 cloud storage solution. It affected everyone from large companies like Apple to small businesses. While Amazon’s track record is excellent overall, this is still a prime example of the risk of outsourcing certain infrastructure to an external provider. At the end of the day, the people who depend on your products and services aren’t going to remember the Amazon outage; they are going to remember your outage.
Any internet-based organization relies on third parties, so it may seem there’s no way to avoid such dependencies. The key is getting specific — knowing when to trust externally consumed, virtualized cloud services and when to depend on your own tried-and-true infrastructure.
One example: large private-sector organizations with the same “can’t fail” mandates as government long ago determined that no cloud solution used for systems of record can approach the rock-solid reliability, performance, security, and total cost of ownership of a post-modern mainframe.
Then there’s the cost factor. Many large corporations using mainframes have learned the hard way that a wholesale move to the cloud or other commodity distributed servers ultimately proves to be more expensive. Virtualized x86 environments are prone to sprawl and demand constant attention — and constant spending.
Rubin Worldwide’s studies have shown that costs at server-intensive IT shops are 65 percent higher than those at mainframe-intensive shops, and that mainframe companies earn 28 percent more profit per IT dollar spent than server-centric companies.
Two platforms
What is working well is a hybrid, two-platform approach that integrates the post-modern mainframe and the cloud. Here, powerful and cost-efficient systems of record — like the mainframe — remain in your data center to fuel the mission-critical and competitively differentiating side of the IT picture. The approach prudently leverages the cloud for common applications and for non-mainframe infrastructure as a service: workloads that are essential, though not uniquely mission-critical.
In the case of the government, for example, consuming human resources administration as a cloud service makes sense. However, there is no U.S. federal tax system that can be consumed as a service from a public cloud provider. The only code base that does this work is under the exclusive stewardship of the federal government. And this core system of record runs, and should rightfully remain, on the mainframe.
This requires a fiscally responsible strategy of modernizing systems on the mainframe — speeding development to keep pace with the demands of a mobile, digital economy — rather than moving off to a lesser-performing, more costly platform. Such pragmatic modernization takes effort, but it yields tremendous advantages.
Data security
Data privacy, underscored by recent cyberattacks, is a major concern and a national security issue. The dollars spent trying to protect on-premises distributed systems and cloud services from attack keep growing. Meanwhile, a well-managed and well-maintained mainframe is the most secure computing infrastructure available, requiring far less protection from outside attack. IBM research has found that the post-modern mainframe requires 69 percent less effort to secure than other systems.
Legacy thinking
As new budgets and priorities are proposed, we should understand that the real problem isn’t legacy technology but a pattern of legacy thinking: hot new technologies seduce us into believing that certain tried systems are no longer trustworthy or flexible enough to meet tomorrow’s needs, and that some Holy Grail exists to fix all federal IT challenges.
That is true of certain technologies, but not of all. It’s time to discern, and to be specific about what actually needs to be modernized. What we’ve seen in the private sector offers a clue. Otherwise, we could very well waste time, effort, and taxpayer dollars chasing the next big thing when modernizing the original is the most effective and efficient option.
Chris O’Malley is chief executive officer of Compuware, an American software company with products aimed at the information technology departments of large businesses.
The views expressed by contributors are their own and are not the views of The Hill.