The age of waterfall development and long drawn-out software projects is in decline. Businesses need continuous delivery of new functions, and this requires a far more streamlined approach.
The path to continuous delivery
What is it that a company wants from its IT capability? High availability? Fast performance? The latest technology?
Hardly. Although these may be artefacts of the technical platform that is implemented, what the company actually wants is a platform that adequately supports its business objectives. The purpose of the business is to be successful - this means that its processes need to be effective and efficient. Technology is merely what makes this possible.
The problem has been that, historically, the process has been 'owned' by the application. The business had to fit its processes to the application: flexibility was not easy. In the good times (remember those?), poorly performing processes could be hidden - profit was still being made, even if more could have been made with optimised processes. As the bad times hit and customer expectations changed to reflect what people saw on their consumer technology platforms, poorly running processes became more visible, and business people started to realise that things had to change.
In came the agile business - change had to be embraced and flexibility became king. Such agility was fed through into IT via Agile project management - applications became aggregations of chunks of function which could be developed and deployed in weeks rather than months. However, something was still not quite right.
Continuous delivery from the business angle needs small, incremental changes to be delivered on a regular basis. Agile IT aims to do the same, but there are often problems at the development-to-operations handover. Sure, everything has been tested in the test environment; sure, operations understand how to implement those changes in the run-time environment. Yet according to a survey by VersionOne, 85% of respondents stated that they had encountered failures in Agile projects - a main reason given was that the company culture was at odds with an Agile approach. No matter how agile the project methodology itself became, the impact of the wrong culture was far-reaching: without changes in other areas of the business and in the tooling used, the Agile process hit too many obstacles and the whole system would short-circuit.
DevOps has been touted as the best way to remove these problems - yet in many organisations where Quocirca has seen DevOps embraced, the problems remain, or have simply morphed into different, equally difficult ones.
The problems lie in many areas. Some vendors have been re-defining DevOps to fit their existing portfolios to the market; some users have been looking to DevOps as a silver bullet that will solve all their time-to-capability problems without any change in thinking at the technical or business level. Nevertheless, DevOps is becoming a core part of organisations' IT: research by CA found that 70% of organisations see a strong need for such an approach, with customer/end-user experience and dealing with mobility seen as major business drivers.
Even at the basic level of DevOps creating a slicker process of getting new code into the production environment, there is a need to review existing processes and put in place the right checks and balances so that downstream negative impacts are minimised. At the higher end of DevOps, where leading-edge companies are seeing it as a means of accelerating innovation and transformation, a whole new mind-set that crosses over the chasm between the business and IT is required - the very chasm that was stated as the main reason for failure of Agile projects by VersionOne.
Traditional server virtualisation offers some help here, speeding up tasks such as image deployment. However, it only solves part of the issue: if development is still air-locked in its own environment, the new image still requires testing in the production environment before being made live. This not only takes time; the image will also behave differently, because it is running against different data. Proving the system in production is not the same as testing it in development: problems will still occur, and the resulting iterations will slow down any chance of continuous delivery.
The issue is in the provisioning of data to the test systems. Short sprint cycles, fast provisioning and tear-down of environments, and a successful Agile culture all require on-demand, near-live data for images to run against. This is the major bottleneck to successful Agile and DevOps activities. Only through the use of full, live data sets can real feedback be gained and the loop between development and operations be fully closed. DevOps then becomes a core part of continuous delivery: IT becomes a major business enablement function.
Today's solution - taking snapshots of production databases - is problematic: each copy takes up a large amount of space, which can mean that subsets end up being used in the hope that the data chosen is representative of the whole. Provisioning takes a long time through an overly manual process, and the typical end result is that dev/test data is days, weeks or even months old.
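The storage arithmetic behind this problem is easy to sketch. As a toy illustration (the figures below are hypothetical, not drawn from any vendor's data), compare giving each dev/test environment a full physical copy of a production database with copy-on-write "virtual" copies that share unchanged blocks and store only each environment's changes:

```python
# Toy illustration with hypothetical numbers: storage cost of full
# physical snapshot copies versus copy-on-write virtual copies.

PROD_DB_GB = 1000    # size of the production database (hypothetical)
ENVIRONMENTS = 5     # dev/test environments needing a copy
CHANGE_RATE = 0.02   # fraction of data each environment actually modifies

# Traditional approach: every environment gets a full physical copy.
full_copy_storage = ENVIRONMENTS * PROD_DB_GB

# Copy-on-write approach: one shared base image plus per-environment deltas.
cow_storage = PROD_DB_GB + ENVIRONMENTS * PROD_DB_GB * CHANGE_RATE

print(f"Full copies:    {full_copy_storage:,.0f} GB")
print(f"Virtual copies: {cow_storage:,.0f} GB")
```

With these assumed figures, five full copies consume 5,000 GB, while five virtual copies sharing one base image consume around 1,100 GB - which is why subsetting is so often resorted to when only full copies are available.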
DevOps requires a new type of virtualisation, going beyond the server and physical storage down into the data itself. Delphix, a US start-up, has created an interesting new technology that I believe could finally unlock the real potential of Agile and DevOps: data virtualisation. More on this in a later post - but Delphix is worth looking into as a company.
Disclosure: Delphix is a Quocirca client.