Database Continuous Delivery Meets Your Application

Summary:
Continuous delivery meshes well with agile development: both address the need to move faster and cope with ever-changing requirements, delivering the best possible quality, usually with limited resources. Agility is what is expected of technology companies and IT divisions. So, what does it take to bring continuous delivery to your database?

Research by Perforce Software reveals interesting information about the popularity of continuous delivery (CD). An infographic based on the company's findings from a Q4 2013 survey shows that 65 percent of responding companies are in the process of adopting continuous delivery for their projects. Even more interesting, 46 percent of the organizations are worried that their competitors are using CD to get ahead.

This is exactly why agile development was born: the need to move faster and cope with ever-changing requirements while delivering the best possible quality, usually with limited resources. Waiting six months for the next rollout or release is not acceptable in today's market; the waterfall methodology's big-release concept doesn't cut it anymore. Agility is what is expected of technology companies and IT divisions. So, what does it take to achieve CD?

The Two Basic Truths

Achieving CD relies on two simple truths. The first is automation: automate every step, from building and testing to promoting, deploying, and documenting.

The second truth, which is a prerequisite for automation, is version control. All changes must be documented and stored in one place, available to the automation processes. Those processes retrieve information only from this vault and from no other location. Anything that was checked into the vault can be promoted, provided it has passed all the steps. A minimal sketch of how the two truths combine appears below.
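
As a rough illustration of both truths working together, the hypothetical Python script below drives every pipeline step from a single version-control checkout. The repository URL, tag name, and step scripts (build.sh, test.sh, deploy.sh) are assumptions made up for this sketch, not part of any specific product.

```python
import subprocess
import sys
import tempfile

# Hypothetical pipeline: every artifact comes from one version-control
# checkout (the "vault"); no step reads from any other location.
REPO_URL = "https://example.com/scm/app.git"  # assumed repository URL


def run(cmd, cwd):
    """Run one pipeline step; abort the promotion on the first failure."""
    print(f"-> {' '.join(cmd)}")
    subprocess.run(cmd, cwd=cwd, check=True)


def promote(tag: str) -> None:
    """Build, test, and deploy exactly what was checked in under `tag`."""
    with tempfile.TemporaryDirectory() as workdir:
        # Truth #2: retrieve everything from version control, nothing else.
        run(["git", "clone", "--branch", tag, "--depth", "1",
             REPO_URL, workdir], cwd=".")
        # Truth #1: automate every step, in order, with no manual hand-offs.
        run(["./build.sh"], cwd=workdir)   # assumed build script
        run(["./test.sh"], cwd=workdir)    # assumed test script
        run(["./deploy.sh"], cwd=workdir)  # assumed deploy script


if __name__ == "__main__":
    promote(sys.argv[1] if len(sys.argv) > 1 else "release-1.0")
```

Because the working directory is created fresh from a tagged revision, whatever reaches production is, by construction, whatever was checked in and passed the automated steps.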

With this in mind, we have to find a way to version-control all the application components, which can be native code (C#, C++, Java), database code (schema structure, PL/SQL or T-SQL, application parameters stored in lookup tables), or configuration files.

The Missing Piece in Version Control

There are several types of version control solutions. The most common is file-based: all the artifacts are files, whether simple text, XML, or even images.

The second type of version control is application-dependent, as with Informatica: all changes made within Informatica can be versioned within Informatica. Related but external changes, such as database procedures, are out of scope for Informatica's version control.

Including the database in application version control is not new. Until recently, the available solution was to use file-based version control tools to manage scripts that alter the database objects. The main challenge with this method is that the application is tested against a real database environment while the changes being promoted are taken from the file-based version control repository, so the two can be out of sync.
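
To make that out-of-sync risk concrete, here is a minimal sketch of the file-based approach: numbered SQL change scripts kept in the repository and applied in order, with applied versions recorded in a tracking table. The directory layout, file names, and schema_version table are hypothetical, and SQLite stands in for a real database. Note that nothing here detects a change made directly in the database, which is exactly how the repository and the real environment drift apart.

```python
import pathlib
import sqlite3

# Hypothetical file-based approach: change scripts such as
# migrations/001_create_orders.sql and migrations/002_add_status_column.sql
# live in the version control repository and are applied in order.
MIGRATIONS_DIR = pathlib.Path("migrations")  # assumed layout


def apply_pending(conn: sqlite3.Connection) -> None:
    """Apply, in name order, every script not yet recorded as applied."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_version (script TEXT PRIMARY KEY)"
    )
    applied = {row[0] for row in conn.execute("SELECT script FROM schema_version")}
    for script in sorted(MIGRATIONS_DIR.glob("*.sql")):
        if script.name in applied:
            continue  # already promoted to this environment
        conn.executescript(script.read_text())
        conn.execute("INSERT INTO schema_version VALUES (?)", (script.name,))
        conn.commit()
        print(f"applied {script.name}")
    # Caveat: a hotfix applied directly to the database never appears in
    # schema_version or in the repository, so the two silently diverge.


if __name__ == "__main__":
    apply_pending(sqlite3.connect("app.db"))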

[Figure: Database development and file-based version control processes]


User Comments

Damon Jebb

I must be missing something here; the best way to deliver any change, whether code or database structure, must surely be to develop and test the change before applying it, with appropriate testing, to QA, preprod, and prod. This is the build-once, deploy-many approach that you suggest is a cause of problems with databases; I would suggest that the issue with database management is not this process but the failure to achieve it.

Your solution, allowing changes to be made in production and then back-ported to lower environments, is designed to subvert the process and results in what is, to my mind, a fundamentally flawed approach to the delivery of change. How does this process ensure that the changes developed and tested in the dev/int/QA environments *are* the code that eventually gets to production? How does it ensure that the changes made in production are properly recorded in source control? Are the delta scripts for the environments built from source control or by comparing the state of the 'source' database with that of the target database?

The problem here is that we are stuck in the mindset that the DBA is allowed to make ad hoc changes to the production system, with those changes manually hand-crafted in the system. We would never allow a developer to do that to a piece of Java code running in the app server, so why on earth is it acceptable to do so in the database? The process seems to actively encourage 'out of process' changes rather than providing a clean and quick solution that would allow all changes to be 'in process'. A clean, repeatable, reliable change process used to deliver *all* change has been proven as the route to reliable, high-quality software delivery for almost all other systems, and it is surely the thing to aspire to for database systems too.

September 25, 2014 - 5:29am

About the author

Uri Margalit

Uri Margalit is the Director of Product Management at dbMaestro LTD (www.dbMaestro.com). dbMaestro is a software company developing a line of products that help organizations enforce change management policies on database development. Prior to joining dbMaestro, Uri Margalit led the development of a line of products at Precise LTD.
