Over the last few months at Clearent we’ve been rolling out new features like crazy, and have launched a handful of exciting new products to our online payment gateway. But with the pace at which we’re developing new technology, we’ve realized one of our biggest pain points: our release process. We often look to industry leaders for guidance, and one of the software development leaders we look to is Martin Fowler. His favorite soundbite is “if it hurts, do it more often.” The idea is that doing painful things more frequently eventually makes them less painful. What we needed was a way to reduce the pain associated with releases, so we’ve made big strides toward implementing continuous delivery for our development team.
What is continuous delivery?
As we’ve been building out our online payment gateway, we’ve been talking about this topic a lot. The basic concept is that every time a developer commits a new feature to our codebase, we turn that commit into a deliverable product. To get there, we want to automate the whole process: building our code, running our tests, staging our environments, and promoting our products to our higher environments.
What does that mean for developers integrating with Clearent? Many of us have worked in shops where releases were a tortuous process that could take weeks or even months. As our online payment gateway continues to grow, we wanted to avoid this pitfall and instead make our releases so painless that they become a non-event. The easier it is for us to “ship it!”, the faster we can deliver the new features and products developers need for integrated payments.
How we do it for our online payment gateway development
One of the keys to continuous delivery is automating as much of the process as possible. Without automation, you’re reliant on humans, who have limited time and are error-prone. The base of our build pipeline is Jenkins, a continuous integration server. We use a number of plug-ins, like the Job DSL Plug-in, to create consistent and highly modifiable build processes. Our team mostly uses Java, so we use Maven to manage our build life cycle. One of our biggest hurdles is maintaining our internal dependencies; we rely on the Maven Reactor to ensure we have all of the components we need to build our projects. As a final step, we build a Docker image for our service, which gives us consistent behavior across all of our environments.
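As a rough sketch, a Dockerfile for a Maven-built Java service might look like the following (the base image and artifact name here are illustrative assumptions, not our actual configuration):

```dockerfile
# Start from a minimal Java runtime image (base image is an assumption)
FROM openjdk:8-jre-alpine

# Copy the jar produced by the Maven build (artifact name is hypothetical)
COPY target/gateway-service.jar /app/gateway-service.jar

# Launch the service the same way in every environment
ENTRYPOINT ["java", "-jar", "/app/gateway-service.jar"]
```

Because the image bundles the runtime alongside the service, the same artifact behaves consistently from dev through production.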
We built a deployment pipeline in Jenkins that contains all of the steps necessary to promote our builds, and looks a little like this:
- Some steps are fully automated, while others allow manual control. Our Git repository detects changes to our codebase and kicks off the ‘build’ step, which first triggers our dependencies to be built fresh, ensuring we have the components we need.
- Our ‘build’ step then runs our unit tests, analyzes our code, and gives us quality metrics. If the build is good, we then trigger the ‘deploy’ step to create a Docker image and deploy it to our dev environment.
- Once this is done, our integration tests are kicked off. These have been externalized in order to test not only our service but also our Docker configuration. Once these have passed, we can be reasonably assured that our code is healthy.
- The pipeline then pauses so we can do code reviews and address any testing concerns. Once we’re satisfied with the code, we approve the build and promote it to our Q/A environment, where further testing can happen before the build is finally promoted to our higher environments.
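The pipeline above can be sketched with the Job DSL Plug-in. The job names, repository URL, and deploy script below are hypothetical placeholders rather than our actual setup:

```groovy
// 'build' step: compile, run unit tests, and collect quality metrics
job('gateway-build') {
    scm {
        git('https://git.example.com/gateway.git') // hypothetical repo URL
    }
    triggers {
        scm('H/5 * * * *') // poll the repository for new commits
    }
    steps {
        maven('clean verify') // build the code and run the unit tests
    }
    publishers {
        downstream('gateway-deploy-dev', 'SUCCESS') // trigger deploy on success
    }
}

// 'deploy' step: build the Docker image and deploy it to dev,
// then hand off to the externalized integration tests
job('gateway-deploy-dev') {
    steps {
        shell('docker build -t gateway:$BUILD_NUMBER .')
        shell('./deploy.sh dev') // hypothetical deploy script
    }
    publishers {
        downstream('gateway-integration-tests', 'SUCCESS')
    }
}
```

The later promotion steps stay manual: a person approves the reviewed build before a similar downstream job pushes it to Q/A.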
Continuous delivery is a hard problem to solve. There are a lot of concerns to address and tools to learn, and it can be complicated to implement efficiently. Because of that, it’s still rare to see companies doing it well. As our build process continually evolves, we gain a competitive advantage: we can bring new features and products to market quickly. I hope these insights help inspire enhancements to your own build process.