The productivity of software developers at Griffin Group Global is my top priority. Software development is full of challenges that curb that productivity. At Griffin Group Global, I'm constantly working to meet and overcome those challenges, empowering my developers to produce high-quality software, rapidly. My top goal is to be a high-performing software organization!
The challenges we face are numerous. I ask myself these questions all the time:
- How do I know we’re developing quality software?
- How do I know my people are not becoming burned out?
- How fast can I deliver improvements to my clients?
As Griffin rises to these challenges, we set our foundation on some great advice from Gene Kim, author of The Phoenix Project:
- Use systems thinking
- Amplify feedback loops
- Create a culture of continual experimentation & learning.
Each of these points drives me toward a solid, ever-evolving solution to each challenge. Let's explore those challenges!
How do I know we’re developing quality software?
Plenty of organizations start their DevOps pipelines with a single CD (Continuous Delivery) tool. At Griffin, we start with the developers themselves. With high-profile security breaches such as the Homebrew unauthorized access, the npm package compromise, and Snapchat's leaked source code, securing the supply chain is always at the forefront of our development.
We protect ourselves from accidental disclosure of credentials that might lead to breaches. Credentials, passwords, certificates, and tokens are shared only through approved vault technology. This ensures that developers do not use mediums like chat, email, source repositories, Docker containers, or other means to share credentials. Credentials are always injected into the environment where they are needed.
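In application code, injection-only credentials look roughly like the sketch below: secrets are read from the environment populated by the vault integration, and the application fails fast rather than falling back to anything checked in. The variable names and error handling here are illustrative assumptions, not Griffin's actual configuration.

```python
import os

def get_database_credentials():
    """Fetch credentials injected into the environment by the vault
    integration. Variable names are hypothetical examples."""
    user = os.environ.get("DB_USER")
    password = os.environ.get("DB_PASSWORD")
    if user is None or password is None:
        # Fail fast: never fall back to a hard-coded or checked-in secret.
        raise RuntimeError("database credentials were not injected")
    return user, password
```

Failing fast when the injection is missing keeps a misconfigured environment from silently running with stale or hard-coded secrets.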
We protect our supply chain with RBAC (Role-Based Access Control) on our development repositories. This ensures that only the correct people have access. However, having access does not mean you are authorized. We employ commit hooks to validate that you are authorized to commit to the software: if you don't have a valid development ticket, that commit gets rejected! Every commit is scrutinized before we allow it into the software. We lock down access to specific branches, ensuring changes get pushed only after peer review approval and CD platform approval.
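A ticket-validating commit hook can be sketched in a few lines. The ticket pattern and the hook wiring below (a `commit-msg` style hook that receives the message file path) are assumptions for illustration; a real setup would also verify the ticket exists and is open in the tracker.

```python
import re
import sys

# Hypothetical ticket pattern, e.g. "PROJ-123"; the real project key differs.
TICKET_PATTERN = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

def commit_is_authorized(message: str) -> bool:
    """Reject any commit whose message does not reference a development ticket."""
    return bool(TICKET_PATTERN.search(message))

if __name__ == "__main__" and len(sys.argv) > 1:
    # Git passes the path of the file containing the commit message.
    with open(sys.argv[1]) as f:
        if not commit_is_authorized(f.read()):
            sys.stderr.write("commit rejected: no development ticket referenced\n")
            sys.exit(1)
```

Running the same check server-side (in a pre-receive hook) ensures it cannot be bypassed by a developer who skips local hooks.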
The CD platform must approve of every commit. To achieve this, we use a delivery pipeline that takes the software through several stages.
- Dependency checking
  - This step checks all libraries against available versions, taking a semver-based approach, and lets us know if any dependency is out of date.
- Vulnerability scanning
  - This step scans all software and libraries for known software vulnerabilities and remediates them.
- Linting
  - This step applies our development best practices to the checked-in source code, assuring we are not implementing questionable code.
  - We also apply linting to our documentation to ensure consistency across generated docs.
- Build
  - This step builds the software and creates a publishable artifact.
- Unit testing
  - This step checks the built code against the unit tests and then applies quality metrics, ensuring we cover branches, lines, and functions at a minimum. Our test methodology is based on the practical test pyramid.
- Documentation generation
  - This step generates documentation used by the development staff.
  - Documentation is generated and continuously deployed using downstream triggers for deployment.
- Release candidate publishing
  - This step publishes the artifact to a repository where it can be picked up for further validation.
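The stage ordering above can be sketched as a fail-fast sequence: the first failing stage rejects the commit and names itself, so the developer gets targeted feedback immediately. The stage names come from the list above; the check implementations are placeholder assumptions.

```python
def run_pipeline(commit, stages):
    """Run each stage in order; the first failure rejects the commit,
    giving the developer feedback as early as possible."""
    for name, check in stages:
        if not check(commit):
            return (False, name)  # fail fast, reporting the offending stage
    return (True, None)

# Placeholder checks keyed off a dict describing the commit under test.
stages = [
    ("dependency-check", lambda c: c.get("deps_current", False)),
    ("vulnerability-scan", lambda c: not c.get("known_vulns", True)),
    ("lint", lambda c: c.get("lint_clean", False)),
    ("build", lambda c: c.get("builds", False)),
    ("unit-test", lambda c: c.get("tests_pass", False)),
    ("docs", lambda c: True),               # doc generation rarely gates
    ("publish-candidate", lambda c: True),
]
```

Ordering cheap checks first (dependencies, scans, lint) before the build and test stages keeps the feedback loop as short as possible.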
The goal of these steps is to provide extensive feedback about the quality of the software to the developer in just a minute or two. This allows them to evaluate and refactor while they are thinking about the problem at hand, not at some point later, after they have moved on and switched tasks.
We employ downstream triggers from our CD tool that perform integration and performance testing on our release candidates. This type of feedback can take hours. Concentrating on the steps leading to a successful commit provides the fastest productivity with the lowest concentration of defects.
How do I know my people are not becoming burned out?
A quick, naive measure is to look at hours worked. But that is not the standard I measure by, since our people regulate themselves well. Rather, the standards I look at are job satisfaction and low frustration.
I attempt to keep job satisfaction high through constant learning, giving each developer the opportunity to explore new topics related to our problem set. The development team is encouraged to share with each other through semi-regular meetups.
I pay close attention to frustration levels. This is measured with a couple of metrics.
- Change failure rate
  - Percentage of changes that require refactoring
- Deployment pain
  - How hard is the application to deploy for a developer to support testing? To support defect investigation?
  - I measure this in the number of manual steps that must be performed and in the amount of time it takes to stand up. If either is too great, it leads to large amounts of frustration.
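A two-threshold check captures the measurement described above. The thresholds here (three manual steps, ten minutes to stand up) are hypothetical examples, not Griffin's actual limits.

```python
def deployment_is_painful(manual_steps: int, standup_minutes: float,
                          max_steps: int = 3, max_minutes: float = 10.0) -> bool:
    """Flag a developer deployment as a likely source of frustration when
    either the manual-step count or the stand-up time exceeds its threshold.
    Thresholds are illustrative defaults."""
    return manual_steps > max_steps or standup_minutes > max_minutes
```

Tracking this per application over time shows whether deployment tooling improvements are actually reducing friction.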
Another tool I use against burnout is choosing technologies that provide choice. For instance, our CD tools are plugin-based and use containers. This prevents me from being locked into a specific DSL (Domain Specific Language) for solving a problem, and ultimately allows me to empower the development staff with the best available technology we are comfortable with.
How fast can I deliver improvements to my clients?
Delivery to clients is the ultimate metric by which to gauge performance, assuming we have incorporated the processes described above. We measure this across several metrics.
- MTTR (Mean Time To Recover)
  - How long does it take to remediate a problem due to unplanned downtime?
  - Our goal is less than an hour.
- Development cycle time
  - How long does it take a developer to produce a release candidate, ready for production?
  - Our primary goal is less than an hour, with no more than a couple of days when extensive regression and performance testing are required.
- Deployment time
  - How long does it take to deploy a capability?
  - Our goal is minutes, repeated several times a day.
- Change failure rate
  - What percentage of changes lead to degraded service (performance, functionality, etc.)?
  - Our goal is 0-5%.
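Two of the metrics above can be computed directly from operational records. The record shapes in this sketch (incident start/recovery hour pairs, change dicts with a degraded-service flag) are assumptions for illustration.

```python
from statistics import mean

def mttr_hours(incidents):
    """Mean time to recover, from (started_hour, recovered_hour) pairs."""
    return mean(end - start for start, end in incidents)

def change_failure_rate(changes):
    """Percentage of changes that led to degraded service."""
    failures = sum(1 for c in changes if c["degraded_service"])
    return 100.0 * failures / len(changes)
```

Feeding these from the incident tracker and deployment log turns the goals above (MTTR under an hour, failure rate in the 0-5% band) into dashboards rather than estimates.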
These metrics provide Griffin with the data necessary to drive improvement through our standard feedback loops, ultimately delivering high value to our clients.