Quality automation
Quality in development is closely related to the recently hyped notion of software craftsmanship: it’s about caring about details big and small, making well-thought-out tech choices, writing code that follows best practices and optimizes for maintainability, and making sure we don’t waste time working on useless things or doing manually what could be automated…
Everybody wants it, (almost) everybody pretends they do it… but in reality, we’re often far off the mark.
Improving software quality relies on two core pillars. You need:
- devs with the right mindset, and
- automation of as many tedious facets as you can, so devs can focus on where they add value.
How do you improve the mindset?
Creating a craft culture (one that treats software engineering as a craft, and therefore aims for high-quality outcomes developers can take legitimate pride in) is neither easy nor quick. It builds on a combination of training and experience.
Training should therefore make sure that learning a technology never sacrifices quality; au contraire, it should emphasize best practices and how easy they are to implement. At Delicious Insights, we’ve always been adamant about offering training curricula that banish the all-too-common, depressing “so, you wouldn’t do that in production, but…”, and make a point of demonstrating that you can implement critical best practices (e.g. relevant, performant automated tests) with very little code, hence in a pretty short time.
In order to capitalize on production experience, you have to propagate best practices ASAP into your project’s codebase and project management (for instance, by making quality use of issues and pull/merge requests in GitHub or GitLab). Devs are humans after all, so they learn a lot by imitation: by looking every day at quality code, clean patterns and thoughtful, constructive code reviews, they progressively converge towards the same quality standard in their own work.
It so happens that by automating as much as possible of both producing and verifying quality, we remove a lot of “noise” from code reviews (first and foremost the infamous bikeshedding), which frees everyone involved to focus on high-value review topics: architecture, tech patterns, performance, established best practices, etc.
What can we automate?
A ton of things, both at the deliverable level and at the quality-control level. Be careful to prioritize what is tedious to do by hand; everyone is a bit lazy, devs included (and they should have better things to allocate their time to).
Here is a partial list:
- For code (and content in a wider sense), we can perform static analysis to verify it has valid syntax, follows best practices (both generic and house rules), and is free of well-known pitfalls. This is mostly the job of linters. We can also automatically format code according to shared rules (ideally closely aligned with industry norms). All of this can run in the editor (on the fly or at save time) and be applied automatically at commit or push time (see the linting and formatting sketch after this list).
- In order to optimize the readability (and therefore the usefulness) of the version control history, we can enforce well-established conventions for writing commit messages and naming branches (see the commit message sketch after this list).
- We can also automatically run tailored verifications on the contents of a commit in progress (e.g. to avoid letting dangerous or obsolete stuff in, or inadvertently trimming the test suite); see the staged-content check sketch after this list.
- Automated tests can be run automatically on push, on all or some branches, and gatekeep merging into the main branch based on their successful completion or even their coverage ratio, which can for instance be required not to regress or to stay above a minimum threshold (see the coverage gate sketch after this list).
- That kind of continuous integration (CI) can drive continuous delivery (CD) both for staging (with either a single environment or automated per-feature environments) and production, improving the reliability of version publishing and deployment.
- Drawing from code and configuration, many deliverables can be automatically generated, keeping them up-to-date: interactive technical documentation, style guides and visual component libraries, type definitions, API clients, publishing to registries…
- Given appropriate tooling, automated cross-references between issues, code reviews and commits let issues move automatically across feature boards (e.g. Kanban boards) as they get worked on, all the way to their eventual closure.
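To make the linting and formatting bullet concrete: on a JavaScript/TypeScript stack (a purely illustrative choice, as are the ESLint, Prettier and lint-staged tools below), a common setup runs linters and formatters on staged files only, from a Git pre-commit hook. A minimal sketch of such a configuration:

```ts
// lint-staged.config.mjs: run linters and formatters on staged files only.
// Tool choices (ESLint, Prettier, lint-staged) are illustrative, not prescriptive.
export default {
  // Auto-fix lint issues, then reformat, for staged JS/TS sources.
  '*.{js,jsx,ts,tsx}': ['eslint --fix', 'prettier --write'],
  // Other content types only get reformatted.
  '*.{json,md,css,scss}': ['prettier --write'],
};
```

Wired to a pre-commit hook (via husky or a plain Git hook script), this ensures nothing unlinted or unformatted ever reaches the repository, without slowing the editor down.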
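For commit message conventions, Conventional Commits is a widespread choice, and a tool such as commitlint (an assumption, not a requirement) can enforce it from a commit-msg hook. A minimal sketch:

```ts
// commitlint.config.mjs: enforce Conventional Commits on every commit message.
export default {
  // Community preset: "type(scope): subject", e.g. "feat(cart): handle empty carts".
  extends: ['@commitlint/config-conventional'],
  rules: {
    // Example house rule: keep the header a bit tighter than the default limit.
    'header-max-length': [2, 'always', 72],
  },
};
```

Branch naming can be checked the same way, for instance with a small script run in a hook or in CI.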
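The tailored checks on a commit’s contents are usually just a small script run from the same pre-commit hook. A hedged sketch in TypeScript (the forbidden patterns and the way you run the script are examples to adapt to your own house rules):

```ts
// check-staged.ts: reject commits whose staged diff adds risky content.
// Run it from a pre-commit hook (with tsx, ts-node, or after compiling; your call).
import { execSync } from 'node:child_process';

// Patterns we never want to land on a shared branch (examples only).
const FORBIDDEN: Array<[RegExp, string]> = [
  [/\b(?:describe|it|test)\.only\(/, 'a focused test (.only) would silently trim the suite'],
  [/\bconsole\.log\(/, 'leftover console.log'],
  [/\bdebugger\b/, 'leftover debugger statement'],
];

// Only inspect lines *added* by the staged changes.
const stagedDiff = execSync('git diff --cached --unified=0', { encoding: 'utf8' });
const addedLines = stagedDiff
  .split('\n')
  .filter((line) => line.startsWith('+') && !line.startsWith('+++'));

const problems = addedLines.flatMap((line) =>
  FORBIDDEN.filter(([pattern]) => pattern.test(line)).map(
    ([, reason]) => `${reason}: ${line.slice(1).trim()}`
  )
);

if (problems.length > 0) {
  console.error(['Commit rejected:', ...problems.map((p) => `  - ${p}`)].join('\n'));
  process.exit(1);
}
```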
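As for gatekeeping merges on test results and coverage, most test runners let you fail the run below a coverage floor, which CI then turns into a blocked merge. With Jest, for instance (assuming a Jest-based suite):

```ts
// jest.config.ts: fail the test run (and therefore the CI job) below 90% coverage.
import type { Config } from 'jest';

const config: Config = {
  collectCoverage: true,
  coverageThreshold: {
    // Global floor; you can also set per-directory or per-file thresholds.
    global: {
      statements: 90,
      branches: 90,
      functions: 90,
      lines: 90,
    },
  },
};

export default config;
```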
What can we measure and track?
Without measurements, it is hard to know whether there’s actual progress, to determine whether our new policy had positive or negative effects, or simply to verify the outcome is good enough considering the resource investment it required.
Most automations have a measurable outcome, which yields numerous metrics you can draw from to build the KPIs (Key Performance Indicators) most relevant to you. Here are a few ideas:
- Automated changes at the commit level (especially reformattings) can be measured for volume. Their dwindling over time is a good sign that they are being applied earlier in the dev process (e.g. on the fly in editors), or even that devs have internalized them enough to write things in these styles directly.
- Many automated complexity metrics exist (e.g. cyclomatic complexity), with tooling available for most popular languages. We can automate that measurement on every code review and track it over time at various levels of aggregation (see the complexity rule sketch after this list).
- Automated tests’ success and coverage ratios can be tracked (and gatekeep code review acceptance); they should tend towards 100%, with 90% as a minimum acceptable value for coverage.
- When devs use normalized messages and descriptions for commits, issues and code reviews, this opens the door to automated lifecycle analysis for the contents of your project (or even multiple linked projects). You can thus measure the velocity of your team, from the specification phase all the way to production deployment, at many levels of granularity.
- Performance is easily measured, both on the back end and on the front end (e.g. Core Web Vitals), and can be tracked over time, just like build artifact sizes (JS bundles, CSS, images, videos, binaries, Docker images…); see the Web Vitals sketch after this list.
- Errors happening at runtime anywhere in the code (back and front) should be systematically reported to the team so they get addressed, which lets you track their frequency, resolution rate, recurrence rate, etc. (see the error reporting sketch after this list).
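For complexity, a linter already provides a cheap, always-on gate. ESLint (again an illustrative tool choice) ships a built-in complexity rule that caps cyclomatic complexity per function:

```ts
// eslint.config.mjs (flat config): cap cyclomatic complexity per function.
export default [
  {
    files: ['**/*.{js,jsx,ts,tsx}'],
    rules: {
      // 10 is a common threshold; tune it to your codebase and track violations over time.
      complexity: ['error', { max: 10 }],
    },
  },
];
```

Dedicated tools can produce richer scorings (maintainability index, cognitive complexity) that you can graph per code review and per module.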
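On the front-end side, the web-vitals library (one collector among others) reports Core Web Vitals from real user sessions; you can then ship them to whatever endpoint feeds your dashboards. A minimal sketch, where /vitals is a hypothetical endpoint:

```ts
// vitals.ts: report Core Web Vitals from real user sessions to a collection endpoint.
import { onCLS, onINP, onLCP } from 'web-vitals';

// `/vitals` is a hypothetical endpoint: replace it with your analytics backend.
function report(metric: { name: string; value: number; id: string }) {
  const body = JSON.stringify({ name: metric.name, value: metric.value, id: metric.id });
  // sendBeacon survives page unloads; fall back to fetch when unavailable.
  if (!navigator.sendBeacon?.('/vitals', body)) {
    fetch('/vitals', { method: 'POST', body, keepalive: true });
  }
}

onCLS(report);
onINP(report);
onLCP(report);
```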
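Runtime error reporting is typically delegated to a dedicated tracker (Sentry, Bugsnag, or a home-grown collector); even without one, two browser hooks already catch most front-end failures. A minimal sketch, where /errors is a hypothetical endpoint:

```ts
// error-reporting.ts: forward uncaught front-end errors to the team.
// `/errors` is a hypothetical collection endpoint (swap in your error tracker if you use one).
function notify(payload: Record<string, unknown>) {
  navigator.sendBeacon('/errors', JSON.stringify({ ...payload, url: location.href, at: Date.now() }));
}

// Uncaught exceptions.
window.addEventListener('error', (event) => {
  notify({ kind: 'error', message: event.message, source: event.filename, line: event.lineno });
});

// Promise rejections nobody handled.
window.addEventListener('unhandledrejection', (event) => {
  notify({ kind: 'unhandledrejection', reason: String(event.reason) });
});
```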
How can we help you?
For starters, we define together what your priorities and quantified objectives are, along with a rough budget (in time or cost, usually), which can be split across several tranches. Then, to do a good job of it, we’ll need to immerse ourselves in your activity: business context, teams, processes, tooling, legacy constraints, etc.
As a result of that, we’ll come up with numerous recommendations across all segments: tooling; techs, frameworks and libraries; best practices and conventions; etc. If you’d like us to, we can coach your teams in implementing these, and we can train them so they fully internalize all of it (at the very least, we can produce internal documentation for all these new practices).
Finally, we’ll help you put together automated reporting of the metrics that matter to you, so everyone can track progress resulting from all this, which helps keep everybody motivated!