A survey on release processes with organisations that 'self-identify' as following agile practices.

Earlier this year I ran a survey around ‘release processes’, specifically with companies that follow agile practices. I promised that I would publish the results, and apologies for the delay; I have collated the responses below. Thanks to everyone who participated, I really appreciate your time and energy.
At iflix we're always looking to improve our release processes, so it was really great to get so many good responses. We learnt we're not that far apart from our peers out there. We've got a lot to learn of course; some of that comes from hearing from others, and some from the mistakes we make. The trick is to make your mistakes small ones and learn fast, with minimal disruption to your customers.
This survey was run at least 5 months ago, so it’s a snapshot in time.  Most places I have worked change their processes (inspect and adapt!). I've reproduced the responses in their entirety, so it is a bit of a long read. I’ve anonymised the responses somewhat, retaining the business segment and org size. It might be fun to guess who said what!
The statement below started the survey, with a note saying I would publish the results at some point (which is now):
"Context: We’re a young organisation with a growing number of teams. In the past, we’ve handled production releases very ad-hoc. Teams are able to make multiple releases per day on our web app. Our mobile apps have a different constraint (app store review processes etc). We’re concerned with some quality issues that could have been prevented and are looking at ways to improve our practices and processes.
We are very keen to maintain flexibility in our releases, but are interested in what other agile teams are doing."
Q1: Where do you work?
  1. Org A: Large Broadcast Media Company (UK Based)
  2. Org B: Mid sized print media (Multi-region)
  3. Org C: Large Services/Product Company (India/Global)
  4. Org D: Large Financial (Australia)
  5. Org E: Mid sized SAAS (Australia)
  6. Org F: Large Govt Service (Australia)
  7. Org G: Start up - SAAS (US)
  8. Org H: Mid Size - SAAS (Multi region)
  9. Org I: Mid size Desktop/SAAS (Australia)
Q2: How often do you do a release to production?
Org A: When a feature is ready, ideally monthly to also push out low priority bug fixes.
Org B: At least once daily
Org C: Bi-Weekly
Org D: It depends. Fortnightly for the area I'm in.
Org E: Depends on the product - best: multiple times daily; worst (desktop): every 2-3 months
Org F: Monthly
Org G: As often as engineers want, after a feature is done. Typically multiple times a day.
Org H: A very wide question. The core app goes monthly; smaller apps go ‘when the feature is ready’. Mobile apps go through the app store/play review process, which introduces some uncertainty. Core is 4-weekly, faster things are 5-10 days, and some things are every 6-8 weeks.
Org I: Deployment multiple times a day, release to the world every two weeks, on average. We manage tens of products and we manage each of them differently, depending on a few things: language/technology, age of the code, dependencies, team maturity, and other groups in the company who may have workload dependencies (Marketing, Sales, Partner Managers, call centres etc).
The following is what we strive to do; for the most part we manage this, and in other cases one or more of the aforementioned constraints means we adapt and evolve. We split deployment and release with feature toggles: we try to deploy to production often, with feature toggles off or only turned on for testers. The testers then test in production, and we expand the scope of the feature toggle for wider testing, then a pilot group, then the world.
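To make that deploy/release split concrete, here is a rough sketch of how a toggle check might widen from testers to a pilot group to everyone. This is my own illustration, not Org I's actual code; every name in it (Stage, TOGGLES, is_enabled and so on) is invented for the example.

from enum import Enum

class Stage(Enum):
    OFF = 0        # deployed to production, but the feature stays dark
    TESTERS = 1    # only the test group sees it (testing in production)
    PILOT = 2      # testers plus a small pilot group of real users
    EVERYONE = 3   # released to the world

# In a real system this state would live in a central toggle service, not in code.
TOGGLES = {"new_checkout": Stage.TESTERS}
TESTER_IDS = {"qa-1", "qa-2"}
PILOT_IDS = {"user-42", "user-99"}

def is_enabled(feature: str, user_id: str) -> bool:
    """Decide whether this user should see the feature at its current stage."""
    stage = TOGGLES.get(feature, Stage.OFF)
    if stage is Stage.EVERYONE:
        return True
    if stage is Stage.PILOT:
        return user_id in TESTER_IDS or user_id in PILOT_IDS
    if stage is Stage.TESTERS:
        return user_id in TESTER_IDS
    return False

# Application code checks the toggle, so deploying is decoupled from releasing.
if is_enabled("new_checkout", "qa-1"):
    print("show the new checkout")   # only testers see this until the stage widens

The point is that shipping the build and exposing the feature become two separate decisions: the stage can widen, or be pulled back, without another deployment.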
Q3: Do you use a release train method, where you have a regular fixed date for releases? If yes, what is your schedule?
Org A: No, but never on a Friday
Org B: For monolith, daily. For microservices component, on demand.
Org C: Biweekly
Org D: Yes. Fortnightly component releases lining up to quarterly enterprise releases
Org E: Nope
Org F: Operate in a SAFe model, but do not follow train release dates
Org G: No, we don't. We use tests (on multiple environments) as a deciding factor if a feature is good for release.
Org H: Core teams do: whatever is ready on code freeze day goes out. Most others use ‘feature ready’ as the trigger.
Org I: No, OMG No, OMDoubleGod No!
Q4: Do you have ‘gates’ on access to production?
Org A: Yes, Manual QA in 3 environments, SIT, UAT and Pre-Prod
Org B: No. Just gating on the prod db.
Org C: No
Org D: Millions
Org E: If I understand this question right - teams that practice Continuous Delivery definitely don't; in other cases there are some people with more access than others.
Org F: Yes
Org G: Yeah. Gates on multiple fronts: Network access, role access (authentication and authorization). Engineers still can have access to a prod console though, but changes need to be done in person with another engineer.
Org H: Our core team has test phase gates. Most other teams do testing work. I’ve got the feeling that an in-team test phase is emerging as a problem: it’s not an impediment when the product is young, but it is now starting to hurt people.
Org I: We strive to get into the production environment as quickly as possible. Each developer has a production environment on their machine for unit and functional tests; from there it goes through a build pipeline: functional test -> integration test -> production with the feature toggle off.
We have a central console that manages turning toggles on, and who has access to do that per feature. We hand-built the console and toggle mechanism. If there is a big group of functionality being released, we have a release team who create a run book covering the coordination of the internal tech and internal business teams that need to do work to ensure the release hits the market correctly.
This run book covers things like: the call centre team has been informed of the new public feature and is trained on it; external marketing comms have been produced and will be released at the same time; the billing systems have been reconfigured; and the corporate website has the right material ready and a way of purchasing. The run book is a process to get all of that aligned and releasing at the same time. Most of the features we release take 10+ tech teams coordinating their effort, and then there will be 5+ external teams.
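As a purely hypothetical sketch of what a hand-built toggle console along those lines might look like: a central record of each toggle's audience plus who is allowed to change it. None of the names below come from the survey; they are made up for illustration.

from dataclasses import dataclass, field

@dataclass
class Toggle:
    name: str
    enabled_for: set = field(default_factory=set)  # audiences the feature is on for
    owners: set = field(default_factory=set)       # who may change this toggle

class ToggleConsole:
    """One central place that records toggle state and enforces who can flip it."""

    def __init__(self):
        self._toggles = {}

    def register(self, name, owners):
        self._toggles[name] = Toggle(name=name, owners=set(owners))

    def enable_for(self, name, audience, actor):
        toggle = self._toggles[name]
        if actor not in toggle.owners:
            raise PermissionError(f"{actor} may not change {name}")
        toggle.enabled_for.add(audience)

    def is_enabled(self, name, audience):
        toggle = self._toggles.get(name)
        return toggle is not None and audience in toggle.enabled_for

# Example: a release team widens a (made-up) feature from testers to a pilot group.
console = ToggleConsole()
console.register("partner-billing", owners={"release-team"})
console.enable_for("partner-billing", "testers", actor="release-team")
console.enable_for("partner-billing", "pilot-group", actor="release-team")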
Q5: Do you have a separate or ‘on-demand’ QA team doing testing on release candidates?
Org A: Dedicated QA Testers sat within the team
Org B: No
Org C: Yes
Org D: It depends. Not at component level. But yes at overall project level
Org E: Nope. You want to get as much confidence from your automated suite as possible.
Org F: The team who develop are the team who test the release candidate and are the same team who provide production support
Org G: No. Engineers write a lot of tests and we use that to give us feedback whether a version is good for prod
Org H: Both in team testing and a regression service.
Org I: No. All our teams are as cross-functional as possible – Dev, Test, Product, UX, DevOps – and we run weekly cross-functional standups that keep all the teams aligned on a release. Each release has a set of objectives and metrics that the release is striving to achieve, like “Reduce the number of SME debtor days by 10%”.

Q6: Do you do any manual smoke or exploratory testing before major releases?

Org A: Yes
Org B: We do it as we go, incremental
Org C: Yes
Org D: Yes
Org E: Teams test individually at their level; no release-level smoke or exploratory testing.
Org F: All hands on deck for app shakeout before any release.
Org G: No, we automated all the tests. However, we realize that we are missing some exploratory testing. I am thinking about creating a role and/or encouraging engineers to do more of that.
Org H: Yes always and everywhere (I think)
Org I: Yes, we smash every release, but strive to automate everything. If a test is done more than once it is automated; smoke tests are just a way of identifying something that should be automated. The team live by a set of principles and practices. One of the principles the delivery teams work by is “anything that is repeated is automated”. The practices: always build functionality as independent, re-usable services accessible via an API; always re-use or consume services via their APIs; always design services to support asynchronous interactions; everything that could be repeated is automated; everything must be built secure; everything must be tested; everything is built to be centrally monitored and audited; and everything is logged to a central service.
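As a small illustration of that “anything repeated is automated” principle (my example, not Org I's code), a manual smoke check like “is the site up, does the login page render, does the core API answer?” becomes a script that can run against any environment in the pipeline. The URLs and paths here are placeholders.

import sys
import urllib.request

BASE_URL = "https://staging.example.com"   # hypothetical environment under test

# Each entry is (path, expected HTTP status) for a check that used to be manual.
CHECKS = [
    ("/healthz", 200),         # service is up
    ("/login", 200),           # login page renders
    ("/api/v1/catalog", 200),  # core API answers
]

def smoke() -> int:
    """Run every check and return the number of failures."""
    failures = 0
    for path, expected in CHECKS:
        try:
            with urllib.request.urlopen(BASE_URL + path, timeout=10) as resp:
                ok = resp.status == expected
        except Exception as exc:           # network errors and non-2xx both count as failures
            ok = False
            print(f"ERROR {path}: {exc}")
        if ok:
            print(f"PASS {path}")
        else:
            failures += 1
            print(f"FAIL {path}")
    return failures

if __name__ == "__main__":
    sys.exit(1 if smoke() else 0)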
Q7: Do you use code freeze rules, as in, no changes to QA environments for X days, no changes to production whilst mobile apps are tested/in between releases?
Org A: Yes, the timeframe is dependent on how much time the QA needs in those environments
Org B: Hell NO
Org C: Code freeze in the month of December
Org D: Yes
Org E: Nope
Org F: Release candidate is on a separate branch to any additional work
Org G: Don’t do code freezes
Org H: Core formally does. Everyone else has degrees of this; it’s more of a convention around version control. It’s something the team is in charge of, not an outside group.
Org I: No – Develop hard and fast, relentlessly move forward!
Q8: If you do code freeze, how do you continue development while in code freeze? Feature branching? Other techniques?
Org A: Feature branching, and the dev team use a DEV environment
Org D: Branching
Org E: We use feature switches instead
Org F: Development can continue on a separate branch... when not helping test.
Org H: Yep. Each feature set team has their own QA environment, more or less. They converge on a pre-release environment (Ops1); I think Ops1 has a limited life and won’t be needed in 12-18 months. Other techniques: automated deployment, downtime windows and change tickets, logging changes, feature switches, and feature branching in some instances.
Q9: Do you have multiple environments set up for Dev, QA and Staging? If so, what are the advantages/disadvantages?
Org A: Yes, all environments are set up differently to test different scenarios. SIT: the first environment the QA will deploy to, testing the feature and associated areas of the product. UAT: an environment given to internal and external clients (inc. Product Owner) to check the feature is doing what is expected. Pre-Prod: the build is tested on hardware which is exactly the same as Production; exploratory testing is also performed in addition to feature tests.
Org B: Yes, just dev and prod. Don't really want to test in prod.
Org C: Yes
Org D: Yes. Not enough environments to satisfy the scale. But more environments isn't the best answer either.
Org E: Yes. Rightly-split environments are a big win, especially with mobile development (assuming product APIs are not developed by the same team), to manage upstream & downstream dependencies; they also support demos, UX research and feedback, and early access to clients.
Org F: Yes. Pro: Safe environment. Con: data setup and not same experience as prod
Org G: We have a staging and then a prod environment. We also have another one for sales/demo purposes. Depending on the feature, sometimes we have to spin up load test envs.
Org I: Yes, as described above, but I'm trying to get the company to a point where the products run in containers on dev machines; when that works and is checked into source control, it is pushed straight into production and we test there using user scoping on feature toggles. When I start a new job, that is going to be one of the mantras we have.
We unit test in container environments on our dev machines, every successful build deploys straight to production, and we only test in a production environment, so we release from an environment that has been tested. Deployments and releases are never tied together; they are separate activities. We accept a release may have bugs, but we can release so fast that we relentlessly move forward. You also need to build resilient systems to take this approach.

Q10: If you have frequent releases, what techniques do you use to handle this (i.e. feature switches)?

Org A: Feature switches
Org B: Feature toggles, rolling upgrades, roll forward, trunk based dev. Aiming to do blue green deployment in the future.
Org D: Toggles
Org E: Feature switches. Branching is a big no, as no one bothers ensuring the entire test suite runs on branches, leading to poor quality and large bug discovery later.
Org F: Separate branches, MVP feature delivery, feature toggling and a great Iteration manager/team lead :-)
Org G: Feature flags are used often, so the code can be landed and deployed to a small group of customers.
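To tie a few of these answers together (feature switches plus trunk-based development, as mentioned by Org B and Org E), unfinished work can be merged to trunk behind a switch instead of living on a long-lived branch, so the full test suite keeps running against it on every build. A minimal sketch, with names made up for the example:

# Minimal sketch: in-progress work merged to trunk but guarded by a switch, so
# every release ships from trunk and no long-lived feature branch is needed.
FEATURE_SWITCHES = {
    "new_search": False,   # merged to trunk, not yet released
    "dark_mode": True,     # fully released
}

def search(query: str) -> list[str]:
    if FEATURE_SWITCHES["new_search"]:
        return new_search_engine(query)   # in-progress path, off in production
    return legacy_search(query)           # current behaviour keeps shipping

def new_search_engine(query: str) -> list[str]:
    return [f"new:{query}"]

def legacy_search(query: str) -> list[str]:
    return [f"legacy:{query}"]

print(search("release trains"))   # legacy path until the switch is flipped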
Q11: Have you ever screwed up a release, and if so, what did you do? I asked this question later; here are some of the responses:
"Move forward – Try never to roll out a release.  But in all honesty, we screw up as much as anyone, and sometimes roll out release (I hate this!)"
Roll forward was a common response, with only a few people rolling back. In my experience this has definitely been the trend over the last few years. Years back, all release notes (when we did them) had to include detailed rollback plans before being approved; this has certainly changed.
I hope this helps you folks as much as it helped us. If anyone wants to share what they do at their org in the comments below that would be great.
