Software Development Checklists
The documents listed below contain checklists for various phases within the lifecycle of a project. While they do tend to correspond to specific, familiar project phases, this should not imply a particular favoured methodology. The point is that when any of these phases can be considered complete, these checklists may be useful to help ensure that nothing has been neglected.
The checklists are not intended to be a strict, prescriptive description of everything that must be done in the development lifecycle - they are a list of things we feel should at least be considered.
Main Processes:
- Project Inception
- Requirements
- Technical Specification
- Project Planning
- Development Process
- Deployment
Supporting Tasks:
The following tasks can occur at any time within the project lifecycle (none completed yet):
- Raising Bugs
- Change Management
- Estimation
- Peer Review
Checklist: Requirements
This is one of the main checklists. It provides a framework for making sure that nothing has been missed in the requirements. It should be referred to before requirements are signed off.
- Are requirements documented?
- Do we have stakeholder sign-off?
- Do we have UX buy-in?
- Who is the key audience for this work?
- Is priority clear?
- Have partners been identified?
- Are the Editorial Team's requirements included?
- Are accessibility requirements included?
- Are the requirements implementation agnostic?
- Have assumptions been documented?
- Are success criteria clear?
- Are acceptance tests documented?
- Are performance criteria clear?
- What are KPIs?
- Are reporting requirements - particularly Click-Through Tracking - included?
- Might these requirements have any effect on existing stats gathering?
- Can this be verified with our automated tests, or in some other way if not? (A minimal sketch follows this list.)
- Have we rejected the possibility of buying a solution?
- Are requirements sufficient for Technical Specification?
- Is UX work required prior to Technical Specification?
- Do we have a design document suitable for CSD work? (This possibly belongs in a different list.)
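Two of the items above - documented acceptance tests and automated verification - can be made concrete with a small sketch. The following is a minimal, hypothetical example only: the endpoint URL, query parameter, and response fields are invented for illustration and do not describe any real service.

```python
# A minimal, hypothetical acceptance-test sketch (Python standard library only).
# BASE_URL, the "q" parameter, and the "results" field are invented placeholders.
import json
import urllib.request

BASE_URL = "http://qa.example.internal/search"  # hypothetical QA endpoint

def test_search_returns_results():
    """Acceptance criterion: a query for a common term returns at least one result."""
    with urllib.request.urlopen(BASE_URL + "?q=news") as response:
        assert response.status == 200
        payload = json.load(response)
    assert len(payload.get("results", [])) >= 1

if __name__ == "__main__":
    test_search_returns_results()
    print("acceptance test passed")
```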
Checklist: Technical Specification
These are the things that should be considered whenever the Technical Specification for a project is considered finished (and possibly at other review points as well).
- Are language, platform and frameworks specified?
- Is there an infrastructure diagram?
- Is there an application architecture (UML-type) diagram?
- Are the Inputs and Outputs defined?
- What are the scalability characteristics?
- What are the performance criteria?
- Have KPIs been considered?
- Is instrumentation included? (A minimal sketch follows this list.)
- Is monitoring/alerting included?
- Do we need to include a proof of concept phase?
- Are external risks considered?
- What are dependencies on external teams?
- Which of our systems should this integrate with?
- What other BBC systems should we consider?
- Have we defined appropriate environments?
- Are differences between environments documented?
- Are we using appropriate standards?
- What libraries or frameworks should be used?
- Has hardware been specced?
- Has technical design been peer reviewed?
- Who needs to sign this off?
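As a rough illustration of the instrumentation item above, the following sketch times calls and logs a warning when a hypothetical threshold is breached. The threshold value and the use of plain logging (rather than a real monitoring/alerting pipeline) are assumptions made for the example.

```python
# A minimal instrumentation sketch (Python standard library only).
# The threshold and logger names are illustrative assumptions; a real system
# would feed a proper monitoring/alerting pipeline rather than a plain log.
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("instrumentation")

SLOW_THRESHOLD_SECONDS = 1.0  # hypothetical alerting threshold

def instrumented(func):
    """Record how long each call takes, and flag calls that breach the threshold."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed = time.monotonic() - start
            log.info("%s took %.3fs", func.__name__, elapsed)
            if elapsed > SLOW_THRESHOLD_SECONDS:
                log.warning("%s breached the %.1fs threshold", func.__name__, SLOW_THRESHOLD_SECONDS)
    return wrapper

@instrumented
def handle_request(query):
    time.sleep(0.1)  # stand-in for real work
    return {"query": query}

if __name__ == "__main__":
    handle_request("news")
```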
Checklist: Project Planning
These items should be considered toward the end of the project-planning process (the estimation process could usefully have its own "Task Checklist").
- Have key risks been identified?
- Have external dependencies been included?
- Are all people involved identified?
- Have all roles (whatever they are) been assigned?
- Is Technical Specification complete?
- If not,
- has time been allowed for completion?
- is it logged as a risk?
- has the associated risk around the delivery schedule been adequately communicated?
- Has potential scope reduction been considered in the plan?
- Are all requirements covered?
- Is the delivery or due date clear?
- Is the priority clear?
- Has testing been included?
- If so, will this result in a set of programmatic tests which can be run by QA?
- Has documentation been included?
- Has time for deployment been included?
- Have Ops been involved?
- Have absences been taken into account?
- Have estimates been adjusted for velocity? (A worked example follows this list.)
- Is there a Change Management process?
- Are there external events which will affect schedule?
- Have BAU tasks been taken into consideration?
- Have hardware leadtimes been taken into account?
- Has migration from old (or current) system been included?
- What is the interaction between development and Product Management, Editorial, and stakeholders?
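The velocity item above can be made concrete with a small worked example; all of the figures below are invented for illustration.

```python
# A toy illustration of velocity-adjusted estimates; every number is invented.
raw_estimate_days = 40   # sum of task estimates in ideal days
velocity = 0.6           # fraction of an ideal day actually delivered per calendar day
absence_days = 3         # planned holiday/absence within the window

adjusted = raw_estimate_days / velocity + absence_days
print(f"Plan for roughly {adjusted:.0f} calendar days, not {raw_estimate_days}.")
# -> Plan for roughly 70 calendar days, not 40.
```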
Checklist: Development Process
The Development Process is obviously one of the largest parts of the project cycle - essentially it runs from when code is first written until a release candidate is prepared for acceptance testing, and it may overlap with other phases. It is worth taking a look at this list at various points during development, although, as with the other checklists, it is really intended to be reviewed at the end.
- Should this be managed as a branch from the trunk?
- Is development being carried out against appropriate software versions?
- Has code been reviewed?
- Does code match existing standards?
- Has appropriate instrumentation been included in code?
- Are monitoring and alerting systems complete?
- Have tests been written?
- What is the test coverage? (A sketch follows this list.)
- Are tests automated?
- Have tests been run?
- Have tests been run with sufficient realistic data?
- Have any changes in Live Environment been merged into this release?
- Has application documentation been completed?
- Has user documentation been completed?
- Has trouble-shooting documentation been completed?
These can be done during Acceptance Testing; they should also be included in the Deployment (really, project completion) checklist:
- Is rollback process clear and well understood?
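As a sketch of what automated tests with measurable coverage might look like, the following uses a hypothetical helper function standing in for real application code; the trailing comment assumes pytest and coverage.py are installed.

```python
# A minimal automated-test sketch (pytest style); normalise_query is a
# hypothetical stand-in for real application code.
def normalise_query(raw):
    """Trim whitespace and lower-case a search query."""
    return raw.strip().lower()

def test_normalise_query_strips_and_lowercases():
    assert normalise_query("  News ") == "news"

def test_normalise_query_handles_empty_string():
    assert normalise_query("") == ""

# Assuming pytest and coverage.py are installed, coverage can be measured with:
#   coverage run -m pytest test_normalise.py && coverage report
```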
Checklist: Deployment
This checklist is somewhat different from the others included here. Whereas the others list things that should be remembered before a particular phase is considered complete, this is much more a step-by-step procedure for a deployment.
- Document this release (in Deployment Log, or FogBugz Fix-For, or...)
- Inform Ops & Fix Target Release date
- Agree Deployment strategy with Ops
The standard deployment strategy is to deploy to a single server taken out of load-balancing; this server is checked; assuming no problems are seen, deploy to all servers. (A sketch of this strategy follows the checklist.)
- Have key items to check been defined?
- Who will perform checks?
- Has rollback strategy been defined?
- If the deployment method deviates from this
- has the deployment method been documented and agreed with Ops?
- Prepare Release Candidate (code, and deployment method documentation)
- Check diff of Release Candidate against Live Code-base
- Does QA environment match live?
- Perform pre-requisites on QA (as defined in deployment method documentation)
- Deploy RC codebase to QA
- ACCEPTANCE TESTING & QA - Outcome is a QAed Release Candidate
- While QA is performed (possibly), prepare monitoring and alerting
Once QA is completed:
- Agree candidate deployment dates with Stakeholders (avoiding other deployments)
- Agree firm deployment date with Ops
- Disseminate deployment date as appropriate
- Should FM Ops be informed? (The presumption is not, but it should be considered.)
- Are Ops prepared to support new release?
- Is final load testing required?
- Create Deployment FogBugz case (basing information on what's needed for Change Control)
- Assign Deployment FogBugz case to Livesite Team
- Contact Livesite (in person or by phone)
- Shortly before deployment, Ops should send out Change Control
- Do any stakeholders or partners need to be informed?
- Perform deployment (assuming the standard strategy: deploy to one server, check, then deploy to all servers)
- Inform Search Team of deployment, and ensure release is checked
- Is Monitoring and Alerting on and working?
- Babysit application for a bedding-in period
- Inform stakeholders
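The standard strategy described earlier (deploy to one server taken out of load-balancing, check it, then roll out to the rest) might look something like the sketch below. The server names, load-balancer hook, deploy step, and health-check path are all invented placeholders, not real Ops tooling.

```python
# A hypothetical sketch of the standard rolling deployment described above.
# The load-balancer call, deploy command, server list, and health-check URL
# are invented placeholders.
import subprocess
import urllib.request

SERVERS = ["app1.example.internal", "app2.example.internal", "app3.example.internal"]
HEALTH_PATH = "/status"  # hypothetical "key items to check" endpoint

def set_in_load_balancer(server, enabled):
    """Placeholder: in reality this would drive the Ops load-balancer tooling."""
    print(f"{'enabling' if enabled else 'disabling'} {server} in load balancer")

def deploy(server):
    """Placeholder deploy step; a real one might push the release candidate."""
    subprocess.run(["echo", "deploying release candidate to", server], check=True)

def healthy(server):
    """Run the agreed checks against one server."""
    try:
        with urllib.request.urlopen(f"http://{server}{HEALTH_PATH}", timeout=5) as r:
            return r.status == 200
    except OSError:
        return False

def rolling_deploy():
    canary, rest = SERVERS[0], SERVERS[1:]
    set_in_load_balancer(canary, False)   # take one server out of rotation
    deploy(canary)
    if not healthy(canary):               # checks fail: stop and roll back
        raise SystemExit(f"{canary} failed checks; invoke the rollback strategy")
    set_in_load_balancer(canary, True)
    for server in rest:                   # no problems seen: deploy everywhere
        deploy(server)

if __name__ == "__main__":
    rolling_deploy()
```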
Search Team, BBC, 2008