A piggy bank of commands, fixes, succinct reviews, some mini articles and technical opinions from a (mostly) Perl developer.


Personal Software Development Checklists, 2008


These checklists are matched to milestones in the development process; the milestones themselves aren't strictly defined.


    • Requirements discussion: Are the requirements OK? Requirements checklist
    • Sprint planning / estimation: Are you planning and estimating well? Sprint planning and Estimation checklist
    • Adding an item to the backlog: No checklist; keep it as easy as possible to add ideas.
    • Construction: Are you designing and constructing well? Design checklist / Construction checklist / Database checklist / Monitoring checklist
    • Bug fixing/refactoring: Are you maintaining code well? Maintenance/Editing checklist
    • Deployment: Are you finishing up neatly? Agile checklist / I'm finished checklist / Deployment checklist / Documentation checklist

Requirements checklist

  • Have you removed any requirements that are not strictly necessary?
  • Have you removed any requirements which can be postponed until a second phase?
  • Have you made a decision on absolutely all of the open questions?
    Remember that if one person doesn't make the decision, someone else will be forced to.
    i.e. if Client doesn't decide, Engineer must. Or if Engineer doesn't decide, Client must.
SCRUM
  • Is the white card specified in the form "As an XXX, I want to YYY"? (This forces you to ask the question "When will this project really be considered 'done'?")
  • Have you identified the most risky requirements? (dependencies, unknown, etc).
  • Have you stated any assumptions explicitly?
Volere
  • Have you noted a rationale for the requirement?
  • Have you given the requirement a unique ID? (e.g. tracked in a ticketing system).
  • Have you noted who raised the requirement? (person's role, if not their name).
  • Have you written a fit criterion? (How will I know when I'm done?)
  • Have you noted any dependencies?
  • Have you recorded the history of the requirement (changes made)?

Sprint planning and Estimation checklist

This checklist should be used in Sprint Planning.
NOTE: As Sprint Planning is often the first time we hear of a requirement, also use the Requirements checklist here.
  • Have you created a task to clarify requirements further, if necessary?
  • Is the white card in the form of a story?
  • Are the requirements framed as a problem, or are they really a suggested solution?
Preparation
  • Have you made some rough technical designs and considered several solutions?
Estimating
  • Has enough research been done to give a good enough estimate?
  • Have you tried to counteract your usual estimation bias? (e.g. habitual under- or over-estimation).
  • Have you compared it to previously completed tasks? (Add or subtract time depending on the differences between this task and the known one).
  • Have you thought of the best case, and the worst case, and done the calculation? (best + 2*worst) / 3
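The best/worst-case calculation above can be sketched as a throwaway Perl helper (the weighting is exactly the formula in this checklist; the example inputs are made up):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Weighted estimate from the checklist: (best + 2*worst) / 3.
# Weighting the worst case twice counteracts habitual optimism.
sub weighted_estimate {
    my ($best, $worst) = @_;
    return ( $best + 2 * $worst ) / 3;
}

# e.g. best case 2 days, worst case 8 days
printf "%.1f days\n", weighted_estimate(2, 8);    # prints "6.0 days"
```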
Have you created tasks to:
  • write test cases before writing code (Test Driven Development)
  • model the environment
  • only write code to pass tests
  • test the solution end-to-end
  • get a Dev/Test environment set up that is identical to live (e.g. Forge / ExtDev)
  • QA (testing by someone else)
  • check that foreign characters are encoded/decoded correctly (tests)
  • build in monitoring
  • deploy to a production environment
  • write documentation
  • get the solution signed off
  • clarify the requirements?

Design checklist

This should be read before you have completed the design of the system.
  • Have you looked over the other Technical Specification checklist?
  • Have you asked around to see if anyone's solved the problem already?
Have you considered:
  • using any off-the-shelf solutions (e.g. FeedEngine, Cocoon, IDOL)
  • whether or not using a database would be beneficial?
  • the use of any design patterns?
  • using the most appropriate language for the job?
  • Have you identified any gaps in knowledge? (e.g. making an HTTP connection in C#)

Maintainability

  • Have you made sure that any data-driven parts of the system, especially, can be easily modified? 
    • e.g. Consider how much work is involved to add a new dataset to clickthrough tracking

Modularisation

  • Can each functional routine be called on its own from a stand-alone script?
  • Have you set out proper objects and their relationships?

On paper

  • Have you drawn a diagram of the information flow, at the very least? 
    • At most, draw as many UML diagrams as are useful
  • Have you drawn a class diagram to begin with?
  • Have you defined the major data structures?
  • Have you written a pseudo-configuration file?
  • Has the main routine been written in pseudo-code?
  • Has each sub-routine been written in pseudo-code? 
    • Has each routine been broken down in enough detail so that it almost seems simpler to just write the code instead?
  • Has care been taken to keep all the business logic separate from the internal logic?
  • Does the design extend far enough in each direction? 
    • i.e. are there defined mechanisms for acquiring input, and publishing output?
  • Does the design take account of any possible non-ASCII characters?
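The non-ASCII point above is cheap to verify early in Perl with the core Encode module. A minimal sketch (the string is illustrative) showing the decode-at-input, encode-at-output pattern:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Encode qw(encode decode);

# Decode external UTF-8 bytes into Perl characters at the input
# boundary, work with characters internally, encode again on output.
my $bytes = "caf\xc3\xa9";               # UTF-8 bytes for "café"
my $chars = decode( 'UTF-8', $bytes );   # 4 characters internally
my $out   = encode( 'UTF-8', $chars );   # back to 5 bytes for output

printf "%d chars, %d bytes\n", length($chars), length($out);  # 4 chars, 5 bytes
```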

Investigation

  • Have you fully investigated all of the options? (i.e. followed through on each design idea until you're sure it either will or won't work, possibly using prototypes)
  • Have you asked around about the various possible solutions? (engineers/leads from different teams, and people with experience of similar problems).
  • Have you highlighted any risk? (risk = unknowns)

Think about testing during design

  • Have you thought about how you would test the system? 
    • NB: a test consists of an input and an expected output.
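That definition of a test maps directly onto Perl's core Test::More: each `is()` is one input and one expected output (the `add` routine here is a hypothetical example, not from the original):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 2;

# Hypothetical routine under test.
sub add { my ( $x, $y ) = @_; return $x + $y; }

# Each test is an input plus an expected output.
is( add( 2, 3 ),  5, 'adds two positives' );
is( add( -1, 1 ), 0, 'handles a negative' );
```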

Agreement

  • Has the client explicitly agreed to the inputs they'll provide, and outputs they'll receive?

Construction checklist

  • Have you entered the RCS keywords into the file for Perforce? (replace with your own VCS)
# $File$  
# $Revision$  
# $Change$  
# $DateTime$  
# $Author$ 
  • Are you using Log4Perl?
  • Have you got all the modules/libraries you need?
  • Are you using strict/warnings?
  • Are you avoiding creating any unnecessary dependencies?
    (loose coupling)
  • Agree Upon a Coherent Layout Style and Automate It with perltidy (Damian Conway, Perl Best Practices)
  • Throw Exceptions Instead of Returning Special Values or Setting Flags (Damian Conway, Perl Best Practices)
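The Log4Perl point above can be sketched with the module's "easy mode" initialisation (a real project would normally load a separate log4perl.conf instead):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Log::Log4perl qw(:easy);

# Simplest possible setup; real projects usually call
# Log::Log4perl->init('log4perl.conf') with a config file instead.
Log::Log4perl->easy_init($INFO);
my $LOG = Log::Log4perl->get_logger();

$LOG->info('started');               # instead of print
$LOG->warn('low disk space');        # instead of warn
# $LOG->logdie('cannot continue');   # instead of die (logs, then dies)
```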

Database checklist

  • Have you drawn it out on paper first?
  • Have you used foreign key constraints for referential integrity?
  • Have you avoided optimising early?
From the perspective of data warehousing:
  • Have you run every query with EXPLAIN to check that the correct index is being used and it's as efficient as possible?
  • Have you run every query with small and large ranges of data? (different indexes get used under different circumstances).
  • Have you considered the effect of indexes (or lack of them) on INSERT, DELETE and UPDATE as well as SELECT?
  • Have you built a set of test queries representing the full range of data, in order to test performance?
  • Have you considered using memcache to improve query response time?
  • Have you compared more than one type of each query? (sub-queries, temporary tables, etc).
  • Have you considered keeping lookup data tables in the application memory rather than using expensive joins?
  • Have you considered redesigning the data gathering, loading and schema to facilitate more efficient queries?
  • Data warehousing specifically: Have you avoided ENUMs, allowing any value to be inserted? (new values can be added to lookup tables and given a description later).
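The in-memory lookup-table suggestion above is cheap in Perl: load the small dimension table into a hash once, then resolve IDs in the application instead of joining on every query. A sketch with made-up data (a real script would populate the hash from the database via DBI):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# A small lookup table cached in application memory. In a real script
# this hash would be populated once from the database, e.g. with
# DBI's selectall_arrayref('SELECT id, name FROM countries').
my %country_for = (
    1 => 'United Kingdom',
    2 => 'France',
);

# Resolve the foreign key in Perl instead of joining in SQL.
my @rows = ( { user => 'alice', country_id => 2 } );
for my $row (@rows) {
    $row->{country} = $country_for{ $row->{country_id} } // 'unknown';
}

print "$rows[0]{user}: $rows[0]{country}\n";    # alice: France
```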

Monitoring checklist

  • Have you considered using a framework that periodically publishes its status via HTTP? (e.g. BBC perforce //depot/osd/Importer)
  • Have you considered the need to write a CGI script to make the necessary checks and report a simple error code back for use in Nagios?
  • Are there fine-grained, customised, complex or non-standard requirements for monitoring? If so, consider using your own team's Nagios installation instead of Livesite's. 
    • Livesite like to send alerts to group email addresses, every hour, and use only simple regex matches. They are also geared towards critical systems that need to be supported 24/7.
Example of monitoring criteria:
For Search+ PAL/JBoss:
  • Load - warning at 15 load, critical at 30
  • Memory/Swap - warning at 70% used, critical at 85%
  • Diskspace - warning if any disk 80% full, critical at 90%
IDOLs (e.g. nolidol301):
IDOL [Collections2] - ACI STATUS     OK 07-02-2009 16:57:10 30d 1h 55m 42s    1/3 AUTONOMY OK
IDOL [Collections2] - INDEX QUEUE    OK 07-02-2009 16:59:58 1d 16h 41m 29s    1/3 DRE_INDEX_QUEUE OK
IDOL [Collections2] - PERFORMANCE    OK 07-02-2009 16:59:19 30d 1h 56m 3s     1/3 DRE_STATISTICS OK
IDOL [iPlayer]      - QUERY OK     07-02-2009 16:56:58 3d 17h 56m 11s    1/3 HTTP OK 
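A Nagios check script only needs to print one status line and exit with the standard plugin codes (0 OK, 1 WARNING, 2 CRITICAL). A minimal sketch using the load thresholds listed above (in a real plugin the load value would come from /proc/loadavg or uptime, not a literal):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Map a load average onto Nagios plugin exit codes, using the
# thresholds from this checklist: warning at 15, critical at 30.
sub load_status {
    my ($load) = @_;
    return ( 2, "CRITICAL - load $load" ) if $load >= 30;
    return ( 1, "WARNING - load $load" )  if $load >= 15;
    return ( 0, "OK - load $load" );
}

# In a real plugin, read the load from /proc/loadavg or `uptime`.
my ( $code, $message ) = load_status(17.5);
print "$message\n";    # a real plugin would then call: exit $code;
```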

Maintenance checklist

To cover any changes made to code or configuration: adding new features or debugging issues

  • have you sync'd from the repository? (checking first to see if there are any unsubmitted changes in your workspace)
  • have you re-read the design checklist with the proposed changes in mind?
  • is the system adequately documented?
  • have the requirements been specified in enough detail?
  • are you modifying the design, then implementing the design? (as opposed to modifying the code)

Agile checklist

To check that you're benefiting from agile methodologies:
  • Have you split up tasks into MMFs? (Minimum Marketable Features)
  • Have you deployed as early as possible? (As soon as there is a stable feature to show the client).

I'm finished checklist

  • Could you rename some variables or routines to make them clearer?
  • Could you refactor the code to make it clearer? 
    • put code into subroutines
    • clean up comments
  • Perl: Have all print, die and warn statements been replaced with $LOG->info, $LOG->logdie and $LOG->warn statements? (log4perl/log4j)
  • is all the code checked into the versioning system? 
    • are only the changes since the last release checked in and tagged with a new release number?
  • does the system satisfy the original requirements? 
    • How can you demonstrate it? (Check against original requirements)
  • has the system been tested? 
    • Has it passed an end-to-end test in the production environment, with realistic levels of traffic?
    • How confident are you it will hold up?
    • Have you tried to break it with bad input?
  • are all paths set to the live environment? (check with: grep -r your_username packagedir/)
  • has the system been deployed into a suitably monitored environment?
  • who will be notified of errors that occur during execution?
  • have you documented (from least technical to most technical): 
    • product details
    • user operation
    • support operation
    • flow of logic / requests / component interaction
    • database structure
    • data structures in the code
  • Create Standard POD Templates for Modules and Applications (Damian Conway, Perl Best Practices)
  • have you told everyone who needs to know about it?
  • have you considered the need to restrict access?
  • will you be able to tell who is using it / how they are using it?

Deployment checklist

General:
  • Has it been installed onto an empty VM and run successfully?
  • Have all the steps to get it running on the VM been documented for Ops?
  • Have you run unit tests?
  • Have you run functional test manifests?
  • Have you performed the set of sanity checks or manual checks?
  • Have you updated the package with any hot fixes you've quickly made to get it working for the first time?
BBC Livesite conventions:
  • Have you considered which data the app will need access to, and which modules it will need?
  • Have you consulted Livesite on any alternative architecture that might simplify their deployment and maintenance work?
  • Have you set the configuration to write data and logs to the /data partition?
  • Have you considered packaging as an RPM for ease of install and upgrade?
  • Have you considered writing an init script instead of a crontab entry?
  • Have you arranged for access to the server and application logs in order to resolve issues faster? (e.g. they could be made available on the web servers at /apache_logs/).
Afterwards:
  • Have you updated the deployment log on the wiki?
  • Have you updated the operational info on the wiki?

Documentation checklist

Template for documentation

Written with a wiki in mind.

For each product/project (landing page)

  • At which stage of development is it?
  • Summary: What is it? What does it do? Why do we need it? Who uses it?
  • Contacts: Who owns it? Who develops it? Who supports it?
  • Access: How do you access it? Do you need permission?
  • Context/Scope: What similar systems are there? Why weren't they used? What does it integrate with? Where is the system used?
  • Components: What are the different parts of the system? What do they do?
  • Architecture: Is there an overview diagram?
  • Troubleshooting: What might go wrong? What has gone wrong in the past? How does one fix it?

For each component (separate page)

  • Summary: What is it? What does it do?
  • Contacts: Who develops and supports it?
  • Operational: Where is it hosted? Who administers the servers?
  • Architecture: Is there a diagram of this component?

Component development information (separate page)

  • Where is the code stored?
  • How does one install it/set it up?
  • What is the workflow/dataflow? (pseudocode/flowcharts)
  • What are the features?
  • What are the known bugs?
  • What work is planned?

Working notes (separate page)

  • Anything your colleagues would need to know, should you win the lottery and never be seen again!
  • An area to save ongoing development notes to share with your colleagues

Will Sheppard, BBC, 2008