Friday, 10 May 2013

A SPA seed - a JavaScript stack with node and angular.

A Single Page Application built on node.js and AngularJS


I've been looking at creating a SPA with a full JavaScript stack, so I decided to pull together a seed based on node and angular: jshint to check all the .js files, mocha to run the node tests, karma to run the browser based angular tests, and cucumber for BDD (full stack testing/acceptance tests).

I did this because I could not find any examples of how to pull together angular and node in the same project, along with testing of everything. It's a good start, but until I use it in anger I won't really know if I've got it right, so when I do I will try to update it.

https://github.com/DamianStanger/NodejsAngularSPASeed

Details

Node

Node.js, npm, Angular, Karma, Mocha, PhantomJS, JSHint, jshintRunner

Ruby (1.9.2)

Ruby is needed for Cucumber, which is used for the full stack acceptance testing:
ruby 1.9.2, devKit, bundler, cucumber, capybara

Next

The readme.md file gives details on how to get it all running and the tests working. Just clone the repo and use it as a starting point for your next node SPA.
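
If you just want to try it out, the usual clone and install steps should be all you need. A sketch assuming the standard npm and bundler workflows (the readme.md is the definitive source):

git clone https://github.com/DamianStanger/NodejsAngularSPASeed.git
cd NodejsAngularSPASeed
npm install      # node dependencies: karma, mocha, jshint etc.
bundle install   # ruby gems: cucumber and capybara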

Wednesday, 8 May 2013

Developing node modules, npm and git: how to publish npm packages from a Windows machine whilst preserving Unix-style line endings

I recently had a problem whilst publishing a new node.js module I'd written, which is designed to run JSHint recursively against a number of directories and/or files.

I develop on Windows, so my line endings are DOS style (CRLF), but the file that runs your app, stored in the bin folder, needs Unix line endings (LF) to run on Mac or *nix systems.

I save code in git and GitHub with Unix line endings in the repository, but in my working directory the files have DOS line endings. So when I publish using 'npm publish', the file in my bin folder is published with DOS line endings.

This means that when you do an 'npm install -g' on a Mac or Linux box you get the error:
env: node\r: No such file or directory

To fix this I wrote a little batch script that I use to publish new versions. It's really simple and changes the line endings just before publishing:

dos2unix --d2u bin\jshintRunner
npm publish


It uses dos2unix (I'm running Windows 8) to change the line endings. I hope this helps someone else out with a similar issue.
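
An alternative would be to let npm run the conversion for you via a prepublish script in package.json, so the batch file can't be forgotten. A sketch mirroring the dos2unix command above (note that prepublish also runs on a plain npm install, which is harmless here since converting a file that already has LF endings is a no-op):

{
  "name": "jshintrunner",
  "bin": { "jshintRunner": "./bin/jshintRunner" },
  "scripts": {
    "prepublish": "dos2unix --d2u bin/jshintRunner"
  }
}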

Link to the project on github : https://github.com/DamianStanger/jshintRunner and on npm https://npmjs.org/package/jshintrunner

Friday, 26 April 2013

A node application to count lines within files

Node with Mocha, Should and Sinon to count file lines

I've recently had the requirement to count lines of source code in 3 or 4 different code bases: a couple of single page web apps written in JavaScript and AngularJS (tested with Karma), a couple of Java server side services, and an acceptance test suite, again written in Java, with Selenium.

I wanted to compare the code bases and look into the ratios of test code to production code, so we as a team could get a collective feel for the entire code base; to me this was a very valuable exercise.
I decided to write a command line app in node and JavaScript to count the lines of code in a code base, mainly because at the moment I'm trying to boost my JavaScript knowledge and I'm really interested in node. There is nothing better than a real requirement to spur you into action.
You can find the source code here: https://github.com/DamianStanger/lineCounter
I'm quite pleased with how it has turned out, but as is the way with every piece of software, I ran out of budget (free time) before completion. I would have liked to enhance it further, and will if I can find the time.

Enhancements:


  • Return a JSON string that has file and line counts for every directory in the codebase. This output could then be pushed into a d3 app to visualise the source code and the relative sizes; that would be cool.
  • Ability to customise the ignored files and directories.
  • Ability to hook into TeamCity; this would need a new output reporter creating so we could track the lines of code over time.

Learnings

  • I started off using Karma and Jasmine for running the tests but found that they were difficult to get to play well with the node modules I had created, so I switched to Mocha (http://visionmedia.github.io/mocha/). I'm glad I did, because I love it. I especially like the BDD style tests I can write, with many nested describes building up the test context (see the sketch below). I'm not sure how I'm going to cope going back to the flat structure of nUnit.
  • I started to use Should (https://npmjs.org/package/should) as the preferred mechanism for asserting. The fluent interface is really appealing; it's very similar to one I've been using in .net for a while now.
  • I've needed to do a bit of mocking in this project, and for that I found Sinon (http://sinonjs.org/). Very powerful and flexible, it's been capable of meeting all my stubbing and mocking needs so far. Bit of a learning curve, but it's all good.
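
To give a flavour of the combination, here is a minimal sketch of a nested-describe Mocha test using Should for the assertions and Sinon for the stubbing. The lineCounter names below are made up for illustration, not lifted from the real project:

var should = require('should');
var sinon = require('sinon');

describe('lineCounter', function () {
  describe('when given a single file', function () {
    it('counts the lines returned by the reader', function () {
      // stub out file system access so the test stays fast and isolated
      var reader = { read: sinon.stub().returns('one\ntwo\nthree') };
      var lineCount = reader.read().split('\n').length;
      reader.read.calledOnce.should.equal(true);
      lineCount.should.equal(3);
    });
  });
});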

Wednesday, 2 January 2013

node.js: an introduction and tutorial to JavaScript on the server

Presentation

I gave a presentation on node.js on my first day back from the Christmas holidays. It was fun: plenty of people in the room, all eager to learn the basics of node.

First came an overview of what node is, what it's good for, and an introduction to the event driven architecture behind it, followed by the classic fast food example and then a coding session.

The live coding session consisted of an introduction to the node REPL and some basic examples of JavaScript on the server, followed by a simple web server and some performance testing with apache bench (https://httpd.apache.org/docs/2.2/programs/ab.html).

To finish up I demoed a reverse proxy written in one line of node. It's amazing how much power this has, and even more amazing that you can write something like that in such a concise manner while keeping it understandable (read: maintainable).

I thought it went really well. I don't give many presentations, but when I do I like them to be good, relevant, interesting and entertaining (as far as a technical subject can be). Of course I was nervous, especially because I was videoing it; I wanted to actually see what I was like. It's the best way to improve: fast feedback, reflection and improvement.

Live coding is always dangerous but it all went remarkably well, although I did have a small hiccup in that I could not connect to the wireless network. That only impacted one of my examples, and I weathered the storm.

Video


So yes, I videoed it and have uploaded it to YouTube (http://youtu.be/vGBk8EB-Yz0). Check it out; I'd be really interested to hear your feedback on my presentation style and content.



Attached below are the code examples from the talk so you can test them out if you like.

Enjoy.

Code demo

REPL

1+2;
var add = function(a,b){return a+b};
add(3,4);

process
process.pid
process.env.Path

var fs = require('fs');
fs.readFile('foo.txt', 'utf8', function (err, data) {
  console.log(data);
});

foo bar code

setTimeout(function(){
  console.log("foo");
}, 2000);
console.log("bar");

setInterval(function(){
  console.log("bang!");
}, 1000);

hello world web server

var http = require('http');
var server = http.createServer(function (req, res) {
  res.end('Hello World\n');
});
server.listen(1337, '127.0.0.1');
console.log('Server running at http://127.0.0.1:1337/');

curl http://localhost:1337/
curl -i http://localhost:1337/

res.write('hello');
setTimeout(function() {
  res.end('World\n');
}, 6000);

The following needs apache bench installed and on your path
ab -n 10 -c 10 http://127.0.0.1:1337/

new file express.js

var express = require('express');
var app = express();
app.listen(8080);
app.get('/', function(req, res){
  console.log("get");
  res.send("Welcome to node!");
});

node package manager

npm install express

app.get('/foo/', function(req, res){
  console.log("getfoo");
  res.send("Welcome to foo!");
});

a one line reverse proxy...

Using request, a module to simplify the making of web requests:

var http = require("http"),
    request = require("request");
http.createServer(function (req, res) {
  console.log(req.url);
  req.pipe(request("http://www.xperthr.com/" + req.url)).pipe(res);
}).listen(1337);

Saturday, 6 October 2012

How to update/install Node on Ubuntu

I needed to upgrade node.js on my Ubuntu dev machine but could not find any good instructions on the internet; I tried several suggestions and finally got it working using an amalgamation of a few blogs.
My current system setup before the process:
ubuntu64:~$ which node
/usr/local/bin/node
ubuntu64:~$ node -v
v0.4.9

Firstly, make sure your system is up to date:
ubuntu64:~$ sudo apt-get update
ubuntu64:~$ sudo apt-get install git-core curl build-essential
            openssl libssl-dev

Then clone the Node.js repository from GitHub:
ubuntu64:~$ git clone https://github.com/joyent/node.git
ubuntu64:~$ cd node

I wanted the latest tagged version
ubuntu64:~/node$ git tag
....
....big list of all the tags
....
ubuntu64:~/node$ git checkout v0.9.2

Then I removed the old version of node
ubuntu64:~$ which node
/usr/local/bin/node
ubuntu64:~$ cd /usr/local/bin
ubuntu64:/usr/local/bin$ sudo rm node

Now to install the desired version, in my case v0.9.2:
ubuntu64:/usr/local/bin$ cd ~/node
ubuntu64:~/node$ ./configure
....
ubuntu64:~/node$ make
....
ubuntu64:~/node$ sudo make install
....

Then I had to run the following to update the profile
ubuntu64:~/node$ . ~/.profile
Finally, confirm that node has in fact been upgraded, and that npm has magically been installed too :-) bonus
ubuntu64:~/node$ which node
/usr/local/bin/node
ubuntu64:~/node$ node -v
v0.9.2
ubuntu64:~/node$ which npm
/usr/local/bin/npm
ubuntu64:~/node$ npm -v
1.1.61

Wednesday, 22 August 2012

Continuous Integration performance testing. An easily customisable solution.


Using JMeter to profile the performance of your web application and visualise performance trends, all within the CI pipeline.


The full solution as outlined here can be found on my GitHub repository at https://github.com/DamianStanger/CIPerformance

Introduction

Most companies care about the performance of their web sites and web apps, but testing that performance is often left until the last minute, in the hope that the devs will have been writing performant code for the last x months of development. I don't know why this is so often the way. If performance really is a major non-functional requirement (NFR) then you have to test it as you go; you can't leave it until just before go-live and then, when you find performance is not good enough, try to hack in quick fixes. You can't just hack in performance after the fact; doing it well can take a substantial change to the design.

On our team we have been performance profiling each important page of our app since month 1 of the development process; we are now live and working towards the 4th major release. The team and I have found our continuous performance testing invaluable. Here is the performance graph as it stood a few weeks ago:

The process outlined below is not a method for stress testing your app; it's not designed to calculate the load that can be applied, but to show the trend in the app's performance. Has a recent check-in caused the home page to perform like a dog? Any N+1 DB selects or recursive functions causing trouble? It's a method of getting quick feedback within the CI pipeline, minutes after a change is checked in.

The process

1. When we check in, our CI box (TeamCity) runs the build (including javascript tests, unit tests, integration tests, functional tests and acceptance tests); if all this is successful then the performance tests are kicked off.
2. Tear down the DB and restore a new copy, so we always have the same data for every run (this DB has a decent amount of data in it, simulating live in terms of volume and content).
3. Kick the web apps to prepare them for the performance tests; this ensures IIS has started up and the in-memory caches are primed.
4. Run the JMeter scripts.
a. There are numerous scripts which simulate the load generated by different categories of user. For example, a logged out user will have a different performance profile to a fully subscribed user.
b. We run all the scripts in serial as we want to see the performance profile of each type of user on each different site we run.
5. The results from each run are processed by a PowerShell script which extracts the data from the JMeter log files (jtl) and writes the results into a SQL Server database (DB). There is one record per page per test run.
6. We have a custom MVC app that pulls this data from the DB (using Dapper) and displays it to the team on a common monitor (using JSON and RGraph) that is always updating. We see instantly after we have checked in if we have affected performance, for good or bad. We could break the build if we wanted, but decided that was a step too far, as it can sometimes take a day or two to fix a poorly performing aspect of the site.

A stripped down version is available on my GitHub account. Run the PowerShell script a few times and then run the MVC app, and you should see something like the following:

The juicy bits (interesting bits of code and descriptions)

Powershell script (runTest.ps1)

• Calling out to JMeter from PowerShell on line 112:
& $jmeter -n -t $test_plan -l $test_results -j $test_log

• Parse JMeter results on line 133
[System.Xml.XmlDocument] $results = New-Object System.Xml.XmlDocument
$results.Load($file)
$samples = $results.SelectNodes("/testResults/httpSample | /testResults/sample/httpSample")


Then iterate over all the samples and record all the page times and errors.

• Write results to DB on line 171
$conn = New-Object System.Data.SqlClient.SqlConnection($connection_string)
$conn.Open()
foreach($pagestat in $page_statistics.GetEnumerator())
{
    $cmd = $conn.CreateCommand()
    $name = $pagestat.Name
    $stats = $pagestat.Value
    $cmd.CommandText = "INSERT Results VALUES ('$start_date_time', '$($name)',
    $($stats.AverageTime()), $($stats.Max()), $($stats.Min()), $($stats.NumberOfHits()),
    $($stats.NumberOfErrors()), $test_plan)"
    $cmd.ExecuteNonQuery()
}

JMeter scripts

You can look up JMeter yourself to find suitable examples. My project here just has a very simple demo script which hits Google and Bing over and over; you can replace this with any JMeter script you like. The DB and the web app are page and site agnostic, so it should be easy to swap in your own scripts, and it will pick up your data and just work.
I recommend testing all the critical pages in your app, but I find the graphs get too busy with more than 10 lines (pages) on them. If you want to test more, add more scripts and graphs rather than loading up one graph with lots of lines.
The generic solution given here has two scripts, but you can have as many as you like. Two would be a good choice if you had a public facing site and an editor/admin site, which would have different performance profiles and pages. In the end it's up to you to be creative in the use of your scripts and test what really needs testing.

The results DB

The DB is really simple. It consists of just one table, which stores a record per page per test run. The DB needs creating before you run the script for the first time; the file Database.sql will create it for you in SQL Server.
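
For reference, here is a sketch of what that single table must roughly look like; the column names and types are inferred from the INSERT and SELECT statements shown in this post, and Database.sql in the repo is the definitive version:

CREATE TABLE Results (
    RunDate datetime,        -- start time of the test run
    Url varchar(255),        -- page name taken from the JMeter sample
    AverageTime float,
    MaxTime int,
    MinTime int,
    NumberOfHits int,
    NumberOfErrors int,
    TestPlan varchar(255)    -- which JMeter script produced the record
)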

The MVC app

Data layer, Dapper

Using Dapper (a micro ORM installed through NuGet) to get the daily results is done in the ResultsRepository class:

var sqlConnection = new SqlConnection("Data Source=(local); Initial Catalog=PerformanceResults; Integrated Security=SSPI");
sqlConnection.Open();
var enumerable = sqlConnection.Query(@"
SELECT Url, AVG(AverageTime) As AverageTime, CAST(RunDate as date) as RunDate FROM Results
    WHERE TestPlan = @TestPlan
    GROUP BY CAST(RunDate as date), Url
    ORDER BY CAST(RunDate as date), Url", new { TestPlan = testPlan });
sqlConnection.Close();
return enumerable;

The view, JSON and RGraph

In this sample code there are four different graphs on the page: two for Google (test plan 1) and two for Bing (test plan 2). The heartbeat graphs show a data point for every performance run over the last two weeks, so you can see instantly if there has been a bad run. The daily average graphs show a data point per day for all the performance data in the DB.
There are four canvases containing the graphs, all drawn using RGraph from JSON data populated from the data pulled off the DB. It's the JavaScript function configureGraph that does this work with RGraph; for details of how to use RGraph see the appendix.
The JSON data is created from the model using LINQ in the view as such:
dailyData: [@String.Join(",", Model.Daily.Select(x => "[" + String.Join(",", x.Results.Select(y => y.AverageTimeSeconds).ToList()) + "]"))],

This will create something like the following, depending on the data in your DB:
dailyData: [[4.6,5.1],[1.9,2.2],[4.0,3.9],[9.0,9.0]],
where each inner array holds the data points of an individual line. So the data above describes four lines with two data points each.
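
The configureGraph function in the sample app is the definitive version, but the core RGraph pattern it wraps is only a few lines. A sketch, where the canvas id, title and ymax are illustrative values rather than ones taken from the repo:

var dailyData = [[4.6,5.1],[1.9,2.2],[4.0,3.9],[9.0,9.0]];
var line = new RGraph.Line('dailyAveragesCanvas', dailyData); // one inner array per line on the graph
line.Set('chart.title', 'Daily Averages');
line.Set('chart.ymax', 10); // tune per graph so the lines sit at a useful scale
line.Draw();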

Customisation

So you would like to customise this whole process for your own purposes? Here are the very simple steps:
  1. Edit the function CreateAllTestDefiniotions in RunTest.ps1 to add in any JMeter scripts that you want to run as new TestPlanDefinitions.
  2. Change or add to the JMeter scripts (.jmx) to exercise the sites and pages that you want to test.
  3. Add the plan definitions to the method CreateAllPlanDefinitions of the class PlanDefinition in the performance stats solution. This is all you need to edit for the web interface to display all your test plans. The graphs will automatically pick up the page names that have been put into the configured JMeter scripts.
  4. Optionally change the yMax of each graph so that you can more easily see the performance lines to a scale that suits your performance results.

Conclusion

We as a team have found this setup very useful. It has highlighted many issues to us, including n+1 select issues, Combres configuration problems, and any number of issues with business logic, usually involving enumerations or recursive functions.
When set up so that the page refreshes every minute it does a really good job, acting as a constant reminder to the team to do a good job with regard to the performance NFR.

A note on live performance/ stress testing

Live performance testing is a very different beast altogether; its objective is to see how the system as a whole reacts under stress, and to determine the maximum number of page requests that can be served simultaneously. The CI performance tests outlined above are different: they run on a dev box and are only useful as a relative measure, showing how page responsiveness changes as new functionality is added.

Appendix

JMeter - https://jmeter.apache.org/
Dapper – Installed from Nuget
RGraph - http://www.rgraph.net/
GitHub - https://github.com/DamianStanger/CIPerformance
VS2012 - https://www.microsoft.com/visualstudio/11/en-us

Tuesday, 15 May 2012

A PowerShell script to count your lines of source code

We have been thinking about code quality and metrics of late, and since I'm also learning more PowerShell, I decided to write a little script to help. It finds all the code files in the project directories and counts the lines and files:

Here it is:

$files = Get-ChildItem . -Recurse | `
    Where-Object {$_.Name -match "^.+\.cs$"}
$processedfiles = @();
$totalLines = 0;
foreach ($x in $files)
{
    $name = $x.Name;
    $lines = (Get-Content ($x.Fullname) | `
        Measure-Object -Line ).Lines;
    $object = New-Object Object;
    $object | Add-Member -MemberType noteproperty `
        -name Name -value $name;
    $object | Add-Member -MemberType noteproperty `
        -name Lines -value $lines;
    $processedfiles += $object;
    $totalLines += $lines;
}
$processedfiles | Where-Object {$_.Lines -gt 100} | `
    Sort-Object -Property Lines -Descending
Write-Host ... ... ... ... ...
Write-Host Total Lines $totalLines In Files $processedfiles.count


The Get-ChildItem call collects all the .cs files from the current working folder and below.
The Measure-Object cmdlet gets the number of lines in the file currently being processed.
New-Object creates an object, and the two Add-Member calls dynamically add properties to it for the file name and the line count.
Each new object is appended to the array of processed files.
Finally, Where-Object selects all the files from the array whose line count is greater than 100 (an arbitrary amount; I only care about files longer than roughly 2 screens worth of text), and they are printed in descending order of line count.

My results:
Our current project has a total of 154068 lines of code in .cs files:
2559 .cs files, of which 312 have a line count greater than 100 lines.
16 files are over 400 lines in length, but none of those are in the main product (all the worst classes are test classes and helpers, which are not production code).

I also wondered about the state of my views:
320 .cshtml files with a total of 10958 lines; the vast majority are less than 100 lines and only 6 are over 150.

Tuesday, 17 January 2012

Linq performance problems with deferred execution causing multiple selects against the DB


We have some really good performance tests that run on every checkin, giving the team an excellent view of how the performance of the software changes as the code base changes. We recently saw a drop in performance and tracked it down to a problem in our data layer.

The problem we encountered was within LINQ to SQL, but it can bite in other flavours of LINQ too if you're not careful.

Personally I consider LINQ to SQL to be dangerous for a number of reasons and would prefer not to be using it, but we are where we are, and as a team we just need to be wary of LINQ to SQL and its quirks.

The quirk here is deferred execution of a LINQ to SQL enumeration causing multiple selects against the DB.

As this code demonstrates:

public IList<IndustrySector> GetIndustrySectorsByArticleId(int articleId)
{
  var industrySectorsIds = GetIndustrySectorIds(articleId);
  return ByIds(industrySectorsIds);
}

private IEnumerable<int> GetIndustrySectorIds(int articleId)
{
  var articleIndustrySectorsDaos = databaseManager.DataContext.ArticleIndustrySectorDaos.Where(x => x.ArticleID == articleId);
  return articleIndustrySectorsDaos.Select(x => x.IndustrySectorID);
}

public IList<IndustrySector> ByIds(IEnumerable<int> industrySectorIds)
{
  return All().Where(i => industrySectorIds.Contains(i.Key)).Select(x => x.Value).ToList();
}


public IEnumerable<IndustrySector> All()
{
  //work out all the industry sectors valid for this user in the system, this doesn't make a DB call
}

So in the end this causes a number of identical queries to be fired against the DB; because the enumerable is deferred, Contains() re-executes the query every time it is called, giving one SELECT for each item that All() yields.
This is the select we were seeing:

exec sp_executesql N'SELECT [t0].[IndustrySectorID]
FROM [dbo].[tlnk_Article_IndustrySector] AS [t0]
WHERE [t0].[ArticleID] = @p0',N'@p0 int',@p0=107348

Forcing the ByIds() method to retrieve all the ids from the DB before iterating All()
means that they are loaded into memory once only.

public IList<IndustrySector> ByIds(IEnumerable<int> industrySectorIds)
{
  var sectorIds = industrySectorIds.ToList();
  return All().Where(i => sectorIds.Contains(i.Key)).Select(x => x.Value).ToList();
}

Now you only get one call to the DB. Thanks LINQ to SQL, you're great.

Monday, 7 November 2011

Installing node.js on windows 7 machine

I've recently been looking for instructions on how to install node.js (the JavaScript-powered server platform) on Windows, and found some quite confusing answers involving cygwin, building things from source, etc.
And yes, it is possible to install node on Windows 7. In fact, it's really, really easy.

1. Download node.exe from http://nodejs.org/dist/v0.6.0/node.exe (link available from http://nodejs.org)
2. Create a sample node app:

var http = require('http');
http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end('Hello World\n');
}).listen(1337, "127.0.0.1");
console.log('Server running at http://127.0.0.1:1337/');


Save it in a file 'example.js' in the same folder as node.exe (anywhere on your system), then open a command prompt in that folder.

Run > node example.js
And that's it. Now open your browser and go to http://127.0.0.1:1337/

Hello World

How easy was that?

Thursday, 20 October 2011

Kanban inspired card wall. Our example

Overview
Some call them 'card walls', some 'story walls', others just 'the wall'. There are many names for the wall, but its purpose is the same: to convey project information to the team and other interested parties/stakeholders. I work on a software development team, but Kanban boards can be applied to any process, from manufacturing to household to-do lists; honestly, the uses are as wide ranging as your imagination. This article, though, concentrates on software development.

Some teams practice 'scrum' and put tasks on the wall; some do XP and run on weekly/2 weekly/4 weekly iterations. We are running in a kanban mode at the moment, applying Work In Progress (WIP) limits as best we can.

I've worked on many different teams, with many different types of wall/board, from scrum to XP, from big walls to tiny boards. I think the Kanban inspired wall we are currently running with is the best I've had the good fortune to use. So I thought I'd share; I'll try to run you through it as best I can.

The wall
We are fortunate enough to have a very big space for our card wall (3-4 metres wide); I have been on teams that had to manage with a small whiteboard. Having said that, we have still filled every available space, and it would be nice to have even more.

The size allows us to show lots of project relevant information and still leaves plenty of room for upstream activities like information architecture (IA), user experience (UX) and analysis. So we have lots of visibility of the state of all aspects of the project.

The Columns
The columns are detailed below in their own sections, but to summarise they are:
  • Design
    • In visual design
    • In customer testing
    • In front end dev
  • Analysis
    • In analysis
    • Selected for dev
  • Development
    • In dev
    • Dev complete
  • QA
    • In QA
    • QA complete
  • Done
  • Technical tasks
  • Risks
The WIP Limits
Work In Progress limits help us focus on the tasks currently in play. Too many tasks on the board at any one time can mean the team is spread too thinly; traffic jams can occur, and important bug fixes won't get through quickly.

Example without WIP limits:
Two big stories have just been finished, so QA gets to work on them whilst the developers start on new stories. At this point all four dev pairs are busy, as are the two QAs. A day later the testing is still not complete and another pair finishes their story; this story gets queued up waiting for QA. If there were no WIP limits the pair would start a new story and get on with that work. But what happens if there is then a bug in one of the stories, a big bug? Who should pick it up? A dev pair needs to stop what they are doing mid flow, shelve all their current checked out work, context switch to the bug, investigate and fix it. I guess you know where this is going: with effectively three stories in QA and four stories in dev, everyone is going to be flip-flopping between stories and bug fixes. Context switching costs time, and the whole process reduces our ability to respond to change. The main consequence is that throughput and velocity suffer, not to mention the headaches of leaving half finished code lying about whilst you fix something else.

By limiting the number of stories in each column this does not happen (well, much less so). Developers are always available for fixes, and everyone experiences less context switching and more uninterrupted flow.

The Story Cards
We have four different coloured cards on the wall: yellow, blue, white and pink. The different colours mean different things depending on where they are on the wall, but it's quite straightforward.
  • Yellow: Design tasks. Our UX guys work on the yellow cards in the first 3 columns, on the IA, wireframes, or styling of upcoming stories.
  • Yellow: Development tasks, tech tasks and tech debt. All things that are not stories but need playing, e.g. set up the performance test environment, refactor X to remove duplication, or spike out Y to prove an integration point.
  • Blue: Features under analysis. These cards only appear in the analysis and done columns and represent whole features, big things. I think there are around 15 in release one.
  • Pink: Bugs, simples [sic].
  • White: Most of the cards flowing across the wall are white. These denote stories that require dev effort. We try to ensure that all the white cards can be finished in 1-4 days.

White Story Card
  • Story number for reference in Mingle or TFS.
  • Story title; brief, it must fit on one card in thick writing.
  • Days in play (the dots get added every morning before our daily stand up meeting). This allows us to see how long cards take to flow through the wall.
  • Story point estimate (size/complexity)
  • Relevant notes can be captured on the reverse if needed.

Yellow Tech Card
  • Usually these do not have reference numbers.
  • Usually no estimate either.
  • Just a brief description of the work required.
  • Again relevant notes can be captured on the reverse if needed.

Flow across the wall
Features start their life in the 'In analysis' column. All the features designated for release 1 are in this column in priority order. Whilst the BAs are working on splitting stories out, the UX and IA guys are also looking into how best to fulfil the users' requirements from a design perspective, and they put their own cards up on the left of the board.
Once a story is ready to be developed, a white card is placed in the 'Selected for dev' column and waits there until a dev pair is ready to pick it up, at which point it moves into 'In dev'. It stays there until the pair has showcased the story to the QA's and BA's satisfaction, at which point it moves into 'Dev complete' (awaiting QA).
The QAs pick up cards and work on them in the 'In QA' column. Cards NEVER move backwards; if there are bugs to fix, a dev pair will fix them in the QA column, and whatever they were doing remains in the 'In dev' column. Once the QA is happy with the card it moves into 'QA complete', where the BAs showcase the story to the product owner before moving it to 'Done'.

The Columns In Detail
Design
Walls I've worked with in the past have often not had a design area. This project has a heavy design and UX aspect, so it was important to have this area on the wall so we can all see which features are at which point in the process.
The columns we have here are: 'In visual design', for development of the wireframes and IA; 'In customer testing'; and 'In front end dev', for development of the UX including styling and image creation.

Analysis (WIP 6)

The analysis columns consist of 'In analysis', which lists all the features of the current release (release one is about nine months); we keep stickies on the features (blue cards) to show how complete they are. The 'Selected for dev' column holds the next stories to be picked up by the dev team (white cards); these stories have been through the analysis process and are ready for devs to pick up. This column also serves as a heads up for the QAs, so they can start getting ready for stories, working out acceptance criteria and so forth with the BAs and devs.

Dev (WIP 4)

We currently have four dev pairs and so have set our WIP limit to four for the combination of the two dev columns. At the beginning of the project we set the WIP limit to three, as we always had a pair on tech tasks, environment set up and the like, but now that we are moving into the second half of the first release most of that type of task has been finished. If there are four white cards in the two dev columns, the next pair to finish should help out with the testing effort or work on some tech tasks. This keeps the actual work in progress on the board to a manageable amount.

QA (WIP 2)


The 'In QA' column has a WIP limit of two, as we have two testers. If a story gets blocked due to a bug then devs need to come and help clear the blockage; the card can NEVER move back into dev, and the QA can't pick up a new card from 'Dev complete' until the issues are resolved. This focuses attention and ensures that stories progress along the wall in a timely manner, with the emphasis always on getting completed work 'done done' so that we can claim the points and move on.

Done (Done Done)

This column contains all the stories finished this week. Done Done. These stories are ready to be showcased to the product owner and will be demoed to the entire client user team (all stakeholders) on the Friday of each week. Every Monday all the cards are removed from this column, so we can see what has been finished this week and so that the product owner and other stakeholders can see the progress since last Friday's showcase.

Metrics


This area contains all the different metrics, the different graphs and guides.
  • Cumulative Flow
  • Weekly Velocity 
  • Points Burn Up
  • Risk Count
  • Story points Guide
  • Done Done Guide

Risks/Issues/Tech Tasks/Tech Debt Wall
There are four columns in this section of the wall. The left-most three are for tech debt and tech tasks; the right-most is for capturing risks and making them more visible.

Column 1 (Things we must do to make the project successful)
Things like integrating with the corporate authentication system, spiking asset management, or setting up the performance testing environment.

Column 2 (Things that would enable us to deliver faster)
Includes things like refactoring the acceptance tests to reduce build time, and optimising the Windows configuration on all dev machines.

Column 3 (Things that are not essential to the success of the project but will give us a better solution)
Mainly includes lists of refactorings and areas where we want to achieve better test coverage.

Column 4 (Risk wall)
Any risk to the project, from integration points, new technology and people, to computers and hardware. Listed with the highest risk at the top. The arrow represents whether the risk is increasing or decreasing as time passes.

Summary
So that is our wall. It saves us time, keeps us focussed on what's important, and neatly tracks our progress.

How does your team work? I would be interested to know what other types of walls are out there. I've seen a few in my time but there is always another way of doing it - what does your wall do for you?


Glossary 
BA - Business Analyst
IA - Information Architecture
Kanban - http://en.wikipedia.org/wiki/Kanban
Mingle - http://www.thoughtworks-studios.com/mingle-agile-project-management
QA - Quality Analyst
Scrum - http://en.wikipedia.org/wiki/Scrum_(development)
Tech Debt - http://martinfowler.com/bliki/TechnicalDebt.html
TFS - Team Foundation Server
UX - User eXperience
WIP - Work In Progress
XP - eXtreme Programming - http://en.wikipedia.org/wiki/Extreme_Programming