Thursday, 21 January 2016

To stand or not to stand?

Over my working life I've sat on many chairs. As a consultant you have to make do with what the client provides, but since I became a permie I've been campaigning for a standing desk. Recently I got my wish, well, in part.

I don't get on with the chairs at work. They are not cheap ones by any means, but by the end of the day I can really feel my neck and shoulders suffering. I'm not sure what it is with these ones in particular; I just don't get on with them. My cheaper chair at home is far better. I've actually been using a kneeling chair for the last year or so and that has been great; it really forces you to sit up straight. It's a little tiring at first, but you get used to it, and I don't get the bad shoulders or neck that I used to get on the chair.

But like I said I really wanted to try a standing desk.

I managed to persuade the people that matter to give it a go, and so between us we knocked up an IKEA-hack standing desk made of a coffee table and a shelving unit (a low-budget experiment, ~£30).

We had to make it low enough to allow the shortest member of the team to use it but we can 'jack up' the shelf using books and boxes to get it higher for other team members.

I can use it all day, but I find my feet and legs get sore mid afternoon and I really need a sit down. The desk is a long way from hydraulic and it's a very big faff to put all the stuff down on the desk, so I've sacrificed my second screen in a mirrored set-up so I can stand and sit as I please. I tend to do the morning stood up, then split the afternoon alternating sitting and standing as I feel the need. It's a compromise, but since the screens are big I can work split-screen rather than multi-monitor without losing too much.

Next stop, walking desk :-) I really want to do this to alleviate the stress on the knees and feet. I've never tried it, but from what I hear from people it's good. I can't see work going for that, maybe a wobble board though!

Monday, 30 November 2015

Do we need to deploy clean-up work if there is no additional functionality? i.e. no (perceived) business benefit.


We recently did some work on a number of services where all we were doing was removing some old functionality that is no longer used (obsolete code): removing messages, handlers, classes, tests. I like deleting code; it makes things simpler, less logic to break, fewer places for bugs to hide. Once we finished, we wanted to get all of this released to prod ASAP. We tested all of the areas affected in a large system-level test in the preprod environment, with all of the services (and others that had not changed) working together.

But we had push back. 

Push back

The question was: why do you need to release this clean-up in advance of any further work?
By doing so you are making this an active rather than a passive deployment, with associated extra risk and double the cost.
If you are removing unused code you can just deploy it with the next addition/change to the code, because by testing that you are implicitly testing the absence of the removed code. Even if the new code isn't affected, our deployment checklists cover that situation too: we will have already double-checked that this removal/clean-up of code won't have an impact on production.


I broke this down into a number of questions that I thought were being asked in that statement.

Question: Why do many releases instead of one, isn't that more risky?
This is a question from the old school of thought: releases are big, bad things, so we should do as few of them as possible.
Answer: I would say many small releases are inherently less risky.
If (very unlikely, but possible) something goes wrong in a combined release, it will be less clear what caused the issue. Was it the new functionality, or the clean-up work that is the culprit?
If we release now, we know what to monitor over the coming days, and have less to monitor.
If we don't deploy all 8 things, someone else will have to (at some point, and in some cases many months in the future). That poor soul will need to decide whether the changes we made need testing, work out what the consequences of the changes are, and worry about whether there are other dependent services to deploy.
Each service is push-button, so deploy time is small.
Rollback is not hard if we need it, with no business consequences at the moment. If in future we release alongside other functionality and the changes we have made break something, we would need to roll back the new functionality too.

Question: If we are removing code, why the need to release anything? There is no new functionality to release.
I guess this is a question about business value: no new business value, no need to release.
Answer: That is true; it's mostly removing old code and cleaning up. But it's just as critical, almost more so, to get this out in a small release sooner, as we may (again unlikely, but possible, we are human) have removed something we should not have.

Question: Won't it be more to test, doing it twice?
This assumes that we manually test everything on every release, where actually we only manually test what has changed, in conjunction with automated testing for the rest.
Answer: We already did good, full end-to-end testing last week of the 8 things affected. If we wait until next week or the week after, we will have to do the tests again, as the versions of the things to be released will all be different by then, so we would need to test the 8 deployable things again, full stack = an extra day.

Other reasons for deployment:
The changes, what we did and why we did it, are still fresh in our minds. The longer we leave it, the less sure we are that we will be doing the right things.
It's not critical, but I'd prefer not to do a partial deployment (service-X is going to get released soon); I'd like the rest of the clean-up to be deployed too.
Ideally prod and preprod are as similar as possible (for environmental consistency and testing reasons). Any difference between the preprod test system and prod invalidates other testing efforts, because in prod services will be integrating with different versions of other services than they are in preprod, making like-for-like testing impossible.


I maintain there is business benefit in doing the deployment now, and deploying all 8 services at that. To be fair, businesses, managers, stakeholders, even developers (especially senior ones) have all seen their fair share of long deployments, failures and difficult rollbacks, leading ultimately to a fear of deployment. So it's natural to want to avoid the perceived risks. But perversely, by restricting the number of deployments you are actually increasing the likelihood of future failure.
A core philosophy of the devops culture is to release early and often (continuous delivery). By doing the things you find painful more often, you master them and make them trivial, thereby improving your mean time to recovery.

The business benefit is ultimately one of developer productivity, testability and system uptime.

Tuesday, 17 November 2015

Splunk alerts to Slack using PowerShell on Windows

We use Splunk to aggregate all the logs across all our services and APIs on many different machines. It gives us an invaluable way to report on the interactions of our customers, from new business creation on the back-end servers running NServiceBus to the day-to-day client interactions on the websites and mobile apps.

We have been investing in more monitoring recently as the number of services (I hesitate to use the buzzword micro, but yes, they are small) is increasing. At the present pace I'd say there is a new service or API created almost every week. Keeping on top of all these services and ensuring smooth running is turning into a challenge, which Splunk is helping us to meet. When you add ServiceControl, ServicePulse and ServiceInsight from Particular (makers of NServiceBus), we have all bases covered.

We have recently added alerts to Splunk to give us notifications in Slack when we get errors.

The Setup

We are sending alerts from Splunk to Slack using batch scripts and PowerShell.

Splunk Alerts

First, set up an alert in Splunk; this Splunk video tells you how to create an alert from search results. We are using a custom script, which receives arguments as documented here. Our script consists of two parts: a batch file and a PowerShell file. The batch file calls the PowerShell script, passing on the arguments.

SplunkSlackAlert.bat script in C:\Program Files\Splunk\bin\scripts
@echo off
powershell "C:\Program` Files\Splunk\bin\scripts\SplunkSlackAlert.ps1 -ScriptName '%SPLUNK_ARG_0%' -NEvents '%SPLUNK_ARG_1%' -TriggerReason '%SPLUNK_ARG_5%' -BrowserUrl '%SPLUNK_ARG_6%' -ReportName '%SPLUNK_ARG_4%'"

SplunkSlackAlert.ps1 lives alongside it:
param (
   [string]$ScriptName = "No script specified",
   [string]$NEvents = 0,
   [string]$TriggerReason = "No reason specified",
   [string]$BrowserUrl = "https://localhost:8000/",
   [string]$ReportName = "No name of report specified"
)

$body = @{
   text = "Test for a parameterized script `"$ScriptName`" `r`n This script returned $NEvents events and was triggered because $TriggerReason `r`n The Url to Splunk is $BrowserUrl `r`n The Report Name is $ReportName"
}

# Post the payload to your Slack incoming webhook (substitute your own webhook URL)
#Invoke-RestMethod -Uri <your-incoming-webhook-url> -Method Post -Body (ConvertTo-Json $body)

Slack Integration

You can see the call to the Slack API in the Invoke-RestMethod; the Slack documentation for using the incoming webhook is here. There is quite a rich amount of customization that can be performed via the JSON payload, so have a play.
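For example, a payload along these lines (a sketch using field names from the Slack incoming-webhook docs; $slackWebhookUri stands in for your own webhook URL) overrides the channel, the display name and the icon:

$body = @{
   channel    = "#alerts"           # override the webhook's default channel
   username   = "splunk-alerts"     # display name shown for the bot
   icon_emoji = ":rotating_light:"  # icon shown next to the message
   text       = "Your alert message goes here"
}
Invoke-RestMethod -Uri $slackWebhookUri -Method Post -Body (ConvertTo-Json $body)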

Before you can actually use this you must first set up the Slack integration as documented here, which requires you to have a Slack account.

The fruits of our labor:

All the script code is given in my gist here.


Thanks to my pair Ruben for helping on this, good work.

Tuesday, 3 November 2015

Developer podcasts v2

A couple of years ago I wrote a blog post about podcasts for developers; this is a follow-up, as I've now got substantially more. That, and a couple of my colleagues have asked for my list recently.


.NET Rocks! : Feed Url
Adventures in Angular : Feed Url
All Chariot Podcasts : Feed Url
All Things Pivotal : Feed Url
Azure Friday - Channel 9 : Feed Url
CodeChat (Audio) - Channel 9 : Feed Url
Coding Blocks | Software and Web Programming / Security / Best Practices / Microsoft .NET : Feed Url
Debug : Feed Url
Devnology Podcast : Feed Url
DevRadio - Channel 9 : Feed Url
Full Stack Radio : Feed Url
Functional Geekery : Feed Url
Hack && Heckle : Feed Url
Hanselminutes : Feed Url
Herding Code : Feed Url
Javascript Jabber : Feed Url
Jesse Liberty - Silverlight Geek : Feed Url
MS Dev Show : Feed Url
NodeUp : Feed Url
PowerScripting Podcast : Feed Url
Radio TFS : Feed Url
Ruby Rogues : Feed Url
RunAs Radio : Feed Url
Simple Programmer Podcast : Feed Url
Software Engineering Radio - the podcast for professional software developers : Feed Url
STLTechTalk Podcast : Feed Url
The Azure Podcast : Feed Url
The Cognicast - Cognitect Blog : Feed Url
The Java Posse : Feed Url
The Static Void Podcast : Feed Url
This Week On Channel 9 (MP4) - Channel 9 : Feed Url
ThoughtWorks : Feed Url
WebDevRadio : Feed Url
Windows Weekly (MP3) : Feed Url
YAPP: Yet Another Programming Podcast : Feed Url


Arrested DevOps : Feed Url
DevOps Cafe Podcast : Feed Url
Devops Mastery : Feed Url
Ops All The Things! : Feed Url
The Food Fight Show : Feed Url
The Ship Show : Feed Url

Developer related

Developer On Fire : Feed Url
Get up and CODE! : Feed Url
Mastering Business Analysis : Feed Url
Programmer Vs World : Feed Url
Startups For the Rest of Us » Episodes : Feed Url
The Security Influencer's Channel : Feed Url


Agile Instructor - Coaching for Agile Methodologies such as Scrum and Kanban : Feed Url
Agile NYC : Feed Url
Agile Weekly Podcast : Feed Url
The Agile Coffee Podcast : Feed Url
This Agile Life : Feed Url

Non tech

60-Second Mind : Feed Url
99% Invisible : Feed Url
All items | LSE Public lectures and events | Audio : Feed Url
Freakonomics Radio : Feed Url
Friday Night Comedy from BBC Radio 4 : Feed Url
Haute Couture Podcast - Claudia Cazacu : Feed Url
Monstercat Podcast : Feed Url
NPR: Invisibilia Podcast : Feed Url
NPR: TED Radio Hour Podcast : Feed Url
Planet Money : NPR : Feed Url
Radiolab from WNYC : Feed Url
RI Blog : Feed Url
TEDTalks (audio) : Feed Url
TEDTalks (video) : Feed Url


Bear in mind that some of these podcasts are no longer active. I've kept them in my list because I find past episodes very relevant to the here and now. You can search back to find relevant episodes on anything you care to think of, super useful.

Incidentally, my current podcast player of choice is Podcast Addict: variable speed playback, great control over how files are downloaded, brilliant search functionality within local podcasts, and it's very easy to search for and add new podcasts. Use it :-)

My OPML file, extracted from Podcast Addict, is located here:

Tuesday, 16 June 2015

Advanced filtering and navigation on ThoughtWorks Go CD with Tampermonkey

Our problem

We use Go from ThoughtWorks to manage the build and deployment of all our software. Attached below is a screenshot of the current pipelines.

As you can see we have quite a lot going on. In fact we have 390 pipelines (55 service and API builds, 40 NuGet package build/publish, and the rest are deployments to our 5 environments from test through preprod and live). There are 730 stages in total (build, test, deploy, etc.). And we have 20 Go agents running on different servers.

So you can imagine finding the pipeline you are looking for is tough. We have adopted naming conventions which help, but it's really quite difficult to find what you are after with the supplied search capabilities.

Also, it's very tricky to see if anything is broken (red); there is just no way you will notice it among all the pipelines.

As an aside, we are using Cradiator to give us an information radiator of all our build and deployment pipelines (I blogged about this a year ago), but with over 700 stages it's really in need of an overhaul. That's for another blog post.

The solution

We have created a Tampermonkey script (found here) that enhances the functionality of the Go pipelines view, allowing you to filter the visible pipelines by status or by keyword.

There is also the issue of navigating between the settings and the pipeline history, and vice versa. Often you are investigating a failure, you find the problem in the history, and then you want to change the settings. Well, there is no link, so we created one so you can easily navigate between the two parts of the UI.

Notice the "Settings" link above and the "History" link below both in the header

The script can be found here:


Firstly you need to get Tampermonkey installed in your browser; you can get it from the Chrome Web Store or

Then, from the Tampermonkey dashboard, add the script and tell it what URLs to attach itself to. The update URL needs to be your instance of Go (our update URL is https://goserver:8154/go/* ). On the settings tab I also add a couple of user includes for https://goserver:8154/go/home and https://goserver:8154/go/pipelines.

That's it, instantly enjoy better productivity. I imagine this is a good enhancement even if you only have a tenth of the pipelines we have.

Wednesday, 15 October 2014

Blue-green web deployment with PowerShell and IIS

I wanted to follow up my earlier post (about our current CD process) with a more technically focused one, one that describes the nuts and bolts of the actual BlueGreenDeployment.


PowerShell, PowerShell and PowerShell, oh and Windows, IIS and Go (build server)


As I described in my earlier post the blue green web deploy consists of these steps:

1. Deploy
1.1 Fetch artifact
1.2 Select the config for this deployment
1.3 Delete the other configs
1.4 Deploy to staging (delete then copy)
1.5 Backup live

2. Switch blue green
2.1 Point live to new code
2.2 Point staging to old code

Blue Green Deployment

Before diving into the details I should first convey what blue-green deployments are, and what they are not.
There are a few different ways to implement blue-green deployments, but they all have the same goals:
1. Allow testing on live without actually being live.
2. Enable deployments to have the smallest possible impact on the live service.
3. Give you an easy roll-back path.

This can be accomplished in many ways. Techniques include DNS switching, directory moving, or virtual path redirecting. 
We have chosen to do IIS physical path redirecting. This allows us to use the same technique on all our environments from test to live (same scripts, same code), and doesn't cost as much as DNS switching, which would require multiple servers.

Commands used for this demo are

PS> .\Create-Websites.ps1 -topLevelDomain
PS> Deploy-Staging -source c:\tmp -websiteName foobarapi -domainName
PS> Backup-Live -WebsiteName foobarapi -DomainName
PS> Switch-BlueGreen -WebsiteName foobarapi -DomainName

The code I'm going to talk through is all located here:

Conventions used:

All websites are named name.domain and name-staging.domain
All backing folders are in c:\virtual and are named and
You don't know if blue or green is currently serving live traffic.
Backups are taken to c:\virtual-backups\name.domain
Log files always live in c:\logs\name.domain
There is always a version.txt and bluegreen.txt in the root of every website/api

In this example I'm using name=foobarapi and

The technical detail

This is the meaty stuff. It consists mainly of PowerShell and should work no matter what CI software you are using. I can heartily recommend Go by ThoughtWorks; it has a built-in artifact repository and brilliant dependency tracking through its value stream map functionality.

Setup IIS and backing folders

To test my deployment scripts you will first need to set up the dummy/test folders and IIS websites. For this you can use this script: Create-Websites.ps1. I'm not going to go into detail on the script as it's not the focus of this post, but it creates your app pool and website.
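Under the covers it boils down to the standard WebAdministration cmdlets, roughly like this (an illustrative sketch with made-up names; the real script handles the naming conventions, the staging site and the blue/green folders):

Import-Module WebAdministration

# One app pool per site, and a site pointing at its blue or green folder
New-WebAppPool "foobarapi.example"
New-Website -Name "foobarapi.example" `
            -PhysicalPath "c:\virtual\foobarapi.example.green" `
            -ApplicationPool "foobarapi.example" `
            -Port 80 -HostHeader "foobarapi.example"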

The code is exercised with the following:
setupWebsite "foobarui" "foobarui-test" $true "green"
applyCert("*.foobar.*") <<optional if you want the sites to have an ssl cert applying>>

This will create 2 websites in IIS pointing to the green and blue folders as per the conventions outlined further above. Finally, apply an SSL certificate using PowerShell; this command will apply the SSL cert to all the websites in this instance of IIS.
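In case you skip the script, the cert application is roughly this (a sketch; it assumes the wildcard cert is already installed in the local machine store):

Import-Module WebAdministration

# Find the wildcard cert, give every site an https binding, then bind the cert to port 443
$cert = Get-ChildItem cert:\LocalMachine\My | Where-Object { $_.Subject -like "*.foobar.*" } | Select-Object -First 1
foreach ($site in Get-ChildItem IIS:\Sites) {
   New-WebBinding -Name $site.Name -Protocol https -Port 443
}
New-Item "IIS:\SslBindings\0.0.0.0!443" -Value $cert | Out-Null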
To remove the created items from IIS issue commands similar to this:
PS> dir IIS:\AppPools | where-object{$_.Name -like "**"} | Remove-Item
PS> dir IIS:\Sites | where-object{$_.Name -like "**"} | remove-item
PS> dir IIS:\SslBindings | remove-item

Once you have the websites correctly set up you can then utilise the blue-green deploy scripts :-)


The blue-green deployment module is located here: BlueGreenDeployment.psm1. It will need importing into your PowerShell session with the following command:
PS> Import-module BlueGreenDeployment.psm1
Once you have the module imported you can issue the following commands:
PS> Deploy-Staging -source c:\tmp -websiteName foobarapi -domainName
PS> Backup-Live -WebsiteName foobarapi -DomainName
PS> Switch-BlueGreen -WebsiteName foobarapi -DomainName

Let's dig into these one by one.

1. Deploy-Staging
This is quite straightforward: find the folder that is currently serving staging and copy the new version there. The interesting bit of code is the method of determining which folder to replace with the new version. IsLiveOnBlue and GetPhysicalPath work together to determine the folder in use on staging. Notice the retries inside GetPhysicalPath: I found that sometimes IIS just doesn't want to play, but if you ask it a second time it will. Don't ask...
The code that actually determines the physical path is:
$website = "IIS:\Sites\$WebsiteName.$domainName"
$websiteProperties = Get-ItemProperty $website
$physicalPath = $websiteProperties.PhysicalPath
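The retry around that read might look roughly like this (a sketch; the name mirrors the module but the code isn't copied from it):

function GetPhysicalPath([string]$websiteName, [string]$domainName) {
   $website = "IIS:\Sites\$websiteName.$domainName"
   for ($attempt = 1; $attempt -le 3; $attempt++) {
      try {
         # Read the site's physical path; occasionally the first read fails
         return (Get-ItemProperty $website -ErrorAction Stop).PhysicalPath
      } catch {
         Start-Sleep -Seconds 1   # give IIS a moment, then ask again
      }
   }
   throw "Could not read the physical path for $website"
}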

The rest of the PowerShell is relatively straightforward.

2. Backup-Live
Backing up live is again pretty standard PowerShell: determine the folder that is serving live, then do a copy. Done.
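In outline it is something like this (a sketch following the conventions above, not the exact module code):

function Backup-Live([string]$websiteName, [string]$domainName) {
   # Copy whatever is currently serving live to c:\virtual-backups\name.domain
   $livePath   = GetPhysicalPath $websiteName $domainName
   $backupPath = "c:\virtual-backups\$websiteName.$domainName"

   if (Test-Path $backupPath) { Remove-Item $backupPath -Recurse -Force }
   Copy-Item $livePath $backupPath -Recurse
}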

3. Switch-BlueGreen
Performing the switch is actually really easy when it comes to it. First determine which folder (blue or green) is serving live (the same code as the deploy step) and then switch it with the staging website.
Set-ItemProperty $liveSite -Name physicalPath -Value $greenWebsitePath -ErrorAction Stop
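Expanded a little, the whole swap amounts to something like this (a simplified sketch; variable names are illustrative):

# Read where live and staging currently point, then point each at the other's folder
$liveSite    = "IIS:\Sites\$websiteName.$domainName"
$stagingSite = "IIS:\Sites\$websiteName-staging.$domainName"

$livePath    = (Get-ItemProperty $liveSite).PhysicalPath
$stagingPath = (Get-ItemProperty $stagingSite).PhysicalPath

Set-ItemProperty $liveSite    -Name physicalPath -Value $stagingPath -ErrorAction Stop
Set-ItemProperty $stagingSite -Name physicalPath -Value $livePath    -ErrorAction Stop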
The only added complication is the rewriting of the log file location in the web.config. log4net only really works well if one process (website) uses one log file. Again, you can look this up yourselves, as it is an aside to the main purpose of this post.


The interwebs in general are full of articles/opinions/tales of how bad Windows is to automate, and it actually winds me up. Maybe it used to be true, but I've been finding that with PowerShell and Go I've been able to automate anything I need. It's so powerful. Don't let the Microsoft haters stop you from doing what needs to be done.

The blue-green deployment technique outlined here is working really well for us at the moment and has helped us to take our projects live sooner/quicker and with more confidence.

Automation for the win.

Sunday, 12 October 2014

Adventures in continuous delivery, our build/deployment pipeline


We have been undergoing a bit of a devops revolution at work. We have been on a mission to automate everything, well, as much as is possible. Exciting times, but we are still only just setting out on this adventure, so I wanted to document where we are currently at.

First, a brief overview of what we have. We have many, many small Windows services, websites and APIs, each belonging to a service and performing a specific role. I must quickly add we are a Microsoft shop. More and more our services are moving towards a proper service-oriented architecture. I hesitate to use the term microservices, as it's so hard to pin a definition on the term, but let's just say they are quite small and focused on a single responsibility.

We have 5 or 6 SPA apps, mainly written with Durandal and Angular; 7 or 8 different APIs serving data to these apps and to external parties; and 10 to 15 Windows services, which mostly publish and subscribe to NServiceBus queues.

We currently have 8 environments that we need to deploy to (going to be difficult to do this by hand, methinks), including CI, QA*, Test*, Pre-prod* and live* (* the last 4 are doubled, as we deploy into 2 different regions which both operate slightly differently and have different config and testing). This list is growing with every month that passes. We really, really needed some automation; when it was just 3 environments in the UK region we just about got by with manual deployments.

I'm going to outline how the build pipeline integrates with the deployment pipelines and the steps that we take in each stage. But I'm not really going to concentrate on the actual technical details; this is more of a process document.

1.0 The build pipeline

We operate on a trunk-based development model (most of the time), and every time you check in we run a build that will produce a build artifact, push it into an artifact repository and then run unit and integration tests on the artifact.

Fig 1. The build pipeline


1. Run a transform on the assembly info so that the resultant DLL has build information inside its details. This helps us determine what version of a service is running in any environment: just look at the DLL's properties.
2. Create a version.txt file that lives in the root of the service. This is easily looked at on an API or website, as well as in the folder containing a service. (A rough sketch of these two steps follows below.)
3. We check in all the versions of the config files for all the environments that we will be deploying to, and use a transform to replace the specific parts of a common config file with environment-specific details (e.g. connection strings). Every environment's config is now part of the built artifact.
4. Build the solution, usually with MSBuild, or for the SPA apps, gulp
5. If all this is successful, upload the built artifact to the artifact repo (the Go server)


6. Fetch the built artifact
7. Run unit tests
8. Run integration tests

The test stage is separate so that we can run the tests on a different machine if necessary. It also allows us to parallelise the tests, running them on many machines at once if required.
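As promised above, a rough sketch of steps 1 and 2 (the paths and the build-number parameter are illustrative, not our actual pipeline code):

param(
   [string]$buildNumber = "1.0.123.0",
   [string]$serviceRoot = ".\src\FooBarApi"
)

# Stamp the build number into AssemblyInfo.cs
$assemblyInfo = Join-Path $serviceRoot "Properties\AssemblyInfo.cs"
(Get-Content $assemblyInfo) `
   -replace 'AssemblyVersion\(".*"\)', "AssemblyVersion(""$buildNumber"")" `
   -replace 'AssemblyFileVersion\(".*"\)', "AssemblyFileVersion(""$buildNumber"")" |
   Set-Content $assemblyInfo

# Drop a version.txt into the root of the service/site
Set-Content (Join-Path $serviceRoot "version.txt") "$buildNumber built $(Get-Date -Format s)"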

Not shown on this diagram are the acceptance tests; these are run in another pipeline. First we need to do a web deploy (as below), then set up some data in different databases, and finally run the tests.

2.0 The web deploy pipeline

So far so good; everything is automated on the success of the previous stage. We then have the deployment pipelines, of which only the one to CI is fully automated, so that acceptance tests can be run on the fully deployed code. All the other environments are push-button deploys using Go.
The deployments of all our websites/APIs/SPAs are very similar to each other and the same across all the environments, so we have confidence that they will work when finally run against live.

Fig 2. The web deploy pipeline


1. Fetch the build artifact
2. Select the desired config for this environment and discard the rest so there is no confusion later
3. Deploy to staging (I've written a separate article on this detailing how it works with IIS powershell and windows)
a. Delete the contents of the staging website's physical path
b. Copy the new code and config into the staging path

Switch blue green

We are using the BlueGreenDeployment model for our deployments. Basically, you deploy to a staging environment, then when you are happy with any manual testing you switch it over to live, using PowerShell to swap the physical folders of staging and live in IIS. This gives a quick and easy rollback (just switch again) and minimises any downtime for the website in question.

3.0 The service deployment pipeline

Much the same as the deployment of websites, except for the fact that there is no blue-green. The services mainly read from queues, and so this makes it difficult to run a staging version at the same time as a live version (not impossible, but a bit advanced for us at the moment).

Fig 3. The service deploy pipeline


The install step again utilises PowerShell heavily: first to stop the service, then to back things up and deploy the new code, before starting the service up again.
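In outline the install looks something like this (the service name and paths are illustrative; the real scripts do rather more, e.g. config selection):

# The core of the service deploy: stop, back up, replace, restart
$serviceName  = "FooBarService"
$installPath  = "c:\services\$serviceName"
$backupPath   = "c:\services-backups\$serviceName"
$artifactPath = ".\artifact"   # fetched from the Go artifact repo

Stop-Service $serviceName -ErrorAction Stop

if (Test-Path $backupPath) { Remove-Item $backupPath -Recurse -Force }
Copy-Item $installPath $backupPath -Recurse

Remove-Item "$installPath\*" -Recurse -Force
Copy-Item "$artifactPath\*" $installPath -Recurse

Start-Service $serviceName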

There is no blue-green style of rollback here, as there are complications to doing this with Windows services and with reading off the production queues. There is probably room for improvement here, but we should be confident that things work by the time we deploy to live, as we have proved it out in 2 or 3 environments beforehand.


I'm really impressed with Go as our CI/CD platform; it gives us some great tooling around the value stream map, promotion of builds to the other environments, pipeline templates and flexibility. We haven't just arrived at this setup of course; it's been an evolution which we are still undergoing. But we are in a great position moving forward as we need to stand up more and more environments, both on-prem and in the cloud.

Fig 4. The whole deployment pipeline

Room for improvement

There is plenty of room for improvement in all of this though:

* Config checked into source control and built into the artifact
Checking the config into the code base is great for our current team, we all know where the config is, its easy to change or add new things to it. But for a larger team or where we didn't want the entire team to know secret connection string to live DBs it wouldn't work. Thank goodness we don't have any paranoid DBAs here. Also there is a problem if we want to tweak some config in an environment. we need to produce an entire new build artifact from source code, which might now have other changes in it that we don't want to go live. We can handle this using feature toggles and a branch by abstraction mode of working but it requires good discipline which we as a team are only just getting our heads around. Basically if the code is always in a releasable state this is not an issue.

* Staging and live both have the same config
When you do blue-green deployments as we are doing, both staging and live always point to the live resources and databases, so it's hard to test that the new UI in staging works with the new API in staging, as both the staging and the current live UI will be pointing to the live API. Likewise, the live and staging APIs will both be pointing to the live DB or other resources. Blue-green deployments are not designed for integration testing like this; that's what the lower environments are for.
In a very similar vein, logging will go to the same log files, which can be a problem if your logging framework takes out locks on files; we use log4net a lot, which does. There are options to work in a lock-when-required mode with log4net, but it can really hit performance. We have solved this by rewriting the path to the log file on the blue-green switch.

* No blue green style deployments of windows services
The lack of blue-green deployment of services means that we have a longer period of disruption when deploying and a slower rollback strategy. Added to this, you can't test the service on the production server before you actually put it live. There are options here, but it gets quite complicated to do, and by the time the service is going live you should have finished all your testing anyway.

* Database upgrades are not part of deployment
At the time of writing we are still doing database deployments by hand. This is slowly changing, and some of our DBs do now have automated deployments, mainly using the Redgate SQL tool set, but we are still getting better at this. It's my hope that we will get to fully automated deployments of data schemas at some point, but for now we are still concentrating on the deployment of the code base.

* Snowflake servers
All our servers, both on-prem and in the cloud, are built, installed and configured manually. I've started to use Chocolatey and PowerShell to automate what I can around set-up and configuration, but the fact still remains that it's a manual process to get a new server up and running. The consequence of this is that each server has small differences from other servers that "should" be the same. This means that we could introduce bugs in different environments due to accidental differences in the servers themselves.

* Ability to spin up environments as needed for further growth
Related to the above point, as a way to move away from the problem of snowflake servers we need to look at technologies like Puppet, Chef, Desired State Configuration, etc. If we had this automation we could spin up test servers, deploy to other regions/markets, or scale up the architecture by creating more machines.

Relevant Technology Stack (for this article)

• Windows
• PowerShell
• SVN and Git
• MSBuild and gulp

Next >>

I've written a follow-up article to this which details the nuts and bolts of the blue-green deployment techniques we are currently using: blue-green-web-deployment-with-IIS-and-powershell.
The code for it can be found on my GitHub here: