Wednesday, 22 August 2012

Continuous Integration performance testing. An easily customisable solution.


Using JMeter to profile the performance of your web application and visualise performance trends, all within the CI pipeline.


The full solution as outlined here can be found on my GitHub repository at https://github.com/DamianStanger/CIPerformance

Introduction

Most companies care about the performance of their web sites and web apps, but testing that performance is often left until the last minute, in the hope that the devs have been doing a good job writing performant code for the last x months of development. I don't know why it is so often done this way. If performance really is a major Non Functional Requirement (NFR) then you have to test it as you go; you can't leave it until just before deployment and then, when you find performance isn't good enough, try to hack in quick fixes. You can't just bolt performance on after the fact, doing it well can take a substantial change to the design.

On our team we have been performance profiling each important page of our app since month one of development; we are now live and working towards the 4th major release. I (and the team) have found our continuous performance testing invaluable. Here is the performance graph as it stood a few weeks ago:

The process outlined below is not a method for stress testing your app. It's not designed to calculate the load that can be applied; instead it's used to see the trend in the app's performance. Has a recent check-in caused the home page to perform like a dog? Any N+1 DB selects or recursive functions causing trouble? It's a way of getting quick feedback within the CI pipeline, minutes after a change is checked in.

The process

1. When we check in, our CI box (TeamCity) runs the build (including javascript tests, unit tests, integration tests, functional tests and acceptance tests); if all of that succeeds then the performance tests are kicked off.
2. Tear down the DB and restore a fresh copy, so we always have the same data for every run. This DB has a decent amount of data in it, simulating live data in terms of volume and content.
3. Kick the web apps to prepare them for the performance tests; this ensures IIS has started up and the in-memory caches are primed (steps 2 and 3 are sketched in the powershell just after this list).
4. Run the JMeter scripts.
a. There are numerous scripts which simulate load generated by different categories of user. For example a logged out user will have a different performance profile to a fully subscribed user.
b. We run all the scripts in serial as we want to see the performance profiles of each type of user on each different site we run.
5. The results from each run are processed by a powershell script which extracts the data from the JMeter log files (.jtl) and writes the results into a SQL Server database (DB). There is one record per page per test run.
6. We have a custom MVC app that pulls this data from the DB (using dapper) and displays it to the team on a common monitor (using JSON and RGraph) that is always updating. We see instantly after checking in whether we have affected performance, for good or for bad. We could break the build if we wanted to, but we decided that was a step too far, as it can sometimes take a day or two to fix a poorly performing part of the site.
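For anyone curious what steps 2 and 3 look like, they are only a few lines of powershell. This is just a rough sketch; the database name, backup path and warm-up URLs are placeholders rather than the ones from our build:

# Restore a known good copy of the test database so every run starts from identical data
$master = New-Object System.Data.SqlClient.SqlConnection("Data Source=(local); Initial Catalog=master; Integrated Security=SSPI")
$master.Open()
$restore = $master.CreateCommand()
$restore.CommandTimeout = 600
$restore.CommandText = @"
ALTER DATABASE PerformanceTestData SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
RESTORE DATABASE PerformanceTestData FROM DISK = 'C:\backups\PerformanceTestData.bak' WITH REPLACE;
ALTER DATABASE PerformanceTestData SET MULTI_USER;
"@
$restore.ExecuteNonQuery() | Out-Null
$master.Close()

# Kick each site once so IIS has started and the in-memory caches are primed before JMeter starts timing
$warm_up_urls = @("http://localhost/home", "http://localhost/search")
foreach ($url in $warm_up_urls) {
    (New-Object System.Net.WebClient).DownloadString($url) | Out-Null
}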

A stripped-down version is available on my GitHub account. Run the powershell script a few times and then run the MVC app and you should see something like the following:

The juicy bits (interesting bits of code and descriptions)

Powershell script (runTest.ps1)

• Calling out to JMeter from powershell on line 112 (-n runs JMeter without the GUI, -t is the test plan, -l the results file and -j the JMeter log):
& $jmeter -n -t $test_plan -l $test_results -j $test_log

• Parse JMeter results on line 133
[System.Xml.XmlDocument] $results = new-object System.Xml.XmlDocument
$results.load($file)
$samples = $results.selectnodes("/testResults/httpSample | /testResults/sample/httpSample")


Then iterate over all the samples, recording the page times and errors.
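The iteration itself is nothing clever. Here is a minimal sketch of the idea; in the real script each page gets a small stats object with AverageTime(), Max() and so on, so the plain hashtable below is only standing in for it:

$page_statistics = @{}
foreach ($sample in $samples)
{
    $page    = $sample.lb              # the label given to the sampler in the JMeter script
    $elapsed = [int]$sample.t          # elapsed time in milliseconds
    $failed  = ($sample.s -ne "true")  # success flag

    if (-not $page_statistics.ContainsKey($page))
    {
        $page_statistics[$page] = @{ Times = @(); Errors = 0 }
    }
    $page_statistics[$page].Times += $elapsed
    if ($failed) { $page_statistics[$page].Errors += 1 }
}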

• Write results to DB on line 171
$conn = New-Object System.Data.SqlClient.SqlConnection($connection_string)
$conn.Open()
foreach($pagestat in $page_statistics.GetEnumerator())
{
    $cmd = $conn.CreateCommand()
    $name = $pagestat.Name
    $stats = $pagestat.Value
    $cmd.CommandText = "INSERT Results VALUES ('$start_date_time', '$($name)',
    $($stats.AverageTime()), $($stats.Max()), $($stats.Min()), $($stats.NumberOfHits()),
    $($stats.NumberOfErrors()), $test_plan)"
    $cmd.ExecuteNonQuery()
}
$conn.Close()
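Building the INSERT by string interpolation is fine for a trusted, internal test run, but if the quoting ever gets fiddly (page labels containing apostrophes, say) the body of that loop can build the command with parameters instead. A sketch of the alternative:

$cmd = $conn.CreateCommand()
$cmd.CommandText = "INSERT Results VALUES (@RunDate, @Url, @Avg, @Max, @Min, @Hits, @Errors, @TestPlan)"
$cmd.Parameters.AddWithValue("@RunDate", $start_date_time)       | Out-Null
$cmd.Parameters.AddWithValue("@Url", $name)                      | Out-Null
$cmd.Parameters.AddWithValue("@Avg", $stats.AverageTime())       | Out-Null
$cmd.Parameters.AddWithValue("@Max", $stats.Max())               | Out-Null
$cmd.Parameters.AddWithValue("@Min", $stats.Min())               | Out-Null
$cmd.Parameters.AddWithValue("@Hits", $stats.NumberOfHits())     | Out-Null
$cmd.Parameters.AddWithValue("@Errors", $stats.NumberOfErrors()) | Out-Null
$cmd.Parameters.AddWithValue("@TestPlan", $test_plan)            | Out-Null
$cmd.ExecuteNonQuery() | Out-Null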

JMeter scripts

You can look up JMeter yourself to find suitable examples of this. My project posted here just has a couple of very simple demo scripts which hit Google and Bing over and over. You can replace these with any JMeter scripts you like; the DB and the web app are page- and site-agnostic, so it should be easy to swap in your own and have them pick up your data and just work.
I recommend testing all the critical pages in your app, but I find the graphs get too busy with more than 10 different lines (pages) on them. If you want to test more, add more scripts and graphs rather than putting loads of lines on one graph.
The generic solution given here has two scripts, but you can have as many as you like. Two would be a good choice if you had a public-facing site and an editor/admin site, which would have different performance profiles and pages. In the end it's up to you to be creative with your scripts and test what really needs testing.
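Running several plans in serial, as described in step 4 of the process, is just a loop around the same JMeter call shown earlier. A sketch, with made-up file names:

# Run each plan on its own so one user profile's load never skews another profile's timings
$test_plans = @(".\scripts\PublicSite.jmx", ".\scripts\AdminSite.jmx")
foreach ($test_plan in $test_plans)
{
    $name = [System.IO.Path]::GetFileNameWithoutExtension($test_plan)
    & $jmeter -n -t $test_plan -l ".\results\$name.jtl" -j ".\logs\$name.log"
}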

The results DB

The DB is really simple. It consists of just one table which stores a record per page per test run. The DB needs creating before you run the script for the first time; the file Database.sql will create it for you in SQL Server.
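As a rough guide, the table needs one column for each value the powershell script writes. The column names and types below are my guess, based on that INSERT and the later SELECT, so treat Database.sql in the repository as the real definition:

CREATE TABLE Results (
    RunDate        DATETIME     NOT NULL,  -- the start time of the test run
    Url            VARCHAR(255) NOT NULL,  -- the page label from the JMeter script
    AverageTime    FLOAT        NOT NULL,  -- average response time for this page in this run
    MaxTime        FLOAT        NOT NULL,
    MinTime        FLOAT        NOT NULL,
    NumberOfHits   INT          NOT NULL,
    NumberOfErrors INT          NOT NULL,
    TestPlan       VARCHAR(255) NOT NULL   -- which .jmx script produced the row
)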

The MVC app

Data layer, Dapper

Dapper (a micro-ORM installed through NuGet) is used to get the daily results; this is done in the resultsRepository class:

var sqlConnection = new SqlConnection("Data Source=(local); Initial Catalog=PerformanceResults; Integrated Security=SSPI");
sqlConnection.Open();
var enumerable = sqlConnection.Query(@"
SELECT Url, AVG(AverageTime) As AverageTime, CAST(RunDate as date) as RunDate FROM Results
    WHERE TestPlan = @TestPlan
    GROUP BY CAST(RunDate as date), Url
    ORDER BY CAST(RunDate as date), Url", new { TestPlan = testPlan });
sqlConnection.Close();
return enumerable;

The view, JSON and RGraph

In this sample code there are four different graphs on the page: two for Google (test plan 1) and two for Bing (test plan 2). The heartbeat graphs show a data point for every performance run over the last two weeks, so a bad run stands out straight away. The daily average graphs show one data point per day across all the performance data in the DB.
There are four canvases that contain the graphs; they are all drawn with RGraph from JSON data built from the results pulled out of the DB. It's the javascript function configureGraph that does this work with RGraph; for details of how to use RGraph see the appendix.
The JSON data is created from the model using LINQ in the view, like so:
dailyData: [@String.Join(",", Model.Daily.Select(x => "[" + String.Join(",", x.Results.Select(y => y.AverageTimeSeconds).ToList()) + "]"))],

This will create something like the following depending on the data in your DB:
dailyData: [[4.6,5.1],[1.9,2.2],[4.0,3.9],[9.0,9.0]],
The inner arrays are the individual lines and the numbers within them are the data points, so the data above represents four lines with two data points each.

Customising things for your own purposes

Customisation

So you would like to customise this whole process for your own purposes? Here are the very simple steps:
  1. Edit the function CreateAllTestDefiniotions in RunTest.ps1 to add in any JMeter scripts that you want to run as new TestPlanDefinitions (there is a rough sketch of this just after the list).
  2. Change or add to the JMeter scripts (.jmx) to exercise the sites and pages that you want to test.
  3. Add the plan definitions to the method CreateAllPlanDefinitions of the class PlanDefinition in the performance stats solution. This is all you need to edit for the web interface to display all your test plans. The graphs will automatically pick up the page names that have been put into the configured JMeter scripts.
  4. Optionally change the yMax of each graph so that the performance lines are drawn at a scale that suits your results.
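As a purely illustrative example of step 1 (the real shape of a test plan definition is whatever CreateAllTestDefiniotions already builds, so copy one of the two existing entries rather than this guess):

# hypothetical third plan added alongside the two demo definitions in CreateAllTestDefiniotions
$test_plan_definitions += @{
    Name    = "AdminSite"               # stored in the TestPlan column and shown by the web app
    JmxFile = ".\scripts\AdminSite.jmx" # the JMeter script this plan runs
}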

Conclusion

As a team we have found this set-up very useful. It has highlighted many issues for us, including N+1 select problems, Combres configuration problems, and any number of issues in business logic, usually around enumerations or recursive functions.
With the page set to refresh every minute it does a really good job, and it has been a constant reminder to the team to keep doing a good job on the performance NFR.

A note on live performance / stress testing

Live performance testing is a very different beast altogether; its objective is to see how the system as a whole reacts under stress, and to determine the maximum number of page requests that can be served simultaneously. The CI performance tests outlined above are different: they run on a dev box and are only useful as a relative measure of how page responsiveness changes as new functionality is added.

Appendix

JMeter - https://jmeter.apache.org/
Dapper - installed from NuGet
RGraph - http://www.rgraph.net/
GitHub - https://github.com/DamianStanger/CIPerformance
VS2012 - https://www.microsoft.com/visualstudio/11/en-us