Monday, 7 April 2014

Thoughtworks Go, adding a version text file as a build artifact

First, let me be clear that we use Go from Thoughtworks, but I'm sure you can use the same technique outlined below for other CI systems such as TeamCity or TFS.

When we deploy our built code to the live servers it's good to be able to see which version of the DLLs/EXEs is there. To do this we put a file called version.txt in the same directory as the built files; it contains the details of the deployed build: the build number and the SVN revision that formed the source for the build.

If you look in the console tab of the build job that you have set up you will see something similar to the following:
[go] setting environment variable 'GO_ENVIRONMENT_NAME' to value 'CI'
[go] setting environment variable 'GO_SERVER_URL' to value 'https://buildAgent01:8154/go/'
[go] setting environment variable 'GO_TRIGGER_USER' to value 'changes'
[go] setting environment variable 'GO_PIPELINE_NAME' to value 'Scoring'
[go] setting environment variable 'GO_PIPELINE_COUNTER' to value '81'
[go] setting environment variable 'GO_PIPELINE_LABEL' to value '81'
[go] setting environment variable 'GO_STAGE_NAME' to value 'Build'
[go] setting environment variable 'GO_STAGE_COUNTER' to value '1'
[go] setting environment variable 'GO_JOB_NAME' to value 'BuildSolution'
[go] setting environment variable 'GO_REVISION' to value '6343'
[go] setting environment variable 'GO_TO_REVISION' to value '6343'
[go] setting environment variable 'GO_FROM_REVISION' to value '6343'

With this data you can create a task that uses PowerShell (sc is an alias for Set-Content) to create the file:
Command: powershell
Arguments: sc .\version.txt "GO_ENVIRONMENT_NAME:%GO_ENVIRONMENT_NAME%, GO_SERVER_URL:%GO_SERVER_URL%, GO_TRIGGER_USER:%GO_TRIGGER_USER%, GO_PIPELINE_NAME:%GO_PIPELINE_NAME%, GO_PIPELINE_COUNTER:%GO_PIPELINE_COUNTER%, GO_PIPELINE_LABEL:%GO_PIPELINE_LABEL%, GO_STAGE_NAME:%GO_STAGE_NAME%, GO_STAGE_COUNTER:%GO_STAGE_COUNTER%, GO_JOB_NAME:%GO_JOB_NAME%, GO_REVISION:%GO_REVISION%"

This produces a text file like this:

version.txt

GO_ENVIRONMENT_NAME:CI
GO_SERVER_URL:https://buildAgent01:8154/go/
GO_TRIGGER_USER:changes
GO_PIPELINE_NAME:Scoring
GO_PIPELINE_COUNTER:81
GO_PIPELINE_LABEL:81
GO_STAGE_NAME:Build
GO_STAGE_COUNTER:1
GO_JOB_NAME:BuildSolution
GO_REVISION:6343
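If you'd rather keep this logic in a script under source control than in the task arguments, a minimal PowerShell sketch (my own, not part of Go; it relies on the $env: variables Go sets on the agent, and the output path is illustrative) would produce the same file:

# Write the Go build details to version.txt next to the build output.
$lines = @(
    "GO_ENVIRONMENT_NAME:$env:GO_ENVIRONMENT_NAME",
    "GO_SERVER_URL:$env:GO_SERVER_URL",
    "GO_TRIGGER_USER:$env:GO_TRIGGER_USER",
    "GO_PIPELINE_NAME:$env:GO_PIPELINE_NAME",
    "GO_PIPELINE_COUNTER:$env:GO_PIPELINE_COUNTER",
    "GO_PIPELINE_LABEL:$env:GO_PIPELINE_LABEL",
    "GO_STAGE_NAME:$env:GO_STAGE_NAME",
    "GO_STAGE_COUNTER:$env:GO_STAGE_COUNTER",
    "GO_JOB_NAME:$env:GO_JOB_NAME",
    "GO_REVISION:$env:GO_REVISION"
)
Set-Content -Path .\version.txt -Value $lines   # one variable per line, matching the output above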

Make sure this file is included in the build output folder along with the build artifacts. Good times.

Monday, 24 March 2014

Visualising the Thoughtworks Go pipeline using Cradiator, a build information radiator/monitor


You know the old adage, 'out of sight, out of mind'? Like it or not, sometimes the state of the build on CI is forgotten about, and if you can't see the current state without going looking for it, it can stay red for a few days before it's noticed by someone.

I've always been a big fan of information radiators, build monitors, graphs and stats that are in people's faces, and I just wanted to share our current solution to the whole 'who cares if the CI server is not green' problem.

We use Go (the CI server from Thoughtworks) for our build and deployment pipeline, which is great. Although it doesn't ship with a build monitor that can be installed on a machine to show the state of the build, Thoughtworks do expose an API that allows you to build your own, for example on http://Server:Port/go/cctray.xml

We found an old-ish project called Cradiator that works with CruiseControl (remember that old build server? It turns out that Go exposes the same API, albeit on a slightly different URL). The problem was that we had secured our instance of Go so that you need to be logged in to access it. This caused problems for Cradiator, so we forked it and added the ability to set your own credentials in the config. The fork can be found in the references below.

Below are a couple of screen shots showing the Go build server and also the corresponding Cradiator screen.
Our Go pipeline for this small part of the overall system

The Cradiator build monitor screen

As you can see every stage within a Go pipeline has a corresponding line in Cradiator. To achieve this, we use the following filter in the Cradiator config file: project-regex="^.*::.*::.*"

There is a robotic voice that announces who broke what, with some good catastrophic sound effects to accompany it. It's doing a great job of focusing people on fixing things if/when they go red.

Ever since we started using this, the average time from a red build to a check-in that fixes it has been less than an hour. Visibility for the win.

References:

https://github.com/DamianStanger/Cradiator

Thursday, 13 March 2014

Thoughtworks Go, asynchronously trigger a manual stage from a long running test

We are using Go from Thoughtworks Studios to manage our build pipeline. The builds generate artifacts that are then deployed and installed onto UAT servers; automation rocks. We then have a long running test that runs out of process using lots of NServiceBus queues.
We have a stage that starts a process manager which fires fake messages into the start of the system; then three different services pass messages along doing various things, all connected together via message buses.

So at the start of the UAT test we kick off some PowerShell, passing in the current pipeline counter (this is a vital detail: it tells the future API call which pipeline to kick):
. .\start_fullpipelinetest.ps1 %GO_PIPELINE_COUNTER%;
In this example the %GO_PIPELINE_COUNTER% variable is 11.

Given the nature of the system, once the message is sent to kick off the process, the Go pipeline goes green and the next stage sits in an 'awaiting approval' manual state.



The last thing the test does is send a message out to inform downstream systems that things are ready to go. We hook into this and fire a POST into Go to run the next stage. If you were to do this manually you would click the icon circled above; to do it programmatically you send the following POST command:
curl --data "" http://user:password@server:8153/go/run/uat_start_FullPipelineTest/11/TestingComplete
This kicks off the final stage of the pipeline. For us this does some verification of the expected state of the system and passes or fails accordingly.
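If you prefer to make that call from PowerShell rather than curl, a minimal sketch of the equivalent script (the pipeline name, stage name and credentials are just the ones from the example above, and Invoke-RestMethod needs PowerShell 3.0+) looks something like this:

# Trigger the TestingComplete stage once the long-running test has finished.
param(
    [Parameter(Mandatory=$true)][int]$pipelineCounter   # the GO_PIPELINE_COUNTER captured at the start of the test
)

$user = "user"
$password = "password"
$pair = "{0}:{1}" -f $user, $password
$headers = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair)) }
$url = "http://server:8153/go/run/uat_start_FullPipelineTest/$pipelineCounter/TestingComplete"

# An empty POST body is all Go needs to approve the manual stage.
Invoke-RestMethod -Uri $url -Method Post -Headers $headers -Body ""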

I really like it: we get an asynchronous test that does not hog the Go agent resources and will instantly tell you about failures once a test run has finished.

Go is very flexible and the API lets you do all sorts of cool things, like uploading artefacts and triggering pipelines.
curl -u user:password -F file=@abc.txt http://goserver.com:8153/go/files/foo/1243/UATest/1/UAT/def.txt
curl -u user:password -d "" http://goserver.com:8153/go/api/pipelines/foo/schedule

Wednesday, 18 December 2013

Search the whole SVN repository for a given filename

The SVN repository at work is huge, and I don't have the disk space to check out the whole thing, with the branches and everything, on my small (but very fast) laptop SSD. But I needed to search through the whole repo for a file; the following command lines can help out.

Windows

svn list -R https://subversion-repo/subfolder | findstr filename

Nix

svn list -R file:///subversion-repo/subfolder | grep filename

These commands don't look through the history but will find things at the current HEAD of the repository.

If you want to look for a particular point in time you can specify the revision thus:

svn list -r 1234 -R https://subversion-repo/subfolder | findstr filename

where 1234 is the revision to search through.

If you want to search the entire history you could script the search to look through every revision from 1 to n, list the files that match at each revision, then remove duplicates to get a single list. You could get even fancier by recording the revision a file was first found in and the revision it was deleted at. I have no requirement to do this right now, but it sounds like an interesting little project to try; a rough sketch of the idea is below.
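A rough PowerShell sketch of that idea (my own, untested against a big repo, and slow since it runs svn list once per revision; the URL, pattern and HEAD revision are illustrative):

# Search every revision for file names matching a pattern and record
# the first revision each match was seen in.
$repo = "https://subversion-repo/subfolder"
$pattern = "filename"
$head = 1234          # current HEAD revision of the repo
$firstSeen = @{}

for ($rev = 1; $rev -le $head; $rev++) {
    svn list -r $rev -R $repo 2>$null |
        Where-Object { $_ -match $pattern } |
        ForEach-Object {
            if (-not $firstSeen.ContainsKey($_)) { $firstSeen[$_] = $rev }
        }
}

$firstSeen.GetEnumerator() | Sort-Object Value |
    ForEach-Object { "{0} (first seen in r{1})" -f $_.Key, $_.Value }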

If you want to search for text in files, I find searching the diffs useful. Just pipe the following into a file and search that in your favorite editor (Sublime Text :-)

svn log -r1234:HEAD --diff https://subversion-repo/subfolder

This can be rather verbose, but with a bit of tweaking and targeting of the repo/folder you can get some accurate results when searching text in history.

Sunday, 15 December 2013

Personal Backup strategies

It's been on my mind of late that I don't have a very good backup strategy in place for my own things at home. I've got many gigs of photos, code, documents and videos that are locally backed up but all over the place and not very consistent, and then there is Gmail and the 4 gig of emails in there. So I'm doing something about it.

The solution is:

Dropbox

I use Dropbox for cloud sync and storage. This is not backup. I use it to get access to files easily from anywhere, but if I accidentally delete or change something, the change is propagated straight to Dropbox, so (unless you have Packrat) it's quite hard to undo the change or get to an older version.

I keep a local copy of all the Dropbox files on my home server.
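One simple way to keep a copy like that on Windows is a scheduled robocopy; a sketch, with illustrative paths:

# Copy the Dropbox folder to the home server without mirroring deletions,
# so a file removed from Dropbox still survives in the copy.
robocopy "C:\Users\me\Dropbox" "\\homeserver\backups\Dropbox" /E /R:2 /W:5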

gmail

A weekly download of all Gmail to local machines using gmvault. The official set-up guide is here, but Scott Hanselman did a great write-up of how to do this here.

This boils down to two commands. The first for the initial sync, the second for incremental backups on top of the same folder structure.
gmvault sync youremail@gmail.com -d D:\foldertosaveto
gmvault sync -t quick youremail@gmail.com -d D:\foldertosaveto

Output of my initial run. Yes, it took a while to run...
================================================================
Sync operation performed in 2h 36m 35s.
Number of reconnections: 70.
Number of emails quarantined: 0.
Number of emails that could not be fetched: 0.
Number of emails that were returned empty by gmail: 0
================================================================

Scheduled job

I have set up a scheduled job (in Windows Task Scheduler) which runs a script every Friday that backs up the week's email to my hard disk. The script is just a simple .bat file whose contents are:
gmvault sync -t quick youremail@gmail.com -d D:\foldertosaveto
You will need to make sure that gmvault is on your path if you do it this way. Setting up scheduled jobs is easy too; there are loads of online tutorials, here is one for Windows 8.
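If you'd rather script the task itself than click through Task Scheduler, something like this works on Windows 8 (a sketch using the ScheduledTasks module; the script path, task name and time are illustrative):

# Register a weekly task that runs the gmvault backup script every Friday evening.
$action  = New-ScheduledTaskAction -Execute "D:\scripts\gmail-backup.bat"
$trigger = New-ScheduledTaskTrigger -Weekly -DaysOfWeek Friday -At "18:00"
Register-ScheduledTask -TaskName "GmailBackup" -Action $action -Trigger $trigger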

Amazon glacier

  1. Sign up for Amazon Glacier; you will need your credit card for this (first you need to sign up for an Amazon AWS account).
  2. Once logged in, create a set of access keys (an Access Key ID and Secret Access Key) and save them to your machine.
  3. Go to the glacier console and create a vault for each type of backup you are planning on doing. I've created two for now, one for my photos and one for my mail backups. I might create another for music later.
  4. Ensure you have chosen the data centre closest to you for the vaults. Both of mine are in EU Ireland.

Cloudberry online backup

I use CloudBerry online backup to do the heavy lifting of actually sending all my files up to Amazon:
http://www.cloudberrylab.com/amazon-glacier-storage-backup.aspx#amazonglacier. It's great: you just set up some backup plans and a schedule and CloudBerry does the rest. It's not free, but it's really quite cheap given what it does and how well it does it.
  1. Install the cloudberry online backup desktop version (download from: http://www.cloudberrylab.com/amazon-s3-cloud-desktop-backup.aspx )
  2. Add a glacier cloud storage account (File->amazon glacier)
  3. Follow the wizard - it's really easy
  4. Go to the backup plans tab and create a new plan or use a predefined plan
  5. For my gmail backup I created a new plan
  6. Click the backup wizard (backup files). Again, a real easy wizard to follow. Select the glacier account/vault, the files to back up and the schedule. So easy.

Costs

I'm storing 150 gig in Amazon Glacier; that costs me £1.50 per month, and I can store as much as I like, practically unlimited storage. Be careful though, because it costs a lot more to get it out. But that's OK, right? This is emergency backup. You might be able to get your files back from Dropbox, local backup etc. Glacier is the long-term emergency backup we all need.

Summary

The whole point was to get all the files that I care about into cheap storage with multiple redundant backup locations, so if/when I lose some data I can get it back. Dropbox provides an easy way to get files back, but it's not a total solution; Amazon provides the cheap offsite secure backup that I want for my 150+ gig of data.

Comments from https://news.ycombinator.com/item?id=6927659

* by drdaeman

Isn't Glacier overpriced, compared to other personal backup solutions?

Say, I have a mere 2TiB of historical data (various junk I made or collected over last ten years or so). Storing on them with Amazon is $20/mo, and if I want to look on that photos from 2008 I have to wait for several hours just to find that I misremembered where they were stored and pulled out wrong files. And unless it happened that I uploaded a good amount of data on that exact day, I'll have to pay for downloads.

Other offers for unlimited storage are Cyphertite at $10/mo, Crashplan at $6/mo, Carbonite at $100/yr, AltDrive at $4.5/mo and so on. While they're probably not-so-unlimited (they don't say that, but I guess one won't have much luck storing a petabyte), less respectable than Amazon, and most services lack an API and require to use not-so-trusty proprietary software that has to be sandboxed properly, Glacier doesn't look like a good deal to me unless we're talking about backing up some either quite big data (like tens of terabytes) or relatively small amounts of data (less than 500GiB).

Disclaimer: I have no affiliation to any of companies mentioned above. Just happens that I'm currently fleeing from Bitcasa (they suck hard) and looking at various options to not maintain a self-hosted NAS.

* by tfe

The difference is that I trust Amazon far more than those other companies you mentioned. If they go out if business or even change their "unlimited" policy, you're exposed until you can get your 2TB re-uploaded to another provider. It's a pain and a risk I'm unwilling to take. I know Amazon isn't going to suddenly try to dump me as a customer.

* by damianstanger

Yes all good points. I have a relatively small data set < 200GiB and so my costs with glacier are less than $2 per month :-)


* by hengheng

I am using Glacier to store a backup of most of my personal data. This includes my home directory, the most relevant photos I have taken as jpeg, my gmvault and that's about it. I do not copy over any movies, music, raw photos or software, as this is my last line of defense, so it only needs to cover the essentials. I am under 1€ per month this way, and the backup gets refreshed only every other month or so.

I do have a local server that stores a windows backup image of my whole laptop, a second Harddisk in that Server to store a copy of the server, and an external hard disk with a windows backup at my parents that gets a refresh every time I am over there. All backups are truecrypt images for good measure, and I have tested recovery. Amazon stores a split truecrypt archive. Recovery cost about 20€ and took a day.

So yes, glacier is great as a personal backup, if you make it part of a larger strategy. To me, this is disaster recovery, and a small price to pay for this kind of insurance of important files and memories.

Sunday, 1 December 2013

Script your build and deployment of android cordova apps with powershell

We are developing a new version of our customer-facing solution, across web, iOS and Android, using Cordova (PhoneGap).

I'm a big proponent of build automation, and the classical (recommended?) way of using Eclipse to build and manage the code base was getting me down, so I decided to write some scripts to build and deploy the app to either a device, an emulator, or to prepare it for release. I also wanted any developer to be able to check out the code and run the scripts to build the app.

I wrote the scripts in PowerShell (sorry!) with some batch files to make the various functions easy to run (I'm developing on Windows 8, by the way).

You can find the scripts here: https://github.com/DamianStanger/AndroidBuildScripts

So how does it all work?

Firstly, it goes without saying that you need your dev environment set up for Android development on the command line with Cordova: http://cordova.apache.org/docs/en/edge/guide_cli_index.md.html#The%20Command-Line%20Interface

As you will know (if you do Cordova development), when you use the command line tools to create a Cordova app, the folder created is where all your source code is placed, and inside it is the www folder where you keep your .js and .html files. The problem is that you are then keeping your source code alongside the automatically built Cordova files, which is not ideal. So I've created my own source folder, which is where all the code you edit is kept. We then use PowerShell to copy these files to the correct places, as sketched below.
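The copy itself is nothing clever; a sketch of the idea (the source folder name here is illustrative, the real scripts are in the GitHub repo linked above):

# Copy the editable source over the generated cordova www folder.
Copy-Item -Path .\src\www\* -Destination .\app-cordova-android\www\ -Recurse -Force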

The development process

In a PowerShell prompt (or DOS cmd if you prefer):

build.bat
emulate.bat or install.bat

On the second line, run one or the other depending on whether you are using a real device or not.

That's it. Now, you might notice that build can take a while to run because it's setting up everything from scratch, so I created a shortcut that will only copy your changes across.

quickCopy.bat
emulate.bat or install.bat

This is all good for general day-to-day dev, but eventually you will want to test a production build on a real device. For this, use the following commands:

release.bat
installRelease.bat

To make this work you must have only either a device plugged into USB or an emulator turned on (please use Genymotion, it's so much faster than the standard emulator).
The release process signs and aligns your APK for you :-) so when you are ready you just send the APK you have tested to the Play Store.

The scripts

Here I'm going to show selected lines of code from build.ps1.

For building the app in debug and getting that on to your emulator or phone

function create() {
  cordova create app-cordova-android com.myapp.app myapp
  ...
  cordova platform add android
  ...
  cordova plugin add org.apache.cordova.device
}

function build() {
  cordova build android
}

function emulate() {
  cordova emulate android -d
}

function installDebug() {
  cordova run android -d
}

To build the releaseable apk and to get that onto your phone use the following:

function release() {
  cordova build android --release
}

function sign() {
  jarsigner -verbose -sigalg SHA1withRSA -digestalg SHA1 -keystore ..\appstore\android-keystore\myapp -keypass myappKeyPassword -storepass myappStorePassword -signedjar .\platforms\android\bin\myapp-release-signed.apk .\platforms\android\bin\myapp-release-unsigned.apk myapp
  zipalign -f -v 4 .\platforms\android\bin\myapp-release-signed.apk .\platforms\android\bin\myapp-release-signed-aligned.apk
}

function installRelease() {
  adb uninstall com.myapp.app
  adb install .\appstore\APKs\myapp-release-signed-aligned.apk
}

Release versioning

When doing a release to the Play Store you need to make sure the version numbers are incremented each time. For this I added a helper which will update all the relevant places in the source for you.

just run:

setVersion 102 1.0.2

This will change all the files that need changing in order to put a new version of the app onto the Play Store properly.
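Under the hood the helper is little more than a find-and-replace over the files that carry a version number. A minimal sketch covering just the generated Android manifest (the path is the usual Cordova one, but check your own tree; the real helper updates all the relevant places):

# Bump the Android versionCode and versionName in the generated manifest.
function setVersion([int]$versionCode, [string]$versionName) {
  $manifest = ".\app-cordova-android\platforms\android\AndroidManifest.xml"
  $content = Get-Content $manifest -Raw
  $content = $content -replace 'android:versionCode="\d+"', "android:versionCode=`"$versionCode`""
  $content = $content -replace 'android:versionName="[^"]+"', "android:versionName=`"$versionName`""
  Set-Content $manifest $content
}

setVersion 102 "1.0.2"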

Upload to play store error
Upload failed
You uploaded a debuggable APK. For security reasons you need to disable debugging before it can be published in Google Play

Make sure your manifest is set thus:

<application android:debuggable="false" android:hardwareAccelerated="true" android:icon="@drawable/icon" android:label="@string/app_name">


Sunday, 17 November 2013

Debug your android applications by capturing/monitoring their http traffic using wireshark

I've always wondered what my phone is telling the outside world, and recently I had the need to actually find out, as I'm developing an Android app for work at the moment. I needed to find out what was going over the wire because I was getting some strange problems and could not debug the traffic on the production server.

Setup

Download and install Wireshark: https://wireshark.org/

Disable wifi and mobile data on the phone.

Connect your phone to your laptop/desktop via USB.

Enable internet pass-through. Basically you want your phone's internet to come through the USB cable and your computer's network card, so that when a Wireshark capture is running the traffic goes through Wireshark.

Set up a capture filter so that you only capture the data coming to and from your phone and not data initiated from the computer itself. I pick the option to 'create a capture with detailed options' and set a capture filter, for example 'host 192.168.15.129', where 192.168.15.129 is the IP address of the phone.

Additionally (or alternatively) you can filter the traffic by IP address after capture when viewing the results, using "ip.src==192.168.15.129 or ip.dst==192.168.15.129", where 192.168.15.129 is the IP address of your phone. Or filter the traffic by protocol: you probably care about HTTP traffic, so filter on this by entering "http" in the filter.

Results

You can get information overload with Wireshark; it takes some getting used to, but if you dig you can find everything you need. Look for the requests you care about by scanning down the info column and clicking the row. This will present all the packet details, where you can dig as deep as you like into the request.

I use the Hypertext Transfer Protocol section, as it's the level of detail I care about. From here you can see the URL and the headers, as well as a link to the packet that contains the response. Simply perfect.

Sunday, 27 October 2013

jquery promises wrapped in javascript closures, oh my..

I recently had a question from one of my fellow devs regarding a problem where the values in a loop were not what they expected. This was down to the deferred execution of the success function after a promise had been resolved.

The following code has been simplified for this explanation:

function (userArray) {
  var i;

  for (i = 0; i < userArray.length; i++) {
    var userDto = userArray[i];
    var user = new UserModel();
    user.displayName = userDto.displayName;
    var promise = service.getJSON(userDto.policies);

    promise.then(function(policyDtos){
      system.log("user.displayname : " + user.displayname);
      convertAndStore(user, policyDtos);
    });
  }
}

Input:
userArray = ['bob', 'sue', 'jon']; //* see footer
Output:
user.displayname : jon
user.displayname : jon
user.displayname : jon

service.getJSON (called inside the loop) returns a jQuery promise; the function actually makes a service call to an external API and so can take some time to resolve. Notice how the vars userDto and user are declared within the loop. This is not best practice in JavaScript, as the declarations are actually hoisted up to the containing function (next to var i). To a non-JavaScript expert it looks like the variables will be created anew on each iteration, as they would be in C#. In fact there is only one copy each of i, user and userDto, so the values are overwritten on every loop iteration.

This is the fixed function using closures.

function (userArray) {
  var i,userDto, promise;

  for (i = 0; i < userArray.length; i++) {
    userDto = userArray[i];
    promise = service.getJSON(userDto.policies);
 
    (function(capturedDisplayName) {
      var user = new UserModel();
      user.displayName = capturedDisplayName;

      promise.then(function(policyDtos){
        system.log("user.displayname : " + user.displayname);
        convertAndStore(user, policyDtos);
      });
    }) (userDto.displayName);
 
  }
}

Input:
userArray = ['bob', 'sue', 'jon']; //** see footer
Output:
user.displayname : bob
user.displayname : sue
user.displayname : jon

The introduced immediately-invoked function creates a closure around the value of userDto.displayName, which is passed in as the parameter capturedDisplayName. Now there is a copy of that value for every iteration of the loop, allowing you to use it after the promise has resolved.
You may wonder why the variable promise works as intended, given the problems with user and userDto. That is because the promise object referenced within each loop iteration has .then attached to it before the variable is reassigned on the next iteration; the object itself is never overwritten or changed, only the reference to it.


//* In reality these are more complex objects rather than simple strings; I'm trying to keep this simple for readability.
//** The names have been changed to protect the innocent.

Sunday, 20 October 2013

HTC One and Vodafone. Removing the Kikin search service

If you're like me then you will hate how the phone manufacturers install all sorts of things on your phone for you (to be fair, I think it's Vodafone not HTC), and I know it's not as bad as the Galaxy (I've several friends with Samsung phones), but recently my phone started popping up a search service every time I selected (long pressed) a word to copy and paste it. Really, really annoying.

Anyway, the Kikin service (http://www.kikin.com/) was really winding me up: software I didn't ask for, didn't want and couldn't remove, but which did get in the way every time I wanted to simply select a piece of text.

So how do you remove the Kikin service?

OK, first the bad news: you can't, well, not without rooting your phone. But you can disable it so it no longer bothers you by popping up all the time.

Settings -> Apps -> All -> Kikin

Sadly you will see no uninstall option, but if you turn notifications off, force stop, then disable it, you are done :-) Now the long-press text selection doesn't also do a web search automatically.
I'm not sure, but since I've done this I have not had Kikin auto-update on me, or it might just be happening behind the scenes.

Vodafone, HTC, et al.: please, please stop installing useless guff onto our phones. Just because you can doesn't mean you should.

Friday, 27 September 2013

Problems connecting to a live DB in an MVC 4 web app using EF5.0

I had some real problems this week deploying my latest site live. Everything was running fine locally, or so I thought, connecting to a local SQL 2012 DB through Entity Framework 5.0.

On dev I was running through IIS Express on port 41171.

When I deployed to live, the web server would spin and spin and then eventually produce an ASP.NET error saying [Win32Exception (0x80004005): The system cannot find the file specified]

[SqlException (0x80131904): A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)]
...

My first instincts were that SQL Server was not set up right at the hosting provider's end; I was using the correct connection string after all. But after some back and forth with the hosting provider (1and1) I came to the realisation that the SQL instance (SQL Server 2012) was probably set up right.
I finally removed this section from the web.config that Entity Framework adds in:


<section name="entityFramework" type="System.Data.Entity.Internal.ConfigFile.EntityFrameworkSection, EntityFramework, Version=5.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
...
...
<entityFramework>
<defaultConnectionFactory type="System.Data.Entity.Infrastructure.SqlConnectionFactory, EntityFramework">
<parameters>
<parameter value="Data Source=.; Integrated Security=True; MultipleActiveResultSets=True" />
</parameters>
</defaultConnectionFactory>
</entityFramework>

What does this magic do?
From what I gather, this tells Entity Framework to auto-magically connect to a database by convention if the connection string doesn't work. I managed to ascertain that my code was actually trying to connect to a local SQLExpress DB. So I took it out.

After removing that section I got this error message (again after ages of trying to connect)

Server Error in '/' Application.
A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified)

[SqlException (0x80131904): A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified)]
System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction) +5296071
...
...

AND now dev was also broken...

At this point you may be directed here by Google/Bing/DuckDuckGo et al. to https://blogs.msdn.com/b/sql_protocols/archive/2007/05/13/sql-network-interfaces-error-26-error-locating-server-instance-specified.aspx
It's not the problem you are looking for... No, you don't need to allow UDP port 1434 or anything like that.

My local connection string was
<add name="applicationName" connectionString="server=localhost; database=localdbname;Integrated Security = true;MultipleActiveResultSets=True" providerName="System.Data.SqlClient" />
my connection string on live was
<add name="applicationName" connectionString="Server=db123456789.db.1and1.com,1433;Database=123456789;User Id=mydbuser;Password=mydbuserpassword;" providerName="System.Data.SqlClient" />
all as provided by the host.

Why was it not connecting? I almost put that config section back, but after debugging I found that inside the application context that was driving Entity Framework the connection string was not there; again it was trying to use SQLExpress.

Then I stumbled upon it; my context was configured thus:


public class theappContext : DbContext
{
    public theappContext() : base("databasename")
    {
    }
...

And in Application_Start in global.asax:
Database.SetInitializer<theappContext>(null);
It turns out the connection string's name needs to be the same as the string passed to base("databasename"). In my case I needed to set it to databasename, not applicationName as I had previously.

So finally these are my connection strings

On dev either:
<add name="databasename" connectionString="server=localhost; database=localdbname;Integrated Security = true;MultipleActiveResultSets=True" providerName="System.Data.SqlClient" />
<add name="databasename" connectionString="server=localhost; database=localdbname;User Id=mydbuser;Password=mydbuserpassword" providerName="System.Data.SqlClient" />

On live:
<add name="databasename" connectionString="Server=db123456789.db.1and1.com,1433;Database=123456789;User Id=mydbuser;Password=mydbuserpassword;" providerName="System.Data.SqlClient" />

And that's what fixed it: simply changing the name of the connection string. So Entity Framework was being super clever and making up for bad connection strings locally, which meant that when deploying live I had no hope. Why all this auto-magic?