Sunday, 31 August 2014

Using PowerShell to create local users on Windows

We are setting up a server farm for a new environment consisting of many servers, and we want to create many users with admin rights on each one, including membership of the Remote Desktop Users group.

We could have spent an hour or so and used the GUI on each server but we thought that a script would be quicker, not to mention more fun to write.

The latest version is here: https://github.com/DamianStanger/Powershell/blob/master/Add-LocalAdminUserAccount

The version at time of writing is below:

Function Add-LocalUserAdminAccount{
  param (
  [parameter(Mandatory=$true)]
    [string[]]$ComputerNames,
  [parameter(Mandatory=$true)]
    [string[]]$UserNames,
  [parameter(Mandatory=$true)]
    [string]$Password
  )

  foreach ($computer in $ComputerNames){
    foreach ($userName in $UserNames){
      Write-Host "setting up user $userName on $computer"

      # Create and save the user via the ADSI WinNT provider
      [ADSI]$server="WinNT://$computer"
      $user=$server.Create("User",$userName)
      $user.SetPassword($Password)
      $user.Put("FullName","$userName-admin")
      $user.Put("Description","Scripted admin user for $userName")

      # Set the PasswordNeverExpires flag (0x10000)
      $flag=$user.UserFlags.value -bor 0x10000
      $user.Put("userflags",$flag)

      $user.SetInfo()

      [ADSI]$group = "WinNT://$computer/Administrators,group"
      Write-Host "Adding" $user.path "to " $group.path
      $group.add($user.path)

      [ADSI]$group = "WinNT://$computer/Remote Desktop Users,group"
      Write-Host "Adding" $user.path "to " $group.path
      $group.add($user.path)
    }
  }
}

[string[]]$computerNames = "computer1", "computer2"
[string[]]$accountNames = "ops", "buildagent"

Add-LocalUserAdminAccount -ComputerNames $computerNames -UserNames $accountNames -Password "mysecurepassword"


The lines that do the damage are the ones that create and save the user through the ADSI WinNT provider, followed by the ones that add the new user to the required groups on the machine.

It would be trivial to change this script into a PowerShell module, but the script as it stands serves my current needs. Just add more computer names and account names to suit your needs; we have around 10 of each in the version of the script I'm running.
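To sanity-check the result you can list the members of the Administrators group on each machine. Here's a minimal sketch using the same WinNT ADSI provider (the reflection call is the usual idiom for reading member names; $computerNames is the array from above):

foreach ($computer in $computerNames) {
  [ADSI]$group = "WinNT://$computer/Administrators,group"
  # Enumerate the group members and extract each account name
  $members = @($group.psbase.Invoke("Members")) | ForEach-Object {
    $_.GetType().InvokeMember("Name", "GetProperty", $null, $_, $null)
  }
  Write-Host "$computer -> $($members -join ', ')"
}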

Friday, 20 June 2014

Story points vs numbers of stories. What's the best way to predict your project completion date? Or are estimates worth it?

Do story points actually help in estimating and predicting project milestones and end dates? Given the inherent inaccuracy of points, can we save time by skipping the estimation process?
We estimate our stories in story points (2, 4 or 8 points representing small, medium or large). We try to do this when stories get added to the backlog so that we can better plan the upcoming work.

But personally I don't really think the effort required to estimate the stories is worth it; maybe we should plan with the number of stories instead?

I'd like to support my thoughts with actual data rather than hearsay and conjecture. Opinion pieces are all very well, but sometimes you just need to 'show me the data'.

Comparison of the two different metrics

We have been going with our current project for 16 iterations/weeks now; we had one major release at iteration 6 and we are now approaching the next major release for this project. I wanted to take this opportunity to reflect on the no-estimates debate: is there any point in estimates? I will let the numbers do the speaking for me.

Here is our velocity first in terms of story points then in terms of stories completed:


Velocity in story points



Velocity in numbers of stories

The following two graphs track the amount of work done vs remaining. The first is an agile burn-up chart, the second a lean cumulative flow chart.



Burn up (story points)



Cumulative flow (numbers of stories)

And here are the statistics on the stories themselves:
13 stories/spikes with no points, 39 with 2 points, 29 with 4 points, 5 with 8 points. That's 86 stories and 234 points in total, which roughly tallies with the 240 points / 80 stories of final scope discussed below.

Conclusions

Apart from the fact that there has been a lot of scope creep and our velocity has fluctuated wildly, what do the graphs tell us?

Well, at the beginning of the project, like almost every project I've ever worked on, we thought we had most (but not all) of the requirements captured, and like most of the projects I've ever worked on we were wrong, very wrong. The scope doubled from 120 points (40 stories) to 240 points (80 stories) from the start of the second phase up to now. So points and numbers of stories both gave an equally misleading picture of the scope, and hence of the estimated completion date. No difference between the metrics here, then.

Our velocity trend is either 15 points or 6 stories; whichever measure of velocity you use, the estimated end date (as of this writing) is the 20th of July. Both the burn-up and the cumulative flow diagrams show how scope has been added in each iteration as we discover new requirements. As with everything we do, we have to ask: is this activity adding value?

So I think we could ditch estimates. But here's the thing: we MUST still analyse stories properly and break them down into small independent stories (always remember to INVEST in your stories). And we as a team must still discuss the stories before we pick them up to work on.

Appendix

You might ask what happened in early May when the graphs flat-line (iteration 20). Well, half the team was needed for an urgent fix/deploy to another system at the same time as holidays for two other team members, and then we had to ramp up the team again (with different team members).
And why the fluctuations in velocity? Well, the team changed quite regularly; the time when we were most productive was (unsurprisingly) the time when the team was most stable and we weren't getting distracted by other projects and maintenance work.

All graphs are produced courtesy of the Mingle agile project management software from ThoughtWorks; it's a great tool for managing your agile projects. We moved from Trello to Mingle around about the new year, but that is a different story for a different blog post.

Incidentally, I'm not even going to mention burn-down charts; I've long since abandoned them, they are so limited for trying to visualise the actual picture of what is going on in a given release. If you want to see a good explanation of burn-ups vs burn-downs just Google it, or look at this example: http://brodzinski.com/2012/10/burn-up-better-burn-down.html
I realise not everyone sees it my way, and I guess if you have stable/full requirements and want to track work in detail in a given iteration then burn-down might work for you, but in our experience it just causes confusion for management and stakeholders. They may end up asking 'why is your burn-down going up?' or 'why is it flat?'. They can't tell whether you did nothing or were adding features at the same rate as knocking them off.
OK, OK, sorry, I did mention burn-down charts. Sorry...

Saturday, 24 May 2014

Unit testing with Moq - Returning different values from multiple calls to the same method in a loop, and then verifying multiple calls to another method.

I was doing some TDD the other day in a C# .NET service and found myself wanting to write a loop that called a couple of methods and acted on the return values in different ways. I needed to mock the calls so they returned different data depending on how many times the methods were invoked.

This is the method under test that I ended up with after doing the TDD cycle. (I've stripped out all the exception handling and logging to keep this example clear)

using JourneyHeader.Domain.Entities;
namespace journeyMigration
{
  public class JourneyMigrator
  {
    private readonly JourneySource _journeySource;
    private readonly JourneyDestination _journeyDestination;

    public JourneyMigrator(JourneySource journeySource, JourneyDestination journeyDestination)
    {
      _journeySource = journeySource;
      _journeyDestination = journeyDestination;
    }

    public int JourneysProcessed { get; private set; }
    public int JourneysFailedProcessing { get; private set; }
    public JourneyHeader LastJourneyProcessed { get; private set; }

    public void Start()
    {
      JourneyHeader journeyHeader = _journeySource.GetNextJourney();
      while (journeyHeader != null)
      {
        _journeyDestination.Upload(journeyHeader);
        JourneysProcessed++;               
        LastJourneyProcessed = journeyHeader;   
        journeyHeader = _journeySource.GetNextJourney();
      }
    }
  }
}


The code we really care about is the while loop in Start(). Notice that GetNextJourney() is called repeatedly, with each iteration depending on the value returned last time. Also see that Upload() is called inside the loop; I want to verify I was passing the correct values through to it.

This is one of the tests I came up with whilst writing this code; it's a good example of the two things I wanted to demonstrate here.

using System;
using System.Collections.Generic;
using FluentAssertions;
using JourneyHeader.Domain.Entities;
using journeyMigration;
using Moq;
using NUnit.Framework;

namespace journeyMigrationTests
{
  [TestFixture]
  public class JourneyMigratorTests
  {
    [Test]
    public void ShouldProcessTwoJourneys()
    {
      var journeyHeader1 = new JourneyHeader();
      var journeyHeader2 = new JourneyHeader();
      var queue = new Queue<JourneyHeader>(new [] {journeyHeader1, journeyHeader2, null});
      _journeyHeaderSource.Setup(x => x.GetNextJourney()).Returns(queue.Dequeue);
 
      _journeyMigrator.Start();
 
      _journeyMigrator.JourneysProcessed.Should().Be(2);
      _journeyMigrator.LastJourneyProcessed.Should().Be(journeyHeader2);
      _journeyDestination.Verify(x => x.Upload(journeyHeader1), Times.Exactly(1));
      _journeyDestination.Verify(x => x.Upload(journeyHeader2), Times.Exactly(1));
    }
  }
}


The interesting part is the queue set up at the top of the test, which is used to return the values in the prescribed order. You have to do it this way because in Moq you can't do multiple Setups on a given class's method; the last one to be defined wins. In the following example journeyHeader2 is always returned.
_journeyHeaderSource.Setup(x => x.GetNextJourney()).Returns(journeyHeader1);
_journeyHeaderSource.Setup(x => x.GetNextJourney()).Returns(journeyHeader2);


The Returns method takes a value, as above, or a function that is run every time a return value is required; you could write it like this:
_journeyHeaderSource.Setup(x => x.GetNextJourney()).Returns(() => queue.Dequeue());
But the method-group form used in the test above is a lot clearer.

Finally, the two Verify calls check that the method Upload was called correctly: once with the first journeyHeader and once with the second.

Appendix
Moq - A popular and friendly mocking framework for .NET

Monday, 7 April 2014

Thoughtworks Go, adding a version text file as a build artifact

Firstly, let me be clear that we use Go from ThoughtWorks, but I'm sure you can use the same technique outlined below for other CI systems such as TeamCity or TFS.

When we deploy our built code to the live servers it's good to be able to see what version the DLLs/EXEs are. To do this we put a file called version.txt in the same directory as the built files; it contains the details of the build that has been deployed: the build number and the SVN revision that formed the source for the build.

If you look in the console tab of the build job that you have set up you will see something similar to the following:
[go] setting environment variable 'GO_ENVIRONMENT_NAME' to value 'CI'
[go] setting environment variable 'GO_SERVER_URL' to value 'https://buildAgent01:8154/go/'
[go] setting environment variable 'GO_TRIGGER_USER' to value 'changes'
[go] setting environment variable 'GO_PIPELINE_NAME' to value 'Scoring'
[go] setting environment variable 'GO_PIPELINE_COUNTER' to value '81'
[go] setting environment variable 'GO_PIPELINE_LABEL' to value '81'
[go] setting environment variable 'GO_STAGE_NAME' to value 'Build'
[go] setting environment variable 'GO_STAGE_COUNTER' to value '1'
[go] setting environment variable 'GO_JOB_NAME' to value 'BuildSolution'
[go] setting environment variable 'GO_REVISION' to value '6343'
[go] setting environment variable 'GO_TO_REVISION' to value '6343'
[go] setting environment variable 'GO_FROM_REVISION' to value '6343'

With this data you can create a task which uses PowerShell (sc is the built-in alias for Set-Content) to create the file:
Command: powershell
Arguments: sc .\version.txt "GO_ENVIRONMENT_NAME:%GO_ENVIRONMENT_NAME%, GO_SERVER_URL:%GO_SERVER_URL%, GO_TRIGGER_USER:%GO_TRIGGER_USER%, GO_PIPELINE_NAME:%GO_PIPELINE_NAME%, GO_PIPELINE_COUNTER:%GO_PIPELINE_COUNTER%, GO_PIPELINE_LABEL:%GO_PIPELINE_LABEL%, GO_STAGE_NAME:%GO_STAGE_NAME%, GO_STAGE_COUNTER:%GO_STAGE_COUNTER%, GO_JOB_NAME:%GO_JOB_NAME%, GO_REVISION:%GO_REVISION%"
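For reference, a rough standalone equivalent in plain PowerShell, writing one NAME:value pair per line to match the output below (it assumes the GO_* variables are present in the environment, as they are on a Go agent):

$vars = "GO_ENVIRONMENT_NAME", "GO_SERVER_URL", "GO_TRIGGER_USER",
        "GO_PIPELINE_NAME", "GO_PIPELINE_COUNTER", "GO_PIPELINE_LABEL",
        "GO_STAGE_NAME", "GO_STAGE_COUNTER", "GO_JOB_NAME", "GO_REVISION"
# Look each variable up in the environment and write NAME:value lines
$vars | ForEach-Object { "{0}:{1}" -f $_, (Get-Item "env:$_").Value } |
  Set-Content .\version.txt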

This produces a text file like this:

version.txt

GO_ENVIRONMENT_NAME:CI
GO_SERVER_URL:https://buildAgent01:8154/go/
GO_TRIGGER_USER:changes
GO_PIPELINE_NAME:Scoring
GO_PIPELINE_COUNTER:81
GO_PIPELINE_LABEL:81
GO_STAGE_NAME:Build
GO_STAGE_COUNTER:1
GO_JOB_NAME:BuildSolution
GO_REVISION:6343

Make sure this file is included in the build output folder along with the build artifacts. Good times.

Monday, 24 March 2014

Visualising the Thoughtworks Go pipeline using Cradiator, a build information radiator/monitor


You know the old adage, 'out of sight, out of mind'? Like it or not, sometimes the state of the build on CI is forgotten about, and if you can't see the current state without going looking for it, it can stay red for a few days before it's noticed by someone.

I've always been a big fan of information radiators, build monitors, and graphs and stats that are in people's faces, and I just wanted to share our current solution to the whole 'who cares if the CI server is not green' problem.

We use Go (the CI server from Thoughtworks) for our build and deployment pipeline, which is great. Although it doesn't ship with a build monitor that can be installed on a machine to show the state of the build, Thoughtworks do expose an API that allows you to build your own, for example on http://Server:Port/go/cctray.xml

We found an old-ish project called Cradiator that works with CruiseControl (remember that old build server? It turns out that Go exposes the same API, albeit on a slightly different URL). The problem was that we have secured our instance of Go so that you need to be logged in to access it. This caused problems with Cradiator, so we forked it and added the ability to set your own credentials in the config. The fork can be found in the references below.

Below are a couple of screen shots showing the Go build server and also the corresponding Cradiator screen.
Our Go pipeline for this small part of the overall system

The Cradiator build monitor screen

As you can see every stage within a Go pipeline has a corresponding line in Cradiator. To achieve this, we use the following filter in the Cradiator config file: project-regex="^.*::.*::.*"
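If it helps to see why that regex works: Go's cctray.xml names entries 'pipeline :: stage' and 'pipeline :: stage :: job', so requiring two '::' separators keeps only the fully qualified entries. A quick PowerShell check (the names are made up):

"Scoring :: Build :: BuildSolution" -match "^.*::.*::.*"   # True - kept
"Scoring :: Build" -match "^.*::.*::.*"                    # False - filtered out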

There is a robotic voice that announces who broke what, with some good catastrophic sound effects to accompany it. It's doing a great job of focusing people on fixing things if/when they go red.

Ever since we started using this, the average time from a red build to a check-in fixing it has been less than an hour. Visibility for the win.

References:

https://github.com/DamianStanger/Cradiator

Thursday, 13 March 2014

Thoughtworks Go, asynchronously trigger a manual stage from a long running test

We are using Go from ThoughtWorks Studios to manage our build pipeline. We have builds generating artifacts that are then deployed and installed on to UAT servers; automation rocks. We then have a long-running test that runs out of process using lots of NServiceBus queues.
We have a stage that starts a process manager which fires fake messages into the start of the system, then 3 different services pass messages along doing various things, all connected together via message buses.

So at the start of the UAT test we kick off some PowerShell, passing in the current pipeline counter (this is a vital detail: it tells the future API call which pipeline to kick):
. .\start_fullpipelinetest.ps1 %GO_PIPELINE_COUNTER%;
In this example the %GO_PIPELINE_COUNTER% variable is 11.

Given the nature of the system, once the message is sent to kick off the process the Go pipeline goes green and the next stage waits in an 'awaiting approval' manual state.



The last thing the test does is send out a message to inform downstream systems that things are ready to go. We hook into this and fire the following POST to Go to run the next stage. If you were doing this manually you would click the icon circled above; to do it programmatically you send the following POST command:
curl --data "" http://user:password@server:8153/go/run/uat_start_FullPipelineTest/11/TestingComplete
This kicks off the final stage of the pipeline. For us this does some verification of the expected state of the system and passes or fails accordingly.
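Since the tests already live in PowerShell, the same call can be made without curl. A minimal sketch (PowerShell 3+; user, password, server and the counter 11 are the illustrative values from above):

# Build a basic-auth header and POST an empty body to trigger the stage
$pair = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("user:password"))
Invoke-WebRequest -Method Post -Body "" `
  -Uri "http://server:8153/go/run/uat_start_FullPipelineTest/11/TestingComplete" `
  -Headers @{ Authorization = "Basic $pair" }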

I really like it: we get an asynchronous test that does not hog the Go agent resources and will instantly tell you about failures once a test run has finished.

Go is very flexible and the API lets you do all sorts of cool things, like uploading artifacts and triggering pipelines.
curl -u user:password -F file=@abc.txt http://goserver.com:8153/go/files/foo/1243/UATest/1/UAT/def.txt
curl -u user:password -d "" http://goserver.com:8153/go/api/pipelines/foo/schedule

Wednesday, 18 December 2013

Search the whole SVN repository for a given filename

The SVN repository at work is huge, and I don't have the disk space to check out the whole thing with all the branches on my small (but very fast) laptop SSD. But I needed to search the whole repo for a file; the following command lines can help out.

Windows

svn list -R https://subversion-repo/subfolder | findstr filename

Nix

svn list -R file:///subversion-repo/subfolder | grep filename

These commands don't look through the history but will find things at the current HEAD of the repository.

If you want to look at a particular point in time you can specify the revision thus:

svn list -r 1234 -R https://subversion-repo/subfolder | findstr filename

where 1234 is the revision to search through.

If you want to search the entire history you could script the search to look through every revision from 1 to n, list the files that match at each revision, then remove duplicates to get a single list. You could get even fancier by recording the revision at which a file first appeared and the revision at which it was deleted. I have no requirement to do this right now, but it sounds like an interesting little project to try; a sketch of the idea is below.
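A rough PowerShell sketch of that idea (slow, and untested at scale; it assumes svn 1.9+ for --show-item, and the URL and pattern are the illustrative ones from above):

$repo = "https://subversion-repo/subfolder"
$pattern = "filename"
$first = @{}; $last = @{}
$head = [int](svn info $repo --show-item revision)
foreach ($rev in 1..$head) {
  # List every file at this revision, remembering when each match first/last appears
  svn list -R -r $rev $repo 2>$null |
    Where-Object { $_ -match $pattern } |
    ForEach-Object {
      if (-not $first.ContainsKey($_)) { $first[$_] = $rev }
      $last[$_] = $rev
    }
}
$first.Keys | ForEach-Object { "{0}: first seen r{1}, last seen r{2}" -f $_, $first[$_], $last[$_] }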

If you want to search for text in files, I find searching the diffs useful. Just pipe the following into a file and search that in your favourite editor (Sublime Text :-)

svn log -r1234:HEAD --diff https://subversion-repo/subfolder

This can be rather verbose, but with a bit of tweaking and targeting of the repo/folder you can get some accurate results when searching text in history.

Sunday, 15 December 2013

Personal Backup strategies

It's been on my mind of late that I don't have a very good backup strategy in place for my own things at home. I've got many gigs of photos, code, documents and videos that are locally backed up, but all over the place and not very consistently, and then there is Gmail and the 4 gig of emails in there. So I'm doing something about it.

The solution is:

Dropbox

I use Dropbox for cloud sync and storage. This is not backup. I use it to get access to files easily from anywhere, but if I accidentally delete or change something, the change is propagated straight to Dropbox, so (unless you have Packrat) it's quite hard to undo the change or get to an older version.

I keep a local copy of all the Dropbox files on my home server.

Gmail

A weekly download of all Gmail to local machines using gmvault. The official setup guide is on the gmvault site, and Scott Hanselman did a great write-up of how to do this.

This boils down to two commands: the first for the initial sync, the second for incremental backups on top of the same folder structure.
gmvault sync youremail@gmail.com -d D:\foldertosaveto
gmvault sync -t quick youremail@gmail.com -d D:\foldertosaveto

Output of my initial run. Yes, it took a while to run...
================================================================
Sync operation performed in 2h 36m 35s.
Number of reconnections: 70.
Number of emails quarantined: 0.
Number of emails that could not be fetched: 0.
Number of emails that were returned empty by gmail: 0
================================================================

Scheduled job

I have set up a scheduled job (in Windows Task Scheduler) which runs a script every Friday that backs up the week's email to my hard disk. This script is just a simple .bat file whose contents are thus:
gmvault sync -t quick youremail@gmail.com -d D:\foldertosaveto
You will need to make sure that gmvault is on your path if you do it this way. Setting up scheduled jobs is easy too; there are loads of online tutorials covering it, including for Windows 8. Alternatively you can register the job from PowerShell, as sketched below.
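A minimal sketch of registering it from PowerShell instead (assumes Windows 8/Server 2012 or later for the ScheduledTasks cmdlets, and a hypothetical backup-gmail.bat wrapping the gmvault command above):

# Run the backup script every Friday evening
$action  = New-ScheduledTaskAction -Execute "D:\scripts\backup-gmail.bat"
$trigger = New-ScheduledTaskTrigger -Weekly -DaysOfWeek Friday -At "18:00"
Register-ScheduledTask -TaskName "GmailBackup" -Action $action -Trigger $trigger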

Amazon Glacier

  1. Sign up for Amazon Glacier; you will need your credit card for this (first you need to sign up for an Amazon AWS account)
  2. Once logged in, create a key pair (Access Keys: Access Key ID and Secret Access Key) and save them to your machine.
  3. Go to the Glacier console and create a vault for each type of backup you are planning on doing. I've created two for now, one for my photos and one for my mail backups. I might create another for music later.
  4. Ensure you have chosen the data centre closest to you for the vaults. Both of mine are in EU Ireland.

Cloudberry online backup

I use CloudBerry online backup to do the heavy lifting of actually sending all my files up to Amazon:
http://www.cloudberrylab.com/amazon-glacier-storage-backup.aspx#amazonglacier. It's great: you just set up some backup plans and a schedule and CloudBerry does the rest. It's not free, but it's really quite cheap given what it does and how well it does it.
  1. Install the CloudBerry online backup desktop version (download from: http://www.cloudberrylab.com/amazon-s3-cloud-desktop-backup.aspx )
  2. Add a Glacier cloud storage account (File -> Amazon Glacier)
  3. Follow the wizard - it's really easy
  4. Go to the backup plans tab and create a new plan or use a predefined plan
  5. For my Gmail backup I created a new plan
  6. Click the backup wizard (backup files). Again, a really easy wizard to follow. Select the Glacier account/vault, the files to back up and the schedule. So easy.

Costs

I'm storing 150 gig in Amazon Glacier, which costs me about £1.50 per month, and I can store as much as I like: practically unlimited storage. Be careful though, because it costs a lot more to get data out. But that's OK, right? This is emergency backup. You might be able to get your files back from Dropbox, local backup etc.; Glacier is the long-term emergency backup we all need.

Summary

The whole point was to get all the files that I care about into cheap storage with multiple redundant backup locations, so if/when I lose some data I can get it back. Dropbox provides an easy way to get back files but it's not a total solution; Amazon provides the cheap offsite secure backup that I want for my 150 gig+ of data.

Other options: see the discussion in the comments below for some alternatives people suggested.
Comments from https://news.ycombinator.com/item?id=6927659

* by drdaeman

Isn't Glacier overpriced, compared to other personal backup solutions?

Say, I have a mere 2TiB of historical data (various junk I made or collected over the last ten years or so). Storing it with Amazon is $20/mo, and if I want to look at those photos from 2008 I have to wait for several hours just to find that I misremembered where they were stored and pulled out the wrong files. And unless it happened that I uploaded a good amount of data on that exact day, I'll have to pay for downloads.

Other offers for unlimited storage are Cyphertite at $10/mo, Crashplan at $6/mo, Carbonite at $100/yr, AltDrive at $4.5/mo and so on. While they're probably not-so-unlimited (they don't say that, but I guess one won't have much luck storing a petabyte), less respectable than Amazon, and most services lack an API and require to use not-so-trusty proprietary software that has to be sandboxed properly, Glacier doesn't look like a good deal to me unless we're talking about backing up some either quite big data (like tens of terabytes) or relatively small amounts of data (less than 500GiB).

Disclaimer: I have no affiliation to any of companies mentioned above. Just happens that I'm currently fleeing from Bitcasa (they suck hard) and looking at various options to not maintain a self-hosted NAS.

* by tfe

The difference is that I trust Amazon far more than those other companies you mentioned. If they go out of business or even change their "unlimited" policy, you're exposed until you can get your 2TB re-uploaded to another provider. It's a pain and a risk I'm unwilling to take. I know Amazon isn't going to suddenly try to dump me as a customer.

* by damianstanger

Yes all good points. I have a relatively small data set < 200GiB and so my costs with glacier are less than $2 per month :-)


* by hengheng

I am using Glacier to store a backup of most of my personal data. This includes my home directory, the most relevant photos I have taken as jpeg, my gmvault and that's about it. I do not copy over any movies, music, raw photos or software, as this is my last line of defense, so it only needs to cover the essentials. I am under 1€ per month this way, and the backup gets refreshed only every other month or so.

I do have a local server that stores a windows backup image of my whole laptop, a second Harddisk in that Server to store a copy of the server, and an external hard disk with a windows backup at my parents that gets a refresh every time I am over there. All backups are truecrypt images for good measure, and I have tested recovery. Amazon stores a split truecrypt archive. Recovery cost about 20€ and took a day.

So yes, glacier is great as a personal backup, if you make it part of a larger strategy. To me, this is disaster recovery, and a small price to pay for this kind of insurance of important files and memories.

Sunday, 1 December 2013

Script your build and deployment of Android Cordova apps with PowerShell

We are developing a new version of our customer-facing solution across web, iOS and Android, using Cordova (PhoneGap).

I'm a big proponent of build automation, and the classical (recommended?) way of using Eclipse to build and manage the code base was getting me down, so I decided to write some scripts to build and deploy the app to either a device or an emulator, or to prepare it for release. I also wanted any developer to be able to check out the code and run the scripts to build the app.

I wrote the scripts in PowerShell (sorry!) with some batch files to make the various functions easy to run (I'm developing on Windows 8, by the way).

You can find the scripts here: https://github.com/DamianStanger/AndroidBuildScripts

So how does it all work?

Firstly, it goes without saying that you need your dev environment set up for Android development on the command line with Cordova: http://cordova.apache.org/docs/en/edge/guide_cli_index.md.html#The%20Command-Line%20Interface

As you will know (if you do Cordova development), when you use the command line tools to create a Cordova app, the folder created is where all your source code is placed, and inside it is the www folder where you keep your .js and .html files. The problem is that you are keeping your source code alongside the automatically built Cordova files, which is not ideal. So I've created my own source folder to hold all the code you actually edit; the PowerShell then copies these files to the correct places.

The development process

In a PowerShell console (or DOS cmd if you prefer):

build.bat
emulate.bat or install.bat

Run build.bat first, then either emulate.bat or install.bat depending on whether you are using a real device or not.

That's it. Now, you might notice that build.bat can take a while to run because it sets up everything from scratch, so I created a shortcut that only copies your changes across.

quickCopy.bat
emulate.bat or install.bat
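Under the hood quickCopy does little more than copy the editable source over the generated Cordova folder, something along these lines (a sketch; the real paths are in the repo linked above):

# Push the hand-edited source over the generated Cordova www folder without rebuilding
Copy-Item -Recurse -Force .\source\www\* .\app-cordova-android\www\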

This is all good for general day-to-day dev, but eventually you will want to test a production build on a real device. For this, use the following commands:

release.bat
installRelease.bat

To make this work you must have only a device plugged into USB, or only an emulator turned on (please use Genymotion, it's so much faster than a standard emulator).
The release process signs and aligns your APK for you :-) so when you are ready you just send the APK you have tested to the Play store.

The scripts

Here I'm going to show selected lines of code from build.ps1.

For building the app in debug mode and getting it onto your emulator or phone:

function create()
cordova create app-cordova-android com.myapp.app myapp
...
cordova platform add android
...
cordova plugin add org.apache.cordova.device

function build()
cordova build android
function emulate()
cordova emulate android -d
function installDebug()
cordova run android -d

To build the releasable APK and get it onto your phone, use the following:

function release()
cordova build android --release
function sign()
jarsigner -verbose -sigalg SHA1withRSA -digestalg SHA1 -keystore ..\appstore\android-keystore\myapp -keypass myappKeyPassword -storepass myappStorePassword -signedjar .\platforms\android\bin\myapp-release-signed.apk .\platforms\android\bin\myapp-release-unsigned.apk myapp
zipalign -f -v 4 .\platforms\android\bin\myapp-release-signed.apk .\platforms\android\bin\myapp-release-signed-aligned.apk
function installRelease()
adb uninstall com.myapp.app
adb install .\appstore\APKs\myapp-release-signed-aligned.apk

Release versioning

When doing a release to the Play store you need to make sure the version numbers are incremented each time. For this I added a helper which will update all the relevant places in the source for you.

Just run:

setVersion 102 1.0.2

This will change all the files that need changing in order to properly put a new version of the app onto the Play store.
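As a sketch of what such a helper might do (the manifest path here is illustrative, though android:versionCode and android:versionName are the standard attributes; the real helper is in the GitHub repo above):

function setVersion([int]$versionCode, [string]$versionName) {
  $manifest = ".\app-cordova-android\platforms\android\AndroidManifest.xml"
  # Bump both the integer versionCode and the human-readable versionName
  (Get-Content $manifest) `
    -replace 'android:versionCode="\d+"', "android:versionCode=`"$versionCode`"" `
    -replace 'android:versionName="[^"]*"', "android:versionName=`"$versionName`"" |
    Set-Content $manifest
}

setVersion 102 "1.0.2"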

Upload to Play store error
Upload failed
You uploaded a debuggable APK. For security reasons you need to disable debugging before it can be published in Google Play

Make sure your manifest is set thus:

<application android:debuggable="false" android:hardwareAccelerated="true" android:icon="@drawable/icon" android:label="@string/app_name">


Sunday, 17 November 2013

Debug your Android applications by capturing/monitoring their HTTP traffic using Wireshark

I've always wondered what my phone is telling the outside world, and recently I had the need to actually find out, as I'm developing an Android app for work at the moment. I needed to see what was going over the wire because I was getting some strange problems and could not debug the traffic on the production server.

Setup

Download and install Wireshark: https://wireshark.org/

Disable wifi and mobile data on the phone.

Connect your phone to your laptop/desktop via USB.

Enable internet pass-through. Basically you want your phone's internet traffic to flow through the USB wire and out via your computer's network card, so that when you run a Wireshark capture it all passes through Wireshark.

Set up a capture filter so that you only capture the data coming to and from your phone, and not data initiated from the computer itself. I pick the option to 'create a capture with detailed options' and set a capture filter, for example 'host 192.168.15.129', where 192.168.15.129 is the IP address of the phone.

Additionally (or alternatively) you can filter the traffic by IP address after capture, when viewing the results: "ip.src==192.168.15.129 or ip.dst==192.168.15.129", where 192.168.15.129 is the IP address of your phone. Or filter the traffic by protocol; you probably care about HTTP traffic, so filter on this by entering "http" in the filter. The same filters also work from the command line, as shown below.
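For reference, Wireshark ships with a command-line twin, tshark, which takes the same filter syntax; a sketch (the interface number and IP are illustrative, and -Y for display filters needs a reasonably recent version):

tshark -i 1 -f "host 192.168.15.129" -Y http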

Results

You can get information overload with Wireshark, and it takes some getting used to, but if you dig you can find everything you need. Look for the requests you care about by scanning down the info column and clicking the row. This will present all the packet details, where you can dig as deep as you like into the request.

I use the Hypertext Transfer Protocol section, as it's the level of detail I care about. From here you can see the URL and the headers, as well as a link to the packet that contains the response. Simply perfect.