
Supercharge productivity by scheduling interactions with ‘Team Time Warp’

May 1, 2013

Team Time Warp is an application which I’ve been working on for the past few months. It aims to find out if human interactions within a team of Software Engineers can be loosely ‘scheduled’ to minimize context switches between the actual interactions and the code related tasks that the engineers work on.

What does this fairly abstract paragraph mean? Let’s give an example. Let’s say we have a Software Engineer who is ‘in the zone.’ The engineer feels as though everything is falling into place, new code is flowing out, and unit tests are turning green one after the other. It’s taken a little while to get up to this velocity but there is no stopping him now!

Whilst the engineer is working he keeps a mental mapping of all the functions which require modification, variable names, tests, edge cases and so on. Imagine that whilst at full mental strength the engineer is suddenly interrupted by another member of the team who asks for help with an unrelated task. Because the engineer is a team player and a generally nice guy he decides to help his colleague out.

Mentally, to deal with this interaction, the engineer has to ‘context switch’ to the new task. The engineer puts all of that temporary information gathered whilst working on the current development task into ‘mental storage’ and switches over to the new task. He spends a couple of minutes and explains to the colleague his thoughts on the unrelated subject and once both are happy he gets back to the development task at hand. The engineer now has to “collect his thoughts” and restart.

This paper calls the time it takes for the engineer in the above example to get back up to full pace the ‘resumption lag.’

A chart comparing the time taken to resume a task after naturally taking a break (black) vs the time taken to recover from an unnatural context switch (white).

The result of the resumption lag is not just a loss in productivity: the engineer may also forget something during the context switch. That is, the engineer may not bring every detail out of ‘mental storage’, and specific details about the task can be overlooked, which can manifest as a bug in the codebase. We have all experienced this in one way or another, whether writing a detailed email or formulae in a spreadsheet. Sometimes when proofreading an email and making a correction, you can remember back to the point at which the mistake was written and attribute it to a context switch.

During my time working in software I found, time and time again, that these interruptions were happening, not just between developers, but with support, business analysts and project managers too.

What if there was a way to ‘loosely schedule’ these interactions so that they only occur when the mind is at rest? What ‘Time Warp’ tries to do is delay interactions until engineers have come to a natural stopping point, but how does it do this?

Ideally the Time Warp software needs to know when you are at full mental velocity; it can then ensure interactions are delayed until your next natural ‘break’ period. How can this be modelled by a computer program? Firstly, the system needs to know when an engineer starts working on a task. Most Software Engineers start their ‘work’ sessions by navigating through the code before making a change. Given this basic premise I needed some way to trigger Time Warp during this navigation process, so I wrote a plugin for the development environment we use (Visual Studio). A work period is triggered automatically when the engineer actually starts working, without stealing focus or popping up an annoying dialog box.


Screenshots showing integration of Team Time Warp into the Visual Studio IDE.



Ok so that’s cool, but how does Time Warp know when you are coming to a natural resting point? For this I looked at various studies and popular time management techniques. Evidence shows that taking regular breaks from mental tasks improves productivity and creativity, and that skipping breaks can lead to stress and exhaustion, could this regular break methodology be the key tracking mental velocity? Time Warp relies on the breakdown of work into small regularly timed chunks of between 25 and 40 minutes. After a work period the user is encouraged to take a break. It is during these break periods when Time Warp ‘schedules’ interactions.

My mental pace throughout the morning with regular breaks – where you see the red line (after the second peak) is where I had an interaction without a forced context switch.

Ok brilliant, so we know when we start and end periods of high mental velocity, but how can a colleague grab my attention in one of these rest periods? The ‘team‘ aspect of Time Warp is its dashboard. It displays the current work/rest situation of all the engineers in the team. This information can then be used to schedule interactions with colleagues:


The team dashboard. Clicking on the ‘eye’ icon schedules an interaction in a couple of minutes.
You’ll get a notification when it’s safe to interrupt!

Local vs. global optimization.

Okay so that’s great, no interruptions. Engineers can get on with their work until such a point in time when they naturally require a break from code related tasks, but doesn’t this lead to an imbalance where some engineers become more productive, whilst others, junior developers for example, who may be more dependant on others now start to drop off in productivity? Interestingly I have found out opposite…

1) The wait time encourages team members to ‘dig a little’ for the answers to their questions. Let’s say an interaction is scheduled in 15 minutes; the ‘waiting’ team member can use those 15 minutes to gain more experience around the issue. This little bit of digging acts as a knowledge-sharing catalyst for the conversation they are about to have, making the interaction more effective.

2) Time Warp doesn’t just have to be used by individuals; it can (and should) be used in conjunction with techniques such as pair programming and code review processes. It should be used to break up design meetings, peer reviews, estimation sessions and retrospectives into small chunks of time. How many times have you seen an estimation session start out with great estimates for the items discussed first, and then progressively poorer estimates for items discussed later on?

Want to take part?

If you and your team are interested in taking part in the study please drop me a message stating the following:

1) Who you are and what your team does

2) How you currently measure the progress of your software project

3) What development environment you are using




Automating Radiology – detecting lung nodules using morphological image processing in F#

August 8, 2011


The human lung has a variety of techniques to keep itself sterile. If small grains of sand, dust or small organisms enter the lungs, the body’s immune response is to destroy them with enzymes or clear them from the lung by coughing. In some cases neither of these techniques works, so the body has to ‘wall off’ the foreign material, resulting in nodular densities that can calcify (calcium is a by-product of this ‘walling off’); these nodules are visible on chest X-rays. Organisms can also cause calcified nodules; these include the tuberculosis bacterium, Rubella (the virus that causes German Measles), and a variety of fungi.

In the case of tuberculosis, when the body tries to ‘wall off’ the foreign bacterium, calcified nodular densities can appear within the lungs. These nodules could indicate that ‘inactive TB’ is present in the patient, and they are visible on chest X-rays. Medically it is important to know whether a person has inactive TB because it can indicate that the patient is more susceptible to contracting TB again. The aim of this article is to see how we can use morphological image processing techniques to create an automated detection system for these nodules within a chest X-ray:


Chest X-Ray of a patient with TB.


Chest X-Ray of a patient with suspect nodules highlighted. (Note the higher the red intensities the more likely a suspect nodule is within the area)

Image processing background

Morphological image processing techniques are useful tools for extracting image components such as boundaries and skeletons. ‘Erosion’ and ‘dilation’ are the fundamental operations. Dilation is an operation that “grows” or “thickens” an image by a specific structuring element.

Figure: An example of a 3x3 structuring element of 1's


First let us look at dilation. Dilation is defined as:

Equation 1: Dilation, where b̂ (b with the accent) is the reflection of the structuring element b, and f is the original image.

The morphological transformation dilation combines two sets using vector addition. Here is a graphical example of the dilation of an image by a 3×3 structuring element of 1’s:

(original image (left), structuring element (middle), dilated image (right))


Erosion combines two sets using vector subtraction of set elements, and is the dual operator of dilation. It should be noted that erosion and dilation are not inverse transformations: if an image is eroded and then dilated, the original image is not, in general, obtained.

Erosion, where b̂ (b with the accent) is the reflection of the structuring element b, and f is the original image.

Here is a graphical example of erosion by the structuring element:

(original image (left), structuring element (middle), eroded image (right))

Erosion has the opposite effect to dilation: it shrinks or thins objects in the image.

Opening and Closing

From the two operators, erosion and dilation, we can define two more operations called opening and closing using the following equations:

Opening and Closing of an Image.

Simply put, a morphological opening is an erosion followed by a dilation by a structuring element, whereas a morphological closing is a dilation followed by an erosion. Opening and closing an image with an isotropic structuring element (isotropic in this context means that the structuring element behaves in the same way in all directions, like a square or circle of 1’s with an origin at the center) can be used to eliminate specific image details smaller than the structuring element. Opening removes small objects and protrusions, while closing connects objects that are close to each other, fills small holes and smooths the object outline.
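To make these operations concrete, here is a small illustrative sketch in Python (for intuition only; the implementations later in this post are in MATLAB and F#) of binary erosion, dilation, opening and closing with a 3×3 structuring element of 1’s:

```python
import numpy as np

def dilate(img, se):
    # Binary dilation: output is 1 wherever the SE overlaps a foreground pixel.
    H, W = img.shape
    k = se.shape[0] // 2
    padded = np.pad(img, k)
    out = np.zeros_like(img)
    for y in range(H):
        for x in range(W):
            region = padded[y:y + se.shape[0], x:x + se.shape[1]]
            out[y, x] = 1 if np.any(region[se == 1]) else 0
    return out

def erode(img, se):
    # Binary erosion: output is 1 only where the SE fits entirely inside the object.
    H, W = img.shape
    k = se.shape[0] // 2
    padded = np.pad(img, k)
    out = np.zeros_like(img)
    for y in range(H):
        for x in range(W):
            region = padded[y:y + se.shape[0], x:x + se.shape[1]]
            out[y, x] = 1 if np.all(region[se == 1]) else 0
    return out

def opening(img, se):  # erosion followed by dilation
    return dilate(erode(img, se), se)

def closing(img, se):  # dilation followed by erosion
    return erode(dilate(img, se), se)

se = np.ones((3, 3), dtype=int)
img = np.zeros((9, 9), dtype=int)
img[1:8, 1:8] = 1   # a 7x7 square...
img[4, 4] = 0       # ...with a one-pixel hole
img[0, 8] = 1       # plus an isolated speck

print(opening(img, se)[0, 8])  # 0 - opening removed the speck
print(closing(img, se)[4, 4])  # 1 - closing filled the hole
```

Note how opening deletes the isolated speck (smaller than the structuring element) while closing fills the one-pixel hole, exactly as described above.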

Nodule detection algorithm – Modified Grayscale Top Hat Morphological Transform (MGTH)

Now that we have discussed morphological image processing we can discuss the MGTH (Modified Grayscale Top Hat Morphological Transform). The MGTH uses the opening and closing operations to remove large vessels and bony structures in the X-ray, revealing nodule-like densities. The MGTH is defined as follows:

The MGTH Transform, where f is the original image and b is a ‘large’ square structuring element.

The results of this operation are very similar to another morphological image processing technique called the Top-Hat transformation which is a good tool for extracting light objects on a dark but slowly changing background.

Prototyping in Matlab

Before building out an application to analyze the X-rays, let’s build a little prototype using MATLAB’s image processing libraries. This way we can see if the algorithm works:

%image is the intensity matrix which represents
%      the X-ray
%SEsize is the size of the structuring element
function h = MGTH(image, SEsize)
   %create a square structuring element
   se1 = strel('square', SEsize);
   %perform 'open' morphological operation on the image
   erodedXRay = imerode(image, se1);
   xRayOpen = imdilate(erodedXRay, se1);
   %perform 'close' on the image
   dilatedXRay = imdilate(image, se1);
   xRayClose = imerode(dilatedXRay, se1);
   %top-hat minus bottom-hat reveals the nodule-like densities
   outlines = imsubtract(imsubtract(image, xRayOpen), imsubtract(xRayClose, image));
   h = outlines;
end

Once happy we can move onto building the solution in F#.


The first thing to do is to implement the basic morphological operations, starting with ‘erosion’ and ‘dilation’. Let’s begin by writing the tests:

Erosion and dilation tests

and then the implementation:

Then, based on the equations for erosion and dilation, we can define opening and closing:

Once we’ve done that we need a few simple functions for addition and subtraction of image matrices, so lets define some tests for that:

(note that I’ve skipped out some tests in the interest of space)

And then the implementation:

Then, finally, the actual MGTH algorithm:


When we run the MGTH function with an intensity array representing an X-ray, we get an output which highlights the nodules. I’ve dilated the results and applied them to the X-ray below; suspect areas can be seen in red:


We can use F# to implement morphological image processing methodologies, and use them to produce medical expert systems.

Building a multi-threaded Monte Carlo stock price calculator in Silverlight using F#

August 1, 2011


In this post I’m going to demonstrate how to build a simple Monte Carlo stock pricer. I’m going to take advantage of F#’s Async library to parallelise the pricer. Thanks to Keith for reviewing some of the financial explanation, and to Jose for a quick code review!

Stock pricing background.

Before talking about the stock pricing theory, a few definitions:

  • Volatility σ is a measure of the variation of the price of a financial instrument over time,
  • Growth rate μ is the rate at which a stock price has increased or decreased within a specific period of time,

Using measures of volatility and growth rate of a stock price we can, using some model, get an estimate for the value of a stock price at some point in the future. How do we do this? We can conduct a lognormal random walk (otherwise known as geometric Brownian motion with drift). This formula shows how we can calculate the change in stock price at a timestep t within each of our random walks:

Formula for the discrete time version of geometric brownian motion

Note that ε is a normally distributed random number with mean 0 and standard deviation of 1.

What we are aiming to calculate are a number of random walks that our stock price could potentially follow:

Ten lognormal random walks showing possible paths a stock price could follow

What do we do once we have run these random walks? Well, we can take the mean price across the random walks for some point in time. As well as calculating the mean, we can also calculate the standard deviation: the lower the stddev, the more accurate our mean stock price is likely to be.

(note: the data output from our random walks can be used to value derivatives where the underlying value is the stock in question)
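The whole procedure can be sketched as follows (an illustrative Python version with made-up parameter values, separate from the F# implementation below): step the price by dS = S·(μ·dt + σ·ε·√dt) with ε ~ N(0, 1), run many walks, then take the mean and stddev of the final prices.

```python
import math
import random

def random_walk(s0, mu, sigma, dt, steps, rng):
    # One discrete-time geometric Brownian motion path; returns the final price.
    s = s0
    for _ in range(steps):
        eps = rng.gauss(0.0, 1.0)          # normally distributed, mean 0, stddev 1
        s += s * (mu * dt + sigma * eps * math.sqrt(dt))
    return s

rng = random.Random(42)
s0, mu, sigma = 100.0, 0.05, 0.2           # made-up example parameters
dt, steps, n_walks = 1.0 / 252, 252, 2000  # one year of daily steps

finals = [random_walk(s0, mu, sigma, dt, steps, rng) for _ in range(n_walks)]
mean = sum(finals) / n_walks
stddev = math.sqrt(sum((x - mean) ** 2 for x in finals) / n_walks)
print(round(mean, 1), round(stddev, 1))
```

With a 5% growth rate the mean final price lands a little above the 100.0 starting price, and the stddev quantifies how dispersed the simulated paths are.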


I’m going to split up my design into three layers, the Silverlight view, the (imperative) ViewModel code and (more functional) Statistics and Pricing modules. Note that this follows the MVVM design pattern.


Let’s start with the functional parts of the code base. Here is the main function, which conducts the random walks asynchronously and returns the stddev and mean:

Note that as well as running each random walk asynchronously, the function is asynchronous itself; this will help us out later when we are running calculations asynchronously to the UI thread.

Note that you can find an implementation of NormsInv in my previous post.

The next step is to have a look at the view model code. I’ve written two view model types. The first one is called StockPricerViewModel, and its responsibility is to control the main UI. It exposes properties such as vol (for volatility) and drift (for growth rate). It also exposes a collection of type CalculationResultViewModel.

For each calculation run by the user, a new instance of CalculationResultViewModel is created; once its RefreshResult function has run, it updates its mutable members with the results (i.e. the mean and stddev) of the stock price calculation. Here is what that class looks like:


Here is a UI that I made earlier, which is now bound to the view model layer:

Silverlight UI for our stock pricer

As you can see from the data grid at the bottom of the UI, the user can run multiple calculations, and the .NET thread pool which underpins the F# Async library will nicely queue calculations up.

If we open up task manager whilst calculations are in flight, we can see that most of the CPU is being used to perform our random walks:

CPU usage whilst performing the Monte Carlo simulation


We can use Silverlight and F# in interesting ways to deliver industry-standard pricing tools to users’ desktop machines through a web browser.

F# MVVM: DelegateCommand

July 31, 2011

Although most projects probably wouldn’t be using F# to program their WPF views, the ViewModel layer can certainly be implemented in F#. I’ve implemented a simple DelegateCommand class which can be used whilst building your F# ViewModel layer.

If you are not sure what a DelegateCommand is, or how to use it, then there is a nice example here (in C#):

Simplified F# version:
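The original F# listing was posted as an image; as a language-agnostic sketch of the same pattern, here is a DelegateCommand analogue in Python. The idea carries over directly: the command wraps an execute callback together with an optional can-execute predicate.

```python
class DelegateCommand:
    """Wraps an action and an optional predicate, in the style of WPF's ICommand."""

    def __init__(self, execute, can_execute=None):
        self._execute = execute
        self._can_execute = can_execute

    def can_execute(self, parameter=None):
        # With no predicate supplied, the command is always executable.
        if self._can_execute is None:
            return True
        return self._can_execute(parameter)

    def execute(self, parameter=None):
        # Only invoke the wrapped action when the predicate allows it.
        if self.can_execute(parameter):
            self._execute(parameter)

log = []
cmd = DelegateCommand(lambda p: log.append(p),
                      can_execute=lambda p: p is not None)
cmd.execute("clicked")   # runs: the predicate passes
cmd.execute(None)        # ignored: the predicate fails
print(log)               # ['clicked']
```

In the real WPF/Silverlight case the view binds a button to the command, and CanExecute controls whether the button is enabled.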


Calculating the size of a directory in F#

July 27, 2011

Alas there is no Win32 API for calculating the size of a directory, so the best way is simply to sum the lengths of the files it contains, recursing into subdirectories:

open System.IO

let rec size(fileSystemInfo : FileSystemInfo) =
    match fileSystemInfo with
    | :? FileInfo as fileInfo -> fileInfo.Length
    | :? DirectoryInfo as dirInfo -> dirInfo.GetFileSystemInfos() 
                                     |> Seq.sumBy(fun finfo -> size(finfo))
    | _ -> 0L

NormsInv function in F#

July 20, 2011

NORMSINV(p) returns the value z such that, with probability p, a standard normal random variable takes on a value less than or equal to z. A standard normal random variable has mean 0 and standard deviation 1 (and also variance 1, because variance is the standard deviation squared).

open System

// An algorithm for computing the inverse normal cumulative distribution function
let normsInv(p : float) =
    //todo: validate that p is between 0 and 1

    //Coefficients in rational approximations      
    let a = [|-3.969683028665376e+01; 2.209460984245205e+02;
             -2.759285104469687e+02; 1.383577518672690e+02;
             -3.066479806614716e+01; 2.506628277459239e+00|]

    let b = [|-5.447609879822406e+01; 1.615858368580409e+02;
             -1.556989798598866e+02; 6.680131188771972e+01;
             -1.328068155288572e+01|]

    let c = [|-7.784894002430293e-03; -3.223964580411365e-01;
             -2.400758277161838e+00; -2.549732539343734e+00;
             4.374664141464968e+00; 2.938163982698783e+00|]

    let d = [|7.784695709041462e-03; 3.224671290700398e-01;
             2.445134137142996e+00; 3.754408661907416e+00|]

    //Define break-points.
    let plow = 0.02425
    let phigh = 1. - plow

    let approxLower(var) : float = 
        let q = Math.Sqrt(-2. * Math.Log(var))
        (((((c.[0] * q + c.[1]) * q + c.[2]) * q + c.[3]) * q + c.[4]) * q + c.[5]) / ((((d.[0] * q + d.[1]) * q + d.[2]) * q + d.[3]) * q + 1.)

    let approxUpper(var) = 
        let q = Math.Sqrt(-2. * Math.Log(1. - var));
        let r = (((((c.[0] * q + c.[1]) * q + c.[2]) * q + c.[3]) * q + c.[4]) * q + c.[5]) /  ((((d.[0] * q + d.[1]) * q + d.[2]) * q + d.[3]) * q + 1.)
        r * -1.

    let approxCentral(var) = 
        let q = var - 0.5;
        let r = q * q
        (((((a.[0] * r + a.[1]) * r + a.[2]) * r + a.[3]) * r + a.[4]) * r + a.[5]) * q / (((((b.[0] * r + b.[1]) * r + b.[2]) * r + b.[3]) * r + b.[4]) * r + 1.)

    match p with 
    | var when p < plow  -> approxLower(var)
    | var when p > phigh -> approxUpper(var)
    | _ -> approxCentral(p)
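Here is the same rational approximation transliterated to Python, which is handy for cross-checking the F# version against familiar values such as z ≈ 1.96 at p = 0.975:

```python
import math

def norms_inv(p):
    # Coefficients in rational approximations (same as the F# version above).
    a = [-3.969683028665376e+01, 2.209460984245205e+02, -2.759285104469687e+02,
         1.383577518672690e+02, -3.066479806614716e+01, 2.506628277459239e+00]
    b = [-5.447609879822406e+01, 1.615858368580409e+02, -1.556989798598866e+02,
         6.680131188771972e+01, -1.328068155288572e+01]
    c = [-7.784894002430293e-03, -3.223964580411365e-01, -2.400758277161838e+00,
         -2.549732539343734e+00, 4.374664141464968e+00, 2.938163982698783e+00]
    d = [7.784695709041462e-03, 3.224671290700398e-01,
         2.445134137142996e+00, 3.754408661907416e+00]
    plow, phigh = 0.02425, 1 - 0.02425
    if p < plow:    # lower tail
        q = math.sqrt(-2 * math.log(p))
        return (((((c[0]*q+c[1])*q+c[2])*q+c[3])*q+c[4])*q+c[5]) / \
               ((((d[0]*q+d[1])*q+d[2])*q+d[3])*q+1)
    if p > phigh:   # upper tail (mirror of the lower tail)
        q = math.sqrt(-2 * math.log(1 - p))
        return -(((((c[0]*q+c[1])*q+c[2])*q+c[3])*q+c[4])*q+c[5]) / \
                ((((d[0]*q+d[1])*q+d[2])*q+d[3])*q+1)
    q = p - 0.5     # central region
    r = q * q
    return (((((a[0]*r+a[1])*r+a[2])*r+a[3])*r+a[4])*r+a[5])*q / \
           (((((b[0]*r+b[1])*r+b[2])*r+b[3])*r+b[4])*r+1)

print(round(norms_inv(0.975), 3))  # 1.96
```

By symmetry norms_inv(0.5) should be exactly 0, and the approximation is accurate to well below the precision needed for Monte Carlo work.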

Custom memory paging algorithm for Singular Value Decomposition of large datasets.

January 29, 2011


Singular Value Decomposition (SVD) is an important factorisation in linear algebra. SVD on large datasets is a problem relevant to many scientific applications, such as numerical weather prediction, astronomical data modelling, financial modelling and many industrial image processing applications.

This blog post demonstrates a technique which yields low physical memory usage by storing ‘intermediate matrices’ produced during the SVD process on a persistent store (i.e. a hard disk). I first wrote this custom paging algorithm so that I could perform SVD on very large datasets without using more physical memory than my 32-bit machine can support.

Virtual Memory and Memory Addressing.

This article assumes that the matrix you want to perform SVD on is greater than 4 gigabytes and less than 17,179,869,184 gigabytes.

Most CPUs are currently designed so that the contents of a single integer register can store the address of any datum in the computer’s virtual memory. Therefore, the total number of addresses in the virtual memory is determined by the width of these registers. 32-bit memory addressing means that 2^32 addresses, or 4 gigabytes of RAM, can be referenced.

The emergence of the 64-bit architecture effectively increases the memory ceiling to 2^64 addresses, equivalent to 17,179,869,184 gigabytes of RAM. However, even on a 64-bit machine with disk paging enabled, the technique described here is still effective, because general-purpose paging algorithms are not optimised for handling matrices.
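The figures above are easy to verify with a quick arithmetic check:

```python
GiB = 2 ** 30                 # one gigabyte (binary)
print(2 ** 32 // GiB)         # 4           -> 32-bit: 4 gigabytes addressable
print(2 ** 64 // GiB)         # 17179869184 -> 64-bit ceiling in gigabytes
```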

Overview of Singular Value Decomposition

By definition Singular Value Decomposition is the decomposition of a matrix M into orthogonal matrices, U and V and a diagonal matrix S:


Standard algorithms (such as LAPACK’s svd subroutine) used to calculate the SVD of a matrix require the whole matrix to be in memory at once. My custom method sidesteps this constraint thanks to the incremental methodology used.

Incremental SVD

In 2006 Matthew Brand created a new method which calculates the SVD of a matrix in an incremental fashion, thus allowing the addition of new data to a dataset without the need to recalculate the entire SVD of the whole dataset:

An incremental update to the SVD is based on adding new data, in matrix C, to M, which produces matrices U’’, S’’ and V’’ via an update of U, S and V.

The incremental methodology allows us to calculate the SVD of a dataset with far lower processing and memory requirements than batch SVD (i.e. calculating the SVD of the whole matrix at once). In practice we feed parts of the dataset into the incremental algorithm bit by bit; in an image processing system one might consider feeding in each image vector of a sequence one at a time.

Brands’ method still produces large matrices however I have engineered it in such a way as to store these large matrices on disk in ‘swapfiles’ whilst keeping smaller ‘intermediate matrices’ in memory; this will all become much clearer once the methodology is explained.

Brand’s Method.

Step 1: Calculation of intermediate matrices

The first stage of the algorithm is the calculation of intermediate matrices. Matrix L is the projection of C onto the orthogonal basis U, or C in terms of U. Matrix H is the component of C which is orthogonal to the subspace spanned by U, i.e. the component of C which is not represented by U. What Brand suggests is that we create the intermediate matrix L by multiplying the transpose of U (obtained from the SVD of M) by our new data C:

And we create matrix H by subtracting U multiplied by L from C:

Step 2: QR decomposition

Matrix H is decomposed into an orthogonal basis, J, and an upper triangular matrix, K. For this purpose Brand suggests QR decomposition, using the Modified Gram-Schmidt algorithm in order to maintain numerical stability.

Step 3: Matrix Concatenation

Now that we have produced all of these intermediate matrices Brand concatenates them into the following framework:

Note that matrix I is an identity matrix.

Step 4: SVD on matrix Q.

Brand suggests that we treat the matrix in the center (the one which is the concatenation of S, L and K) as the matrix Q:

Now for the beauty of this piece of mathematics; we only compute the SVD of this small matrix Q:

Step 5: Substitution of Q back into the concatenated framework

We now substitute the results of Q back into the mix:

Step 6: Calculation of the final matrices by breaking the concatenated framework into three parts.

To calculate U’’, S’’ and V’’ we split the above equation into three:

And we end up with our solution:
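The six steps above can be sketched with NumPy (an illustrative in-memory version without the disk-backed storage; the function and variable names are my own):

```python
import numpy as np

def incremental_svd(U, s, Vt, C):
    # Step 1: intermediate matrices.
    L = U.T @ C                     # C projected onto the basis U
    H = C - U @ L                   # component of C orthogonal to U
    # Step 2: QR decomposition of H (orthogonal J, upper triangular K).
    J, K = np.linalg.qr(H)
    # Step 3: concatenate S, L and K into the small matrix Q.
    r, c = len(s), C.shape[1]
    Q = np.block([[np.diag(s), L],
                  [np.zeros((c, r)), K]])
    # Step 4: SVD of the small matrix Q only.
    Uq, sq, Vqt = np.linalg.svd(Q)
    # Steps 5-6: substitute back to obtain the updated factors U'', S'', V''.
    U2 = np.hstack([U, J]) @ Uq
    V = Vt.T
    Vblock = np.block([[V, np.zeros((V.shape[0], c))],
                       [np.zeros((c, V.shape[1])), np.eye(c)]])
    V2t = (Vblock @ Vqt.T).T
    return U2, sq, V2t

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 4))     # existing data
C = rng.standard_normal((6, 2))     # newly arrived columns
U, s, Vt = np.linalg.svd(M, full_matrices=False)
U2, s2, V2t = incremental_svd(U, s, Vt, C)

# The update reconstructs [M C] without ever taking the SVD of [M C] directly.
print(np.allclose(U2 * s2 @ V2t, np.hstack([M, C])))  # True
```

Because the full-size factors U and V only ever participate in matrix products, they are exactly the matrices that can live in disk-backed swap files, with only the small Q ever passed to the dense SVD routine.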

Meeting the ‘super massive’ requirement:

To enable this to work on super massive datasets I wrote an implementation that stores matrices M and U on the hard disk rather than in memory. This allows me to use Fortran’s SVD routine, as matrix Q is small and can be kept in memory.

Obviously storing a matrix in a swap file on disk is far slower than storing it in memory, so the program does run slower. Also, because my implementation is not (yet) optimised for speed, it doesn’t run as fast as an implementation using other matrix libraries (such as Newmat or Blitz++). In terms of the actual algorithm, Brand states that for a dense p x q matrix with low rank r, the incremental method has time complexity O(pqr) and space complexity O((p + q)r).

One of the main problems I encountered whilst implementing this is that matrix U grows as it is concatenated with other matrices, and I am forced to redeclare a new size for U on the hard disk each time the SVD is updated. This operation is I/O bound and hence takes some time.

There will be an implementation of the above algorithm in a follow-up blog post, probably in F#.
