
Interrupt Software

Custom Software Development / Advanced Functionality & Design Excellence

Tim is a gifted developer - thoughtful, strategic, and careful. We hired him to develop the StoryDesk CloudEditor. He delivered a beautiful, one-of-a-kind HTML5 content management system for iPad. I offer Tim my highest recommendation, as a colleague and a friend.

Jordan Stolper, CEO, StoryDesk.com

Tim is a strong Ruby and Rails developer, whom I worked with as a SCRUM Master on an operational data transformation and data store project. Tim was both proactive and balanced in his solutions from a business and technical standpoint, articulate, and was very effective in pair programming scenarios with different developers on the team. His attention to detail, adaptability and willingness to take on new challenges is further enhanced with his positive attitude. I would welcome working with Tim again and recommend him as a strong asset to any organization.

Cort Fowler, Product Manager and Business Analyst, Rogers Digital Media

Tim is one of the most intelligent, forward-thinking developers I've ever worked with. He possesses a deep pool of knowledge about all things related to software and uses that to engineer top-notch solutions. The diversity he's gained from experience with so many different languages and technologies gives him great perspective on technical approaches and strategies. He is also one of the most enthusiastic and proactive colleagues I've ever had the chance to work with, and I'm sure that he's up to any challenge that comes his way.

Stephen Kawaguchi, Engineer, Bank of Montreal, IFL


Computation And The Mind


I'm writing this article to discuss the mindsets that computer programmers often take regarding programming languages. It has bothered me for a long time that many programmers have quasi-religious attachments to one particular language or another. I'll state my position up front: computer programming languages are simply tools that let us communicate with other programmers and exercise fine-grained control of the machine. To that end, I want to explore the question of how we should think about computer languages. What philosophies, contexts and habits of mind should we bring to representing data and to how we compute? And for that, I think it's useful to think about human nature. What are data, information and the purpose of computation to us as humans? And why, as humans, do we want to compute at all?

Mathematics and the Concepts of Computation

Mathematics, rightly viewed, possesses not only truth, but supreme beauty - a beauty cold and austere, like that of sculpture, without appeal to any part of our weaker nature, without the gorgeous trappings of painting or music, yet sublimely pure, and capable of a stern perfection such as only the greatest art can show. The true spirit of delight, the exaltation, the sense of being more than Man, which is the touchstone of the highest excellence, is to be found in mathematics as surely as poetry. - Bertrand Russell

When thinking about mathematics, we may ask why we as humans need such a discipline. There is the philosophical and cognitive-scientific inquiry into the phenomena that animate us and compel our survival, but I'll leave that as an ancillary investigation for the purpose of this discussion. For now, we know "that" humans, among many other things, use maths to help us codify our social organisations. I infer this from my observation that maths can be applied to all aspects of human activity: art, music, engineering (from mechanical to civil), social and political organisation, business studies (from accounting to trade), etc [**]. And we see this trend throughout our histories: fractals and African maths [1]; Greek, Chinese, Islamic, et al. mathematics [2]. It is also useful in predicting and building towards our future as a human species. I write this article in Toronto, Canada, in June 2013 ( over a number of days :). And our current built form is arguably the product of accelerated technological advances from the Industrial Revolution, through American post-war economic planning, to our present information age. Economic growth has been a major constant throughout, and maths and the natural sciences have been relied on heavily to engineer this outcome [**]. Extrapolating this trend, we can expect further technological progress. In particular, for this discussion, we can expect exponential increases in computational power and machine intelligence (quantum computing [3][4], nanobots [5][6], powerful AI [7], etc.).

I'm focusing heavily on mathematics because the development and implementation of computing are almost entirely reliant on our mathematical conceptions and skills, and on the quality of our tools of computation. So from the Chinese abacus to our Von Neumann architecture PCs, we can see the development of the disciplines of our natural sciences and mathematics. Further, we can see the computer's impact on the development of the human built form: from smaller city and agrarian groupings in early feudal arrangements, to this information age's air flight, city and population growth, space travel, etc. And again, to my mind, the languages we use to compute are simply tools that humans use for fine-grained control of these computers, and to communicate with other programmers. Now we can ask: what does it mean to compute? Alan Turing was an English mathematician who founded the disciplines of Computer Science and Artificial Intelligence [8]. He thought that to compute can be understood as an actualization, or concrete version, of a mathematical function. His ideas of computation can be found in his contributions to the concept of an algorithm and his description of a Turing machine. He saw an algorithm as simply a step-by-step procedure for calculation, where the steps are expressed as a finite list of well-defined instructions for calculating a function. Further, Turing postulated a theoretical computer (a Turing Machine), that:

  1. consumes an infinite ribbon of tape and
  2. can perform read and write operations on the tape and
  3. alters its own state

The concept is that a Turing machine can calculate anything that is computable (i.e. anything for which there is a function), no matter the complexity. Using this as a basis, we can start to look at various programming models and how they approach the question of computation, and explore the canonical concepts and abstractions each one uses.
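To make the three properties above concrete, here is a minimal sketch of such a machine in JavaScript. The rule-table format, state names, and the binary-increment task are my own illustrative choices; a real Turing machine's tape is unbounded, which the sketch approximates by growing an array on demand.

```javascript
// A minimal Turing machine sketch: a finite table of rules that reads and
// writes a tape and alters its own state. This example machine increments
// a binary number written on the tape.
function runTuringMachine(tape, rules, state, haltState) {
  let pos = 0;
  while (state !== haltState) {
    const symbol = tape[pos] ?? '_';                 // '_' stands for a blank cell
    const [write, move, nextState] = rules[state][symbol];
    tape[pos] = write;
    pos += move === 'R' ? 1 : -1;
    if (pos < 0) { tape.unshift('_'); pos = 0; }     // grow the "infinite" tape leftward
    state = nextState;
  }
  return tape.join('').replace(/_/g, '');
}

// Rules for binary increment: scan right to the end, then add 1 with carry.
// Each entry reads: currentState -> { readSymbol: [writeSymbol, move, nextState] }
const rules = {
  scan: { '0': ['0', 'R', 'scan'], '1': ['1', 'R', 'scan'], '_': ['_', 'L', 'add'] },
  add:  { '0': ['1', 'L', 'done'], '1': ['0', 'L', 'add'],  '_': ['1', 'L', 'done'] },
};

console.log(runTuringMachine(['1', '0', '1', '1'], rules, 'scan', 'done')); // 1100
```

Simple as it is, this read/write/state-transition loop is sufficient, in principle, to compute any Turing-computable function, given the right rule table.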

Computer Languages and their Categories

Most, if not all, computer languages are "Turing complete" - that is, they are computational systems that can compute every Turing-computable function. I can argue this because most, if not all, languages have conditional branching and allow an arbitrary number of variables. If that is the case, then how do the different language categories address the notion of computation?

Imperative Programming ( Assembly, C, Fortran, etc ) generally describes computation in terms of statements that change a program state. The imperative paradigm was first used under hardware constraints of small memory and limited processing power, so programmers had to be judicious with these resources. Assembly, or machine, languages were the first imperative languages. Their instructions were very simple, which made hardware implementation easier, but also made it very difficult to write more complex programs. Language designers therefore began introducing language features (variables, complex expressions, human-readable syntax, etc) to ease this constraint. This begat languages such as Fortran, COBOL, C, et al. So the imperative style was created with machine languages, to build programs while dealing with limited processing and memory resources, and new languages and imperative features were created to address the expressive limitations of machine languages. This enabled the creation of more elaborate and complex programs.
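As a small illustration of the imperative style (the function name is my own, and JavaScript stands in for the languages listed above), here is a sum computed through explicit statements that mutate program state:

```javascript
// Imperative style: a sum computed through statements that mutate program
// state (the `total` and `i` variables), one step at a time.
function sumImperative(numbers) {
  let total = 0;                       // program state
  for (let i = 0; i < numbers.length; i++) {
    total = total + numbers[i];        // each statement changes that state
  }
  return total;
}

console.log(sumImperative([1, 2, 3, 4])); // 10
```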

The Object Oriented paradigm grew out of the need to build these larger, more complex systems. Its computation model focuses on data encapsulation and object interaction. While imperative programming treats computation in terms of statements that change a program state, the Object Oriented (OO) paradigm ( Simula, Smalltalk, Java, C++, etc ) encapsulates program state in objects, and procedures in methods. These objects are usually derived from classes, prototypes, or some other mechanism. The OO paradigm was first introduced in Simula, in order to handle discrete event simulation (ex: modelling physical ways to improve the movement of ships and their contents through cargo ports). Alan Kay first cohesively described OO as the pervasive use of objects, messages, and dynamic typing as the basis for computation, and implemented his ideas in the Smalltalk language. From there, OO grew in popularity while experimenting with new features, in languages such as Eiffel, C++, Java, etc [9].
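A small sketch of the OO idea, in the spirit of Simula's cargo-port simulations (the class and its methods are my own illustration): state is encapsulated in an object, and computation happens by sending it messages.

```javascript
// OO style: state is encapsulated in an object; computation happens by
// sending messages (calling methods) rather than mutating globals directly.
class CargoShip {
  #containers = 0;                 // encapsulated state, hidden from callers
  load(count) { this.#containers += count; return this; }
  unload(count) { this.#containers = Math.max(0, this.#containers - count); return this; }
  get cargo() { return this.#containers; }
}

const ship = new CargoShip();
ship.load(120).unload(30);
console.log(ship.cargo); // 90
```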

Functional programming ( Lisp, R, Haskell, APL, etc ) treats computation as the evaluation of mathematical functions and avoids state and mutable data. Whereas imperative programming emphasizes changes in state, Functional Programming (FP) emphasizes the application of functions. FP has its roots in lambda calculus, a formal system developed in the 1930s to investigate computability, the Entscheidungsproblem, function definition, function application, and recursion. Many functional programming languages can be viewed as elaborations on lambda calculus. Functions as used in imperative programming can have side effects that change the value of program state; that is, they lack referential transparency (the same language expression can result in different values at different times, depending on the executing program's state). Conversely, in FP, the output value of a function depends only on the arguments that are input to the function, which is much closer to a mathematical function. Eliminating these side effects makes it much easier to understand and predict the behaviour of a program. This, and FP's close proximity to mathematical functions, are among the key motivations for the development of functional programming [10].
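For contrast with the imperative sum above, the same computation in a functional style (again JavaScript, with names of my own choosing): no mutable state, just function application.

```javascript
// Functional style: the sum expressed as function application with no
// mutable state. `add` and `sumFunctional` are referentially transparent -
// their output depends only on their inputs.
const add = (a, b) => a + b;
const sumFunctional = (numbers) => numbers.reduce(add, 0);

console.log(sumFunctional([1, 2, 3, 4])); // 10
```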

Declarative programming ( SQL, Regular Expressions, Prolog, etc ) approaches computation as telling the machine what you want to happen, without describing its control flow; the computer then figures out the "how" needed to get the result. This is in contrast to the imperative and object-oriented styles, which require the programmer to describe the how. Declarative programming often considers programs as theories of a formal logic, and computations as deductions in that logic space. Declarative programming has become of particular interest since 2009, as it may greatly simplify writing parallel programs [11]. Some purely functional programming languages, in attempting to minimize or eliminate side effects, are therefore considered declarative.
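A regular expression is perhaps the most familiar declarative example: the pattern below (an illustrative ISO-style date format of my own choosing) states what to match, and the regex engine works out the control flow needed to find it.

```javascript
// Declarative style: the pattern states *what* a match looks like -
// four digits, dash, two digits, dash, two digits - and the regex
// engine decides *how* to search for it.
const isoDate = /^\d{4}-\d{2}-\d{2}$/;

console.log(isoDate.test('2013-07-01')); // true
console.log(isoDate.test('July 1st'));   // false
```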

Ideally, I think we should choose whichever tools are appropriate under each given circumstance. LISP's features, for example, mirror the same qualities of recursion and self-reference found in the human mind (and language) [**], so it's no surprise that it was developed for research into Artificial Intelligence [**]. Between these computational notions and the human mind, I suspect there will be a recursive tension as our computational power grows ever more sophisticated. That is to say, I suspect we will have to constantly re-evaluate our notions of computation, ad infinitum, or for as long as we keep witnessing the exponential growth of computational power.

And from here, I think it's useful to keep in mind why we want to compute. I've mentioned computation's role in society, and its effects on the human built form. But I think there are some implicit assumptions in that outlook that compel us to build. It assumes a particular imagination, or vision, of the future. And again, I won't suppose which vision is being pursued; just THAT there is an imperative that compels the human species to have built our current amount of "stuff" [**].

THAT thing, whichever we suppose it is, leads many people to explore alternative futures through science fiction and other narratives [**]. And all seem to involve a kind of extension of our reach. One could consider whether the future vision is apocalyptic, due to technological or biological overreach [**], or triumphant, due to our technical and scientific mastery [**]. In either of these scenarios, we, as a human species, have extended our reach and somehow augmented our minds. I think this is an important point, because it allows us to explore alternative technological futures. Our current internet, for example (circa 2013), was not the first attempt at a transnational network of machines or computers.

Recent History of Computing Technology

Considering that this is July 2013, we can contemplate how we got to where we are in terms of the internet, our computer hardware, and the state of the art in computer science. Starting in 1978, France rolled out a videotex online service, accessible through telephone lines. Called the "Minitel", it was a national service of networked computers that pre-dates the existing World Wide Web. Minitel users could chat, make online purchases and train reservations, have a mail box, and even check stock prices. Despite its success, France Telecom had to retire the service in June of 2012 [Minitel].

There were also early Russian attempts at building a computer industry, as early as the 1950s. Starting in 1952, the USSR began work on an automated missile defence system which used a "computer network" to calculate radar data on test missiles through a central machine, which interchanged information with smaller, geographically remote terminals. At the same time, amateur radio users across the USSR were conducting "P2P" connections with their comrades worldwide using data codes. Later, a massive "automated data network" called Express was launched in 1972 to serve the needs of Russian Railways [USSR].

At the dawn of the Space Age, the Soviet Union already had in place a network of ground control stations, allowing it to communicate with the early spacecraft, send commands onboard, and process the information received from orbit. As was the case with early space launchers, the Soviet command and control network owes its origin to the program of long-range ballistic missiles, which required radio commands to be sent onboard constantly to adjust their trajectories in flight. As a result of the Cold War, all computer hardware produced in the socialist countries was either designed locally, or derived or copied from Western models by the intelligence agencies [USSR]. These computers were integral to the command-and-control infrastructure of the Sputnik series of satellites: tracking satellites, calculating flight trajectories, processing data, forecasting orbits, etc [USSR].

The history of our current Internet began with the development of electronic computers in the 1950s. ARPANET, the precursor to the Internet, was a large wide-area network created by the United States Advanced Research Projects Agency (ARPA). The first message was sent over the ARPANET from computer science Professor Leonard Kleinrock's laboratory at the University of California, Los Angeles (UCLA), after the second piece of network equipment was installed at the Stanford Research Institute (SRI). Other organizations in Europe and the UK (Cyclades & the National Physical Laboratory) brought scientific and commercial disciplines to bear. In 1982, the Internet protocol suite (TCP/IP) was standardized. Thereafter, the concept of a world-wide network of interconnected TCP/IP networks, the Internet, was introduced [Arpanet][Internet].

Perception & Representation: Humans as a Symbolic Species

That's just a quick look at how humans arrived at this current future. After many attempts, the human mind's grasp of natural phenomena is encoded in our maths and natural sciences. And in my opinion, arriving at this body of knowledge required ever greater tools of engineering and computation, and vice versa. We need these tools to think and perceive, as well as to build.

But I think that begs another question: what is human perception, and how do we represent our perceptions? The human species still has an extremely limited understanding of how consciousness (i.e. cognition & perception) works. However, we can notice that humans, as a species, are heavily symbolic - from ancient cave paintings, to pictograms, to human language script. We use these symbols to communicate with each other, with our outside world, and about our abstract thoughts. Whether in written or spoken language, or in math or music notation, we need these symbols. It seems to be part of human nature, and peripherally, this has been acknowledged by scientists like Noam Chomsky, i.e. the innate language faculty [Perception / Representation]. Dolphins or monkeys, while very intelligent, do not, by themselves, use any written symbols to communicate with each other. This is all to say that humans are a symbolic species.

And if we accept i) this, and ii) the human imperative to extend our reach and augment our minds, we then just need to gauge how close our perception and representation are to the natural world. Our perception & representation - i.e., our computer language(s) for this discussion - impact and facilitate the kinds of ideas we can have, the ease with which we can communicate them with each other, and so on. So how should this affect our choice of computer languages?

Data and Computation in the Natural World

The human genome is roughly 3 GB of data. Through gene expression, cells take that template and create us - humans. A small change in that genome can yield a cow instead. But this is a very complex example of data, information and processes of which we don't have the slightest understanding. Excellent examples of very optimal computation can be found in nature. The natural world, even in examples much reduced in complexity, can provide some insights into the possibilities for very advanced algorithms and extremely fast computations.

"Inside every spring leaf is a system capable of performing a speedy and efficient quantum computation" - David Biello. It turns out that plants can harvest up to 95 percent of the energy from the light they absorb. They transform sunlight into carbohydrates in one million billionths of a second, preventing much of that energy from dissipating as heat. This near-instantaneous process uses the basic principle of quantum computing - the exploration of a multiplicity of different answers at the same time - to achieve near-perfect efficiency. The protein structure of the plant, using this quantum effect, somehow allows the transfer of energy, but not heat. Humans just don't understand how it happens [Natural World].

The cell cycle is the series of events that take place in a cell leading to its division and replication. In cells with a nucleus, the cell cycle can be divided into three periods: i) the interphase, during which the cell grows, accumulating nutrients needed for mitosis and duplicating its DNA; ii) the mitotic phase, where the cell splits itself into two distinct cells; and iii) cytokinesis, the point where the new cell is completely divided [Natural World]. Within the mitotic phase, there are 4 distinct sub-phases where the pairs of chromosomes condense and attach to fibres, pulling the sister chromatids to opposite sides of the cell. Now, for such a small organism, cell division is very complex and varied across species. From a biological perspective, we have an extremely limited understanding of how this entire process is controlled. And further, we are not close to understanding the information and algorithms in the process, for something even as simple as E. coli. Yet this is the process by which a single-celled fertilized egg develops into a mature organism, as well as the process by which hair, skin, blood cells, and some internal organs are renewed.

Given these last two examples, we know THAT data and computation are happening. We also have some statistical models of what to expect. However, even with our current sciences, we don't understand the information or algorithms that are occurring. We can extrapolate our ignorance to most of the phenomena in the natural world. In fact, compared to nature, humans don't know how to compute [Natural World].

Human Species and the Future of Intelligence and Computation

In 1949, MIT Professor Norbert Wiener, imagined an Age of Robots []. He considered the impact of computing machines on society and of automation on human labor. Over 60 years later, his vision is still extremely relevant. Can we, today, foretell the future of Computation and Human Intelligence? Only time will tell. But in my opinion, we should puzzle over, and try to grasp the essential nature of computation. And from this vantage point, work to obtain the computational tools necessary to realize this essential nature, to at least the efficacy of photosynthesis or cell division. The natural world has already given us these, and many other examples.

We can already observe current computing trends such as advanced robotics, genetic computers, quantum computing, nanotechnology, artificial intelligence, and so on. Today, in 2013, the US Military already has working prototypes of intelligent, autonomous robots [Future Intelligence]. These are flight drones, ground dogs, and humanoid robots, with enough intelligence to autonomously navigate their given terrain and carry out tasks [Future Intelligence]. The kinds of algorithms necessary to control these machines are at the very edge of our understanding of cognition and computation.

Many, such as Ray Kurzweil [Future Intelligence] and Vernor Vinge [Future Intelligence], refer to John von Neumann [Future Intelligence] in describing a "Singularity": a theoretical moment when artificial intelligence will have progressed to the point of a greater-than-human intelligence that will radically change human civilization, and perhaps even human nature itself. Toward amplifying human reach, many see a merging of biological and computing devices. Amplifying the human brain, or augmenting it with an artificial intelligence, it is theorized, would be the only reasonable way for humans to keep pace with geometric increases in computational power and intelligence.

So with regards to notions of computation and the human mind, I don't think there will be an endpoint; Just an ongoing relationship between the two. I suspect we will see a recursive tension as our computational power grows ever more sophisticated. Or rather, I suspect we will have to constantly re-evaluate our notions of computation, ad infinitum, or as long as we keep witnessing advances in our tools of abstraction & computation, and the exponential growth of computational power. We will need to focus on greater abilities to abstract and process data and information, whether our computer languages remain symbolic, or move to some other kind of encoding, such as genetic or biological or as yet unfathomed representation.


  • [1] http://www.ted.com/talks/ron_eglash_on_african_fractals.html
  • [2] http://en.wikipedia.org/wiki/History_of_mathematics
  • [3] http://phys.org/news/2012-05-quantum.html
  • [4] http://arstechnica.com/science/2010/01/a-tale-of-two-qubits-how-quantum-computers-work
  • [5] http://en.wikipedia.org/wiki/Nanobots
  • [6] http://www.smh.com.au/technology/sci-tech/nanotransistor-breakthrough-to-offer-billion-times-faster-computer-20120220-1thqk.html
  • [7] http://www.wired.com/wiredenterprise/2013/05/neuro-artificial-intelligence
  • [8] http://en.wikipedia.org/wiki/Alan_Turing
  • [9] https://en.wikipedia.org/wiki/Object-oriented_programming
  • [10] https://en.wikipedia.org/wiki/Functional_programming
  • [11] http://en.wikipedia.org/wiki/Declarative_programming
  • [Minitel] Nation bids 'adieu' to 'French Internet' - FRANCE - FRANCE 24; Minitel - Wikipedia, the free encyclopedia
  • [USSR] History of computer hardware in Soviet Bloc countries
  • [USSR] Internet in Russia - Wikipedia, the free encyclopedia
  • [USSR] Russia's space command and control infrastructure
  • [Arpanet] http://www.webopedia.com/TERM/A/ARPANET.html
  • [Internet] http://en.wikipedia.org/wiki/History_of_the_Internet
  • [Internet] History of the Internet
  • [Perception / Representation] http://plato.stanford.edu/entries/innateness-language
  • [Natural World] http://en.wikipedia.org/wiki/Cell_cycle
  • [Natural World] http://peda.net/veraja/jkllukiokoulutus/lyseonlukio/opetus/oppiaineet5/biologia/opet/vsa/ib/cell/group4
  • [Natural World] http://www.youtube.com/watch?v=NR0mdDJMHIQ
  • [Natural World] http://www.scientificamerican.com/article.cfm?id=when-it-comes-to-photosynthesis-plants-perform-quantum-computation
  • [Natural World] http://www.infoq.com/presentations/We-Really-Dont-Know-How-To-Compute
  • [Future Intelligence] http://www.nytimes.com/2013/05/21/science/mit-scholars-1949-essay-on-machine-age-is-found.html
  • [Future Intelligence] http://www.extremetech.com/extreme/149732-us-militarys-bigdog-robot-learns-to-throw-cinder-blocks-grenades
  • [Future Intelligence] http://www.youtube.com/watch?v=-0VgrSSlGoo
  • [Future Intelligence] http://en.wikipedia.org/wiki/Ray_Kurzweil
  • [Future Intelligence] http://en.wikipedia.org/wiki/Vernor_Vinge
  • [Future Intelligence] http://en.wikipedia.org/wiki/John_von_Neumann
  • [Future Intelligence] http://en.wikipedia.org/wiki/Technological_singularity

A Developer's Toolbox (Rich Internet Applications)


I'm often asked what the best tools and technology stack are for building a web application. For the purposes of this article, I'll focus on more advanced front-end representations - what are known as Rich Internet Applications (RIAs). I think it's useful to step back and consider the purpose and conceptions of Rich Internet Applications (RIA, synonymous with Single-Page Applications (SPA)). We can start by thinking back to when most applications were on the desktop. As the internet grew in popularity, Javascript (and Flash) was introduced into browsers. Web pages grew in sophistication, to the point where they began to resemble full desktop apps. Now we have things like Google Docs, which are basically our old desktop apps extruded onto the web. I mention all of this to get us thinking about how we should be treating these new web apps - that is, like full applications. And with that, my opinion is that, to the highest degree possible, we should let a web app do its own rendering, state changes, business logic, etc. It's a much cleaner design to pass i) raw HTML template chunks and ii) JSON data from RESTful services. The web app will have enough intelligence to take these and generate a web view, UI functions, state transitions between UIs, etc. I advocate these principles to enforce a clean separation of concerns. It future-proofs the app, and allows for easily scaling machine resources or adding new functionality.
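The separation argued for above can be sketched in a few lines: the client receives i) a raw HTML template chunk and ii) JSON data from a RESTful service, and does its own rendering. The placeholder syntax and data shape here are illustrative assumptions of mine, not any particular library's API.

```javascript
// Client-side rendering sketch: combine a raw HTML template chunk with
// JSON data from a RESTful service, entirely in the browser/client.
function render(template, data) {
  // replace each {{key}} placeholder with the matching JSON field
  return template.replace(/\{\{(\w+)\}\}/g, (_, key) => String(data[key] ?? ''));
}

const templateChunk = '<li class="user">{{name}} ({{email}})</li>';
const json = { name: 'Ada Lovelace', email: 'ada@example.com' };

console.log(render(templateChunk, json));
// <li class="user">Ada Lovelace (ada@example.com)</li>
```

The point is the division of labour: the server stays a dumb supplier of templates and data, and all view generation lives in the client.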

With this in mind, as an example, let's consider three MVC web frameworks - Ember, Angular, and Backbone. We'll make a semantic comparison between these libraries and, more importantly, ask why a certain library would benefit us from a production, cost, time, and future-planning standpoint. So you can properly judge my position, I'll state from the beginning my opinion that Backbone is usually the best tool for a front-end MVC solution. My experience is that it optimizes i) developer time (ie. speed to market), ii) production efficiency (it's very lightweight), iii) scalability, and iv) future flexibility. To begin, I present a useful client-side JS MV* framework roundup. It gives a nod to the TodoMVC project, which implements a simple todo app in all the web MVC frameworks, and is meant to help you select the best one for your needs.

Like Rails, Ember is meant to be an opinionated framework, using common idioms. Views are handled via 2-way binding against rendered Mustache-style templates. Angular is meant to be a way of declaring dynamic views in web applications. It does this by letting you extend HTML vocabulary for your application. Angular also defines its own set of attributes and markup, which are processed by its JS library to provide browser-specific behaviour. Backbone is intended to be a lightweight and focused way of building single-page applications (or RIAs). It gives structure to web applications by providing models with key-value binding and custom events, collections, views with declarative event handling, etc. It connects it all to an existing API over a RESTful JSON interface.
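The key-value binding and custom events that Backbone models provide can be illustrated in plain JS. The `Model` class below is my own minimal sketch of the idea, not Backbone's actual API: a view subscribes to a change event on a model attribute, and re-renders when that attribute is set.

```javascript
// A plain-JS sketch of key-value binding with custom events, in the spirit
// of a Backbone model (illustrative only - not Backbone's real interface).
class Model {
  constructor(attrs = {}) {
    this.attrs = { ...attrs };
    this.listeners = {};
  }
  on(event, fn) {
    if (!this.listeners[event]) this.listeners[event] = [];
    this.listeners[event].push(fn);
  }
  set(key, value) {
    const changed = this.attrs[key] !== value;
    this.attrs[key] = value;
    // fire a custom "change:<key>" event so views can re-render themselves
    if (changed) (this.listeners['change:' + key] || []).forEach(fn => fn(value));
  }
  get(key) {
    return this.attrs[key];
  }
}

const todo = new Model({ title: 'write article', done: false });
todo.on('change:done', (value) => console.log('view re-renders, done =', value));
todo.set('done', true); // fires the change event; the bound view updates itself
```

This event-driven binding is what keeps Backbone views thin: the model owns the state, and views simply react to it.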

With the above, I'll begin with my preference to eschew the Mustache approach to templates used by Ember. It tangles together the raw HTML template chunks with transformation logic, and it unnecessarily forces web designers to know Javascript or some other logical transformation language, reducing developer efficiency. There are better, more declarative, path-based solutions, like PureJS. Ember also implements rendering logic on the server. This tangles together application functions, reducing future flexibility and scalability. The tangling I described earlier is also why I eschew Angular.

Now, broadly listing a technology stack will not address enough cases. Below, I'll outline three scenarios, or types of web applications, and an appropriate technology-stack baseline for each. With each set of choices, I'll explain the tool and the rationale behind that choice. But I also want to step back again and take a more holistic approach to my solutions. Before the scenario breakdown, I'll describe my approaches to i) Project Management, ii) Pair Programming, and iii) a good approach to Testing and Test Automation.

Project Management

I think most software projects are good candidates for an Agile software development approach. Consider eXtreme Programming (XP) and Scrum, both Agile methodologies. They are closely aligned, yet with subtle differences: XP uses strict priority order and prescribes specific engineering practices. I think it's appropriate to start with Scrum, then introduce elements of XP where needed (ie, Continuous Integration, TDD, etc).

  • With regard to roles within a project, at the very least, most will need the i) Product Owner ii) Team iii) Scrum Master and iv) the Project Manager. 
  • Sprints of 2 weeks are a good starting point. This would include a Planning Meeting, where the i) tasks / Stories for the sprint are identified ii) Estimated and iii) Prioritized. Teams should also conclude each sprint with a review or Retrospective Meeting. This is where the progress is reviewed and lessons for the next sprint identified. And of course the software will be Delivered to and reviewed by the customer. 
  • I find Daily Scrums to be overkill for most projects, unless teams can strictly keep them to 5 minutes. However, it's good practice to do constant Backlog Refinement. That being the process of creating stories, decomposing stories into smaller ones, refining, prioritizing and sizing existing stories using effort / points. 
  • That leads to the next feature, adding a Points System to tasks. An abstract point system is used to discuss the difficulty of the story, without assigning actual hours.
  • Product Backlog is an ordered list of "requirements" that is maintained for a product. It consists of features, bug fixes, non-functional requirements, etc. - whatever needs to be done in order to successfully deliver a working software system. 
  • Sprint Backlog is a subcomponent of the Product Backlog. It is the list of work the Development Team must address during the next sprint. The velocity of previous sprints will guide the team when selecting stories/features for the new sprint. 
  • Increment is the sum of all the Product Backlog Items completed during a sprint and all previous sprints. 
  • Burn Down Chart is a publicly displayed chart showing remaining work in the sprint backlog. Updated routinely, it gives a simple view of the sprint progress.
  • Spike - A time-boxed period used to research a concept and/or create a simple prototype. 
  • Velocity - The total effort a team is capable of in a sprint. The number is derived by evaluating the story points completed across the last few sprints' stories/features. 
  • Tracking - Both these tools have excellent project management features: Pivotal Tracker and FogBugz.
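The points and velocity mechanics above can be sketched in Ruby. This is a minimal illustration only; the `Sprint` struct, the 3-sprint window, and the backlog format are invented for the example, not taken from any tracking tool:

```ruby
# A minimal sketch of velocity: average the story points completed
# over the last few sprints, then use that number to cap the next sprint.
Sprint = Struct.new(:name, :completed_points)

def velocity(sprints, window: 3)
  recent = sprints.last(window)
  recent.sum(&:completed_points) / recent.size.to_f
end

# Select backlog stories (already in priority order) until the
# velocity budget is spent; oversized stories are skipped.
def plan_sprint(backlog, budget)
  planned = []
  spent = 0
  backlog.each do |story, points|
    next if spent + points > budget
    planned << story
    spent += points
  end
  planned
end

history = [Sprint.new("S1", 21), Sprint.new("S2", 18), Sprint.new("S3", 24)]
budget  = velocity(history)                       # => 21.0
backlog = [["login", 8], ["search", 13], ["export", 5]]
plan_sprint(backlog, budget)                      # => ["login", "search"]
```

Note the points stay abstract throughout; nothing here converts them to hours.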

The key project artifacts itemized above are what's needed to maintain efficient management of developer hours. 

Thoughts On Pair Programming

I believe software team cohesion is closely tied to how productive and empowered all team members feel. So we discussed starting with a solid development methodology, which usually means an Agile approach. A good next step is pair programming. I like the rock-solid code that pairing usually produces. Some of its effects: i) each programmer is more thoughtful about how they design the system, because the person coding is usually required to verbally explain and justify their technical decisions; ii) the pair usually brings a wider breadth of technical knowledge and experience between them; and iii) fewer tangents are taken, due to the constant support of an ever-present partner. Also, pairs can and should switch between coding and supporting. This gives each team member rest, and usually means the active coder is more fully alert.

Full-time pair programming is a good idea, if your team can afford it. However, it's sometimes necessary for a programmer to i) quickly try out a solution or technology, to better understand the problem domain, or ii) simply take time to think clearly about a problem (which could involve reading books, blogs, etc.). So in a full pairing engagement, time apart from coding can reasonably be managed by the pairs. 

Testing and Test Automation Solutions

Of course the testing framework would depend on the language in which we choose to implement the system. There are several levels and approaches to testing that are appropriate in each scenario. 

  • Unit Tests (vs BDD) - Unit tests address individual units of code. Alternatively, BDD, an outgrowth of TDD, focuses on the behavioural specification of software units.
  • Acceptance tests (vs Generative testing) - Acceptance tests address the end-to-end functioning of the system. This is in contrast to Integration tests, which only test several layers of the system (but not everything). Generative testing is a newer idea, in which the code itself generates test cases: we write code that generates test cases according to one or more assumptions we would like to test. This is a good approach for more complex systems, when we want to test unanticipated inputs over a wide range. 
  • Simulation testing - Simulation testing, derived from disciplines such as engineering, disaster recovery, etc., is meant to be a rigorous, scalable, and reproducible approach to testing. Artifacts from each step (modelling, defining activity streams, execution, result capture, and validation) are captured in a time-aware database, so steps can be run (and re-run, and enhanced) independently of each other. 
  • Continuous Integration (or Automated build) - Continuous integration (CI) merges all developer working copies into a shared mainline several times a day. Its main aim is to prevent integration problems upon delivery of the software. 
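The unit-test-versus-BDD distinction above can be illustrated with a small Ruby sketch. The `Cart` class is a hypothetical example of a unit under test, and the RSpec-style spec is shown only in comments:

```ruby
# Unit under test: a minimal shopping cart (hypothetical example class).
class Cart
  def initialize
    @items = []
  end

  def add(price)
    @items << price
    self
  end

  def total
    @items.sum
  end
end

# Unit-test style: exercise one method, assert on its return value.
cart = Cart.new
cart.add(300).add(200)
raise "unexpected total" unless cart.total == 500

# BDD style instead names the expected *behaviour*, not the method:
#
#   describe Cart do
#     it "sums item prices into a running total" do
#       expect(Cart.new.add(300).add(200).total).to eq(500)
#     end
#   end
```

The assertion and the spec check the same thing; the difference is that the BDD form reads as a specification a non-programmer could review.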

So for example, consider a Ruby on Rails application versus a Clojure Compojure application. Generally, the pattern would be:

  • Ruby - RSpec (BDD) > Cucumber (Acceptance tests) > CruiseControl (Continuous Integration). This is a well-understood and battle-tested collection of test tools. It gives great test coverage for the simple version of our webapp. Generative or simulation testing is not warranted in a simpler web application scenario. 
  • Clojure - Speclj (BDD) > Test.generative (Generative tests) > Pallet (Continuous Integration). Speclj is a clean and straightforward approach to testing, while focusing on the behaviour of software units. Test.generative lets us state the more general assumptions we have about the system, then generate potentially thousands of tests that validate those assumptions. This is more appropriate than acceptance tests for dynamic and streaming types of applications. And Pallet is a devops automation platform, with excellent integration with Hudson/Jenkins and the Clojure build tools. Simulation testing is probably not warranted if the application is more speculative in nature, i.e., users will often create and deploy new algorithms, quickly negating prescribed simulations. 
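Test.generative is Clojure-side, but the generative idea itself is language-neutral. Here's a hand-rolled Ruby sketch with no property-testing library assumed; the sorting properties are just an example of assumptions to validate:

```ruby
# Generative testing sketch: instead of hand-picking cases, generate
# random inputs and check properties that must hold for all of them.
# Example properties: sorting is idempotent, and preserves length.
def random_case(rng)
  Array.new(rng.rand(0..20)) { rng.rand(-1000..1000) }
end

rng = Random.new(42)   # fixed seed, so any failure is reproducible
100.times do
  input  = random_case(rng)
  sorted = input.sort
  raise "not idempotent for #{input}"  unless sorted.sort == sorted
  raise "length changed for #{input}"  unless sorted.size == input.size
end
```

The payoff is coverage of unanticipated inputs (empty arrays, duplicates, negatives) that example-based tests tend to miss.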

Scenario Breakdown

Before selecting a toolset, it's very important to know a few things about the system:

  • What are the core functions? 
  • What is the expected time-to-delivery?
  • Where will the delivered application live (incl. network reliability), and with which DBs and services must it communicate?
  • Who are its users, and how much load is the application expected to see?
  • Who will be maintaining the application upon delivery, and what are their skill-sets?

Scenario A)

This is a Rich Internet Application (akin to Pixelthrone), solely as a web tool, communicating with 3rd-party cloud services. It will have a responsive front-end that works well on smartphones, tablets, and varied screen sizes.

  • HAML / SCSS / Coffeescript / PureJS - Haml, Scss and Coffeescript compile down to html, css and javascript, respectively. They're higher-level syntaxes that let developers write equivalent output code in much less time. The added benefits greatly outweigh the added abstraction. PureJS is a lightweight templating tool that eschews the moustache templating approach. My opinion is that the moustache approach incorrectly tangles together document structure and logic in the same place. PureJS instead uses path-like expressions for data locations.
  • Backbone - Backbone has a focused and elegant approach to rendering choices. It also has a clean and lightweight approach to managing the internal state of the application (model and controller). And its RESTful server communication is very consistent and well thought out. In short, these design advantages help optimize development and production costs, time, and future planning.
  • Bootstrap - Bootstrap is an excellent front-end framework with which many developers already have a strong knowledge level. However, there are advantages and disadvantages to this option, and some alternatives. 


Advantages:

  • Every HTML element that could potentially be used is accounted for, meaning even rarely used tags will be elegantly styled and positioned.
  • It lays a foundation for consistency that would take a good amount of time to achieve manually. Further, when a developer passes off the deliverable to the client, others will be able to 'extend' the original work without disturbing the general aesthetic.
  • Its facility for rapid prototyping, and again, most teams' familiarity, means it would be quick and efficient to use.


Disadvantages:

  • Suboptimal for creating a performance-driven web app.
  • The framework can become too heavy, because so many things (html elements, etc.) are included. It can be tough to quickly find what you're looking for. Additionally, troubleshooting unexpected margins, borders and whatnot can be difficult. 
  • It's not bespoke, or tending toward a higher quality brand. It is a generic solution that a lot of startups use.
  • Customizing such a pervasive framework can be very tricky. Changing one thing might mean unintended effects on other elements.


Alternatives:

  • Foundation is a responsive front-end framework. It lets developers quickly prototype and build sites or apps that work on any kind of device. 
  • HTML5 Boilerplate is a professional front-end template for building adaptable web apps or sites. It does not impose a specific development framework, freeing the developer to manipulate the code to their needs.
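As an aside, the path-like data binding that PureJS favours over moustache-style templates can be sketched in Ruby. Everything here is illustrative: the `render` helper, the directives format, and the span-only matching are simplifications, not PureJS's actual API:

```ruby
# Sketch of selector-directed templating: the template stays plain
# markup, and a *separate* directives hash maps a CSS class in the
# template to a dotted path into the data. Structure and logic
# never mix inside the markup itself.
def render(html, directives, data)
  directives.reduce(html) do |out, (css_class, path)|
    value = path.split('.').reduce(data) { |node, key| node.fetch(key) }
    out.gsub(%r{(<span class="#{css_class}">)[^<]*(</span>)}) { "#{$1}#{value}#{$2}" }
  end
end

template   = '<span class="name"></span> / <span class="city"></span>'
directives = { "name" => "user.name", "city" => "user.address.city" }
data       = { "user" => { "name" => "Ada", "address" => { "city" => "London" } } }

render(template, directives, data)
# => '<span class="name">Ada</span> / <span class="city">London</span>'
```

The template contains no placeholders or logic at all; redesigning the markup or relocating the data only touches the directives hash.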

Scenario B)

A basic, SQL-backed webapp: a simple set of functions, moderate usage, and junior sysadmins maintaining it. 

  • HAML / SCSS / Coffeescript / PureJS - Haml, Scss and Coffeescript compile down to html, css and javascript, respectively. They're higher-level syntaxes that let developers write equivalent output code in much less time. The added benefits greatly outweigh the added abstraction. PureJS is a lightweight templating tool that eschews the moustache templating approach. I'll reiterate my opinion that the moustache approach incorrectly tangles together document structure and logic in the same place. PureJS instead uses path-like expressions for data locations.
  • Ruby / Rails / JSON data exchange - Ruby is an excellent dynamic, object-oriented language. It has language features (first-class functions, simple syntax design, etc.) that let programmers quickly build out capable, general-purpose solutions. And JSON is a well-known and supported data exchange format, especially for RESTful, AJAX calls. 
    • not Sinatra - Sinatra is good for simple webapps. However, Rails gives i) better support for REST endpoints, ii) more compatible libraries and iii) easier setup and migration of SQL database schemas and data.
  • PostgreSQL - The app data is rectangular and related, which makes SQL technology a good fit. The schema and queries will be well known beforehand, meaning they won't require a lot of mutation after delivery. Postgres is a reliable, stable, and well-known RDBMS. It is open source, with a license that's suitable for commercial purposes.
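To make the JSON data-exchange point concrete, here is a Rails-free Ruby sketch using only the stdlib json library. The `Article` resource is hypothetical, standing in for the kind of payload a Rails controller's `render json:` would emit for a REST endpoint:

```ruby
require 'json'

# Hypothetical REST resource: roughly what GET /articles/1 would return.
Article = Struct.new(:id, :title, :body) do
  def as_json
    { "id" => id, "title" => title, "body" => body }
  end
end

article = Article.new(1, "Hello", "First post")
payload = JSON.generate(article.as_json)
# => '{"id":1,"title":"Hello","body":"First post"}'

# Any AJAX client (or another service) can round-trip the same payload:
JSON.parse(payload)["title"]   # => "Hello"
```

The format's ubiquity is the point: every browser and nearly every language can parse that payload without extra tooling.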

Scenario C)

A complex, stateful UI, backed by several "big data" stores. The client wants to capture and analyse a constant stream of financial data. Researchers will take this data and need to create and deploy new algorithms and analytics on top of it. This means real-time analytics on a constant stream of data; high usage by very senior quantitative analysts and data scientists; and maintenance by senior IT personnel. 

  • HAML / SCSS - Haml and Scss compile down to html and css, respectively. They're higher-level syntaxes that let developers write equivalent output code in much less time. The added benefits greatly outweigh the added abstraction. 
  • Coffeescript / PureJS / RequireJS / BackboneJS - Coffeescript compiles down to Javascript. It provides greater expressive power than javascript, using less code. The added benefits greatly outweigh the added abstraction. PureJS is a lightweight templating tool that eschews the moustache templating approach. I'll reiterate my opinion that the moustache approach incorrectly tangles together document structure and logic in the same place. PureJS instead uses path-like expressions for data locations. RequireJS is a very good tool for building the component systems necessary in a large, complex thick client. BackboneJS is a lightweight, well-thought-out MVC tool for managing in-browser app state. 
    • almost Clojurescript, Enfocus, Functional Reactive Programming - These technologies would be a much better fit than the above for the kind of real-time-sensitive interactions in the app. Clojurescript especially is ideal for computationally intensive, interactive applications. However, the maintenance skill required, and its cost, are high. I would recommend this over RequireJS and BackboneJS if you have very good IT specialists as maintainers. Enfocus is a templating tool for Clojurescript. Like PureJS, it uses path-like expressions for data locations. Functional Reactive Programming (FRP) is an approach that uses functional programming techniques to operate on data structures over time. Ideally, we'll want an FRP library that lets us cleanly transform, compose, and query streams of data (mouse moves, stock streams, etc.). 
    • not Websockets - The more standard HTTP Server-Sent Events (the EventSource API) are sufficient here. 
  • Clojure / Pedestal (for SSE support) / Storm / JSON data exchange - Clojure provides a number of language features (first-class functions, homoiconicity, immutable data, etc.) that make it ideal for building complex, data-intensive apps. Pedestal is a tool set for building web applications in Clojure. For this app, it has a number of useful features, such as built-in SSE support. Storm is a distributed realtime computation system, providing a set of utilities for doing realtime computation. I chose it over Hadoop, as Storm is designed for real-time processing while Hadoop is designed for batch processing. JSON is a well-known and supported data exchange format, especially for RESTful, AJAX calls. 
    • almost EDN - would be a better data exchange format than JSON. This data format is extensible, has rich objects, and is serializable. But it's new, not in wide enough use, and not enough people understand it. I would only recommend this format if the client has very Senior maintainers. 
  • Datomic - I think Datomic is ideal, as it i) decouples DB functions such as read & write; ii) has a flexible schema model, allowing data structures to change as users learn more about the domain; iii) has a sound data model based on time and immutability (more faithfully representing data over time); and iv) offers a logic-based query language (a focus on facts). The downside is the specialized knowledge needed to maintain and query the database, but the advantages, and the simplicity of the query language, mitigate those tradeoffs. The databases below are all close considerations, but they don't fit the bill due to their specialized nature, whereas Datomic covers more ground in terms of leveraging the data. 
    • not Cassandra - Our app will write to the DB more than it reads, and fast writes are Cassandra's main advantage. But most reads will come from big data stream services (via Storm), so that advantage alone doesn't justify it. 
    • not Redis - Good for rapidly changing data sets (though not much of that will be needed), but it works best when those data sets all fit into memory.  
    • not Neo4j - This is good for graph-style, rich or complex, interconnected data.
    • not Couchbase - Good for low-latency and high availability 
    • not VoltDB - Good for reacting fast on large amounts of data 
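Independent of Storm, the realtime-versus-batch idea is that each new datum updates an aggregate incrementally, rather than recomputing over the whole dataset. A minimal Ruby sketch of a moving average over a tick stream; the `MovingAverage` class and the tick values are invented for illustration:

```ruby
# Sliding-window moving average over an unbounded stream of price ticks.
# Each push updates the aggregate in O(1) work: the realtime idea,
# versus batch (Hadoop-style) recomputation over the full history.
class MovingAverage
  def initialize(window)
    @window = window
    @ticks  = []
    @sum    = 0.0
  end

  # Ingest one tick; return the current windowed average.
  def push(price)
    @ticks << price
    @sum += price
    @sum -= @ticks.shift if @ticks.size > @window
    @sum / @ticks.size
  end
end

ma = MovingAverage.new(3)
[10.0, 20.0, 30.0, 40.0].map { |p| ma.push(p) }
# => [10.0, 15.0, 20.0, 30.0]
```

A Storm topology would do the same kind of incremental aggregation, just distributed across workers and fed by a real stream.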


These technologies are simply a good baseline when considering building out a Rich Internet Application. There are other options, given the rising popularity of Javascript on all devices: NodeJS on the server, and PhoneGap native apps on mobile devices. Tools like Node-Webkit also let you create desktop applications with Javascript. And with Tessel, we can even use it on our micro-controllers (à la Arduino).

Beyond tools alone, teams should consider the kinds of language features and architectures appropriate for their needs. Features such as immutable data structures or first-class functions (i.e., closures) offer a lot of benefits and can usually be added in as a library or 3rd-party solution. Beyond that even, techniques like Combinators and Functional Reactive Programming offer better control, albeit with increased abstraction. You can think creatively. I, personally, prefer tools that offer the greatest amount of expressive power, while optimizing my i) developer time (i.e., speed to market), ii) production efficiency (being very lightweight), iii) scalability, and iv) future flexibility.
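Those features are easy to demonstrate in Ruby itself: first-class functions (closures) and frozen data are native. The names below (`checkout`, `counter`, `config`) are illustrative only:

```ruby
# First-class functions: lambdas can be stored, passed, and composed.
add_tax  = ->(price) { price * 1.13 }
round2   = ->(x) { x.round(2) }
checkout = round2 << add_tax        # compose: round2(add_tax(price))
checkout.call(10.0)                 # => 11.3

# Closures capture their surrounding environment:
def counter
  n = 0
  -> { n += 1 }                     # each call sees and updates the same n
end
tick = counter
tick.call                           # => 1
tick.call                           # => 2

# Immutability via freeze: mutation raises instead of silently
# aliasing shared state.
config = { "env" => "prod" }.freeze
begin
  config["env"] = "dev"
rescue FrozenError
  # rejected; callers must build a new hash rather than mutate this one
end
```

Composition and closures give the expressive power; frozen data gives the safety that makes concurrent and long-lived code easier to reason about.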


Interrupt Software is a premier software development boutique. I create and utilise the best tools possible to identify the problem, craft a solution and distill the technology's role and interaction.

Interrupt Software offers advanced functionality and design excellence. Technology is more than the context. It has the ability to reimagine behaviour and experience. It connects users with messages and customers with business. And this creates a tremendous amount of business value.

At Interrupt Software, I program as fast as I think. There's a great deal of value in a well-thought-out solution. And doing it well leverages your investment for long-term impact.


As a Full Stack developer, I see solutions at all levels. Considerations range from user needs to innovation and software craftsmanship.

My skill applies not only to custom-building your software. It also means creating better solutions, faster, by extracting greater expressive power from my software tools.

Understanding The Problem.

To break new ground, you have to understand the context. Each business and customer is different. I assess and evaluate all elements so I can effectively conceptualize the most advanced problems.

Choosing My Tools

My expertise is in knowing what tools to utilise and to what extent. As much utilization as creation, I select best-of-breed tools that enable me to program as fast as I problem solve.

Crafting Impactful Solutions

I aim to generate business and technological value with the solutions I deliver to you. They have to perform on every level, and be as visually stunning as they are functionally impactful.


I can consult on your project in 3 key areas:

System Analysis

Helping you define the scope and purpose of your system.

System Design

Shaping the experience.

System Implementation

Using best-of-breed tools to customize a solution for your needs.


Interrupt Software is Timothy Washington

Highly experienced and respected in the digital realm, he is a keen advocate of technology as a world view. He is a senior software developer with over a decade of experience architecting and building custom and enterprise software solutions. Tim is also an avid contributor to the open source community. He is an expert utiliser and visionary. From Lehman Brothers to Conde Nast, his vast portfolio reflects his eclectic expertise, professionalism and passion for his craft and its industry impact.