About our App
The Trainline app is a ticket-reseller market leader with more than a million active users, so you can imagine that the quality of our app is one of the things we care most about. The ability to spot issues quickly and capture crashes is critical for an app of this scale and complexity. And because we are always developing something new and updating the application continuously, we need a fast and robust way of making sure that we do not break anything along the way.
As with any responsible team, of course, we write unit tests for our business logic, and we also have integration tests for some of the big system components. But we started to get a sense that we were missing something, and that it was quite a big piece of the puzzle if we wanted the right level of confidence in the quality and stability of the application as the team makes code changes from release to release. So we started thinking: logic is tested and covered, even complex class interactions are covered, so what is missing?
Of course, one of the most crucial parts of the application to test is the UI: the most prominent part of the application, and the one people are guaranteed to come into contact with in their daily use of the app. If something breaks the UI, or breaks screen interactions, it will inevitably lead to an obviously bad user experience, and we definitely don’t want to upset the users of our app!
To address this properly, we first identified the obstacles that we might face while building a UI test automation suite.
Our team supports nine applications from the same code base (achieved by a combination of configuration, feature toggles and CSS magic). This code base has been evolving continuously over the last five years, and we do at least one release every week, often more. Given this, you can imagine how vital a role unit tests play.
We depend a lot on our unit tests (among other things, of course) to ensure that releases go smoothly and that, when we add that shiny new feature that enables the customer to change her seat, it does not break the feature that lets her get the ticket on her mobile! To achieve this, our team adheres strictly to TDD, and we have over 10,000 unit test cases that are run every time a commit is pushed to GitHub; this number keeps growing with every new feature we develop.
How could we run our unit tests faster?
OK, so we have great unit-test coverage. The side effect, however, is that it usually took more than five minutes to run the unit tests. That is not a very big number by itself, but it becomes an irritant when we run the tests on our developer boxes multiple times a day before pushing our commits to git. On a given day, a developer could spend 15–30 minutes waiting for the tests to run and the build to finish. So how could we speed up this process?
It turns out that the NUnit 3 test engine has the ability to run tests in parallel, and we hoped this would reduce our test execution time. In addition, we looked at how Rake Multitask could help us reduce our overall build times. Read on to see what happened…
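In NUnit 3 the parallelism the post refers to is switched on per fixture or assembly with the `Parallelizable` attribute. As a hedged, language-neutral illustration of why this helps, the Python sketch below runs several independent test groups concurrently with the standard library's `ThreadPoolExecutor`; the suite names and timings are invented for the example.

```python
# Illustrative sketch only: independent test fixtures that each take some
# wall-clock time can run concurrently, so total time approaches the slowest
# fixture rather than the sum of all of them. All names here are hypothetical.
import time
from concurrent.futures import ThreadPoolExecutor

def run_suite(name):
    """Stand-in for one independent test fixture."""
    time.sleep(0.1)  # simulate the fixture's test work
    return (name, "passed")

suites = ["booking_tests", "search_tests", "payment_tests", "account_tests"]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(run_suite, suites))
elapsed = time.perf_counter() - start

# Run serially these four 0.1s fixtures would take ~0.4s; in parallel the
# wall-clock time is close to 0.1s.
print(results, round(elapsed, 2))
```

The caveat, as with NUnit's attribute, is that only genuinely independent fixtures (no shared mutable state, no shared database rows) are safe to parallelise.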
Debug all the things!
I’ve been spending quite a lot of time checking rendering and debugging various new features recently. Whilst unit and acceptance tests are essential for ensuring an application functions correctly, rendering is an important thing to check too, and it often needs a variety of testing methods to ensure pixel-perfect results on all media.
A review of the Agile Connexions Meetup hosted at Trainline, featuring a presentation by Schalk Cronjé on Agile Testing.
Testing is an ever-changing discipline that has been disrupted and reworked since the dawn of Agile development. Traditional testing has made way for leaner, more focused alternatives: where it was once relegated to a post-development step in Waterfall, testing has since been redefined as an integral part of Agile development and adapted to suit the faster pace of Continuous Delivery. Old practices have been re-examined and changed beyond recognition… but is Agile testing doing anything new? Is it delivering value, or is it too focused on process and management?
A situation often faced by coders, especially when following test-driven development, is writing very similar test cases that differ only in, for example, the expected and actual values, along with some set-up parameters. We often end up writing dozens, nay hundreds, of near-identical test cases, and finish with a test class that looks as if it has suffered a terminal case of copy-paste. This blog post shows a little-known technique for making this sort of test class more readable using the NUnit TestCase attribute.
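The full post covers NUnit's `TestCase` attribute itself; as a hedged, language-neutral sketch of the same table-driven idea, here is how Python's stdlib `unittest` collapses near-identical cases into one data table using `subTest`. The `apply_discount` function and its values are invented for the example.

```python
# A sketch of table-driven testing: each (price, percent, expected) row
# becomes one sub-case, replacing a copy-pasted test method per value.
import unittest

def apply_discount(price, percent):
    """Hypothetical function under test."""
    return round(price * (100 - percent) / 100, 2)

class DiscountTests(unittest.TestCase):
    cases = [
        (100.00, 0, 100.00),
        (100.00, 10, 90.00),
        (59.99, 25, 44.99),
        (0.00, 50, 0.00),
    ]

    def test_discount_table(self):
        for price, percent, expected in self.cases:
            # subTest reports each row separately on failure,
            # much as each [TestCase(...)] attribute does in NUnit.
            with self.subTest(price=price, percent=percent):
                self.assertEqual(apply_discount(price, percent), expected)

# Run with: python -m unittest <this_module>
```

Adding a new case becomes a one-line change to the table, and a failure names the exact row that broke.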
A commonly overlooked area of many systems is the non-functional requirements and the design to meet them. Patterns for Performance and Operability by Ford, Gileadi, Purba and Moerman gives everyone involved in the software life-cycle, from development to support, a good foundation in why non-functional requirements are important, with real examples of how to capture, develop, test and operate with these requirements. Systems fail when non-functional requirements have not been considered, and it is everyone’s role in the SDLC to consider them.
I recently attended a training session run by Dan North (@tastapod) called Accelerated Agile. This blog post summarises what we learnt. Dan argued that some organisations and teams using Agile practices have lost sight of actual achievement and are unconsciously going through the motions, too comfortable in the fact that they are ‘doing Agile’. Why might this be happening, and how can it be avoided?
Dan North in full flow
I thought I would share a recent conversion to SpecFlow of some automated tests that we had been running in “pure” NUnit, in order to demonstrate the value of using a Domain-Specific Language (DSL). See below for some example code and screenshots.
Recently, we were fortunate to be joined by Steve Freeman (@sf105), co-author of Growing Object-Oriented Software, Guided by Tests, and one of the early practitioners of Test-Driven Development (TDD). Steve facilitated a discussion at our weekly dev session (aka Burrito Club) on how tests can and should be used to help shape the growth and evolution of software, particularly the use of Ports and Adapters to make testability a first-class concern.
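The Ports and Adapters idea Steve discussed can be sketched briefly. In this hedged Python illustration (every name is invented, not Steve's example), the domain logic depends only on a port, an abstract interface, so a test can plug in an in-memory fake where production would plug in a real adapter over HTTP or a database.

```python
# Minimal Ports and Adapters sketch: the domain object knows only the port,
# which makes testability a first-class concern of the design.
from abc import ABC, abstractmethod

class FarePort(ABC):
    """Port: the only fare-lookup contract the domain logic sees."""
    @abstractmethod
    def cheapest_fare(self, origin: str, destination: str) -> float: ...

class JourneyPlanner:
    """Domain logic, written and tested against the port, not a concrete API."""
    def __init__(self, fares: FarePort):
        self._fares = fares

    def quote(self, origin: str, destination: str, passengers: int) -> float:
        return self._fares.cheapest_fare(origin, destination) * passengers

class InMemoryFares(FarePort):
    """Test adapter: a fake that keeps fares in a dict instead of calling out."""
    def __init__(self, fares):
        self._table = fares

    def cheapest_fare(self, origin, destination):
        return self._table[(origin, destination)]

# A test wires the fake adapter into the domain object.
planner = JourneyPlanner(InMemoryFares({("London", "York"): 30.0}))
assert planner.quote("London", "York", passengers=2) == 60.0
```

A production adapter implementing the same port would wrap the real fare service, and the domain code would never know the difference.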
A few years ago, I was working on a project where we decided to use Selenium as our automation tool. Writing automated tests was easy, and very soon we ended up copying and pasting code, at times hardcoding values in tests and using XPath to get the job done. Although such tests were quick to write and gave us the results we wanted at the time, as our test suite grew we could see ourselves heading into a test code maintenance nightmare. We fixed this problem by using the Page Object Design Pattern.
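The pattern can be sketched briefly. In this hedged Python illustration (the page, its locators and the fare-search flow are all invented), the page object owns the locators and interactions, so tests read as intent and a locator change is fixed in one place. A stub stands in for a real Selenium WebDriver so the example runs without a browser; it mimics only the `find_element(by, locator)` shape of the real API.

```python
# Page Object sketch: locators and element interactions live in the page
# class, never in the tests themselves.
class StubElement:
    def __init__(self):
        self.value = ""
        self.clicked = False
    def send_keys(self, text):
        self.value += text
    def click(self):
        self.clicked = True

class StubDriver:
    """Browserless stand-in exposing Selenium's find_element(by, locator) shape."""
    def __init__(self):
        self.elements = {}
    def find_element(self, by, locator):
        return self.elements.setdefault((by, locator), StubElement())

class SearchPage:
    """Page object: the single place that knows how the page is built."""
    ORIGIN = ("id", "from-station")
    DESTINATION = ("id", "to-station")
    SEARCH = ("id", "search-button")

    def __init__(self, driver):
        self.driver = driver

    def search(self, origin, destination):
        self.driver.find_element(*self.ORIGIN).send_keys(origin)
        self.driver.find_element(*self.DESTINATION).send_keys(destination)
        self.driver.find_element(*self.SEARCH).click()

# The test now expresses intent, with no XPath or hardcoded locators in sight.
driver = StubDriver()
SearchPage(driver).search("London", "Glasgow")
assert driver.find_element("id", "from-station").value == "London"
assert driver.find_element("id", "search-button").clicked
```

When the markup changes, only the locator constants in `SearchPage` move; every test that uses the page keeps working unmodified.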