Our team was tasked with creating a new RESTful API to help reduce the amount of logic that is implemented, potentially in different ways, across our front-end channels. One item discussed early on was whether we should use JSON API to structure our responses.
Other teams within Trainline have had some success creating JSON-API-based services in Ruby, and we saw no reason why our C# implementation would be any less successful. This blog post is an attempt to tell the story of the journey we took and where we ended up. Continue reading
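For readers unfamiliar with JSON API (the specification at jsonapi.org), it prescribes a response envelope in which each resource appears under a `data` member with `type`, `id` and `attributes`. A minimal, hypothetical document for a journey resource (the resource type and attribute names here are illustrative, not Trainline's actual API) might look like:

```json
{
  "data": {
    "type": "journeys",
    "id": "1234",
    "attributes": {
      "origin": "London Euston",
      "destination": "Manchester Piccadilly"
    },
    "links": {
      "self": "/journeys/1234"
    }
  }
}
```

Because the envelope, relationships and error format are all specified, clients across different front-end channels can share one parsing strategy rather than reimplementing it per service.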
I was selected for the Trainline graduate scheme right after university, and the first few weeks here have been a new chapter in my life. It might seem puzzling how an Aerospace Engineering graduate finds himself starting out at a tech company that deals with trains. Software was always a key part of my work in engineering, and I have always wanted to learn the tricks of the trade in this sector. What better way for me to achieve these goals than to join one of the most innovative e-commerce companies in Europe? And, more specifically, in a team which is responsible for so much interaction with the trains on the ground? Continue reading
Since my talk at AWS re:Invent in December, we’ve had a lot of questions about how we tackled our Oracle Exadata migration to AWS.
This post goes through our Oracle migration journey in more detail and also includes some general tips for moving large databases to the Cloud. Continue reading
We recently started working on rebuilding the desktop version of the Trainline website. It is a big deal for the front-end engineering team because we have to make sure every choice is justified so that the user experience is never compromised. I am writing this blog post to explain how we went about selecting our front-end tech stack and why we made these choices. Continue reading
From Change Control to Assumed Approval: how we first managed the operational visibility of Continuous Delivery and how it’s still in use 2 years later
Trainline has changed in many ways over the last 2½ years and, as a 4-year veteran, I have been ideally placed to watch and help enable that change. One of the big changes was the move from a project-led to a product-led organisation. That shift brings many things with it, one of which is Continuous Delivery (CD). The advantages of CD are well known, and one striking stat recently showed that we have achieved a:
122-fold improvement in deployment agility!
2016 has been a very busy year at Trainline as we have been growing at a rapid pace. This means our tech team has had to work hard to scale effectively and cope with the sheer demand.
New product teams have been springing up all through the year, while older teams have grown and been split into sub-teams to maintain the essential agility of small teams. With such growth, we have needed to ensure our processes were in good shape to manage the increased complexity that this brings.
Here are a few fascinating stats which tell a short story of the year at Trainline … Continue reading
Hacktrain is a three-day hackathon on a train. Around 80 software developers, designers and entrepreneurs from across the world gather to imagine a better rail travel experience. The best ideas win a trip to Hong Kong and an assortment of lightsabers, drones and other suitably geeky toys. Continue reading
About our App
The Trainline app is a ticket-reseller market leader with more than a million active users, so you can imagine that the quality of our app is one of the main things we care about. The ability to spot issues quickly and capture crashes is critical for apps of this scale and complexity. And as we are always developing something new and updating the application continuously, we need a fast and robust way of making sure that we do not break anything along the way.
As with any responsible team, of course, we write unit tests for business logic, and we also have integration tests for some big system components. But we started to sense that we were missing something, and that something is quite a big piece of the puzzle when it comes to having the right level of confidence in the quality and stability of our application as the team makes code changes from release to release. So we started thinking: what is missing? Logic is tested and covered. Even complex class interactions are covered. So what is missing?
Of course, one of the most crucial parts of the application to test is the UI: the most prominent part of the application, and the one people are guaranteed to interact with in their daily use of the app. If something breaks the UI or breaks screen interactions, it will inevitably lead to an obviously bad user experience, and we definitely don't want to upset the users of our app!
To address this issue correctly, we first identified any obstacles that we might face while working with a UI test automation suite: Continue reading
Our team supports nine applications from the same code base (achieved through a combination of configuration, feature toggles and CSS magic). This code base has been evolving continuously over the last five years, and we do at least one release every week, often more. Given this scenario, you can imagine how vital a role unit tests play.
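As a rough sketch of the feature-toggle part of that approach (the class and toggle names here are hypothetical, not Trainline's actual implementation), each application build can consult its own configuration to decide which features to expose:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch: in practice the values would come from
// per-application configuration files rather than a hard-coded map.
public static class FeatureToggles
{
    private static readonly Dictionary<string, bool> Toggles =
        new Dictionary<string, bool>
        {
            { "SeatSelection", true },   // enabled for this application
            { "MobileTickets", false }   // disabled for this application
        };

    // Unknown toggles default to "off", so new features stay hidden
    // until each application explicitly opts in.
    public static bool IsEnabled(string feature) =>
        Toggles.TryGetValue(feature, out var enabled) && enabled;
}

// Usage: branch behaviour per application build.
// if (FeatureToggles.IsEnabled("SeatSelection")) { /* show seat picker */ }
```

Defaulting unknown toggles to off is the conservative choice when nine applications share one code base: shipping a feature to one app cannot accidentally switch it on in the other eight.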
We depend a lot on our unit tests (among other things, of course) to ensure that releases go smoothly and that, when we add that shiny new feature that enables the customer to change her seat, it does not break the feature that lets her get the ticket on her mobile! To achieve this, our team adheres strictly to TDD, and we have over 10,000 unit test cases that are run every time a commit is pushed to GitHub; this number keeps growing with every new feature we develop.
How could we run our unit tests faster?
OK, so we have great unit-test coverage. The side effect, however, is that running the unit tests usually took more than 5 minutes. That is not a very big number by itself, but it becomes an irritant when we run tests on our developer boxes multiple times a day before pushing our commits to git. On a given day, a developer could spend 15-30 minutes waiting for the tests to run and the build to finish. So how could we speed up this process?
It turns out that the NUnit 3 test engine can run tests in parallel, and we hoped this would reduce our test execution time. In addition, we looked at how Rake's multitask could help us reduce our overall build times. Read on to see what happened…
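As a rough illustration of the NUnit 3 mechanism (the attributes are NUnit's; the fixture and test are hypothetical placeholders, not our real suite), parallel execution is opted into with `[Parallelizable]`, and the worker thread count is capped with an assembly-level `[LevelOfParallelism]`:

```csharp
using NUnit.Framework;

// Allow up to 4 worker threads, and run test fixtures in parallel by default.
[assembly: LevelOfParallelism(4)]
[assembly: Parallelizable(ParallelScope.Fixtures)]

namespace Example.Tests
{
    [TestFixture]
    // ParallelScope.All lets the tests inside this fixture run in
    // parallel with each other as well as with other fixtures.
    [Parallelizable(ParallelScope.All)]
    public class FareCalculatorTests
    {
        [Test]
        public void EmptyBasketCostsNothing()
        {
            // Placeholder assertion; real tests would exercise production code.
            Assert.That(0m, Is.EqualTo(0m));
        }
    }
}
```

On the Rake side, the change is even smaller: declaring a task with `multitask` instead of `task` makes Rake run its prerequisites in parallel threads rather than sequentially.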
Whenever I reach a milestone or finish something, I pause and reflect on the journey and what I have learnt. I’m going to talk about our journey of migrating one of our systems from MSMQ to SQS.
When we started, we had a system that looked like this: