In the last article, I explained why it’s a bad idea to test your Ruby code against real API endpoints and introduced WebMock as one option for stubbing out those integrations and keeping your tests speedy and manageable even as your suite grows.

That post used a really basic example application to show how to structure your tests under a simple use case. But writing and testing distributed applications is rarely that simple, and most of the time, you’ll find yourself needing to stub whole classes of requests and handle a number of common edge cases. So using the patterns from last time as a baseline, let’s now take a look at some other practical WebMock techniques to help you use it more effectively.
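As a starting point, here's a minimal sketch of what stubbing a whole class of requests can look like. The catalog host, paths, and payload below are hypothetical placeholders; the technique is WebMock's regular-expression URI matching, which lets one stub cover every request to a host rather than a single hard-coded URL:

```ruby
# A minimal sketch of stubbing a whole class of requests at once.
# The catalog host, paths, and payload are hypothetical placeholders.
require "net/http"
require "json"
require "webmock/rspec"

RSpec.describe "catalog API stubbing" do
  before do
    # One stub covers every GET under api.example-catalog.com, regardless of
    # path or query string, by matching the URI with a regular expression.
    stub_request(:get, /api\.example-catalog\.com/)
      .to_return(
        status: 200,
        body: { items: [] }.to_json,
        headers: { "Content-Type" => "application/json" }
      )
  end

  it "returns the canned payload for any path on the stubbed host" do
    response = Net::HTTP.get(URI("https://api.example-catalog.com/v2/items?page=3"))
    expect(JSON.parse(response)).to eq("items" => [])
  end
end
```

The regex approach is handy when a service is called from many places in your code and you just need *something* sensible to come back; you can always layer more specific stubs on top for the cases a particular test cares about.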

I’ve recently been working on a number of projects built on multiple Rails applications, microservices, and data from third-party providers. I can tell you one thing for sure: when your application is flinging JSON blobs all over the place, you can’t use the same direct testing style that you would with a monolith. If you do, you create all sorts of problems for yourself, including:

  • Lousy test performance due to network overhead
  • Unexpected failures caused by connectivity issues, API rate limiting, and other problems
  • Undesired side effects from using a real web service (possibly even in the production environment)

But the thornier problem is the lack of control you have when testing against live APIs. Working against a real system, it’s tough to exercise your code against the full range of reasonable (and unreasonable) responses, so you end up stuck testing a few “happy path” scenarios and perhaps whatever cases happen to throw an exception somewhere in the stack.
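Here's a sketch of what that control looks like in practice. The payments endpoint is a hypothetical placeholder, and the test simply hits it with Net::HTTP to keep the example self-contained; the point is that WebMock can force a failure status, a recovery on retry, and even a network timeout on demand (WebMock surfaces the timeout as the client library's own timeout error, which for Net::HTTP is Net::OpenTimeout):

```ruby
# A minimal sketch of forcing "unreasonable" responses with WebMock.
# The payments endpoint is a hypothetical placeholder.
require "net/http"
require "webmock/rspec"

RSpec.describe "payments API failure handling" do
  let(:uri) { URI("https://payments.example.com/charges") }

  it "returns the forced 503, then succeeds on the next call" do
    # The first request gets a 503; the chained .then makes the retry succeed.
    stub_request(:post, "https://payments.example.com/charges")
      .to_return(status: 503)
      .then
      .to_return(status: 201, body: '{"id":"ch_123"}')

    expect(Net::HTTP.post(uri, "{}").code).to eq("503")
    expect(Net::HTTP.post(uri, "{}").code).to eq("201")
  end

  it "raises the client's timeout error when the stub forces a timeout" do
    stub_request(:post, "https://payments.example.com/charges").to_timeout

    expect { Net::HTTP.post(uri, "{}") }.to raise_error(Net::OpenTimeout)
  end
end
```

Against a live API you'd be waiting for the provider to have a bad day before you could see how your code copes with these responses; with stubs, the worst day imaginable is one line of setup.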

As adoption of agile methodologies increases, more teams are finding user stories to be a useful tool for framing discussions with customers. By defining features using simple, clear language and emphasizing the direct benefits to end users, project teams can organize and plan development activities in a way that’s accessible to both business and technical stakeholders.

But like many other agile practices, much is lost between theory and application. The original concept that drove the invention of user stories has been obscured by increased popularity and wider adoption. As a result, many development teams are trying to use the technique to solve problems it was never intended to address and seeing lackluster results.

Understand this: user stories are not a lightweight substitute for traditional requirements management and documentation. They’re great as a high-level outline of project features, but they’re not a solution to every problem, and they’re certainly not a replacement for a good specification. By combining user stories with selected, well-maintained documentation, development teams can better address the full range of needs that every project will encounter.

Welcome to the future, coders. We’ve got large publishers and small businesses churning out books and running courses and bootcamps to produce armies of freshly minted coders. Our text editors and tools write a lot of the code for us - at least the most tedious boilerplate code. We have access to servers we can spin up and spin down for pennies an hour. There are active communities of experts online 24 hours a day, eager to answer any questions we might have along the way. The support for people learning to code or wanting to improve has reached a level of maturity that would have been unimaginable decades ago.

But while all this has been going on, we’ve hit the wall in terms of our ability as an industry to solve real-world problems for consumers. Improvements to hardware have allowed us to add more window dressing to the systems we build, but what those systems are able to do remains mostly unchanged. Software projects still routinely come in over time and over budget, sometimes to an absurd degree, and users and project sponsors are still routinely disappointed with the results they receive. The root causes are at this point fairly well known:

  • Unrealistic and poorly communicated objectives and schedules
  • Insufficient or absent requirements definition and management
  • Solutions that fail to solve users’ most urgent problems
  • Underestimation due to lack of detailed planning, overconfidence, or management or peer pressure
  • Failure to identify and confront critical risk factors
  • Poor management oversight and visibility into the development process
  • Lack of end user involvement throughout the development process

It’s been about 10 days since I returned home from MicroConf Europe 2016. MicroConf is a special event for those in attendance - part industry event, part seminar, part summer camp, part support group. And every year, I come away with a shopping list of tactics to try and tasks to be done AND the motivation to dig in and get started on them. This year was no different. Since getting back to my desk, I’ve already:

  • Collected relevant statistics on my sales funnel since the beginning of 2016.
  • Filled in my content calendar for September and October.
  • Finished half of the book that was recommended to me by three different people.
  • Sketched out the broad strokes for two new projects.
  • Planned a head-to-head test to compare the two and decide which one to work on first later this year.

Setting aside the tactical for a moment, though, a lot of the value of MicroConf comes from the larger lessons drawn from the talks and hallway conversations. Sometimes, these are common threads that run through many interactions but never appear on a slide. Now that I’ve had a chance to rest and recover, I took some time this week to think through some of the broader themes from this year’s conference.

(Special thanks to Christoph Engelhardt, the official MicroConf Europe scribe. Without his always great conference notes to refer to, I wouldn’t have been able to collect my thoughts nearly as easily.)