How Do I Unit Test This?

Hang out in the IRC, Slack, or Gitter rooms for open source projects for a few days and before too long you’ll see someone ask how to unit test some part of their app. It’s particularly common with large frameworks that encourage inheritance over composition, which usually results in a great deal of environmental setup standing in the way of efficient, automated testing on a unit basis. It sometimes makes me feel bad, but usually my answer is: you can’t.

If you’ve lashed your code so tightly to your framework that you need to jump through hoops to test it, then you’re almost certainly not unit testing it. Testing code that’s in a subclass of ActiveRecord::Base is an integration test. Testing how an Angular component renders using the framework’s templating system is an integration test. It’s hard to write a unit test when your app is forcing you to write an integration test.

Why do we even test at all?

When it comes to testing—any testing—one must always keep in mind that the actual point of the testing is to help us write better software, not to meet some quota for code coverage or tests written. So many devs are content to write bullshit, space-filling tests just to keep up appearances, or out of a sense of obligation. The emphasis in some communities (cough-ruby-cough) on “test-driven” design or development is particularly problematic here, since too often there’s an over-emphasis on writing the test first as the only hallmark of TDD, and a complete ignorance of how to let the test drive the code—the actually important part.

“Best practices” or “being idiomatic” aren’t magical outs here, either. Design patterns and best practices are great, insofar as they actually result in good code design. It is self-evident, however, that if the way a developer codes and tests is predetermined by cookie-cutter conventions then that developer is not letting the tests drive anything other than the clock. While this often puts the lie to claims of test-driven development, it isn’t just a concern for aspiring practitioners of TDD. Awkward, jury-rigged, and brittle tests should be setting off alarms and clueing us into code smells and technical debt whether we write tests before the code or after.

Just calling it a unit test doesn’t make it a unit test

When it comes to unit testing in particular—where TDD is most natural and effective—there are two rules to follow in order for something to be a unit test, in a meaningful sense:

  1. You need to be able to mock any dependencies of the unit
  2. You need to own all the dependencies of the unit

These rules, just like the rule of writing tests at all or writing the test first, are in service of a higher goal: allowing the process of writing tests to make it clear to us where our design needs to change. This is the most basic way tests “drive” development—by encouraging design choices that make it possible to test in the first place.

Two approaches to mocking

One approach to the first rule is to figure out how to reduce and simplify your dependencies. Just by chopping up one class into several—each with one or two dependencies—the code almost magically becomes much more easily tested, refactored, and extended. This is a classic example of the test driving the improvement of your code by encouraging the separation of responsibilities.
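
To make that concrete, here’s a small hypothetical sketch (the class and method names are invented for illustration): once the collaborator is handed in rather than reached for, the “mock” is a one-line, plain-Ruby stand-in.

# A hypothetical report class: its one collaborator is handed in, not reached for.
class MonthlyReport
  def initialize(source)          # source: anything that responds to #rows
    @source = source
  end

  def to_html
    "<ul>" + @source.rows.map { |row| "<li>#{row}</li>" }.join + "</ul>"
  end
end

# The "mock" is now a one-line, plain-Ruby stand-in; no database, no framework.
FakeSource = Struct.new(:rows)
MonthlyReport.new(FakeSource.new(["a", "b"])).to_html
=> "<ul><li>a</li><li>b</li></ul>"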

The second approach is to look at your oodles of dependencies and piss and moan about all this mocking you have to do. Slog through it for a few hours. Pop into a chatroom. Let someone tell you that you can just test directly against the database. Write an integration test disguised as a unit test. Finally, call it a day for the rest of your career.

One reason so many developers insist on the tests adapting to fit their design, rather than the other way around, is because it isn’t actually their design at all. Frameworks that encourage code to be piled into a handful of classes that fit a set of roles determined by some development methodology do developers a disservice. Frameworks aren’t bad, necessarily, but when it’s considered “best practice” for the developer to forfeit all responsibility for their app’s architecture and their code’s design, it makes it impossible for the developer’s tests to inform the development process.

If you start off by subclassing someone else’s code you’ve almost certainly fallen afoul of the first rule right from the start. You’ve introduced a massive, irresolvable dependency into the very foundation of your code. Sometimes you’ll have little choice but to rely on scaffolding provided by the dependency in order to test your own code, integration style.

The two operative words in the first rule are “you” and “able.” The rule isn’t “It must be theoretically possible for someone, with unbounded knowledge of the dependencies, to mock the dependencies,” or even “You need to have mocks for the dependencies, from wherever you can get them.” If you can’t look at the class and immediately know what needs to be mocked and how to mock it, that should be a huge red flag.

Only mock what you own

The second rule is a consequence of a third rule: only mock what you own. You own your project’s pure classes, and to the extent that you subclass you own whatever logic you’ve added. You don’t own the base classes, despite their behaviour being incorporated via inheritance. As with the other rules, the point here isn’t the face value so much as the consequences: by only mocking your own classes, you’re pushed into building out facade and bridge classes to formalize the boundaries between your app and any external systems.

Tests are much more confidence-inspiring when the mocks they depend on are rock-solid doubles of tiny classes each with a single responsibility. Tests that instead stub one or two methods on a huge dependency are brittle, are prone to edge cases, increase coupling, and are more difficult to write and tweak with confidence. Tests of classes that themselves have to be stubbed are almost worthless.
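
As a rough sketch of that kind of boundary (the GeocoderClient name and the service it wraps are made up for illustration), a thin facade like this is all the rest of the app, and the rest of the tests, ever have to know about:

require "net/http"
require "json"

# The only class in the app that knows the external service exists.
class GeocoderClient
  def initialize(host = "geocoder.example.com")
    @host = host
  end

  # Returns a plain hash like { lat: 1.23, lng: 4.56 }
  def locate(address)
    body = Net::HTTP.get(URI("https://#{@host}/locate?q=#{URI.encode_www_form_component(address)}"))
    JSON.parse(body, symbolize_names: true)
  end
end

# Tests elsewhere double this tiny interface, never the HTTP library underneath.
FakeGeocoder = Struct.new(:result) do
  def locate(_address)
    result
  end
end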

Thinking outside of the class

Following these three rules can help put the focus back on writing well-structured, maintainable code. It’s not always obvious, however, what changes need to be made. If a developer is staring at a class that descends from ActiveRecord::Base, includes a couple of plugins, and carries a raft of methods that all need to be tested, it’s understandable to look askance at the notion that AR and those plugins should be expunged in order to test the class. After all, without AR they don’t even have a class to begin with, right? The path of least resistance all too often is just to write an integration test using the entire stack.

In these situations one must keep in mind that “unit” and “class” are not identical, and to ask not “how can I possibly remove these dependencies from my class?” but “how do I remove my code from this class, which I don’t really own?” By moving those methods off to other classes as appropriate (formatting, serializing, and complex validations are the sorts of things that sit on an AR class but can easily be broken out into their own plain old Ruby classes) we accomplish the same goal. So much ActiveRecord-dependent code can be refactored to depend only on a hash (or OpenStruct) of attributes.
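
For instance (a hypothetical sketch, not anyone’s production code), a formatter that used to live on the ActiveRecord model needs nothing but a hash of attributes:

# Plain old Ruby: no ActiveRecord, no database, nothing to stub.
class UserPresenter
  def initialize(attributes)
    @attributes = attributes        # a plain Hash of attributes
  end

  def display_name
    [@attributes[:first_name], @attributes[:last_name]].compact.join(" ")
  end
end

UserPresenter.new(first_name: "Ada", last_name: "Lovelace").display_name
=> "Ada Lovelace"

# The AR model shrinks to a one-line delegation:
#   def display_name
#     UserPresenter.new(attributes.symbolize_keys).display_name
#   end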

It’s possible to use monolithic frameworks and still care about good design. Finding ways to take ownership of our code away from the framework is crucial. Your tests should be a searchlight, pointing out places where your code is unnecessarily tangled up in someone else’s class hierarchy.

Preventing bad testing habits

Developers often begin their professional life with a few high-level heuristics that are, unfortunately, continually reinforced. A few relevant ones:

  1. Minimize the number of classes to write and test
  2. “DRY” code up by relying on libraries as much as possible
  3. MVC means my app is made up of models, views, and controllers

It’s not difficult to see how these lead to large, fragmented classes tightly coupled to oodles of dependencies. The resulting code is going to be difficult to test well in any circumstance, and will bear little resemblance to anything that was “test-driven.” I’d like to suggest some replacements:

  1. Minimize the number of dependencies per class.
  2. Minimize the number of classes dependent on an external dependency.
  3. Write the code first, worry what “category” each class falls into later.

The first will result in more classes, but they’ll be more easily tested, refactored, and maintained. The second encourages dependencies to be isolated into bridge, adapter, or facade classes, keeping the dev’s code dependent on interfaces he or she owns. The third breaks the MVC (among others) intuition pump that says every class we write has to fit one of two or three possible roles. A dev utilizing these heuristics will find themselves asking “how do I unit test this?” far less frequently.

Now, “how do I integration test this” is a different question entirely… more on that later.

Monads and Ruby

Monads have a weird and varied reputation outside of the FP universe. For Rubyists, in particular, monads and functional programming can look alien and nearly unparseable. Ruby is aggressively object oriented—it doesn’t even have first-class functions, technically—and the foreign nature of a lot of the background necessary to grok monads leads to indifference at best and hostility at worst.

On that score, I once overheard (after mentioning monads at a Ruby meetup) someone define a monad as “something assholes talk about to seem smart.” There is way too much knee-jerk rejection by some in the Ruby community of things they don’t immediately understand or find comfortable, but that’s another post—or multi-year psychological survey—entirely. This isn’t an article about why monads are awesome and why Ruby devs should love them.

Other than the indifferent and the hostile, there’s also a weird middle group of Ruby developers who are enthusiastic about monads, but who drastically overthink their implementation. I recall coming across a project that was mostly just an ersatz implementation of algebraic data types and type checking in Ruby. I definitely appreciate the benefits of those things, but Ruby just does not have either, and besides that we don’t need them to use monads—in Ruby or any other language.

What is a monad? A refresher

At their core, monads are just another design pattern, like the command or visitor patterns. Here’s a simple definition of a monad (simple by monad standards, anyway), courtesy of Jed Wesley-Smith and paraphrased by me:

A type T which encapsulates value a (T a), and for which there exist
functions such that:

a -> T a
T a -> (a -> T b) -> T b

What’s interesting about this definition is that, in a philosophical sense, something is a monad regardless of whether you actually write down and implement those functions in code. Either the functions exist, and someone somewhere could write them and use them in their code, or they don’t.

This is the concept of mathematical realism, which underlies the notion that we “discover” mathematics as opposed to invent it. Max Tegmark, an MIT physicist, extends this into his hypothesis that the universe itself is essentially mathematical, and, as a consequence, all possible mathematical structures exist, in some meaningful sense. Here the idea is much simpler: if it is possible for an object to be a monad, then it is a monad, whether that was your intention, or not, and regardless of the extent to which it looks like a monad in another language.

As Rubyists, things shake out even more simply since we don’t have much in the way of typing to worry us. If we implement #bind (the second function), there’s no mechanism for defining or enforcing type signatures, so #bind and #map have the exact same signature in Ruby. Thanks to duck typing the only real type signature is arity. Remember, though, that a monad in the general case is a mathematical entity, not a type or a class, and as such it is and remains a monad only so long as we use it as one.

Ruby almost has a built-in monad already

We’ve already recognized the similar shape of #bind and #map, but what about that first function, usually called #return? #return, being a function that takes a value and returns an instance of a type, is, in Ruby, just a constructor. It isn’t strictly identical: return is one universal interface, while each Ruby class exposes its own constructor, with its own name and quirks. That’s a direct consequence of dynamic typing and the differing natures of OO and functional programming.
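
Concretely, “return” just wears whatever name the constructor happens to have (Some here is the class we’ll define a little further down):

Array(1)      # => [1]                 "return" for the Array monad
Some.new(1)   # => #<Some @value=1>    "return" for Maybe, once Some exists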

Array, of course, has both an initializer and #map. Can Array#map function as Array#bind? Unfortunately, not quite. Let’s look at that signature again.

T a -> (a -> T b) -> T b

In terms of Array, this looks like

Array a -> (a -> Array b) -> Array b

So, #bind takes a block that returns an Array of a given type, and then itself returns an Array of that given type. #map doesn’t work that way. If you tried to use #map like #bind you would get the following:

[1, 2, 3].map { |x| [x + 1] }
=> [[2], [3], [4]]

Clearly, not what we wanted. #map gave us an Array of Arrays, not an Array of Integers like we’d expect from #bind. Luckily for us, it looks like there’s a simple transform from one to the other. We’re just one #flatten call away from having an Array monad, in theory:

refine Array do
  def bind(&block)
    map(&block).flatten
  end
end

[1, 2, 3].bind { |x| [x + 1] }
=> [2, 3, 4]

With just five lines of code we seemingly now have a monad in Ruby, no complex type enforcement necessary. It remains, however, incumbent on us, the developer, to maintain fidelity to the monad requirements, as with all other informal contracts in our code. With our implementation above we could ignore the laws and use #bind exactly the same as we’d use #map, and it, surprisingly or not, would work just fine:

[1, 2, 3].bind { |x| x + 1 }
=> [2, 3, 4]

This works thanks to the specifics of our implementation, and we all know not to depend on knowledge of an implementation rather than interface when we rely on a library method, right? In fact, in this case it’d be an even bigger mistake, because the implementation is flawed. It works for Arrays of numbers, strings, your own classes, etc… but it doesn’t work for Arrays of Arrays.

[[1], [2], [3]].bind { |x| [x + [4]] }
=> [1, 4, 2, 4, 3, 4]

The monadic law has been broken: #bind has given us an Array of Integers, instead of an Array of Arrays of Integers. We can tweak our implementation to fix this, but in doing so we’d break (and have to fix) any uses of #bind that ignore the law and treat it like #map:

refine Array do
  def bind(&block)
    map(&block).reduce(:+)   # concatenate, flattening exactly one level
  end
end

[[1], [2], [3]].bind { |x| [x + [4]] }
=> [[1, 4], [2, 4], [3, 4]] # Good!

[1, 2, 3].bind { |x| x + 1 }
=> 9 # Bad!

[1, 2, 3].bind { |x| [x + 1] }
=> [2, 3, 4] # Good!

So that leaves us with an Array monad, which is of limited usefulness without the other List goodness in Haskell. A far more universally useful monad is Maybe (or Optional for Swift devs).

Call me (maybe)

Maybe represents the possibility of there being a value, or there not being a value, without using nil. This means we can call methods on the result of an operation without worrying about which situation we’re dealing with. If we actually do have a value, calling #bind (or other related methods) operates on the value. If we don’t have a value, #bind short-circuits and simply returns the empty Maybe. It’s basically Rails’s #try on steroids.

# foo stands in for any method that returns a Some wrapping its result, or None
b = foo(5)
=> #<None>
b.bind { |x| Some.new(x * 2) }
=> #<None>

c = foo(2)
=> #<Some @value=2>
c.bind { |x| Some.new(x * 2) }
=> #<Some @value=4>

Maybe and Optional are the names of types for this monad in Haskell and Swift, respectively, but that doesn’t mean there has to be a corresponding class in Ruby. Haskell’s and Swift’s implementations use algebraic data types, which are great, but they’re not objects, and Ruby doesn’t have anything similar. So when we talk about Maybe in Ruby, we’re not actually talking about anything called Maybe in our code, but the coupling of two types that we can implement: Some and None. In a sense that’s all Maybe/Optional are as well: a combination (called a tagged union) of two other types.

Here are the Haskell-ish type signatures for Maybe (Haskell uses Just and Nothing rather than Some and None):

Maybe a = Some a | None
bind :: Maybe a -> (a -> Maybe b) -> Maybe b

#bind is the same as before: it takes a function that receives the value and returns another Maybe, and itself returns a Maybe. So, a block passed to #bind has to return either Some a with a new value or None without one. There are no other choices. Ruby obviously will let us return anything we feel like, or even however many different kinds of anything we feel like. We can’t rely on type checking to help us here any more than we ever can, and trying to build some ersatz type enforcement just for this special case makes no more sense than it ever would. So, what do we do? Well, we return either Some a or None. It’s as simple as that.

Some = Class.new do
  def initialize(value)
    @value = value
  end

  def bind(&block)
    block.call(@value)
  end
end

None = Class.new do
  def bind
    self   # no value, so short-circuit
  end
end.new    # a single shared instance

That’s it. That’s all we need in order to conform to the laws. There is, almost literally, nothing to it when written down in actual code. Concerned that #bind will let you return anything you damn well please? So will almost any other Ruby method, so don’t sweat it too much. I don’t mean “don’t test your code,” or anything so laissez-faire, but don’t get too caught up in the lack of type checking. That’s a red herring, and either way your monadic code isn’t any worse off than the rest of your Ruby code in that regard.

One very real downside to forcing your code to care about return types is that you lose the benefit of duck typing, and couple your use of a monad to your specific implementation. Theoretically, if you were to use a library or other shared code with methods that returned a Maybe, its return values should be interchangeable with your implementations. Some#bind will work as expected, None#bind will short-circuit as expected, and so on. Now, there might be other differences you care about (particularly around what utility methods are implemented/exposed), but when it comes to the monad type, the behaviour of #bind is the only thing that matters.

One more thing to be careful of: #bind has to return a monad of the same kind. Returning a Some from an Array#bind call, or an Array from a Some#bind, gives you something that is still a monad and still responds to #bind in turn, but it isn’t a valid invocation. You can nest #bind calls, of course, but when it comes time to return, make sure you’re returning the same kind of monad you started with.

arr = [2, 5, 7, 9, 4]

greater_than_5 = ->(x) { x > 5 ? Some.new(x) : None }
even           = ->(x) { x.even? ? Some.new(x * 2) : None }
increment      = ->(x) { Some.new(x + 1) }

composed       = ->(x) { Array(even.(x)
                                   .bind(&increment)) }

arr.bind(&composed) - [None]
=> [#<Some @value=5>, #<Some @value=9>]

I have, in my own projects, a pair of module methods—Monad.bind and Monad.compose—for simplifying monad composition. compose actually is just a bit of sugar on top of reduce and bind, which does the heavy lifting. By using Procs and composition it’s trivial to build up a set of simple transforms into more complex operations. They’re very easily tested, as individually they’re just procs which respond to #call, same as always.
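
Roughly, and simplified (this is a sketch of the idea rather than the exact implementation), those helpers amount to something like this:

module Monad
  # Delegate to whatever #bind the monad itself defines.
  def self.bind(monad, &block)
    monad.bind(&block)
  end

  # Fold a list of monad-returning procs into one proc that chains them with #bind.
  def self.compose(*fns)
    ->(monad) { fns.reduce(monad) { |m, fn| Monad.bind(m, &fn) } }
  end
end

pipeline = Monad.compose(greater_than_5, increment)
pipeline.call(Some.new(7))   # => #<Some @value=8>
pipeline.call(Some.new(3))   # => #<None>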


The Either monad is similar to Maybe, except instead of just one value or nothing you get either a Left with a value or a Right with a value. Left often represents an error, and Right a successful result. The implementations are equally simple:

Right = Some.dup # Right is identical to Some

Left = Class.new(Some) do
  def bind    # Left holds a value like Some, but #bind short-circuits
    self
  end
end
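
A quick, made-up usage example, with Left carrying the error and Right carrying the result:

parse_age = ->(input) { input =~ /\A\d+\z/ ? Right.new(input.to_i) : Left.new("not a number") }
check_age = ->(age)   { age >= 18 ? Right.new(age) : Left.new("too young") }

parse_age.("42").bind(&check_age)    # => #<Right @value=42>
parse_age.("abc").bind(&check_age)   # => #<Left @value="not a number">
parse_age.("7").bind(&check_age)     # => #<Left @value="too young">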


Does the rigid type enforcement of a language like Haskell or Swift help catch bugs in your use of monads that you might trip over in a permissive environment like Ruby? Absolutely. That doesn’t mean monads have no value to Rubyists, or that we have to turn the language on its head to mine that value. We can build some nice APIs on top of these basic implementations, of course, to add some safety or convenience, but at their core there’s nothing about monads that’s incompatible with Ruby or that even qualifies as nonidiomatic Ruby.

Using Github OAuth For Sign-in

This weekend I went to implement sign-in with Github (Github specifically, since my app is dev-focused). Having used it to sign into apps in the past, I had the vague idea that Github was an OpenID provider. It’s not; it’s just a plain old OAuth2 server, so even more than usual it’s incumbent on the developer, i.e. me, to ensure it is used securely. There are a few major considerations.

It should be well known (but probably isn’t) that OAuth 2, by itself, is an authorization protocol, not an authentication protocol. In authorizing a user, we want to gain access to some protected resource. In authenticating a user, we want to affirmatively establish their identity. OAuth specifies a rock-solid flow for the former, but mostly ignores the latter. This might seem counterintuitive if you’re used to thinking of OAuth as an authentication mechanism, but when you consider that the point of the protocol is to enable password-less access to data owned by a third-party service, it actually makes a lot of sense that authentication for your app is out of scope.

It’s still possible to use OAuth2 as part of a secure authentication process. In fact, there are extensions that build on OAuth2 to create such an authentication protocol, such as OpenID Connect or Facebook Login. OAuth isn’t inherently insecure, it just doesn’t go that last mile to provide the functionality we want by default. Absent implementation of an additional authentication layer spec, there are steps that an application developer can take to use OAuth for authentication.

Properly configured SSL is absolutely essential

SSL can be an incredible pain to set up: acquiring the certificates, processing them in obscure ways, installing them, configuring it all, and then dealing with potential errors and issues in your code. OAuth 2 relies heavily on SSL to provide security, however, and as such eschews techniques such as token signing that could provide fallback security. Improper use of SSL can expose client secrets in addition to communications between your server and the browser. When faced with certain errors it can be tempting to toggle obscurely named settings that seem to resolve the error. Oftentimes this has the effect of severely crippling SSL, for instance by disabling certificate verification.
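
For example, with Ruby’s standard library (a sketch of the token-exchange request; the only line that matters here is verify_mode), the “obscurely named setting” to avoid is exactly this one:

require "net/http"
require "openssl"

uri  = URI("https://github.com/login/oauth/access_token")
http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl     = true
http.verify_mode = OpenSSL::SSL::VERIFY_PEER    # the default; keep it
# http.verify_mode = OpenSSL::SSL::VERIFY_NONE  # "fixes" certificate errors by removing the security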

Only use the server-side authorization grant

Using the implicit grant is inherently insecure, as a “bad actor” app developer can obtain an access token for someone who has logged into their app and has also logged into your app. That token can then be injected into your app. Your app would have no way of knowing that the access token provided was issued to another app entirely. If a user is logged in based solely on verifying this access token, then your user’s account can be completely compromised. Github, for its part, does not support the implicit grant.
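
In concrete terms (a sketch; the callback URL and environment variable names are illustrative, though the Github endpoints are the real ones), the server-side flow means the browser only ever carries a short-lived code, and your server exchanges it, along with the client secret, for the token:

require "net/http"
require "securerandom"

# Step 1: redirect the browser to Github's authorize endpoint (no secrets involved).
state = SecureRandom.hex(24)   # stash this in the session; see the next section
authorize_url = "https://github.com/login/oauth/authorize?" + URI.encode_www_form(
  client_id:    ENV["GITHUB_CLIENT_ID"],
  redirect_uri: "https://myapp.example.com/auth/callback",
  scope:        "user:email",
  state:        state
)

# Step 2: back on the server, exchange the returned code for an access token.
def exchange_code(code)
  uri = URI("https://github.com/login/oauth/access_token")
  res = Net::HTTP.post_form(uri, client_id:     ENV["GITHUB_CLIENT_ID"],
                                 client_secret: ENV["GITHUB_CLIENT_SECRET"],
                                 code:          code)
  URI.decode_www_form(res.body).to_h["access_token"]
end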

Block cross-site request forgery attacks

OAuth2 provides a ‘state’ parameter that, when provided as part of the authorization request, will get returned as part of the callback. This parameter should be an unguessable string that can be verified, usually by tying it to the session. Cross-site attackers won’t be able to guess the ‘state’ and thus won’t be able to trick your app into completing an OAuth flow that they initiated. This sort of attack is obviously significantly more dangerous when it comes to apps that export data.
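
Generating and checking it takes only a couple of lines. Here’s a framework-neutral sketch (the session hash and returned_state stand in for your real session store and the callback’s params; Rack::Utils is available in any Rails or Sinatra app):

require "securerandom"
require "rack/utils"

# Before redirecting: generate an unguessable value and tie it to the session.
session = {}                                   # stand-in for your real session store
session[:oauth_state] = SecureRandom.hex(24)

# In the callback: refuse to continue unless the returned state matches exactly.
returned_state = session[:oauth_state]         # stand-in for params["state"]
Rack::Utils.secure_compare(session.delete(:oauth_state).to_s, returned_state.to_s)
# => true; anything else should abort the sign-in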