Engineers love design patterns. They make our jobs easier, because they provide us with a lot of time-tested, proven ways of solving common problems. We also love design principles, which help guide good code design. Confusing the two is a common, and unfortunate, mistake that can lead an engineer down a deep well of poor code design that can be hard to escape. We all know a principle when we see one (Single Responsibility Principle, etc), but what, exactly, is a design principle?
A design principle is a top-down, goal-directed, descriptive heuristic.
It’s top-down, because design is top-down: high level concepts and systems are recursively decomposed into smaller systems and finally into classes. That doesn’t mean starting with a complete spec, but it does mean that, whatever your methodology, the end result needs to make sense from that perspective. If your end result has no high-level design then the process has failed.
It’s goal-directed, because it’s in service of external goals and not a goal in and of itself. Those goals are many but include: ease of testing, long-term maintainability, reusability, and velocity. A design principle that makes any one of those things worse is less than useless.
It’s descriptive, because it describes a likely ideal outcome without necessarily prescribing the means to achieve it. The difference is between giving someone enough information to reliably identify a cow, and that person trying to reconstruct a cow from that description without seeing one in advance.
It’s a heuristic, because it’s a rule of thumb, not a law. Heuristics help us get into and stay in the ballpark by cleaving off unlikely or obviously unpalatable alternatives. Once there, though, they don’t help you identify the right answer. Occam’s razor, for instance, is a heuristic that is often abused. It says that the simplest explanation should be preferred to others. The point isn’t that any given simple explanation is necessarily more correct than a more complex one; it’s that simple explanations are preferable as a practical matter to more complex ones (and note the two need not be mutually exclusive), and, more importantly, that as a basis of inquiry they’re likely to prove more fruitful.
So when applying a design principle, keep these things in mind:
Are you applying it as part of an actual design process? Are you going to end up with a series of well-structured, coherent, and reasonable systems and classes, ordered into appropriate levels of abstraction? At any given level and for any given system, can you hold the entire thing in your head and comprehend it reasonably well?
Are you making things easier for yourself in the long term, or just making a bigger mess on a large scale for the sake of applying a principle on the immediate, small scale?
Are you just applying it by rote, as if it were an algorithm? Are you judging your use of the principle from the perspective of finished, high-quality software, or by whether some cookie-cutter pattern has been universally applied everywhere possible?
Have you considered whether it even applies, or whether it applies in the same way as it has in other situations?
(This post has been sitting unpublished for months—oops.)
Lately, I’ve been seeing these mistakes everywhere—from blog posts, to newspapers, to professionally edited and published books. I doubt they’re actually much more prevalent lately, it’s more likely that I’m just noticing them more and am increasingly bothered by them. It’s surprisingly difficult not to correct people all the time (I sometimes slip up.) Part of my strategy for coping is to write things like this post, so I can chew glass without aiming my wrath at any particular poor soul.
Correcting these poor habits can be a double-edged sword: often people are so attached to the incorrect “rules” they’ve internalized that they will try to correct your already correct English.
Words that end in s are not special
You still make them possessive in the usual way
The most important thing to keep in mind is that the variations in spelling and apostrophe usage exist to match pronunciations, not to avoid having an s on both sides of an apostrophe. When you leave an s off of words that require it, you’re changing how the words sound, both when said aloud and in your readers’ heads.
Most words are made possessive by adding ’s. This includes words that already end in s, with a notable exception: words that are made plural by adding an s are then made possessive by adding just the apostrophe. Words that don’t have an s after the apostrophe are pronounced the same as words without an apostrophe. Words that have an s after the apostrophe are, for the most part, pronounced the same as the plural of the same word. An apostrophe by itself doesn’t affect the pronunciation at all.
I was called into my boss’s office.
Boss’s sounds like bosses.
Boss’ would sound like boss.
Boss’s, bosses, and bosses’ all gain both an s sound and an s, in some fashion.
“The officers ate at the officers’ club.” Same pronunciation, same spelling.
Say the word aloud. If it has an extra s sound at the end, then write it after the apostrophe.
They are not their own plurals
Words that end in s are made plural by adding es, not an apostrophe, ’s, or by leaving the word unmodified. Some get an additional s before the es, like gasses, but some don’t, like lenses. A lone apostrophe is then added to make the word possessive, if applicable.
I met with my two bosses to discuss the lenses’ specifications.
First names are not special cases
Both of these rules apply to names. First names that end in s get an additional s, no exceptions. That includes, for instance, Curtis’s, Atticus’s, Jesus’s, Chris’s, James’s, and Charles’s. To make them plural, add es. To make them both plural and possessive, add es’.
I went with the Rosses to Charles’s house.
He sat at the Jameses’ table during lunch.
If you say Chris’s and spell it Chris’, that’s not a stylistic choice: you’re leaving off an entire syllable, and are just plain misspelling it. This applies doubly to contractions. Writing Chris’ waiting in the car is just nonsense.
Last names aren’t, either
Due to widespread pronunciation of some last names when they’re made possessive (like Mr. Rogers and his neighborhood), there is disagreement as to whether an additional s is required after all last names that end in s, or only some. Generally speaking, the widely-accepted, easiest, and most consistent rule is to add the s if you say it aloud.
These names can trick the ear, though, and out of context they might sound as if an s isn’t pronounced, when it is in common practice. For instance, “Mr. Jones’ classroom” would be pronounced the same as “Mr. Jone’s classroom,” which might sound natural. However, “Mr. Jones’s classroom” would match the pronunciation in actual usage.
For what it’s worth, I always write (and say) the s, with the occasional exception for certain phrases, like “Mr. Rogers’ Neighborhood.”
Last names are made plural the same as other words: by adding an es. This is reflected in the expression “keeping up with the Joneses.” They are then made possessive by just adding an apostrophe. Last names are left alone if they’re being used as adjectives.
Steve Jobs’s legacy
Bernie Sanders’s campaign
The Jones family is going to the Adamses’ for dinner.
The Reynoldses went to Disney World with the Smiths.
Other things that technically aren’t even words
An initialism that ends in S isn’t plural, so it always gets ’s to make it possessive, even if the word it stands for arguably wouldn’t. This one perplexes me, because it’s impossible to mishear these mistakes as correct, and the act of speaking them (aloud or in your head) is identical to spelling them, so it should be obvious they’re missing a letter.
The U.S.’s sestercentennial is in 2026.
For that matter, initialisms, acronyms, and decades are made plural by adding a lone s without an apostrophe. An apostrophe is never used to make anything plural.
CDs and PINs, not CD’s or PIN’s.
1990s, not 1990’s
Between you and I, this bakes my beans
A common over-correction is to avoid ever saying “you and me.” This is often incorrect, as I is used for a subject, and me for an object. This applies, to a lesser extent, to other pronouns. In all cases, the mental fix is simple: remove the other person, and suddenly that I starts to sound pretty weird: That gift is from Bob & I becomes That gift is from I, which is clearly wrong.
Use I, she, he, they, or who when the person is the one doing something. Use me, her, him, them, or whom when something is being done to the person or in relation to the person.
Incorrect:
Just between you and I…
I went with Bob and he to the bar last night.
When is a good time for you to meet with Bob & I?
“I don’t know who they’re angry at, Hillary, you or I.”

Correct:
Just between you and me, I think it’s terrific.
Do you want to go with me and Bob?
These tickets were given to me and Bob to share.
Bob and I are not fond of her.
She and Bob went to lunch already.
“I don’t know at whom they’re angry, Hillary, you or me.”
It’s from bits
It’s is a contraction of it is. Its is the possessive form of it. It might be easier to remember that, as a group, pronouns don’t get apostrophes to make them possessive. E.g., theirs, hers, his, yours, whose, and ours are other words that don’t get an apostrophe.
It’s going to be a nice day.
Its average temperature will be 72.
People’s, not peoples’
Irregular plurals (people, men, women) are made possessive by adding ’s. The apostrophe is only placed at the end of the word if an s is not added, on account of the word already being a regular plural ending in s.
In all cases, it is a question of adding simply a lone apostrophe if the pronunciation doesn’t change, or an apostrophe followed by an s if it does. In no cases is an s added followed by an apostrophe simply to make a word possessive.
The Monty Hall problem is a classic example of a provable, simple truth that runs entirely counter to our expectations. When it was widely distributed in Parade Magazine in 1990, thousands of readers—many very well educated, including scientists—wrote in to vigorously disagree with the proffered conclusion. Yet, computer simulations have (trivially) proven the extremely intuitive solution to be incorrect.
The problem is simple, and is based on the game show “Let’s Make a Deal.” A game show host presents you with three doors. Behind two of the doors is a goat. Behind the third door is a new car. You’re invited to select one of the three doors. After you pick your door, the host opens one of the other two doors to reveal a goat. He then gives you the option of switching to the door that you did not pick and that he did not open. Should you switch?
The answer is yes—your odds of winning the car are doubled if you switch. The intuitive answer is “it doesn’t matter”—there are two doors, one of which definitely has a car behind it, so if you were to flip a coin and pick one you’d have a 50% chance of picking the right door. That answer is provably incorrect. So, what’s going on here?
Let’s step back and take another look at how this could go. You’re back in front of the doors, which have been reset. This time, after picking your door (let’s say you pick door C) the host offers to let you either stay with the door you picked, or get both of the other doors (A & B). If the car is behind door A or door B, then you win it. Should you switch?
Of course you should. It’s plainly evident that two doors give you twice as many chances to get the car. With one door, you only have a 33% chance of having picked the car. There’s a 66% chance the car is behind one of the other two doors. Now, remember again that there is only 1 car, and 2 goats. That means that at least one of the two doors definitely has a goat behind it. The question to consider is: are the odds different if the goat is behind door A rather than door B? You know for an absolute ironclad fact that one of them is a goat, so does it matter which door has a goat?
Let’s say you switch. At this point, your choice is locked in. The host reminds you that behind one of the two doors there is definitely a goat, and asks what you think your odds of winning are. You tell him that you’re twice as likely to win as not win, since you have 2 chances to get the car. Then the host says “What if I were to tell you that behind door A is a goat, what are the odds now?”
There remains, of course, a 66% likelihood of winning the car.
There was always going to be a goat, behind either door A or door B (or both, of course.) And the host was always going to tell you which one had the goat, regardless of whether it was door A or door B. You have received absolutely zero new information that would affect your odds of winning the car.
This scenario is completely identical to the original formulation, where the option to switch is given after the goat is revealed. There is no new information, you always knew there was a goat, and you knew the host was going to show you one. The key to all of this, and what makes it counter to our intuitions, is that the door opened to reveal the goat wasn’t chosen randomly. The host was never going to open your door, even if it held a goat. So, even though there are now two unopened doors to choose between, the odds aren’t equal because the two sets of doors were treated differently.
If a complete stranger were to come across the set and see the three doors with one already open to reveal a goat then it would be a coin flip for that stranger—because they don’t know which door you initially picked. That extra information is what tips the odds in your favor if you end up switching.
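If you want to convince yourself, the simulation really is trivial; a few lines of Ruby will do it:

```ruby
# Simulate one game; the host always opens an unpicked door hiding a goat.
def play(switch:)
  doors  = [:goat, :goat, :car].shuffle
  pick   = rand(3)
  opened = (0..2).find { |d| d != pick && doors[d] == :goat }
  final  = switch ? (0..2).find { |d| d != pick && d != opened } : pick
  doors[final] == :car
end

trials      = 100_000
stay_wins   = trials.times.count { play(switch: false) }
switch_wins = trials.times.count { play(switch: true) }
puts "Stay:   #{(100.0 * stay_wins / trials).round(1)}%"   # ~33%
puts "Switch: #{(100.0 * switch_wins / trials).round(1)}%" # ~67%
```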
I’m working on an experimental graphics app that delegates a lot of functionality (including user-scriptability) to (mostly) pure functional code, written in Haskell. To be clear, the point here isn’t to “write a Mac app in Haskell.” Instead, my Haskell code consists of certain domain-specific operations on data structures. Transformed data is returned to the main app, to be interpreted as appropriate.
There are two main problems to solve: (1) integrating the Haskell part of the app in the first place and (2) exchanging structured data between Swift and Haskell. This post is Part 1, and I’ll discuss marshaling data across the boundary in a later post. The first step turns out to be pretty simple, in contrast to the impression given by an article on the official Haskell wiki on the subject. This post covers the process for an Application target. Framework targets are a bit different, and will be covered in Part 2.
The Haskell Code
Haskell integration with other languages is based on the Foreign Function Interface (FFI.) The FFI handles translating/calling the external function (or vice-versa.) All we have to do is tell it a little bit about how the function gets called, and what the types are. We’ll start with a very simple function:
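A function that triples an Int will do:

```haskell
triple :: Int -> Int
triple x = 3 * x
```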
I went with triple to avoid any confusion with Double. In order to export this function to be called elsewhere, we have to first include the ForeignFunctionInterface language extension at the top of the file:
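That’s a one-line pragma at the top of triple.hs:

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}
```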
The last thing we need to do is actually export the function. Note that the typing situation can be a little weird, and FFI provides the Foreign.C.Types module with C-specific types. In this case, however, the normal Int type works just fine.
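The export declaration looks like this:

```haskell
foreign export ccall triple :: Int -> Int
```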
ccall specifies the calling convention, i.e., how to find the function and its arguments in memory. In this and most cases ccall suffices, and tells the compiler to use the C calling conventions. Finally, we simply repeat the function signature. Note that in many cases it will not be this simple to translate a Haskell signature into the C-compatible version. That’ll be covered in more detail in Part 3.
Compiling The Haskell Code
GHC has a gazillion flags… the man page is truly frightening. Luckily we need only to use a handful. The command to compile our simple Haskell file looks like this:
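Putting the flags described below together, the invocation looks something like this (exact library names can vary between GHC versions):

```sh
ghc --make -dynamic -shared -lHSrts -lffi -O -o triple.so triple.hs
```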
--make combines all the modules of your code and the dependencies in one step
-dynamic specifies using dynamic Haskell libraries.
-shared generates a shared library
-lHSrts links in the Haskell (HS) rts library
-lffi links in the FFI library
-O (that’s a capital ‘o’) enables default optimization
-o triple.so specifies our output filename
triple.hs is our input file
Add Files To The Xcode Project
The compiler will output four files. The only two we’re interested in are triple.so and triple_stub.h. Add those to your project. Ideally you would just add references to the files, so that modifying and recompiling the Haskell source won’t necessitate any copying or re-adding of files later.
Configuring The Header Files
In your bridging header file, add #import "triple_stub.h". If you don’t have a bridging header, you can just create a new .h file and name it projectname-Bridging-Header.h.
triple_stub.h includes HsFFI.h, which is part of the core GHC libraries. We have to tell Xcode where to find the header file, via the project inspector. Under the “Build Settings” tab, find the “Header Search Paths” setting and add the location of the GHC includes directory. On my system that directory is:
triple.so should have been added automatically to “Linked Frameworks and Libraries,” at the bottom of the “General” tab for the app target. If not, add it now.
This isn’t enough to actually make the library available, though, so we have to tell Xcode to copy it via the “Build Phases” tab. Click the “+” at the top and add a “New Copy Files Phase.” Set the “Destination” dropdown to “Frameworks.” Now, drag triple.so into the file list for the new build phase.
Now that we have our project set up, we can call our function from our Swift code just like any other C function that is automatically bridged for us. The only caveat is that before we call a Haskell function we have to call hs_init to set things up… I’m not entirely sure what it sets up. There is a corresponding hs_exit function to call when we’re all done with Haskell.
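Concretely, the Swift side ends up being little more than:

```swift
hs_init(nil, nil)        // start the Haskell runtime before any Haskell calls
let result = triple(14)  // bridged for us via triple_stub.h
print(result)            // 42
hs_exit()                // shut the runtime down once we're completely done
```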
Sending and receiving more complex data will require a bit more work, and I’ll cover that in a separate post.
Tests shouldn’t have to be changed or updated all that often. If they are, then they’re getting in the way of what tests are supposed to help us achieve: high velocity, effortless refactoring, code maintenance, etc. High test churn is an indication that something is wrong with either the testing methodology or the code design. The proximate causes are legion: lots of stubbing/mocking, large numbers of dependencies, spaghetti classes, testing glue code, high level (integration) tests masquerading as low level (unit) tests, and so on. This is a separate issue from keeping tests DRY. If your helper modules or shared contexts are churning, then that’s likely as much a smell as if you have to constantly rewrite the tests themselves.
There are three main kinds of problems, in my experience:
Testing The Wrong Thing
It’s really easy to test things you shouldn’t, especially if library glue/boilerplate code makes up a significant fraction of your app. There’s sometimes an insistence on exhaustively testing “our code,” even if our code doesn’t actually do anything. Or there might be pressure, internal or external, to write tests just to say you wrote tests. Often this will take the form of testing rote configuration of some framework class, which is a combination of code duplication and testing third-party code. Not only are you probably “testing” something that is liable to change, but you’re quite possibly coupling your test to your implementation, at best, and the implementation of a third party library, at worst.
A very rough rule of thumb is not to write a test if you didn’t actually write a function or method yourself. In those situations where you do feel the need to write a test, then it should be functional: varying inputs and asserting on results, not interrogating and asserting against internal state. A good example might be validations built into an ORM class: testing those validations should be functional, i.e. the validate method should be called with actual valid or invalid data—simply using introspection to check that “this class has a uniqueness validation registered on it” is pointless.
Testing Too Much
If you fall into the mindset that good testing is to throw a veil over the code and rigorously test against any conceivable bug via every single access point, then it’ll be easy to ramp up the quantity of tests you write to an absurd level. This can result in a lot of test churn if the things you’re overtesting end up changing—and they probably will. For example, you might write a bunch of tests that verify logic for a method that simply forwards its arguments elsewhere. Test logic present in the class, method, or function. Don’t test delegated logic.
For instance, if you have a method that does some sort of computation, and another method that composes that method:
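For illustration, say we have something like this (the constant and attribute names here are just placeholders):

```ruby
def calculate_tax(amount, state)
  amount * TAX_RATES.fetch(state, 0)
end

def tax_for_order(order)
  order.line_items.sum { |item| calculate_tax(item.price, order.state) }
end
```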
Then tests for tax_for_order shouldn’t be testing that individual taxes were calculated properly. The tests for calculate_tax handle that. A good rule of thumb is that if you find yourself testing more than one thing for a given method/function, or testing the same thing across multiple test subjects, then you’re either testing logic that is elsewhere or logic that should be elsewhere. How applicable the rule is will vary based on how vital the thing you are testing is, whether it’s public vs. private, whether it’s part of an interface that client code might use, etc. In general, though, well-written code will have simple, single-issue tests. In this example, tax_for_order might initially look like this:
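Say it bakes the "do we even charge tax?" decision into the same method:

```ruby
def tax_for_order(order)
  if TAXABLE_STATES.include?(order.state)
    order.line_items.sum { |item| calculate_tax(item.price, order.state) }
  else
    0
  end
end
```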
Now you’re testing at least two things: (1) Whether we even charge tax on this order, based on the state and (2) What the tax for the order should be. Code that is more cleanly tested might look like:
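For instance, with the taxability question pulled out into its own method (the name is just a placeholder), where it can be tested on its own:

```ruby
def tax_for_order(order)
  return 0 unless charges_tax_in?(order.state)
  order.line_items.sum { |item| calculate_tax(item.price, order.state) }
end

def charges_tax_in?(state)
  TAXABLE_STATES.include?(state)
end
```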
(An even worse initial version might be something more like @states_where_we_have_warehouses.include?(order.state).)
Testing Poorly Designed Code
There’s nothing wrong with mocking, stubbing, test doubles, etc. However, too much mocking, or stubbing in low-level unit tests, can oftentimes be a code smell. Having to mock or stub a lot is a strong indication that a class is too tightly coupled, either to its dependencies or because the class combines a lot of responsibilities. If you have to stub the class you’re testing itself, then something has gone horribly wrong. If you’re stubbing or mocking some internal method, then you’ve found, in the most direct and obvious way possible, something that should live in another class.
Too much mocking/stubbing can be caused by a class having too many dependencies. Having many dependencies is, furthermore, an indication that your class is doing too much. Often this’ll be paired with large methods that tie everything together. One of the chief benefits of testing is its ability to highlight larger-scale design problems: if it sucks to test something, it’s probably poorly designed. Being at a loss for how to test something, or even just really not looking forward to it, is a strong indication that you should be refactoring, not papering over the problem with painful, complex tests.
None of these problems are peculiar to any particular testing methodology. However, if you’re encountering them while ostensibly practicing TDD then you should step back and reconsider how much you’re actually letting the tests drive the code. Actually writing tests first is a key part of TDD, of course, but putting the tests first is, in my opinion, both more important and often overlooked entirely.
One of the unsung heroes of Interface Builder is the “custom object.” This doesn’t go onto the storyboard canvas—it isn’t a view or a view controller. Instead, the object goes up in the “scene dock,” that bar of little icons (the view controller, its exit segue, and so on) at the top of a scene. Simply drag from the palette to the scene dock, set the class in the inspector, and you can start connecting outlets and actions to the object. Then, on instantiation, the object will be initialized for you and connected.
This is pretty great, but the usefulness is somewhat limited by the fact that the storyboard will only initialize the object via its class’s designated initializer. That isn’t always possible or desirable. In one very, very, very common case—managed object contexts—it isn’t possible by any stretch of the imagination. It would, however, save a LOT of glue code and reduce a great deal of coupling if custom objects were just a bit more powerful.
What I’d like to see is the ability to give a custom object a storyboard ID, in addition to a class. Using the storyboard ID, an app delegate could register a pre-initialized object with the storyboard that would be used, rather than a fresh object being instantiated.
This would transform the storyboard into a fairly powerful dependency injection container. Imagine removing this code, or anything like it:
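For instance, the familiar prepare-for-segue hand-off (the class and property names here are placeholders):

```swift
override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
    // Hand the context to the next controller, which hands it to the next...
    if let detail = segue.destination as? PostDetailViewController {
        detail.managedObjectContext = managedObjectContext
    }
}
```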
This sort of code is sprinkled through a lot of code bases, passing the managed object context around “bucket brigade” style. The theory is it’s better than calling a global method to get at the managed object context (I actually disagree, see my post on iOS design patterns.) I think it leads to potentially insane amounts of coupling, not to mention being a pain in the ass to unwind if app flows change. Imagine, instead, that you add the managed object context as a custom object to each scene where you need access to it. Then, in one place, you register your MOC with the storyboard:
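Something along these lines, where registerObject(_:forStoryboardID:) is the imagined API, not something UIKit actually provides:

```swift
// Hypothetical API: registerObject(_:forStoryboardID:) doesn't exist in UIKit.
// In application(_:didFinishLaunchingWithOptions:), or wherever the Core Data
// stack gets set up:
let storyboard = UIStoryboard(name: "Main", bundle: nil)
storyboard.registerObject(managedObjectContext, forStoryboardID: "ManagedObjectContext")
```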
And never worry about it again. Your controllers that need access to the MOC magically get access, without the previous controller having to care about or have access to the MOC itself.
TDD can mean many things—from simple ‘test first’ practices focused mainly on integration or acceptance tests, all the way down to a highly granular, line-by-line, red-green-refactor methodology. I, personally, am not a TDD purist. I’ll use TDD in some circumstances, and take a more relaxed approach in others. Generally speaking, the more concentrated and encapsulated the functionality the more likely I am to use TDD. Try to use TDD with highly diffuse code, and I’m likely to freeze up and suffer from design paralysis. I also picked up some poor testing practices by osmosis in my early years that I’ve had to mindfully and aggressively prune. The more I prune, the more effectively I find I can use TDD, and the more I find myself actually using TDD. In hindsight, a lot of my aversion to TDD over the years is traceable to my own bad habits and misconceptions—stumbling blocks and speedbumps that can slow the TDD process to a halt.
The simplest and first to go was the problem of not actually letting tests drive the design—rigidly imposing a design from the start and then expecting TDD to magically and readily produce a well-written, reliable implementation seems to be a common practice. Once it becomes obvious that different forms are more—or less—readily tested, it’s a simple matter of making those patterns the default—composition over inheritance, dependency injection, encapsulation, minimal side-effects. This isn’t the be-all-and-end-all, but not only is your code easier to test but almost as a direct consequence it’s also much more well-designed. DHH might call this “design damage,” but I call it “Testability Driven Design”—generally speaking, if your code isn’t testable then it’s badly designed, and if it’s well designed then it will be easily tested. If you have to invent new disparaging terms to justify the poor testability of your code, well… good luck.
For me, the second wrong idea about testing to go was the intuition of the well-specified unit. I’m not sure how prevalent this is, but for the longest time I labored under the belief that each class should be a black box, and its interface tested exhaustively. It didn’t matter if the class itself had no logic of its own, and simply incorporated functionality tested elsewhere. Taken to an extreme, this will be an obvious absurdity. The problem is that it often is taken to an extreme. Plenty of Rails devs will write tests that actually just exercise ActiveRecord, rather than their own code, in the belief that they need to exhaustively specify everything, right down to the automatically provided attribute readers. I believe the origin here lies in libraries which encourage much blurring of boundaries—the harder it can be to tell where app code ends and library code begins, the more one will instinctively ‘play it safe’ by over-testing.
This may be more or less controversial, but in my view a unit test should test only logic, and only logic that is present in the unit itself. “Logic” being code whose behaviour will vary depending on the input. Attribute accessors are not logic—simply testing for their presence is code duplication, and they should instead be exercised by higher-level tests (if they’re not, then they’re not used elsewhere in the code and so why do they exist?) Taking arguments and directly passing them to an injected dependency is not logic—that’s glue code. Unit tests aren’t black box tests—you don’t have to suspend knowledge that certain functionality isn’t actually implemented by the unit.
This leads to the third wrong idea, which is that the main purpose of a test is to prevent as many potential future bugs as possible. Preventing bugs is a benefit of testing, but it is not the purpose: well-designed, well-functioning, maintainable code is the purpose. Focusing on preventing bugs will lead directly to pathological testing, including code duplication, testing third party code, and their degenerate case: testing implementation detail. The classic example is the sort of tests encouraged by the ‘shoulda’ gem. If you’re writing a unit test for an ActiveRecord model that ultimately asserts that a validation is present on the model by checking the model’s validations array—please stop. You’re just duplicating your code and tightly coupling your unit tests to third party code for zero reason. “But what if I accidentally delete that validation?” one might ask. Tests aren’t there to verify that you wrote certain code—they’re there to verify that the logic works correctly. Those aren’t the same thing. If you want to verify a validation, then somewhere in your code it should be tested by varying actual inputs. If your test doesn’t ultimately depend on some input somewhere changing, in all likelihood you’re just duplicating your code, or testing someone else’s code.
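To make the contrast concrete (Post and :title here are placeholders): the first spec below just restates the model’s configuration, while the second actually varies the input.

```ruby
# Duplicates the declaration and couples the test to ActiveRecord internals:
describe Post do
  it { should validate_presence_of(:title) }
end

# Exercises the validation as logic, by varying the input:
describe Post do
  it "is invalid without a title" do
    expect(Post.new(title: nil)).not_to be_valid
  end
end
```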
No doubt someone will disagree with my interpretation of TDD in light of the above, but once I started to shed these habits my willingness to use TDD and my velocity while doing so skyrocketed. I was no longer dreading the arduous task of writing tons of pointless tests just to make sure every line of code or potential ‘contract’ was covered. I no longer felt that adding a new class automatically meant a bunch of TDD boilerplate. Focusing all my tests on logic meant far fewer breakages when irrelevant glue code changed or low-value code was moved around. I still don’t use TDD all the time, but I find myself shying away from using it out of fear of “design paralysis” less and less.
This was originally titled “Introduction to ROM: Part I,” but seeing as it focuses almost exclusively on AR and Rails, I’ve decided to rework it into a post specifically about ActiveRecord, with a separate series focusing exclusively on ROM. I’ve retitled this post to reflect the topic more accurately.
Yesterday I was pointed to a comment thread for a blog post titled “Five More ActiveRecord Features You Should Be Using.” The features themselves were some of the usual suspects when it comes to AR anti-patterns: lots of coding by side effect (lifecycle callbacks, change tracking, etc). The interesting thing to me was what happened in the comment thread.
First, @solnic responded to the suggestion that you use the after_commit lifecycle callback to automatically kick off an update to Redis when the database model is updated, remarking “great, you just coupled your AR model to Redis, every time you persist it, it needs Redis.” He doesn’t say that the goal—synced data—is bad, merely that the implementation introduces significant coupling. In reply, @awj says:
There can be great value in having secondary data stores continuously kept in sync with primary data changes. There also can be value in not doing this. Stating that either is unequivocally a “bad practice” is little more than cargo cult system design.
Holy leap of logic, Batman. That’s some underpants-gnome thinking… “Don’t use A to implement B because that method increases coupling” does not imply “Don’t implement B.” At first I was angered by what I considered to be dishonest debating tactics, but after thinking about it for a while, I’ve come to realize that it most likely results not from dishonesty, but from a constrictive mindset that a developer, steeped in Rails and ActiveRecord culture, will almost inevitably adopt.
Within the Rails and AR world, whenever good coding practices are pitted against “Rails Way” mantras like DRY and various “best practices”—not to mention expediency—the good coding practices almost always lose. The fact is, there is no good way to implement that sort of automatic syncing between database and Redis that is both well-coded and compatible with the “Rails Way.” To a certain kind of “Rails developer,” the only way to resolve the dissonance is to adopt logic like “Saying I shouldn’t couple is the same as saying I shouldn’t implement my feature—” because when you’re wedded to Rails and ActiveRecord, that is in fact exactly the case.
ActiveRecord—both as it is implemented and as it is used—is a big driver of the culture that insists that tightly integrated code and side-effect driven logic is necessary and desirable. On its surface, it purports to be a powerful and easy-to-use database access layer. Developers like it because they don’t have to do anything to use it—its ease of use right from the start of a project is legendary. Unfortunately, these benefits are illusory. The fact is, ActiveRecord induces insane amounts of coupling in your app and severely restricts developer freedom down the road.
ActiveRecord is Full of Anti-Patterns
How does ActiveRecord lead to coupling? Let me count the ways. The simplest is the globally accessible interface—such as being able to call where on any model from anywhere—which can lead to app code littered with knowledge of the database schema, not to mention that every class has complete unfettered access to your entire database. Named scopes aren’t much of an improvement. How many named scopes look like this:
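Something like this (a representative example):

```ruby
class Post < ActiveRecord::Base
  scope :published, -> { where(published: true).order(created_at: :desc) }
end
```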
Not only does this barely count as syntactic sugar, but they still expose details of the database and remain available globally, as always. The global is still a significant problem—more semantic scopes would be either completely inflexible or forced to incorporate business logic (those will be some fun tests) to be useful in different circumstances. Other bullshit “best practices” like “thin controller, thick model” lead to monster model classes full of business logic—pretty much the definition of tightly coupled code:
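For instance, something in this spirit, with a Post model that also knows how to push itself out to clients (the service and attribute names are invented for illustration):

```ruby
class Post < ActiveRecord::Base
  # ...600 lines of validations, scopes, callbacks, and business logic...

  def push_to_subscribers
    payload = { id: id, title: title, body: body_html, author: user.display_name }
    subscribers.each do |subscriber|
      PushService.deliver(to: subscriber.device_token, payload: payload.to_json)
    end
  end
end
```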
What does code to send data to a client have to do with persisting a Post to the database? Beats me. The model class this method came from is over 600 lines long. Everything that this class does—and it is a lot—is more brittle and less maintainable for it.
Less obviously, the one-table-one-model approach couples your business domain and your database schema, which is sometimes fine but often not. I’ll put that another way: a business domain and a database design aren’t mirror images of one another—but Rails and ActiveRecord assume (and insist) that they are. As if that weren’t bad enough, by having so completely obliterated the distinction developers are universally encouraged to view the database as an extension of their Rails app, with schema changes and migrations directly correlated with changes to the app. The idea that your database is completely isomorphic to and a part of your app is sheer folly, but it is almost Rails gospel.
The Database is Not Your App
The fundamental principle at play here is that of the architectural boundary—the place where your app and another system or concern interact with one another. Architectural boundaries aren’t necessarily large, but the larger ones are pretty obvious and important: database, file system, network connectivity, in-memory store, etc. They’re boundaries because from the perspective of your app what lies on the other side is not important—the file system could be real or a mock and your app does exactly the same things. The database could be SQL, NoSQL, or flat files and your app has to use the data in the same exact ways and eventually output the same exact updated data. Conversely, the less agnostic your business code is toward whatever is on the other side of a boundary, the more tightly coupled it is and the weaker the boundary.
If you’re having trouble accepting that your app shouldn’t care about what database you have on the other side of the boundary, consider this: Imagine a world where SQL is an obscure, relatively new and untested technology and NoSQL is the default, go-to data storage solution. Does that change anything about what your app actually needs to do, from a business perspective? Does a single user story change? Does a single formula for calculating some vital piece of data change? No, of course not. On the other hand, how wide is the impact on the code? How many classes have to change, even a little? The best case scenario is only your model classes have to be reworked—but even that alone can be an arduous prospect implicating thousands of lines of business logic.
The idea, again, isn’t that you should care about these things because you might someday replace Postgres with Mongo. The point is your code shouldn’t care about whether its data comes from Postgres or Mongo because it ultimately makes no difference, from a business logic perspective. By making your code care, you are, objectively, making it less valuable in the long-term and increasing maintenance costs, while simultaneously reducing its testability and confidence in any tests. You’re handicapping your code, tying it to irrelevant detail for little to no upside.
The code forming the boundary mediates between two very different worlds—the world of your domain objects and business rules on one side, and the mechanics of data storage on the other. Architectural boundaries are not reducible to a single class wrapping up obscure details of a protocol inside a nice API. Instead, they translate and mediate between your app and the external system, speaking the language of your domain on one side and the language of the external system on the other.
Coupling happens when details cross over the architectural boundary and mold our code in unavoidable ways. This is exactly what happens with ActiveRecord, because ActiveRecord doesn’t actually concern itself with translating between our app and the database—instead it operates from the assumption that your database and your app are the same thing. Rather than translate your business concepts to the database and back again, it simply provides high-level hooks into the low-level boundary not to bridge the boundary, but to erase it.
Side-Effects May Include…
By combining business logic, querying, data representation, validation, lifecycle, and persistence, your app is shackled to a single database and persistence strategy, oftentimes encompassing an enormous amount of the application. This unavoidable fact is directly reflected in another part of that comment that initially made me so angry:
If it’s acceptable for that to “need Redis” then that’s what it does. If it’s not, then maybe you work around it. It’s not like you don’t have options to control behavior there.
Essentially what he is saying is that every part of your app should know about how your model depends on and mutates Redis every time it saves a record, in order to decide if it should work around that behavior. Let that sink in. That’s a recipe for the spaghetti-est of spaghetti code. Your code now can’t simply use the data access class to save a record anymore, and if you want to use the interface that it is presenting for the stated purpose you have to have in-depth knowledge of its implementation at each point of use, lest you run afoul of its side effects. That’s insanity—when you save a damn record you should expect the record to be saved and that’s it. Driving your app by side effect makes it incredibly brittle, and simultaneously difficult to change, and the testing situation turns into a complete disaster.
You don’t need to be a FP acolyte to see why it’s bad that your classes that do basic, universal things like saving to the database would be kicking off all sorts of other business logic. Imagine that every time you turned the oven on, everyone in your family got automatic notifications that dinner was in 30 minutes—unless you remember to disable it by removing the face-plate and detaching the transmitter every single time you want to use the oven for something else. We encapsulate functionality because isolating it makes that functionality better, for one: it’s more easily tested, and the logic is cleaner. We also encapsulate functionality because we don’t always want to use things in the exact same ways with the exact same collaborators every time. And in situations where we don’t want to use a particular collaborator, we don’t want to have to actively take steps to avoid using it.
Mo’ Responsibilities, Mo’ Problems
A lot of the blame goes to the libraries that are popular and the patterns they push. Encouraging—or enforcing—inheritance over composition leads to large classes with numerous responsibilities, just as a matter of course. Community pressure or “best practices” combined with laziness can then lead to an explosion of responsibilities, as plugins and developers add more and more code to a handful of classes. Finally, having an artificially limited range of “kinds” of classes a developer believes he or she can have (Model, Controller) leads directly to a parsimony of classes, and indeed a general trend of developer resistance towards adding new classes (maybe because it makes the “models” folder look so messy.)
DRY, an almost religious mantra in Rails circles, boils down to increasing code reusability through refactoring. Unfortunately, that’s fundamentally at odds with the broader development pattern that is encouraged by almost everything else about Rails. In fact, the way DRY is pushed in Rails circles can lead directly to perverse outcomes. To go back to the after_commit hook and Redis example, the obvious alternative to putting that code in a lifecycle callback is to move it to the controller—invoke that completely separate behaviour where and when you want it. Of course, from a wider perspective this isn’t good design, because it does repeat code. The problem is invoking DRY here and hooking into AR makes the code objectively worse, not better. Moving that code into the model reduced repetition, while simultaneously decreasing the reusability of the code.
The massive classes this sort of development process ends up encouraging prevent code reuse through tight coupling from two directions.
From the top-down, the class makes so many assumptions about how it is being used and what it is working with that it can only be used in a handful of ways, if that. If a graphics class internalizes the generation of output files, it’ll probably be difficult to extend it to support other formats. If your models handle their own persistence, it can be nigh-impossible to persist the same model in different ways depending on context. If your model is also where you put data filtering and formatting accessors, then having to provide different views of the same data can lead to a combinatorial explosion of methods. Decisions that were made universally based on initial convenience almost never pan out in the long-term for most use-cases, leading to awkward compromises and workarounds which ever-more-tightly couple the class to its circumstances.
From the bottom-up, the class locks up code that might otherwise be generalizable and applicable elsewhere. Code to handle the peculiarities of graphics file formats could find other uses, were it not buried in a god class’s private methods. Code to run reports on data can be refactored and made more powerful and flexible if it were its own class. One example of something that is successfully and commonly extracted from the AR hierarchy is serialization (via, e.g., ActiveModel::Serializers), exactly the sort of concern that should be treated as a separate responsibility.
Bottom line: there’s an inverse relationship between composability and number of responsibilities. The more responsibilities you pile into a class, the less composable it is, and the less use you’ll get out of your code, on average (which means you’ll write more code, in the long run.) ActiveRecord is a complete failure on both grounds: AR models are increasingly less reusable as time goes on and they grow larger and introduce side-effects, and the code locked within is completely un-reusable right from the start… yet, it’s all still DRY, somehow.
Rails Models Have Many “Reasons to Change”
The Single Responsibility Principle says (spoilers) that every class should have a single responsibility—which is sometimes defined as “a reason to change.” The “reason to change” clarification is useful because too often “responsibility” is conflated with a Rails “resource”—this class is responsible for posts, that’s a single responsibility, right? Well, no. No, it isn’t. Not at all.
Let’s take a look at the responsibilities a Post class has in a Rails app. It loads the schema from its database, so that it knows what attributes it has. It defines the relationships between your models. It provides for querying the database. It performs domain validations on records. It is the data itself, and handles accessing and mutating record data. It persists (create or update as needed) records to its database. And all of that’s without any user code.
Add in things like Paperclip and Devise and the responsibilities explode, before the dev even begins to pile on business responsibilities. What if you want to change how a post is persisted, without changing anything else? Good luck. Want different validations depending on whether the logged in user is an admin? I hope you like duplicated code and hackish workarounds. Persist auto-save drafts to an in-memory store rather than the database? Abandon all hope, ye who enter.
The thing is, when you first start a project or when you start with simple projects and gradually work your way up in terms of complexity, this can look pretty good—of course you don’t want to worry about where a particular model is getting stored, or managing sets of validations. Of course! It “just works” … for now. Eventually, though, all the things that AR makes so easy and simple at first glance will be your “reasons to change”—maybe not today, and maybe not tomorrow, but soon. Then what? If you were like many Rails devs, I imagine you’d simply “work around it” by using other parts of AR that seem to give you “options to control behavior.”
There are strategies to mitigate some of the damage that ActiveRecord can cause. At best, they reduce but do not eliminate the problem. Regardless of efficacy, they are almost never put into practice. The attitude seems to be—if not outright hostility to any alternative—at least a resigned acceptance that one has made his or her deal with the devil. Far too often, the very worst parts of ActiveRecord are enthusiastically embraced and evangelized. And so it goes.
Modern software development is a dense memeplex teeming with patterns, methodologies, and practices that rapidly mutate and recombine with each other in novel, often infuriating ways. “Agile” is famously a term whose meaning is nebulous and ever-shifting, applied to any set of development processes up to and including waterfall. Another is “Minimum Viable Product,” or MVP. Originating with a specific meaning in the Lean Startup movement, MVP has since morphed into a general notion in software development of the small, focused, well-honed feature set of an initial release milestone.
Both ideas are valuable, but problems emerge when wisdom that pertains to one kind of MVP is blindly applied forward to the new, mutated MVP. A great example is this graphic, which I’ve mentioned before and just now stumbled across in an article on software versioning:
While I do not know the exact provenance of this graphic (Update: It’s either created by Henrik Kniberg or directly derivative of his work), I can very confidently state that it originally applied not to software development, but to the original Lean Startup concept of the Minimum Viable Product. I know this because it is a great illustration of the MVP in a Lean Startup, and an absolutely horrible illustration of the MVP in software development.
The MVP in Lean Startup is Concept-First
Lean Startup isn’t a software development methodology—it’s an approach to entrepreneurship and business. The Minimum Viable Product, properly applied, isn’t a trimmed down version of a sprawling—if undocumented—business plan. Rather, it’s the quickest, cheapest, easiest-possible solution to whatever problem the company is tackling in order to make money. In the context of Lean Startup you don’t sit down, dream up “we’re going to be Twitter/Yelp/Google/Vimeo/Foursquare for Quickimarts/Bus stops/Bowling alleys/Dog parks,” excitedly plot out some expansive ecosystem/plan for world domination, and then scale that all back to an MVP. Instead, first a monetizable problem—something for which you can sell a solution—is identified, and then a quick-and-dirty fix is plotted out in the form of the MVP.
The reason the comic makes a lot of sense from this perspective is that you explicitly do not start out with aiming for the moon and then having to adjust your sights downward—there’s money sitting on the table, and you want to start snapping it up as quickly as possible using whatever tools you can. Software development is actually a worst-case scenario, from a Lean Startup perspective, so piecing together a gold-standard system from scratch is understandably discouraged.
Instead, the “skateboard” solution might take the form of something as simple as a Google Sheets/Forms setup, or a FileMaker Pro app. What matters is that you solve the problem and that you can monetize it. The “bike” solution might be a tricked-out WordPress site with some custom programming outsourced to a contractor. In this context, it doesn’t matter one bit that each solution is discontinuous with the next, and is essentially thrown away with each new version—what matters is that money start coming in as soon as possible to validate the idea, and that the solution (as distinct from software) is iterated rapidly.
The MVP in Software Development is Software-First
You’d be hard pressed to find a professional software engineer who’ll respond “Excel spreadsheet” or “Wordpress site + 20 plugins” when asked to describe an MVP. It’s simply a different country. The core idea—do as little work as possible to get to a point where you have something to validate the solution—is retained, but the normal best practices of good code design and project management are still paramount.
As software engineers, we’re not going to half-ass a partial solution to a problem, pretend it’s a skateboard, and then throw that work away to start on the scooter iteration. Instead, an MVP is the smallest app we can write well that is still useful and useable. From there, we iterate and experiment with new features and functionality, taking into account real-world feedback and evolving demands and opportunities.
From a product perspective, we need to take a look at what we mean by minimal and by viable. Minimal, in one sense, means simply stripping things away until you get to a vital core. The tension is with viability—probably the most ambiguous of the three parts of “MVP.” Some take the position that “viable” is in relation to the market… something is “viable” if it can be sold or marketed to users. I argue that “viable” is to be taken in the same meaning as a foetus being “viable,” which is to say it is able to be carried to term. A minimum viable product, then, is the smallest set of features which can conceivably be developed into your product. An MVP should consist of an essential, enduring core of code that will form the foundation for further efforts. Anything else is simply a prototype or proof of concept.
A Concept-First Software MVP Can Be Disastrous
As erroneously applied to software, this can also be called the teleological MVP, or alternatively the function-over-form MVP. This approach emphasizes the principal value we’re delivering for our users rather than how we intend to manifest that in a final product. That value is distilled, and minified, until we have a vision of a product which is significantly less effort than what we actually intend to make. So, if we’re trying to create a sports car, as in the comic, you might interpret the value as “getting from point A to point B more efficiently than walking,” then squeeze that down until you arrive at a bike. Or a skateboard.
The chief problem with concept-first is that the MVP does not exist in a vacuum. Like any tool it has to be measured by how well it helps us achieve the intended results. If you’re developing a sports car—or the software equivalent—then an MVP is only valuable to the extent that it helps you to produce, eventually, a sports car. If you develop as your MVP the skateboard version of a sports car, where do you go from there? The only thing you can possibly do is to throw the skateboard away and start over, or become a skateboard company.
The comic illustrates this process with smiley faces, implying that with each version you have happy users and thus are on the right path. In reality, it shows four product development dead-ends amounting to—in real terms—thousands of hours and potentially millions of dollars of wasted time and effort. Take, for instance, the case of Gamevy, which produced as the MVP for their real-money gambling business a “freemium” fake-money product which—while certainly less effort and time-intensive than their ultimate goal—nearly doomed the company.
Forget The Cutesy Comic
That comic, at the top of this article? It’s about Lean Startup MVPs. It isn’t about developing software. The process it has crossed out as undesirable is actually the approach we want to take to developing software—good, well-architected systems composed of ever more generalized, ever more fundamental systems. Don’t program a skateboard when you mean to program a motorcycle—you might turn out “something” sooner than you otherwise would, but it is a false savings and you’ll pay for it very, very quickly.
Here are three more iOS development patterns that fly somewhat in the face of answers you might see on Stack Overflow touted as “best practices.” Two of these are rock solid and a third is on probationary status, which I’m throwing out there as a discussion point.
Use structs to define appropriate architectural boundaries
It’s really easy to blur architectural boundaries in an iOS app. That’s partly thanks to the Delegate pattern, which encourages concerns to spread across multiple classes with varying roles and responsibilities. When we lose sight of that boundary between view and controller we inevitably neglect the fact that appropriate and tightly defined boundaries are the backbone of a well-designed and maintainable architecture. An extremely common and simple example of where this can happen is configuring UITableViewCells in a UITableView’s data source.
Often we’ll end up with one of two extremes: either a controller that knows a lot about the internal view structure (directly setting UILabel strings and colors and other blatant Demeter violations), or a view object that takes a domain record (such as an NSManagedObject) and is responsible for translating that high level object into specific pieces of display data, itself. In either case, we have parts of our app that are tightly coupled to things from which they should be insulated. The contagion can easily spread, for instance by moving view-specific display logic to that high level domain record in order to “clean things up.”
Somewhere between tweaking UILabels in your view controller and passing NSManagedObjects all over the place is the sweet spot of just enough data for the view to render itself, with a minimum of logic required to do so. Minimizing logic is a key goal, here—the view system is one of the more complex and opaque parts of an iOS app, and one of the last places you want to have code dependent on high-level semantics, if you can avoid it. Any code in your view that takes a high-level concept and translates it into things like colors, text strings, and the like is code that is significantly harder to test than if it were elsewhere.
Immutable structs are fantastic for exactly this purpose. A struct provides a single value that can be passed across the boundary while encapsulating a potentially unlimited amount of complexity. Immutability simplifies our code by ensuring a single entry point for the configuration logic, and helps keep the logic for generating the struct’s member values in one place. They can be as high-level or low-level as is appropriate given the view and the data. For basic table views that are primarily text I might simply have a struct of String, Bool and UIColor values mapped to each visual element. On the other hand, I have a view for drawing graphs that takes a general description of the graph to be drawn, where there is less of a direct connection between the values I pass and what ultimately gets set as the final configuration.
(In the latter example, the view makes use of other classes to interpret the input and produce the final display values—to what extent you continue to rely on the view-side of things being able to interpret data in complex ways will vary. In my case, the controller “collates” the data into a general form, and the view is responsible for turning that into a renderable form)
In either case, there is one correct way to cross the boundary between controller and view, and provided you keep your view outlets private (as you should) you’ll have confidence that your controllers and views remain both loosely coupled and synchronized in their effects.
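As a sketch of what this looks like in practice (the names here are hypothetical, not from an actual project): the struct carries just enough data for the cell to render itself, the cell exposes a single configure method, and its outlets stay private.

```swift
import UIKit

// Hypothetical names throughout; the struct is the one value that crosses
// the controller/view boundary, and configuration has a single entry point.
struct SubscriptionCellModel {
    let title: String
    let expirationText: String
    let isExpired: Bool
}

final class SubscriptionCell: UITableViewCell {
    @IBOutlet private var titleLabel: UILabel!
    @IBOutlet private var expirationLabel: UILabel!

    func configure(with model: SubscriptionCellModel) {
        titleLabel.text = model.title
        expirationLabel.text = model.expirationText
        expirationLabel.textColor = model.isExpired ? .red : .darkGray
    }
}
```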
Use awakeFromNib as a form of dependency injection
Google around for how to get at your NSManagedObjectContext from your view controllers, and you’ll get two answers:
Set it on your root view controller in your AppDelegate, then pass it to each view controller you present
One downside to this solution is that, at least as of the last project I began, the AppDelegate is no longer necessarily involved in bootstrapping the storyboard. You can get at your root view controller via the window, getting a chain of optionals leading to your controller and then setting your managedObjectContext property, but it is exceedingly slapdash, at best. Another problem is all the laborious glue code involved in ensuring an unbroken chain of passing along the context, bucket brigade style, between your root and any controller that might need it. All of this is in service of avoiding globals, as advocated by the next solution:
Grab it wherever you need it straight from the AppDelegate, via UIApplication’s shared application object
Anytime this solution is mentioned, comments about avoiding globals or Apple having rejected this approach surely follow. In general, yes, globals are bad, for varying reasons (some of which have less to do with “global” and more to do with general pitfalls of reference types.) In this case, the global is bad because it bakes an external dependency into our code. In my opinion, a global is bad in this situation for the exact same reason that using a class constructor can be bad—absolutely nothing would be improved here if AppDelegate were a constructor we could call, rather than a property.
What this all is crying out for is a form of dependency injection—which is why the first solution is often preferred, being a poor man’s dependency injection solution. Too poor, in my view, since it ties a class’s dependencies to the classes it might eventually be responsible for presenting. That’s craziness, and even worse than just using UIApplication.SharedApp... inline, if followed to its natural conclusion.
Thankfully, because we’re using storyboards, we can have the best of both worlds. First, yes: your methods should be dependent on a managedObjectContext property on your class, not directly referring to the global. Eliminate the global from inline code. Second, no: passing objects bucket brigade style from controller to controller isn’t the only form of injection available to us. The storyboard can’t set arbitrary values on the classes it instantiates—unfortunately—but it does give us a hook to handle in code any setup that it can’t: awakeFromNib.
The fact that awakeFromNib is in our class and not somehow external to it is a complete technicality. To the extent that we’re being pushed into doing the least unreasonable thing we can, using global or top level methods in awakeFromNib is fair game—this code is only ever run by the storyboard, at instantiation time. To be fair, awakeFromNib is a blunt instrument, but we needn’t live with its dictates, as plenty of other hooks are called before the controller is actually put to use. Ultimately, I view using awakeFromNib in this way as no different than specifying a concrete class to instantiate in a storyboard and connect to a view controller via a protocol-typed outlet.
(In this specific case, one additional thing I would do is have my own global function to return the managed object context, and call that in awakeFromNib, as a single point of contact with the “real” global. I’ll also note that I avoid having my view controllers directly dependent on NSManagedObjectContext as much as possible, which is another pattern I’ll be discussing.)
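In sketch form, assuming the AppDelegate exposes a managedObjectContext the way Xcode’s Core Data template used to generate it (mainManagedObjectContext() is my own hypothetical wrapper, not an API):

```swift
import UIKit
import CoreData

// The single point of contact with the "real" global.
func mainManagedObjectContext() -> NSManagedObjectContext {
    return (UIApplication.shared.delegate as! AppDelegate).managedObjectContext
}

class SubscriptionListViewController: UIViewController {
    var managedObjectContext: NSManagedObjectContext!

    override func awakeFromNib() {
        super.awakeFromNib()
        // Injection happens exactly once, when the storyboard instantiates us.
        // Every other method depends only on the property above.
        managedObjectContext = mainManagedObjectContext()
    }
}
```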
One last thing: why awakeFromNib and not initWithCoder? First, awakeFromNib is called in any object instantiated by the storyboard, not just views and view controllers. Second, it reinforces the special-cased nature of the injection, over the more general case of object instantiation. Third, outlets are connected by the time awakeFromNib is called, in case that’s ever a concern. Fourth, initializers are very clearly a proper part of their class, but awakeFromNib is, arguably, properly part of the storyboard/nib system and only located on the class for convenience, giving our class-proper code design a bit of distance from what goes on therein.
Handle view controller setup in UIStoryboardSegue subclasses
This one might be a bit more controversial. I’m going to see how it shakes out, long-term, but from a coupling-and-responsibilities perspective it seems like a no-brainer. In short: configuring a new view controller isn’t necessarily—or even usually—the responsibility of whatever view controller came before it. If only we had an object whose job was handling the transition from one view controller to another, where we could put that responsibility. Wait, there is such an object—a segue. Of course, segues aren’t a perfect solution, since using them conflates animations with nuts-and-bolts setup. They are, however, a natural, lightweight mechanism for getting random crosstalk code out of our view controllers, and the field for setting a custom UIStoryboardSegue class is right below the field for setting the identifier.
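A sketch of the shape, with hypothetical controller classes, assuming the segue’s class is set in the storyboard alongside its identifier:

```swift
import UIKit

class ShowArticleDetailSegue: UIStoryboardSegue {
    override func perform() {
        // Setup that would otherwise clutter the presenting controller's
        // prepareForSegue lives here instead. The controller types and
        // properties below are stand-ins for whatever your app uses.
        if let list = source as? ArticleListViewController,
           let detail = destination as? ArticleDetailViewController {
            detail.article = list.selectedArticle
        }
        // Let the standard transition carry on as configured in the storyboard.
        super.perform()
    }
}
```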
If there’s one underlying theme throughout these patterns it’s “stop using view controllers as junk drawers for your code.”
I’m working on a brand-spanking-new iPhone app, for the first time in a while, and I’m trying to take a fundamentals-first, good-design approach to development, rather than simply regurgitate the patterns I’ve used/seen in the past. Here are three “new” approaches I’m taking this time around. Each of these patterns is broadly applicable regardless of your language or platform of choice, but with iOS development, and Xcode, they can take a form that, at first, might look odd to someone used to a particular style.
Clean Up View Controllers With Composition
Ever popped open a class and seen that it conforms to 10 protocols, with 20-30 mostly unrelated methods just piled on top of one another? This is a mess: it makes the class harder to read and debug, it makes individual lines of logic harder to test and refactor, it can mean an explosion of code or subclasses you don’t actually need, and it precludes sane code reuse.
By applying the single responsibility principle—and the principle of composition-over-inheritance—we can mitigate all of those problems, moving code out into individual classes for each protocol/responsibility. You’ll see gains pretty quickly when you realize, for instance, that a lot of your NSFetchedResultsController-based UITableViewDataSource code is nearly identical, and a single class can suffice for multiple view controllers.
That goes for view code, too: If you’re poking around in the view layer it’s probably a good idea to do it in a UIView subclass. The name of the game isn’t to minimize the number of classes in your project, and separating code by function appropriately is the basis of good code design. For that matter, the name of the game isn’t merely “code reuse” either—whether or not you’ll ever take two classes and use them independently isn’t the mark of whether they should just be smooshed into one giant class.
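For instance, pulling the data source out of the controller entirely (reusing the hypothetical cell and model from the struct example above) might look something like this:

```swift
import UIKit

// Any controller presenting this kind of list composes one of these instead
// of conforming to UITableViewDataSource itself.
final class SubscriptionListDataSource: NSObject, UITableViewDataSource {
    var rows: [SubscriptionCellModel] = []

    func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        return rows.count
    }

    func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        let cell = tableView.dequeueReusableCell(withIdentifier: "SubscriptionCell",
                                                 for: indexPath) as! SubscriptionCell
        cell.configure(with: rows[indexPath.row])
        return cell
    }
}
```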
Cut Your Managed Objects Down to Size
What’s the responsibility of your NSManagedObject subclasses? To coordinate the persistence of their attributes and relationships. That’s it. Taking that data and doing various useful things with it is not part of that responsibility. Not only do all those methods for interpreting and combining the attributes in various ways not belong in that specific class, but by being there they are manifestly more difficult to test and refactor as needed. If you’re looking at a bit of code to—I don’t know—collect and format the names and expiration dates of someone’s magazine subscriptions, why should that code be dragging all of Core Data behind it?
At a minimum, most of those second-order functions can be split out into a decorator class or struct. A decorator is a wrapper that depends simply on being able to read the attributes of its target object, and can then do the interesting things with reading and displaying that data—without involving Core Data at all. How do we eliminate Core Data entirely? By using a protocol to reflect the properties of the NSManagedObject subclass. Testing any complex code in your decorator is now a cinch—just create a test double conforming to that protocol with the input data you need.
A super simple example of a decorator I use encapsulates a Law of Demeter violation; a sketch of the general shape is below. It illustrates the form, but the usefulness pays increasing dividends as the code gets more complex. Note, also, that you needn’t have a single decorator for a given model… different situations and domains might call for differentiated or completely orthogonal decorators. In that way, decorators also provide a way to segregate interfaces appropriately.
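Something along these lines, with hypothetical entities; the protocols mirror the managed objects’ attributes, so the decorator never touches Core Data:

```swift
import Foundation

protocol PublisherAttributes {
    var name: String? { get }
}

protocol SubscriptionAttributes {
    var title: String? { get }
    var publisherAttributes: PublisherAttributes? { get }
}

// The real NSManagedObject subclasses conform with tiny extensions, e.g.:
//   extension Publisher: PublisherAttributes {}
//   extension Subscription: SubscriptionAttributes {
//       var publisherAttributes: PublisherAttributes? { return publisher }
//   }

// The decorator reaches through the relationship in exactly one place, and a
// test double is just a plain struct conforming to the protocol.
struct SubscriptionDecorator {
    let subscription: SubscriptionAttributes

    var displayTitle: String {
        let title = subscription.title ?? "Untitled"
        let publisher = subscription.publisherAttributes?.name ?? "Unknown publisher"
        return "\(title) (\(publisher))"
    }
}
```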
Storyboards Can Help Manage Composition
I used to have a knee-jerk reaction to storyboards. They felt like magic and as if all they did was take nice, explicit, readable code and hide it behind a somewhat byzantine UI. Then I realized what they really do: they decouple our classes from each other. The storyboard is a lot like a container. It lets us write generic, lightweight classes and combine them together in complex ways without having to hardcode all those relationships, because there’s another part of our app directing traffic for us.
After you’ve moved all your protocol and ancillary methods from your view controllers, you’ll probably end up instead with a bunch of code to initialize and configure the various objects with which the view controller is now coordinating. This is an improvement for sure, but you still end up with classes that mostly exist just to strongly couple themselves to other classes. That glue code is also so much clutter, at best. At worst it has no business being in your view controller class at all, but for a lack of anywhere else to put it. Or is that so?
Amidst the Table View, Label, and Button components in the Interface Builder object library is the simply named “Object” element. The description reads:
Provides a template for objects and controllers not directly available in Interface Builder.
“Not directly available in Interface Builder?” Then what’s the point, if we can’t do anything with it? Ah, but we can do things with it: we can hook up outlets and actions, and configure the objects with user-defined runtime attributes. We can, simply put, eliminate large swaths of glue code by letting the storyboard instantiate our coordinating classes for us, configuring them with connections to each other and to our views, and even allow us to tweak each object on a case-by-case basis. All with barely any code cluttering up our classes.
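A hypothetical coordinator that lives in the scene as one of those Objects might look like this; the storyboard instantiates it, connects its outlets and actions, and a user-defined runtime attribute sets pageSize:

```swift
import UIKit

class SubscriptionListCoordinator: NSObject {
    // Wired up in Interface Builder, not in the view controller.
    @IBOutlet var tableView: UITableView!
    @IBOutlet var dataSource: SubscriptionListDataSource!

    // Set per-scene as a user-defined runtime attribute.
    @objc var pageSize: Int = 20

    @IBAction func refresh(_ sender: Any) {
        tableView.reloadData()
    }
}
```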
You might have a strong intuition that a lot of that belongs in code, as part of “your app.” If so, ask yourself: if this belongs in code, does it belong in this class? Truly? Cramming bits of orphan code wherever we can just to have a place to put them is a strong code smell. Storyboards help eliminate that. Embrace them.
Huge drawback: The objects you instantiate this way have to be @objc, and as such you can’t have @IBOutlets for a protocol type that isn’t also @objc. This means you lose the ability to pass and return non-Objective-C types such as structs, enums, or tuples. This is really frustrating and a significant limitation on using the technique to clean up your view controllers more generally.
For a bit of fun I’m going through the DARPA Shredder Challenge puzzles. The challenge ended 5 years ago (and I’m not a computer scientist, besides), so I’m sticking with the tools and technologies I use professionally, despite their potential inefficiency or inappropriateness for the task.
My first problem, right off the bat—trying to load the puzzle image into a playground kept coming up with the wrong dimensions, by almost an order of magnitude. The full-size image is important since I’m basically working with the image on a pixel-by-pixel basis, and the details I needed for analysis were getting blown away. Pretty much nothing I did that involved NSImage in any way would work, and since I just needed to get at the raw pixel data, I skipped it entirely:
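Going through Image I/O directly is one way to do that; in sketch form (not the exact snippet):

```swift
import Foundation
import ImageIO
import CoreGraphics

// Skip NSImage entirely: Image I/O reports true pixel dimensions regardless
// of the file's DPI metadata, and hands back the raw bitmap bytes.
func loadPixels(from url: URL) -> (width: Int, height: Int, bytes: CFData)? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          let image = CGImageSourceCreateImageAtIndex(source, 0, nil),
          let bytes = image.dataProvider?.data else {
        return nil
    }
    // For real per-pixel work you'd also want image.bytesPerRow and
    // image.bitsPerPixel to index into the data correctly.
    return (image.width, image.height, bytes)
}
```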
Note that using NSData(contentsOfURL:) in conjunction with NSBitmapImageRep(data:) did not work, having the same dimensions problem as the simpler solutions.
About three years ago I left an NYC app startup—which I will not name here—after just over a year there. The immediate cause was personal: the emotional stress from an increasingly perilous interpersonal environment on top of an unsustainable and severe “crunch time” schedule brought me to my breaking point. Before too long the root causes that ultimately underlay my own departure brought the entire company to the breaking point, as well. After this all went down I went through a few phases, emotionally: guilt, anger, and finally acceptance of everything that happened and my role in it all.
A lot of things were conceived of wrongly, planned wrongly, executed wrongly, and finally went wrong. This isn’t just a story of how a short-funded startup with a popular niche app (at one point featured in an early iPhone “there’s an app for that” TV advertisement) gets put into the ground. Instead, this is a cautionary tale of what can happen when Agile goes wrong. It’s easy to claim to be “Agile” when what you really mean is that you’re just too small to have built up a bureaucracy around software development. In my case, there were some big warning signs that I completely ignored or didn’t know to look for, until it was too late.
The Complete Rewrite
What really attracted me to the company at first was the somewhat unique combination of a popular, seemingly simple app with a loyal user base that also had an atrocious user experience. It was well into its second version, both versions written by a contract studio, and had accumulated a fair bit of cruft around an initially awkward navigation scheme. It was a fantastic opportunity—fix up the design, make a popular app even better, and put a nice feather in my cap while simultaneously ridding the company of a chain around its ankle. I went into the position knowing we were going for “complete redesign,” but I’m pretty sure we didn’t decide on the complete rewrite until later. I don’t think there was any serious discussion of NOT doing a complete rewrite. That was a mistake.
The rewrite is basically pressing a reset button on your app. Almost everything gets thrown out, code-wise. You usually also decide to take advantage of newer tools and technologies, so some knowledge, experience, and time gets thrown out as well, in effect. Any testing, code confidence, or support history is gone. You’re starting completely from scratch. It can be pretty appealing, especially if you have no sentimental attachment to the current code base. It can also be disastrous.
In our case, a rewrite meant cutting ourselves off from Our App 2.0, our users, and all of our success up to that point. Rather than iterating feature-by-feature, cleaning up and improving the app by degrees, and letting our users come along with us while we improved a stable code base into the app we wanted, we instead effectively stopped development—from the outside perspective—and got lost in an increasingly hellish year of trying to recreate what we already had. Finally, when the app was released, our users rejected the rewritten app in an explosion of anger, and it plummeted to single-star territory in the App Store.
The Grand Vision
At the root of any guilt I felt after the company folded was my role at the very start of the process. I don’t remember if this happened after we decided on the rewrite or if it was part of the discussion. Either way, I went into a planning meeting for 3.0 to pitch my vision, and it was a doozy. In hindsight, it probably could have been a 3-5 year vision, or even just a concept around which to build reasonable, real-world plans. In actuality, everyone either loved it or accepted it without much comment and it became our 6-month blueprint. Words and phrases I used—often simply as concept or metaphor—were explicitly applied to features and concrete elements in the app. Some of them even ultimately appeared on the marketing website after release.
Even Agile, which eschews specifications and up-front planning, has an ultimate objective that, at its core, will remain more-or-less fixed, barring catastrophe. The Vision was at once too much and too little—too ambitious and over-specified to give us the flexibility to adapt in order to preserve our core objective, and too underspecified to let us anticipate, plan for, and handle the problems we were going to hit. It got to the point where I would cringe every time someone would use one of those words I threw out during the pitch, which were increasingly sacrosanct, even as development dragged on and things just weren’t coming together.
The Business Case
I have to say, first, that I have the business awareness of a fruit fly. I once accidentally got going on a rant about “vulture capitalists”—while talking to a venture capitalist. If you need me outside of the dev shop for anything it’d better involve coaching and clear expectations of what it is I’m needed to wear, do, and say. That said, I understand the point of a business is to make money, and ultimately the responsibility of the CEO is to see to that. I have tremendous sympathy for a guy trying to turn an early, surprising success in the app space into a going concern with just a few months of time and with less than a million dollars raised.
What that meant for the developers is that the concept that was pitched and embarked upon for 3.0 quickly became an iron-clad part of the business plan. I was in no way involved in or privy to the money-raising or deal-making, but I saw how quickly what should have been a “stretch goal” became a hail mary, a hill for the company to die on for lack of anything else to do. It’s hard to be “agile” when your “minimum viable product” is determined not by your users or the features you want and can implement quickly, but by the need to meet unyielding business case requirements. Our “minimum viable product” was the company-saving potential of a “Wow!” release that would knock everyone’s socks off.
That reality, spoken or unspoken, set the tone for the majority of the development process, but it also manifested in very concrete ways, particularly during the end. We were writing code, designing interfaces, and implementing features for at least one business development deal that could be described, at best, as “ancillary.” Another was in the talking-and-planning stages but, thankfully, didn’t progress to the point of actually bogging down the development work. Agile, like any methodology, must be put to use in service of the company’s broader business goals—but it is also easy for it to be sabotaged by specific business goals. With money running out and everyone on edge it can be hard to see the difference.
The Unending Marathon
For me, the most important tenet of Agile is “release early, and release often.” This is the distillation and union of three simple ideas: the minimum viable product, the sprint, and producing “usable software”.
The “MVP” can mean a lot of things to different people—there’s a graphic out there somewhere ridiculing the idea that the MVP for an automobile is a push scooter, rather than, for instance, a bare chassis with 4 wheels and an engine. Whatever actual form it takes, in order to be meaningful it has to be two things: (1) Comprised mostly or entirely of code and features that will continue to be relevant for the entire remaining development process and (2) A fairly small fraction of the entire development process. Simply put: you need both something small and something you can build around.
A sprint is a well-defined period of time (1 or 2 weeks), at the beginning and end of which you should have a high quality piece of software. This is the “usable software” part—sprints are meant to be a sustainable, cyclical process of evolving a piece of software incrementally while maintaining high standards for the product at each juncture.
For various reasons—implicit cultural reasons and unspoken business reasons—we weren’t ever going to consider releasing ANYTHING until we had 3.0 wrapped up in a bow. Unfortunately, it’s really easy for “MVP” and “usable software” to become a joke when nothing actually has to get released to anyone. Sprints become simple deadlines, which are blown through or extended as convenience requires. Our idea of “usable software” was whether the damn thing compiled and passed tests, not whether we had a piece of high-quality, releasable software at the end of each sprint.
On the flip-side, we went through a lot of changes with each sprint. Sprints felt like an excuse or opportunity to make or propose changes in a pretty ad-hoc fashion. Every time a sprint ended and I looked at the awkward, half-finished app there was another big tweak I wanted to make to steer it back towards the Vision. The sprints imposed a lattice on our rigid big-picture plan, one that brought the lack of any small-picture plan into stark relief. It was easy to see where things on a high level weren’t working, and all I could do to fix that was muck with them on the immediate scale of the sprint and whatever features we were working on at the time.
The closer we got to the finish line, the further away it seemed to be. At the end it felt like we were standing still. It was very distressing to find myself, after 10 grueling months, staring across the canyon between where we were, and where we needed to be a month ago—with no path across. It was increasingly paralyzing, and the only plan anyone else had was just to churn through the remaining holes to get us up to the near edge of the cliff. Rather than let the MVP, each sprint, and our releases guide our development we had random-walked right into oblivion.
And So It Goes
Like I said at the beginning, a lot of things went wrong. I don’t know how things would have turned out if we had been more Agile, in a meaningful way. I’d like to think we’d at least have put out some pretty good software releases along the way. Whether we’d have kept the amity of our users, or satisfied our investors… who knows. Maybe there wasn’t any winning—we were short on time, money, and room to maneuver in our industry. Maybe, in the end, we were always fated to be one of the 90% of startups that fail. The only thing I really know for certain is that whatever we were, from start to finish, we weren’t Agile.
Lately I’ve been thinking a lot about responsibilities, and when a given responsibility should be a class, when it should be a method on a related class, and when it should be a function. Methods are almost always a convenient and straightforward option, but they are also inappropriate for a great many of the things you want to do with/to an object. Refactoring, after all, very often involves restructuring a warren of methods on a single class into a constellation of objects that work together via composition. There are obvious examples of things that are simple to bang out as methods but really shouldn’t be done—saving to particular file formats, generating reports, business transactions—but what big picture rules are there to guide us?
I’m currently writing a series of classes to take a set of data and ultimately render it as a graph in a UIView. The controller is responsible for collating the data into a Graph struct, which is passed to GraphView. We cross the controller-view barrier with a medium-level object that describes the output we want, but we leave the particulars to the view. The first step in getting something that can be displayed (specifically, a CGPath to render using CAShapeLayer) is to use a GraphVectorizer object to generate a description of the graph as a path. GraphVectorizer is a protocol—so that different styles can be implemented as separate classes—with the GraphView being agnostic as to which one is actually used.
GraphVectorizer does not return a CGPath. CGPath is an opaque data type, and while technically it can be introspected in a limited fashion it isn’t really amenable to being compared to other CGPath values all that easily. GraphVectorizer isn’t simply doing grunt conversion work, however—a lot of our important logic about how things get displayed lives in these classes, with the potential for edge and corner cases. In order to facilitate easy testing, we instead return our own transparent Path type, which is essentially an array of CGPathDoSuchAndSuch function calls stored as enums. For each style we can vectorize a Graph, compare the returned array, and be confident that we’re going to end up with the CGPath we want to display.
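The shape of that type is roughly this (hypothetical names):

```swift
import CoreGraphics

// Each element mirrors one CGPath-building call, so tests can compare plain
// values instead of poking at an opaque CGPath.
enum PathElement: Equatable {
    case move(to: CGPoint)
    case addLine(to: CGPoint)
    case addQuadCurve(to: CGPoint, control: CGPoint)
    case close
}

struct Path: Equatable {
    var elements: [PathElement]
}
```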
The question now is what form does the code to turn my transparent Path type into a CGPath take? Pragmatics dictates that it simply be a method on my Path type—this will only ever conceivably be used as an intermediary for generating CGPaths, and we’ve already decided to couple to the CGPath interface fairly tightly. But step back for a second and consider that we might have other drawing system possibilities at play—perhaps something OpenGL based, or the slightly higher level UIGraphics. I often feel stuck seesawing between the unsatisfying options of a very simple—often single method—class, or a top-level function, floating off by itself. A third option—static methods on a bucket struct—is equally unsatisfying.
I’ve been ruminating on some rules to help guide myself in these situations, and others. These are just possibilities, and nothing I’ve set in stone:
1. Instance methods can receive and return values of the same type, or a lower-level type. Equals should meet only in a neutral place. Thus, for instance, a PNG could take a UIColor and return a count of pixels close to that color, but it could not take a JPEG and return an estimate of how similar the images are, nor could it return a JPEG from a conversion method.
2. Instance methods should never return a higher-level type.
3. When two different types that are “equal enough” need to interact, the default should be a full class, for flexibility of implementation.
4. If the implementation devolves into a single method, it should be removed to a free function.
Protocols and Extensions
As should be obvious from the above, I’m writing an iOS application. Not so obvious is that I’m using Swift, and not Objective-C. Swift allows the extension of types with locally visible additions. Random new methods could be added to a type, or protocol conformance could be added. It’s a very powerful feature, if a bit uncomfortably close to monkey-patching for my taste.
Is this a situation where a CGPathConvertible protocol could be declared, and an extension to my Path type provided to implement that conversion? It depends. My rule on extensions is that, if they’re not exceedingly low-level additions, then they should be exceedingly simple. An extension might be a good place for code that hits rule #3 to end up, provided it doesn’t violate rule #2. One can imagine a Rails-esque extension to Int along the lines of - number(int: Int, ofThing: Thing) -> [Thing], and weep.
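For the sake of illustration, the CGPathConvertible option might look like this, using the hypothetical Path type sketched earlier:

```swift
import CoreGraphics

protocol CGPathConvertible {
    var cgPath: CGPath { get }
}

extension Path: CGPathConvertible {
    var cgPath: CGPath {
        // A straightforward fold over the recorded elements.
        let path = CGMutablePath()
        for element in elements {
            switch element {
            case .move(let point):
                path.move(to: point)
            case .addLine(let point):
                path.addLine(to: point)
            case .addQuadCurve(let point, let control):
                path.addQuadCurve(to: point, control: control)
            case .close:
                path.closeSubpath()
            }
        }
        return path
    }
}
```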
Hang out in IRC, Slack, or Gitter rooms for open source projects for a few days and before too long you’ll see someone ask how to unit test some part of their app. It’s particularly common with large frameworks that encourage inheritance over composition, which usually results in a great deal of environmental setup standing in the way of efficient, automated testing on a unit basis. It sometimes makes me feel bad, but usually my answer is: you can’t.
If you’ve lashed your code so tightly to your framework that you need to jump through hoops to test it, then you’re almost certainly not unit testing it. Testing code that’s in a subclass of ActiveRecord::Base is an integration test. Testing how an Angular component renders using the framework’s templating system is an integration test. It’s hard to write a unit test when your app is forcing you to write an integration test.
Why do we even test at all?
When it comes to testing—any testing—one must always keep in mind that the actual point of the testing is to help us write better software, not to meet some quota for code coverage or tests written. So many devs are content to write bullshit, space-filling tests just to keep up appearances, or out of a sense of obligation. The emphasis in some communities (cough-ruby-cough) on “test-driven” design or development is particularly problematic here, since too often there’s an over-emphasis on writing the test first as the only hallmark of TDD, and a complete ignorance of how to let the test drive the code—the actually important part.
“Best practices” or “being idiomatic” aren’t magical outs here, either. Design patterns and best practices are great, insofar as they actually result in good code design. It is self-evident, however, that if the way a developer codes and tests is predetermined by cookie-cutter-style conventions then that developer is not letting the tests drive anything other than the clock. While this often puts the lie to claims of test-driven development, it isn’t just a concern for aspiring practitioners of TDD—awkward, jury-rigged, and brittle tests should be setting off alarms and clueing us into code smells and technical debt whether we write tests before the code or after.
Just calling it a unit test doesn’t make it a unit test
When it comes to unit testing in particular—where TDD is most natural and effective—there are two rules to follow in order for something to be a unit test, in a meaningful sense:
You need to be able to mock any dependencies of the unit
You need to own all the dependencies of the unit
These rules, just like the rule of writing tests at all or writing the test first, are in service of a higher goal: allowing the process of writing tests to make it clear to us where our design needs to change. This is the most basic way tests “drive” development—by encouraging design choices that make it possible to test in the first place.
Two approaches to mocking
One approach to the first rule is to figure out how to reduce and simplify your dependencies. Just by chopping up one class into several—each with one or two dependencies—the code almost magically becomes much more easily tested, refactored, and extended. This is a classic example of the test driving the improvement of your code by encouraging the separation of responsibilities.
The second approach is to look at your oodles of dependencies and piss and moan about all this mocking you have to do. Slog through it for a few hours. Pop into a chatroom. Let someone tell you that you can just test directly against the database. Write an integration test disguised as a unit test. Finally, call it a day for the rest of your career.
One reason so many developers insist on the tests adapting to fit their design, rather than the other way around, is because it isn’t actually their design at all. Frameworks that encourage code to be piled into a handful of classes that fit a set of roles determined by some development methodology do developers a disservice. Frameworks aren’t bad, necessarily, but when it’s considered “best practice” for the developer to forfeit all responsibility for their app’s architecture and their code’s design, it makes it impossible for the developer’s tests to inform the development process.
If you start off by subclassing someone else’s code you’ve almost certainly fallen afoul of the first rule right from the start. You’ve introduced a massive, irresolvable dependency into the very foundation of your code. Sometimes you’ll have little choice but to rely on scaffolding provided by the dependency in order to test your own code, integration style.
The two operative words in the first rule are “you” and “able.” The rule isn’t “It must be theoretically possible for someone, with unbounded knowledge of the dependencies, to mock the dependencies,” or even “You need to have mocks for the dependencies, from wherever you can get them.” If you can’t look at the class and immediately know what needs to be mocked and how to mock it, that should be a huge red flag.
Only mock what you own
The second rule is a consequence of a third rule: only mock what you own. You own your project’s pure classes, and to the extent that you subclass you own whatever logic you’ve added. You don’t own the base classes, despite their behaviour being incorporated via inheritance. This is another rule where the face value isn’t so much the point of it as the consequences: by only mocking your own classes, you’re pushed into building out facade and bridge classes to formalize the boundaries between your app and any external systems.
Tests are much more confidence-inspiring when the mocks they depend on are rock-solid doubles of tiny classes each with a single responsibility. Tests that instead stub one or two methods on a huge dependency are brittle, are prone to edge cases, increase coupling, and are more difficult to write and tweak with confidence. Tests of classes that themselves have to be stubbed are almost worthless.
Thinking outside of the class
Following these three rules can help put the focus back on writing well-structured, maintainable code. It’s not always obvious, however, what changes need to be made. If a developer is staring at a class that descends from ActiveRecord::Base, and which includes a couple of plugins, along with a raft of methods that all need to be tested, it’s understandable to look askance at the notion that AR and those plugins should be expunged in order to test the class. After all, without AR they don’t even have a class to begin with, right? The path of least resistance all too often is just to write an integration test using the entire stack.
In these situations one must keep in mind that “unit” and “class” are not identical, and to ask not “how can I possibly remove these dependencies from my class” but “how do I remove my code from this class, which I don’t really own?” By moving those methods off to other classes as appropriate (formatting, serializing, and complex validations are things that might be on an AR class that can easily be broken out into their own plain-old-ruby classes) we’ve accomplished the same thing. So much ActiveRecord-dependent code can be refactored to depend only on a hash (or OpenStruct) of attributes.
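As a sketch of the idea (hypothetical names), the formatting logic moves into a plain Ruby class that only needs a handful of attributes, so its unit tests never touch the database:

```ruby
# A plain Ruby class that knows nothing about ActiveRecord.
class SubscriptionSummary
  def initialize(attrs)
    @name       = attrs[:name]
    @expires_on = attrs[:expires_on]
  end

  def to_s
    "#{@name} (expires #{@expires_on.strftime('%B %d, %Y')})"
  end
end

# The AR model keeps only a thin delegation:
#   def summary
#     SubscriptionSummary.new(name: name, expires_on: expires_on).to_s
#   end

# And a unit test needs only a hash of attributes:
SubscriptionSummary.new(name: "Ruby Weekly", expires_on: Date.new(2016, 5, 1)).to_s
```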
It’s possible to use monolithic frameworks and still care about good design. Finding ways to take ownership of our code away from the framework is crucial. Your tests should be a searchlight, pointing out places where your code is unnecessarily tangled up in someone else’s class hierarchy.
Preventing bad testing habits
Developers often begin their professional life with a few high-level heuristics that are, unfortunately, continually reinforced. A few relevant ones:
Minimize the number of classes to write and test
“DRY” code up by relying on libraries as much as possible
MVC means my app is made up of models, views, and controllers
It’s not difficult to see how these lead to large, fragmented classes tightly coupled to oodles of dependencies. The resulting code is going to be difficult to test well in any circumstance, and will bear little resemblance to anything that was “test-driven.” I’d like to suggest some replacements:
Minimize the number of dependencies per class.
Minimize the number of classes dependent on an external dependency.
Write the code first, worry what “category” each class falls into later.
The first will result in more classes, but they’ll be more easily tested, refactored, and maintained. The second encourages dependencies to be isolated into bridge, adapter, or facade classes, keeping the dev’s code dependent on interfaces he or she owns. The third breaks the MVC (among others) intuition pump that says every class we write has to fit one of two or three possible roles. A dev utilizing these heuristics will find themselves asking “how do I unit test this?” far less frequently.
Now, “how do I integration test this” is a different question entirely… more on that later.
Monads have a weird and varied reputation outside of the FP universe. For Rubyists, in particular, monads and functional programming can look alien and nearly unparseable. Ruby is aggressively object oriented—it doesn’t even have first-class functions, technically—and the foreign nature of a lot of the background necessary to grok monads leads to indifference at best and hostility at worst.
On that score, I once overheard (after mentioning monads at a Ruby meetup) someone define a monad as “something assholes talk about to seem smart.” There is way too much knee-jerk rejection by some in the Ruby community of things they don’t immediately understand or find comfortable, but that’s another post—or multi-year psychological survey—entirely. This isn’t an article about why monads are awesome and why Ruby devs should love them.
Other than the indifferent and the hostile, there’s also a weird middle group of Ruby developers who are enthusiastic about monads, but who drastically overthink their implementation. I recall coming across a project that was mostly just an ersatz implementation of algebraic data types and type checking in Ruby. I definitely appreciate the benefits of those things, but Ruby just does not have either, and besides that we don’t need them to use monads—in Ruby or any other language.
What is a monad? A refresher
At their core, monads are just another design pattern, like the command or visitor patterns. Here’s a simple definition of a monad, or at least I think so, courtesy of Jed Wesley-Smith, and paraphrased by myself:
A type T which encapsulates a value a (T a), and for which there exist functions such that:
a → T a
T a → (a → T b) → T b
What’s interesting about this definition is that, in a philosophical sense, something is a monad regardless of whether you actually write down and implement those functions in code. Either the functions exist, and someone somewhere could write them and use them in their code, or they don’t.
This is the concept of mathematical realism, which underlies the notion that we “discover” mathematics as opposed to invent it. Max Tegmark, an MIT physicist, extends this into his hypothesis that the universe itself is essentially mathematical, and, as a consequence, all possible mathematical structures exist, in some meaningful sense. Here the idea is much simpler: if it is possible for an object to be a monad, then it is a monad, whether that was your intention, or not, and regardless of the extent to which it looks like a monad in another language.
As Rubyists, things shake out even more simply since we don’t have much in the way of typing to worry us. If we implement #bind (the second function) there’s no mechanism for defining or enforcing type signatures, so #bind and #map have the exact same signature in Ruby. As a result of duck typing the only real type signature is arity, but that being the case remember that a monad in the general case is a mathematical entity—not a type or a class—and as such it is and remains a monad only so long as we use it as one.
Ruby almost has a built-in monad already
We’ve already recognized the similar shape of #bind and #map, but what about that first function, usually called #return? #return, being a method that takes a value and returns an instance of a type, is, in Ruby, just a constructor. Actually, it isn’t strictly identical: with return, there’s a universal interface, while initializers have hard-coded and peculiar names. This is a direct consequence of dynamic typing and the differing natures of OO vs Functional Programming.
Array, of course, has both an initializer and #map. Can Array#map function as Array#bind? Unfortunately, not quite. Let’s look at that signature again.
T a → (a → T b) → T b
In terms of Array, this looks like
Array a → (a → Array b) → Array b
So, #bind takes a block that returns an Array of a given type, and then itself returns an Array of that given type. #map doesn’t work that way. If you tried to use #map like #bind you would get the following:
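For instance, with a block that returns an Array, as #bind requires:

```ruby
[1, 2, 3].map { |x| [x * 2] }
# => [[2], [4], [6]]
```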
Clearly, not what we wanted. #map gave us an Array of Arrays, not an Array of Integers like we’d expect from #bind. Luckily for us, it looks like there’s a simple transform from one to the other. We’re just one #flatten call away from having an Array monad, in theory:
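Monkey-patching Array for the sake of illustration:

```ruby
class Array
  def bind(&block)
    map(&block).flatten
  end
end

[1, 2, 3].bind { |x| [x * 2] }
# => [2, 4, 6]
```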
With just five lines of code we seemingly now have a monad in Ruby, no complex type enforcement necessary. It remains, however, incumbent on us, the developer, to maintain fidelity to the monad requirements, as with all other informal contracts in our code. With our implementation above we could ignore the laws and use #bind exactly the same as we’d use #map, and it, surprisingly or not, would work just fine:
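For example, passing a block that returns a bare value, the way you would with #map:

```ruby
[1, 2, 3].bind { |x| x * 2 }
# => [2, 4, 6]
```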
This works thanks to the specifics of our implementation, and we all know not to depend on knowledge of an implementation rather than interface when we rely on a library method, right? In fact, in this case it’d be an even bigger mistake, because the implementation is flawed. It works for Arrays of numbers, strings, your own classes, etc… but it doesn’t work for Arrays of Arrays.
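To illustrate, take an Array of Arrays and a block that simply re-wraps each element:

```ruby
[[1, 2], [3, 4]].bind { |a| [a] }
# => [1, 2, 3, 4]
```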
The monadic law has been broken: #bind has given us an Array of Integers, instead of an Array of Arrays of Integers. We can tweak our implementation to fix this, but in doing so we’d break (and have to fix) any uses of #bind that ignore the law and treat it like #map:
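One way to tweak it (a sketch) is to concatenate the results instead of flattening them, which flattens exactly one level and insists that the block return an Array:

```ruby
class Array
  def bind(&block)
    inject([]) { |result, value| result + block.call(value) }
  end
end

[[1, 2], [3, 4]].bind { |a| [a] }
# => [[1, 2], [3, 4]]

[1, 2, 3].bind { |x| x * 2 }
# TypeError: no implicit conversion of Integer into Array
```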
So that leaves us with an Array monad, which is of limited usefulness without the other List goodness in Haskell. A far more universally useful monad is Maybe (or Optional for Swift devs).
Call me (maybe)
Maybe represents the possibility of there being a value, or there not being a value, without using nil. This means we can call methods on the result of an operation without worrying about which situation we’re dealing with. If we actually do have a value, calling #bind (or other related methods) operates on the value. If we don’t have a value, #bind short-circuits and simply returns the empty Maybe. It’s basically Rails’s #try on steroids.
Maybe and Optional are the names of types for this monad in Haskell and Swift, respectively, but that doesn’t mean there has to be a corresponding class in Ruby. Haskell’s and Swift’s implementations use algebraic data types, which are great, but those aren’t objects and Ruby doesn’t have anything similar. So when we talk about Maybe in Ruby, we’re not actually talking about anything called Maybe in our code, but the coupling of two types that we can implement: Some and None. In a sense that’s all Maybe/Optional are, as well: a combination (called a tagged union) of two other types.
Here are the Haskell-ish type signatures for Maybe (Haskell uses Just and Nothing rather than Some and None):
Maybe a = Some a | None
bind :: Maybe a → (a → Maybe b) → Maybe b
#bind is the same as before: it takes a function that receives the value and returns another Maybe, and itself returns a Maybe. So, a block passed to #bind has to return either Some a with a new value or None without a new value. There are no other choices. Ruby obviously will let us return anything we feel like, or even however many different kinds of anything we feel like. We can’t rely on type checking to help us here anymore than we ever can, and trying to build some ersatz type enforcement just for this special case makes no more sense than it ever would. So, what do we do? Well, we return either Some a or None. It’s as simple as that.
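A minimal sketch of the two classes (convenience methods for getting the value back out are omitted):

```ruby
class Some
  def initialize(value)
    @value = value
  end

  # The block must return another Some or a None.
  def bind(&block)
    block.call(@value)
  end
end

class None
  # No value, so there's nothing to do: short-circuit.
  def bind(&block)
    self
  end
end

Some.new(16).bind { |n| n >= 0 ? Some.new(Math.sqrt(n)) : None.new }
# => a Some wrapping 4.0
None.new.bind { |n| Some.new(n + 1) }
# => the None, untouched
```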
That’s it. That’s all we need in order to conform to the laws. There is, almost literally, nothing to it when written down in actual code. Concerned that #bind will let you return anything you damn well please? So will almost any other Ruby method, so don’t sweat it too much. I don’t mean “don’t test your code,” or anything so laissez-faire, but don’t get too caught up in the lack of type checking. That’s a red herring, and either way your monadic code isn’t any worse off than the rest of your Ruby code in that regard.
One very real downside to forcing your code to care about return types is that you lose the benefit of duck typing, and couple your use of a monad to your specific implementation. Theoretically, if you were to use a library or other shared code with methods that returned a Maybe, its return values should be interchangeable with your implementations. Some#bind will work as expected, None#bind will short-circuit as expected, and so on. Now, there might be other differences you care about (particularly around what utility methods are implemented/exposed), but when it comes to the monad type, the behaviour of #bind is the only thing that matters.
One more thing to be careful of: #bind has to return a monad of the same kind. A Some returned from an Array#bind call, or an Array returned from a Some#bind, are both monads, and will both respond to #bind in turn, but they aren’t valid invocations. You can nest #bind calls, of course, but when it comes time to return, make sure you’re returning the same kind of monad as you started with.
I have, in my own projects, a pair of module methods—Monad.bind and Monad.compose—for simplifying monad composition. compose actually is just a bit of sugar on top of reduce and bind, which does the heavy lifting. By using Procs and composition it’s trivial to build up a set of simple transforms into more complex operations. They’re very easily tested, as individually they’re just procs which respond to #call, same as always.
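The shape is roughly this (a sketch, not my exact implementation):

```ruby
module Monad
  def self.bind(monad, f)
    monad.bind(&f)
  end

  def self.compose(monad, *fns)
    fns.reduce(monad) { |m, f| bind(m, f) }
  end
end

# Individual transforms are just procs, trivially testable on their own.
parse  = ->(s) { s =~ /\A\d+\z/ ? Some.new(Integer(s)) : None.new }
double = ->(n) { Some.new(n * 2) }

Monad.compose(Some.new("21"), parse, double)
# => a Some wrapping 42
```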
The Either monad is similar to Maybe, except instead of just one value or nothing you get either a Left with a value or a Right with a value. Left often represents an error, and Right a successful result. The implementations are equally simple:
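In the same sketch form as Some and None:

```ruby
class Right
  def initialize(value)
    @value = value
  end

  # The block must return another Right or a Left.
  def bind(&block)
    block.call(@value)
  end
end

class Left
  def initialize(error)
    @error = error
  end

  # Short-circuit, carrying the error value along untouched.
  def bind(&block)
    self
  end
end
```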
Does the rigid type enforcement of a language like Haskell or Swift help catch bugs in your use of monads that you might trip over in a permissive environment like Ruby? Absolutely. That doesn’t mean monads have no value to Rubyists, or that we have to turn the language on its head to mine that value. We can build some nice APIs on top of these basic implementations, of course, to add some safety or convenience, but at their core there’s nothing about monads that’s incompatible with Ruby or that even qualifies as nonidiomatic Ruby.
This weekend I went to implement sign-in with Github (specifically, since my app is dev-focused). Having used it to sign into apps in the past, I had the vague idea that Github was an OpenID provider. It’s not, it’s just a plain-old OAuth2 server, so even more than usual it’s incumbent on the developer, i.e. me, to ensure it is used securely. There are a few major considerations.
It should be well-known—but probably isn’t—that OAuth 2, by itself, is an authorization protocol, not an authentication protocol. In authorizing a user, we want to gain access to some protected resource. In authenticating a user, we want to affirmatively establish their identity. OAuth specifies a rock solid flow for the former, but mostly ignores the latter. This might seem counterintuitive if you’re used to thinking of OAuth as an authentication mechanism, but when you consider that the point of the protocol is to enable password-less access to data owned by a third party service it actually makes a lot of sense that authentication for your app is out of scope.
It’s still possible to use OAuth2 as part of a secure authentication process. In fact, there are extensions that build on OAuth2 to create such an authentication protocol, such as OpenID Connect or Facebook Login. OAuth isn’t inherently insecure, it just doesn’t go that last mile to provide the functionality we want by default. Absent implementation of an additional authentication layer spec, there are steps that an application developer can take to use OAuth for authentication.
Properly configured SSL is absolutely essential
SSL can be an incredible pain to set up: acquiring the certificates, processing them in obscure ways, installing them, configuring it all, and then dealing with potential errors and issues in your code. OAuth 2 relies heavily on SSL to provide security, however, and as such eschews techniques such as token signing to provide fallback security. Improper use of SSL can expose client secrets in addition to communications between your server and the browser. When faced with certain errors it can be tempting to toggle obscurely named settings that seem to resolve the error. Oftentimes this can have the effect of severely crippling SSL, for instance by disabling certificate verification.
Only use the server-side authorization grant
Using the implicit grant is inherently insecure, as a “bad actor” app developer can obtain an access token for someone who has logged into their app and has also logged into your app. That token can then be injected into your app. Your app would have no way of knowing that the access token provided was issued to another app entirely. If a user is logged in based solely on verifying this access token, then your user’s account can be completely compromised. Github does not support the implicit grant.
Block cross-site request forgery attacks
OAuth2 provides a ‘state’ parameter that, when provided as part of the authorization request, will get returned as part of the callback. This parameter should be an unguessable string that can be verified, usually by tying it to the session. Cross-site attackers won’t be able to guess the ‘state’ and thus won’t be able to inject arbitrary access tokens into your app. This sort of attack is obviously significantly more dangerous when it comes to apps that export data.
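In a Rails-ish sketch (the helper methods and the GITHUB_CLIENT_ID environment variable are hypothetical), that looks roughly like:

```ruby
require "securerandom"

def authorize_url
  # Tie an unguessable value to the session before redirecting to Github.
  session[:oauth_state] = SecureRandom.hex(24)
  "https://github.com/login/oauth/authorize" \
    "?client_id=#{ENV['GITHUB_CLIENT_ID']}&state=#{session[:oauth_state]}"
end

def oauth_callback(params)
  # Reject the callback outright unless the state matches what we stored.
  raise "OAuth state mismatch" unless params[:state] == session.delete(:oauth_state)
  exchange_code_for_token(params[:code])
end
```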