Playing with functional programming gave me a lot of new insights into object-oriented design. I like the lightweight “interfaces” of functional signatures, and I love the idea of really small, reusable functions. However, in most object-oriented systems I’ve seen Header Interfaces, composition through inheritance, or simply more code mashed into one class.
Thanks a great deal to Mark Seemann, I’ve discovered that adhering to the SOLID principles can produce object-oriented code which has many of the properties I liked in the functional style. I would almost say that purely SOLID OOD is equivalent to functional design, but that’s enough material for another article.
As you refactor your classes closer to the principles, you will see many patterns emerging. I’d like to have a look at two particular ones.
Let’s have a look at some code, a simple feature request, and what we may call refactoring to patterns (although on a smaller scale than what Joshua Kerievsky describes in his book).
Caching a Storage Reader with a Decorator
Suppose we have a storage reader interface:
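The original listing isn’t included here, but a minimal sketch of such an interface might look like this (the `StoredItem` shape and member names are assumptions):

```csharp
// A sketch of the reader abstraction: look up a stored item by its key.
public interface IReader
{
    StoredItem Read(string id);
}

// A placeholder for the stored data; the real type isn't shown in the text.
public class StoredItem
{
    public string Id { get; set; }
    public string Payload { get; set; }
}
```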
We also already have a simple implementation for reading items from DB storage:
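The original implementation isn’t shown either; it could plausibly be as simple as a single delegating call (the data-access details below are hypothetical):

```csharp
public class DbReader : IReader
{
    // The trivial one-expression method the article refers to later.
    public StoredItem Read(string id)
    {
        return ReadItemFromDatabase(id);
    }

    private StoredItem ReadItemFromDatabase(string id)
    {
        // Actual database access omitted; returns null when not found.
        throw new NotImplementedException();
    }
}
```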
Now we get a feature request to add in-memory caching to our reader, because our overloaded database hinders performance, while the application server has plenty of spare time and free memory.
The obvious and easiest way would be, for example, to add a Dictionary<string, StoredItem> to the DbReader class and then, in the Read method, check its contents prior to reading from the database. Such an easy implementation of the Read method would look like this:
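The caching version described above might be sketched as follows (the exact original code isn’t shown):

```csharp
private readonly Dictionary<string, StoredItem> cache =
    new Dictionary<string, StoredItem>();

public StoredItem Read(string id)
{
    StoredItem item;
    if (cache.TryGetValue(id, out item))
    {
        return item; // cache hit - skip the database entirely
    }

    item = ReadItemFromDatabase(id);
    if (item != null)
    {
        cache[id] = item; // remember the item for subsequent reads
    }

    return item;
}
```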
As you can see, the trivial one-expression method immediately bloated into a 10+ line, multi-branch monstrosity. Of course I’m overstating a bit, but this was trivial example code, and you can already see an order-of-magnitude increase in line count and complexity because of a simple feature.
This class now also has multiple responsibilities. In addition to reading a value from the database, this implementation handles reading from the cache and updating it, as well as maintaining the cache itself.
Most code I’ve encountered in practice does exactly this and stops there.
It’s not a big deal yet from a maintenance perspective, I’ll give you that, but we’re purists here (right? :) ), and stopping at this level would prevent us from seeing the more general patterns emerging here.
To clean the solution up a bit, let’s separate the cache into its own class.
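A sketch of such a cache class (the class name is my own):

```csharp
public class Cache
{
    private readonly Dictionary<string, StoredItem> items =
        new Dictionary<string, StoredItem>();

    // Returns the cached item, or null when the key isn't cached.
    public StoredItem GetItem(string id)
    {
        StoredItem item;
        items.TryGetValue(id, out item);
        return item;
    }

    public void SetItem(string id, StoredItem item)
    {
        items[id] = item;
    }
}
```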
This encapsulates the dictionary and gives a clearer cache interface by publishing only the GetItem and SetItem methods. Not only does it make the cache easier to use and reuse, it also allows for easier changes to the in-memory store - maybe in the future we’d like to use a different one. DbReader could use this class as a dependency for handling the cache, but it really only takes care of one of the responsibilities, maintaining the cache. The DbReader.Read method would still have to make all the decisions.
Before we modify the solution even further, notice one thing. The signature of GetItem is the same as that of the IReader.Read method. The cache can implement the IReader interface itself.
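The change can be sketched as a simple delegation (a hypothetical reconstruction, since the original listing isn’t shown):

```csharp
public class Cache : IReader
{
    // ... GetItem and SetItem as before ...

    // IReader implementation - the signatures already match,
    // so Read can simply delegate to GetItem.
    public StoredItem Read(string id)
    {
        return GetItem(id);
    }
}
```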
Now that the cache implements the IReader interface, we may have another look at its relationship with the DbReader.
The original requirement was to add caching to the database reader. According to the Open/Closed Principle, a class should be open to extension, but closed to modification. We did modify the DbReader implementation, so we may have done something wrong. We can extend the class without modification by decorating it. We can rewrite it as follows:
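A decorator along these lines might look as follows (the class name CachingReader is my own):

```csharp
public class CachingReader : IReader
{
    private readonly Cache cache;
    private readonly IReader decoratedReader;

    public CachingReader(Cache cache, IReader decoratedReader)
    {
        this.cache = cache;
        this.decoratedReader = decoratedReader;
    }

    public StoredItem Read(string id)
    {
        var item = cache.GetItem(id);
        if (item != null)
        {
            return item;
        }

        // Cache miss - delegate to the wrapped reader and remember the result.
        item = decoratedReader.Read(id);
        if (item != null)
        {
            cache.SetItem(id, item);
        }

        return item;
    }
}
```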
It looks about as complicated as our first caching attempt and it adds an extra class, but it has its benefits.
Most importantly, we haven’t modified the original implementation - this makes it easier on other DbReader users, who thus aren’t forced into caching, but can optionally add it using this decorator.
Also, as long as you always program against interfaces (like IReader instead of the concrete DbReader), such decoration can take place in your composition root alone, without a single change to the rest of the application code.
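In the composition root the wiring might then be as simple as:

```csharp
// Only this line changes when you add or remove caching;
// everything that consumes IReader stays untouched.
IReader reader = new CachingReader(new Cache(), new DbReader());
```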
I have used composition to extend the original reader, but in this case it would also be viable to use class inheritance instead (provided the Read method were declared virtual). However, inheritance wouldn’t work for the following scenario, and I suggest favoring composition over inheritance anyway.
From Decorator to Composite
You’ve seen a class extension created via a Decorator, adding the caching logic around a delegated call to the wrapped DbReader. But what if I want to reuse my new cache class as a stand-alone in-memory storage instead of just a decorator, for example as a fake storage in tests?
In this case I can still implement the same interface, but I cannot delegate the call - in the stand-alone scenario there won’t be any decorated DbReader to attempt reading from another source. The item either is in the in-memory storage or it isn’t.
The implementation is straightforward:
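A sketch matching the description below:

```csharp
public class InMemoryReader : IReader
{
    private readonly Dictionary<string, StoredItem> items;

    // The dictionary is passed in from the outside, so it can be
    // shared with whatever component updates the cache.
    public InMemoryReader(Dictionary<string, StoredItem> items)
    {
        this.items = items;
    }

    public StoredItem Read(string id)
    {
        StoredItem item;
        items.TryGetValue(id, out item);
        return item; // null when the item isn't in memory
    }
}
```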
Notice two things: I have removed the SetItem method, and I’m passing the dictionary from the outside. The point is to adhere to the IReader interface more strictly. Somewhere in your application there is probably a DbWriter and a decorator to update the cache - in our case the shared dictionary. In practice I often implement this as a single class implementing both IReader and IWriter, and encapsulating the cache implementation. In this example, however, let’s keep the InMemoryReader as small as possible.
Such an in-memory cache implementation is indeed simpler, but now how do I compose it together with the DbReader to cache the stored items?
The answer to this is the Composite pattern. From the outside it’s similar to a Decorator - it’s a special implementation of the same interface it wraps - but it wraps more than one instance. In our case, the cache and the database reader.
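A two-reader composite might be sketched like this:

```csharp
public class CompositeReader : IReader
{
    private readonly IReader cache;
    private readonly IReader readerToBeCached;

    public CompositeReader(IReader cache, IReader readerToBeCached)
    {
        this.cache = cache;
        this.readerToBeCached = readerToBeCached;
    }

    public StoredItem Read(string id)
    {
        // Try the fast reader first, fall back to the slow one.
        var item = cache.Read(id);
        if (item == null)
        {
            item = readerToBeCached.Read(id);
        }

        return item;
    }
}
```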
In the composition root we can pass the InMemoryReader and the DbReader into the composite as the cache and the readerToBeCached. This allows us to use both types completely separately, but, when necessary, compose them to get the originally requested caching functionality.
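The composition-root wiring could then look like this (the shared dictionary being the one also updated by the writing side):

```csharp
var sharedItems = new Dictionary<string, StoredItem>();

IReader reader = new CompositeReader(
    new InMemoryReader(sharedItems),
    new DbReader());
```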
This is fairly solid (pun intended), but if you think about the Read method now, it’s an IReader.Read which tries to call one IReader.Read first and, if that didn’t return anything, calls another IReader.Read.
This sounds like lazy collection reduction to me, so we can try to rewrite it as one. In the same step, we can further generalize the aggregate to accept more than two readers - for example you may want to have an in-memory cache, NoSQL cache and the SQL storage, or a NoSQL cache, SQL cache and a 3rd party service etc.
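Using LINQ’s deferred execution, a generalized composite might be sketched as:

```csharp
public class CompositeReader : IReader
{
    private readonly IEnumerable<IReader> readers;

    // Accepts any number of readers, ordered from fastest to slowest.
    public CompositeReader(params IReader[] readers)
    {
        this.readers = readers;
    }

    // Thanks to deferred execution, readers are only queried until
    // the first one returns a non-null item.
    public StoredItem Read(string id)
    {
        return readers
            .Select(reader => reader.Read(id))
            .FirstOrDefault(item => item != null);
    }
}
```

For the three-layer example above, composition could be `new CompositeReader(inMemoryReader, noSqlReader, sqlReader)`.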
This implementation can take an arbitrary number of individual and independent reader instances, and when Read is called, it will try them one by one until an item is returned or all the readers have been tried.
Notice that the implementation complexity is back to a one-expression method. Pretty neat, huh? The only responsibility of a composite is orchestration - making sure the wrapped objects are called in a particular manner. And again, all changes to caching policies and implementations happen in the composition root, while the rest of the application stays completely intact.
SOLID principles can lead you to write reusable, single-purpose classes. Often this leads to emerging patterns that you may have heard of, but haven’t seen or used yourself.
In the example I’ve focused on Decorator and Composite, because in my opinion they embody the simplicity of functional interfaces in the object-oriented world. There are of course many more manifestations of the similarities between the object-oriented and functional approaches, if you take the time to discover the patterns.
I’ve recently participated in several threads and online chats discussing the relationships between static typing, unit testing, and stepping through your code with a debugger. It made me think about how I use these tools and what their roles are in the development process. All three can be viewed as means of making sure the code “works”, but how do they work together? What can each do, and where does it need help from the others?
Automated testing is a great opportunity to eat your own dog food. The tests describe your application code’s use cases and behaviors, valid inputs, and expected outputs. Unit tests (and I deliberately use the term rather loosely here) are the most interesting. They are closest to the code, written by the same person, and are an integral part of development.
I think it’s easy to see (especially with property-based testing) that unit tests usually describe the breadth of your code’s intended contract. This expands your knowledge of, and confidence in, how the code behaves in a part of its runtime state space.
Some functional programming environments also provide a REPL console. It can be especially handy in the prototyping phases of a project. It lacks automation and persistence, but in some sense it serves the same purpose - dogfooding the contract.
Static Type Checking
First and foremost, types are about memory safety. They make sure you don’t take the memory representation of a string reference and interpret it as, say, a Person structure value (with overflow bugs and everything). Some languages provide weaker guarantees, some stronger, but most modern languages try to ensure memory safety through types.
Static type checking simply means that these rules are enforced during compilation - failing it when they are violated - thus providing a very short feedback loop, even shorter than unit tests provide in most environments.
Types also most often carry specific meaning - they describe your domain model, the kinds and shapes of your data, and their public contracts. Maybe you begin to see similarities to unit testing, but hold on: while unit tests explore the possible runtime state space, static type checking constrains it. Code that won’t typecheck won’t compile, so there is no possibility (if the type system is sound) of reaching that state at runtime. In this sense static type checking is a dual of unit testing.
Most languages and environments offer you a debugger to step through your code. Expression after expression, you can observe the state of all your objects, and maybe even change it in the middle of the program’s execution.
In many cases it’s the fastest way to explore the state of a running program, and get to the root of a problem.
Alternative debugging methods include various debug outputs, logs, or dumps.
By your powers combined…
I’ve said above that the type system lets you constrain the state space of your application, and that unit tests let you explore and gain confidence in the unconstrained parts of it. But unless you have 100% test coverage and a perfect type system (whatever that may mean in your application’s domain), they will leave unexplored parts of the runtime state space. And there may be bugs lurking in those dark places, hidden from your language facilities and automated testing.
This is where I think debugging steps in. A debugger can serve as a way to bridge the gap between type-checked and tested code.
I think I’ve shown that all three tools have their place in the development process. Unit tests act as executable documentation, static type checking limits the untested possibilities, and a debugger can help you in the space between, when you need a quick look inside the application.
Notice that you don’t necessarily need all three. Many popular programming languages provide only dynamic typing, yet people produce thousands of successful applications with them. Legacy systems often lack automated testing, but they are still in use to this day. There has also been some resentment towards debugging, for example among some functional programmers, and yet they can produce very high quality code.
Finally, bugs will of course always occur. Using these tools properly may let you minimize the issues and react quickly when you discover them, but you still have to be prepared.