There are a number of questions to be addressed by a course like this:
- How should we think about modularization?
- What can we use for an MVC-like separation?
- What can we use for templating?
- What about data binding?
- What tools are available for building web apps?
- How can we debug a web application?
- How can we test a web application?
- Given that the industry is moving so fast, how can we build to grow and adapt?
This course tackles these questions, providing alternatives where it matters, but it also isn’t afraid to be opinionated and recommend specific choices.
This is an area where lots of interesting developments are happening right now, and at Edument, we’re proud to now offer a course covering it.
You can read more about the course on our home page.
As a part of this work, we noticed that there was a need for a separate one-day CSS course to focus on styling the web. Some web developers will want to focus on scripting the web, others on styling it, and some will want to take both courses.
The web stack is immense, and takes time to master. It’s a journey of discovery for every web developer. At Edument, we feel proud to employ our specific expertise to provide you with courses to set you off in the right direction, with useful techniques, tools and best practices.
Carl Mäsak describes a way to mass-produce “anti-confusion particles”, bringing more clarity into the process of reporting bugs, unit testing, or tracking down surprising behaviors in code.
Just like bad hair days, we sometimes have bad programming days. And sometimes good ones. I know that some days I’m unfit to work even on a simple script, and other days I’m happily hacking on the innards of some compiler or other. Our skills don’t just fall on a single point on some skill spectrum. We fall within a bell curve.
And we have precious little ability to tell for ourselves when we’re being smart, and when we’re being stupid. See, part of the reason ignorance is bliss is that we’re usually not aware we’re having a Homer moment. We’re so confused that we don’t even notice that we’re confused.
The times when I do mess up, I’m usually really grateful that I wrote unit tests. Conversely, the times I mess up and didn’t write unit tests, I usually curse the hubris and laziness that led me down that path. I don’t believe I write less buggy code just because I’m using unit tests: the number of bugs going in is probably about the same, but with tests, more of them are discovered right away. Which makes me go “d’oh!”, and fix the code. But every such “d’oh!” is one fewer “aaargh” in production, so it’s not so bad.
Tests, though, are only a small part of what I wanted to write about today: a systematic way to improve our worst performance as programmers.
Every good bug report needs three things
I’m on an open-source project with an active issue tracker, a test suite, and an actively developed implementation. (It’s a Perl 6 compiler.) I’ve submitted over a thousand issues by now, and I’m at the point where the whole process of submitting an issue for something has lost the charm of novelty, and I’m just formatting the thing in as short a time as possible. However, I’ve learned from Joel Spolsky long ago that every good bug report needs exactly three things:
1. Steps to reproduce,
2. What you expected, and
3. What you got instead.
Since we’re all living in the future nowadays, we have tools to make this easier. On our IRC channel, there are bots that run code for us in all of the big implementations. They reply on-channel with the output, and include pertinent build revision numbers. Which covers 1 and 3 above. The ensuing discussion usually covers 2; otherwise I add some clarification. But the upside is this: bug reports basically write themselves. Nine times out of ten, they consist of a few lines of (heavily trimmed) IRC conversation between humans and bots.
I’m proud to be called a bot sometimes myself on the channel. (Only 52 times, according to the IRC logs.) I make it a point to submit an issue for every problem I spot, because the process is painless and semi-automatic to me by now. I can do something useful even when I’m having a bad programming day. The whole thing has led me to believe that we should make more parts of the coding experience painless and semi-automatic, because it makes us more effective.
The Anti-Confusion Particle
But we’re our own worst enemy. We’re programmers; we like complexity; we live for it and tackle it every day. And the times when the complexity conquers us, we’re often too absorbed in the problem to realize that we’ve lost.
In the surprisingly well-written fanfic Harry Potter and the Methods of Rationality, the first step to regain touch with reality and start thinking clearly again consists of saying out loud:
I notice that I am confused.
That’s all. You’ve broken the spell of confusion. You’re free, and can do something about it.
The thing we do is simple. We describe what we find confusing. We do this as succinctly as possible. We do it using three things:
1. What did I do,
2. What did I expect, and
3. What did I get.
Look familiar? Why, it’s the same triple as with bug reports! And with good reason: if a program or a system goes against our expectations, that’s confusing. It’s also a potential bug report. In a way, we can get rid of our confusion by submitting a mental bug report to ourselves.
I call this triplet the “anti-confusion particle”. It’s like an element from the periodic table, which catalyzes a reaction with your confusion and helps turn it into understanding. It has three quarks.
A natural part of getting the anti-confusion particle to work well is to remove all the cruft. Get it to have as few moving parts as possible. Throw away bits of code, all the while trying to retain the surprising element. In our community, we’ve started calling this “golfing the code”; the practice being named after the recreational activity sometimes seen in Perl 5 circles, of writing programs with as few characters as possible. The shorter you get your code, the clearer the unexpected thing will be.
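Golfing works in any language. As an illustration (this example is mine, not from the community’s logs, and the function name is invented), here is a classic Python surprise reduced to a few lines while keeping the unexpected behavior intact:

```python
# A golfed reproduction: everything inessential has been thrown away,
# but the surprise (state leaking between calls) survives.

def append_item(item, bucket=[]):   # the mutable default argument is shared
    bucket.append(item)             # across calls, which surprises many readers
    return bucket

first = append_item(1)    # what we did: two independent-looking calls
second = append_item(2)   # what we expected: [1] and [2]
print(first, second)      # what we got: the same list, [1, 2], twice
```

A few lines of code, plus the three quarks in the comments: a complete anti-confusion particle, ready to paste into a channel or an issue tracker.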
Unit tests fit nicely into this kind of thinking. A unit test consists of quarks 1 and 2 (do and expect), and your testing framework runs all the tests and spits out anti-confusion particles at you.
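As a sketch, here is what that looks like in Python’s standard unittest (the function under test, parse_version, is made up for the example):

```python
import unittest

def parse_version(s):
    # made-up function under test: "6.0" -> (6, 0)
    return tuple(int(part) for part in s.split("."))

class TestParseVersion(unittest.TestCase):
    def test_two_part_version(self):
        got = parse_version("6.0")      # quark 1: what did I do
        self.assertEqual((6, 0), got)   # quark 2: what did I expect
        # quark 3, what did I get, shows up in the failure report
        # whenever the two disagree
```

Run it with `python -m unittest`, and a failing assertion prints the expected and actual values side by side.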
When confusion goes away
Sometimes, in programming help channels on IRC, I’ve seen newcomers enter and plaintively utter “I tried this and that, but it didn’t work!”. (And the fact that they omit the expect part is a sign that they’re too confused to know they’re confused.) A regular on the channel will approach them and say “tell us what you did, what you expected to happen, and what actually happened”, providing the newcomer with the recipe for figuring things out themselves.
You know what the eventual reply usually is?
“Thanks, it works now!”
You see, what usually happens — what in a sense must happen — with confusion is that it eventually resolves itself. We just have to make sure we have the tools and vehicles to arrive at the resolution as quickly as possible. The anti-confusion particle is the first such tool we use.
What follows next is usually popping the hood of the system, following code flows and connections, and basically playing digital doctor. This process can take seconds, hours, or days. It may be trivial, or it may be oh-my-complexity-class NP-hard. But at that point, we’re well on our way to understanding the problem.
We’re un-stuck. The spell of the Homer moment is removed. We can be productive again.
Carl Mäsak has a passion for software and software process. He works as an architect and programming mentor at Edument. He likes to work on healing systems in need of an architecture, or to help introduce order into a chaotic domain. In his spare time, he likes riding a bike, cooking good food, and writing music. Not necessarily all three at the same time.
Through Informator, Edument offers courses in testing, Git, CQRS and Event Sourcing, C#, Perl, and web security. All our courses are delivered by our passionate and experienced coworkers. Check them out.
The Nordic Perl Workshop is an annual event that the Nordic countries take turns hosting. This year’s workshop will take place in Malmö, Sweden. The companies Edument and Informator have graciously agreed to sponsor the event.
Perl is a modern scripting language with a vibrant developer community, active core developers, and just massive amounts of modules. Perl source code drives companies, web pages, IRC bots, and parts of the Internet infrastructure. However, much has happened since the first release of Perl in 1987; as a way to celebrate that, the theme of the workshop this year is “Perl in 2011”.
Edument is a newly started company in the Skåne region in southern Sweden, providing consultancy, education, and development mentorship. Informator is a well-established provider of education in the Nordic countries. Their joint sponsoring of the event means that the attendees will have a lovely venue close to the station, with a nice view of the Malmö harbour area.
Last Friday, Jonathan Worthington and I (Carl Mäsak) decided to get our feet wet with CQRS and event sourcing. The toy project we settled on: a simple but realistic web site for two-player board games.
In this post, I summarize how things went.
Architect meets domain expert
Since there were only the two of us, I took the role of the domain expert, and Jonathan took the role of the architect. He expertly teased a model out of me. We arrived at two aggregate roots:
Game. Easy enough.
Design: commands and events
Commands:

- RegisterPlayerCommand
- ActivatePlayerCommand
- InvitePlayerCommand
- AcceptInvitationCommand
- RejectInvitationCommand
- (no StartGameCommand)
- PlaceStoneCommand
- SwapPlayerColorsCommand
- ResignGameCommand
- TimeOutGameCommand

Events:

- PlayerRegisteredEvent
- PlayerActivatedEvent
- PlayerInvitedEvent
- GameStartedEvent
- InvitationRejectedEvent
- StonePlacedEvent
- GameWonEvent
- PlayerColorsSwappedEvent
- GameResignedEvent
- GameTimedOutEvent
For each command and event, we took a moment to model through what data we needed to send along. It gave us an appreciation for one of the ways in which commands and events differ: on the inside.
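The difference can be sketched with two invented data classes (a Python sketch; the field names here are guesses for illustration, not the project’s actual code). A command carries a caller’s intent, which validation may still refuse; the matching event records an accepted fact, often enriched with data the caller never supplied:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PlaceStoneCommand:        # imperative: a request that validation may reject
    game_id: str
    player_id: str
    x: int
    y: int

@dataclass(frozen=True)
class StonePlacedEvent:         # past tense: a fact that has already happened
    game_id: str
    player_id: str
    x: int
    y: int
    move_number: int            # filled in by the domain, not by the caller
```

The naming convention carries the same distinction: commands are imperative, events are past tense.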
There was a moment of joyful insight as we realized that we had gotten this far into the design of the system and not once talked about state. Quite a revelation.
Being the one with the “domain expert” knowledge, I kept unwillingly slipping back into the role of the client. Otherwise we’d have gotten some things wrong, which wouldn’t have shown up until the next “meeting with the client”. Jonathan remarked: “There’s got to be a lesson in here somewhere.”
(Afterwards, we’ve changed two things in the above model: we eventually realized that we would need an InvitationAcceptedEvent after all. The reason we originally figured we’d be able to do without it is that we noticed that it would fire off a GameStartedEvent, and that would be enough. But no, it needs to fire off both, otherwise the Invitation would still be open. The other thing we realized was that a better name for PlayerInvitedEvent would be InvitationMadeEvent. That way, all the commands and events contain in their names the aggregate that they are acting on — which makes a lot of sense.)
Commands, events, test framework
We wrote our first test, and the necessary classes to go with it.
Our goal now was to get enough of the system running for our first test to fail. That took a few hours, partly due to the fact that we were figuring out how to fit everything together.
The wiring is like this: the test contains a ‘given’ list of events, a ‘when’ command, and a ‘then’ list of events or an exception. The test fixture creates an aggregate root to do the testing on, and loads it up with the events from the ‘given’ part. Aggregate roots have a special flag on the apply_event method for applying events without having them register as changes to be committed. That’s all the concession to testing that’s needed. Quite neat.
The test fixture then sends the command to a bus-like thing. This triggers the right command handler, which does the required validation and then calls a method on the aggregate. That’s the command part of things.
Now, the method on the aggregate is just a thin wrapper for applying an event. The event is mapped through a lookup table (our workaround for the lack of method overloading in Perl 5) to an apply-event method. Note that on the way, we visited the same apply_event method as when we prepared the aggregate with the ‘given’ events. This time the generated events are saved, though… and that’s exactly what we’re then using to check against the ‘then’ events. (Or, if we got an exception, the test fixture captures that and compares it with whatever was expected.)
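That whole given/when/then loop can be sketched in a few lines (a Python approximation; the real project was in Perl 5, and every name below is invented):

```python
class Aggregate:
    handlers = {}                        # event type name -> apply method

    def __init__(self):
        self.uncommitted = []            # events generated since loading

    def apply_event(self, event, record=True):
        # dispatch through a lookup table keyed on the event's type name,
        # mirroring the workaround for the lack of method overloading
        self.handlers[type(event).__name__](self, event)
        if record:
            self.uncommitted.append(event)

def run_test(aggregate, given, when, then, handle_command):
    for event in given:                              # 'given': replay history,
        aggregate.apply_event(event, record=False)   # without recording changes
    handle_command(aggregate, when)                  # 'when': send the command
    assert aggregate.uncommitted == then             # 'then': compare new events
```

(The exception branch of the ‘then’ part is left out for brevity.)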
It’s quite a simple system, though it took us a few hours to understand and get running. Still not too bad for our first attempt.
Getting the first test to pass
Trying to get the test we’d written at the beginning of the day to pass, we realized that we were still missing one component: a repository to store the aggregate in while we were testing it. We settled on writing a test repository, with a total capacity of one (1) aggregate.
After that, things fell into place quickly. We got our event wired up, and the test passing. Thus, we entered into the next phase…
Ping-pong pair programming
By the looks of the commit log, that’s where I became unconscious and Jonathan kept on hacking.
Ordinary pair programming has a “driver” and a “navigator”. In ping-pong pair programming, the idea is for the two people to alternate, taking turns writing a test for the other to implement. This was the first time we tried it, and it went very smoothly. Definitely something to try again. In regular pair programming, the navigator can sometimes doze off. But doing things this way, both of us were engaging with the process of writing code and tests, even when we weren’t in the role of driver.
We got through eight such cycles of ping and pong. At this point, things were really effortless: all the groundwork was already laid, and now that we were finally implementing state in our aggregates, there were no longer any obstacles left. A very weird feeling; the aggregate was its own little world, merely responding to commands and events as they came flying by. Coding just flowed, not least because we managed to time it with the Ballmer Peak.
We surprised each other a bit by turning what appeared to be quite tricky tests into excessively simple bits of implementation. Things generally required less wiring up than we expected. (Again, because object state wasn’t the driving component, leaving us free to structure the innards of an aggregate any which way we wanted.)
One thing we also discovered is that we generally had to write fewer tests than “usual”. Each new test covered a bit more ground than we expected, and we often didn’t bother to write a test because we already knew it was going to pass. We’re not sure whether that’s (a) a good thing, and we shouldn’t worry, (b) a bad thing that’s going to cost us in the future, or just (c) a sign that we knew too much about the implementation. I guess more practice with this way of testing will tell.
All in all, a happy first day with CQRS and event sourcing in actual practice.
Carl Mäsak has a passion for software and software process. He works as an architect and programming mentor at Edument. He likes to work on healing systems in need of an architecture, or to help introduce order into a chaotic domain. In his spare time, he rides a bike, cooks food, and writes music. Not necessarily all at the same time.
Upcoming CQRS events
Edument will arrange a Software Architecture – Community Day in Malmö, Gothenburg and Stockholm.