Author Archives: Weronika Łabaj

To DDD or not to DDD? What to do if your domain is boring?

I’ve heard many times that DDD should be applied only to complex, interesting domains. Even some experts whom I deeply admire say that, so it must be true, right?

The problem is that I’ve never heard what exactly makes a domain interesting or complex – in particular, what makes it interesting or complex enough to justify DDDifying it. I think we’ve been asking the wrong question here.

This tweet by @DDD Borat summarizes my thinking perfectly:


But first let’s have a look at two examples that shaped my opinion on the subject.

Case study # 1. Anemic domain model. Boring, boring, boring LOBA (BBBLOBA).

A few years ago I worked on an e-commerce system. We covered basically everything you could imagine an online store does: displaying the catalog, managing promotions, calculating prices, checkout, emails, payment integrations, and all the backend stuff that customers never see, like refunds or shipment integration.

The business logic was randomly divided between controllers, data access classes and stored procedures. There was no business logic layer whatsoever (at least not when I started there).

One day I was particularly frustrated. Looking at yet another property bag which was proudly (and inaccurately) called our domain entity, I mumbled something about the Anemic Domain Model. My colleague heard it and sighed. “Yep, no wonder we have an anemic domain model here”, he said. “There’s no interesting business logic. This domain is just soooo boring.”

I didn’t know what to say. I had just spent a week tracking how refunds were calculated. I had found a bug, which our QA rejected, saying “It’s too complicated, I’m not raising it”. If that wasn’t an interesting piece of business logic, then I have no idea what was. So I only nodded my head in faked agreement and went back to work.

Case study # 2. Domain crunching applied. Boring, boring, boring pet project.

A few months ago my boyfriend and I got really pissed off with the mess that is our personal finances. We didn’t know exactly where our money went and how much we spent. We had used Excel before, but that didn’t seem good enough anymore. Our finances had got more complicated over time, and our expectations were higher. After a few years of tracking expenses, we knew what we wanted out of an app.

We did some research: I checked dozens of apps and none of them fit our criteria. We could have hacked one to fit our needs and done some calculations outside of the app. Instead we did what any programmer would do in such a situation – we decided to write our own.

Now this surely is the most boring domain one could imagine, right? Apart from maybe a TODO list, it’s the most popular pet project ever. Every programmer writes it at some point.

But over a period of a few months we learned a lot about the domain. I think we can say we did proper domain crunching. I acted mainly as the annoying customer/business expert who is never satisfied with what they get (I’m absolutely brilliant in this role). Often I didn’t know what I was after, but I knew exactly when something felt wrong.

The first insight was that in fact we have two main parts in our app, which are to a large extent independent: tracking and analyzing. That fact impacted our model in a significant way.

For example, accounts are very useful for tracking. I can compare the actual total on one of our 11 accounts with the data in our app. If there’s a difference, I can start looking for what I missed this time. FYI, I can’t force myself to enter all transactions every day, so it’s extremely important that identifying the transactions I missed is convenient. Accounts are perfect for this.

On the other hand, I don’t care about accounts at the analysis stage. I don’t care whether a specific payment was made from my personal account, our joint account, or my boyfriend’s wallet. Consequently, why in the world would I filter my expenses by account in reports? It gives me no actionable insights whatsoever. Adding such a feature would be a waste of time.

You might think “Isn’t it obvious? What’s the big deal?”. This is exactly what shows that we’re on the right track with modeling our domain. Afterwards it seems like the most obvious thing under the sun. But I can assure you it wasn’t that obvious when we started. Plus, most apps DO offer reports by account. Don’t ask me why anybody would use this. I have no idea. I wouldn’t.

Then we hit our first problem. I’m self-employed, so I pay taxes and health insurance myself. It’s not really an expense, in the sense that I can’t do much to make it lower. So from the perspective of optimizing expenses it was only noise. Yet, it showed up on my account and it was useful to track in some way. At first we added a flag which indicated whether a specific expense should be included in analysis. But after a while we realized we had a missing concept here – an income cost. We have other expenses that shouldn’t be included in reports, but tracking my income costs on their own is actually useful. Making this concept explicit made both things easier and less error-prone.

The next challenge came from my boyfriend. He does a lot of business trips. He covers the expenses during his trips himself, and then his employer returns the money with the next salary. He goes mainly to countries more expensive than Poland, so those expenses are rather high. They completely ruined our analysis and made tracking all expenses challenging. We still wanted to track them in the app, so we wouldn’t forget to check whether the money was returned, but they weren’t just normal expenses like groceries. That led us to discovering another new concept – reimbursable transactions.

Since I occasionally lend money to family or friends we thought about explicitly modelling loans too. After giving it some thought we decided that using reimbursable transactions in that case is good enough. If we ever need more detailed distinctions between those two types of transactions, we can adjust our model. For now it’s good enough.
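The concepts we ended up with can be sketched in a few lines. This is only a hypothetical Python sketch, not our app’s actual code, and all names are mine:

```python
from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical transaction kinds; making each concept explicit replaces
# the error-prone "include in analysis?" flag we started with.
class Kind(Enum):
    EXPENSE = auto()        # everyday spending: shows up in analysis
    INCOME_COST = auto()    # taxes, health insurance: tracked, excluded from analysis
    REIMBURSABLE = auto()   # business-trip spending: tracked until the money comes back

@dataclass
class Transaction:
    amount: float
    description: str
    kind: Kind = Kind.EXPENSE
    reimbursed: bool = False  # only meaningful for REIMBURSABLE

def analyzable(transactions):
    """Only ordinary expenses feed the analysis reports."""
    return [t for t in transactions if t.kind is Kind.EXPENSE]

def awaiting_reimbursement(transactions):
    """Reimbursables we still need to chase."""
    return [t for t in transactions
            if t.kind is Kind.REIMBURSABLE and not t.reimbursed]

txs = [
    Transaction(120.0, "groceries"),
    Transaction(900.0, "health insurance", Kind.INCOME_COST),
    Transaction(450.0, "hotel in Berlin", Kind.REIMBURSABLE),
]
assert [t.description for t in analyzable(txs)] == ["groceries"]
assert [t.description for t in awaiting_reimbursement(txs)] == ["hotel in Berlin"]
```

With the kinds explicit there is nothing to remember when entering a transaction; the model decides what shows up in analysis.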

We’ve made many more discoveries like those described above. We used the app, noticed that something felt wrong, discussed it and refined our model. With each new discovery, our model got cleaner and made more sense. Looking back, we were surprised we hadn’t come up with all of this from the very beginning. It looks so obvious!

Back to DDD Borat

If you think about this, an e-commerce platform is way more complex and interesting than a personal finance app. Or is it?

I think we’ve been asking the wrong question here. It’s not about the domain per se. It’s not that one domain is more interesting or more complex than the other. Being interesting (whatever that means) is not an intrinsic characteristic of a particular domain.

What makes a particular domain interesting are problems you are solving.

In many cases it doesn’t make sense to take your “domain crunching” very deep, and some basic, off-the-shelf model or slightly customized solution will do. If you can get away with CRUD, then by all means, do CRUD. If you’re a physicist, then a bird is a bird; that level of abstraction will do just fine. It wouldn’t work for a biologist, though.

The other thing that makes a particular domain interesting is you.

Can you notice when your model is not good enough anymore? Do you know when it’s time for a re-modeling session? Do you ever talk to your domain experts using their language? Are you truly interested in their problems and how to best solve them?

If not, then no domain will be “interesting enough” to justify using DDD.

No excuses

So don’t let the popular myth hold you back. You can apply the so-called “strategic design” principles even to your pet project.

In our case applying DDD concepts to the pet project pays off. The app is easy to use, and we make fewer mistakes than in the beginning. When it was a simple CRUD app, I messed something up at least a few times a week, because I forgot which flags to set in which situation. Fixing my mistakes could take us anywhere between 10 minutes and an hour. Now each concept is modeled explicitly and I have no doubts about how to enter transactions, even when I’m particularly tired.

In many ways our app is simpler than the apps available on the market (e.g. our “budgets” are primitive), but it’s more sophisticated in other aspects. Above all – it makes sense to us.

To re-write or not to re-write. That is the question.

Thanks to discussions around a post for a company blog, I came across the paper “An Agile Approach to a Legacy System”. It was published 11 years ago, but I’m surprised how relevant it still is today. It reads like a novel, so I really recommend giving it a go.

A few things that really resonated with me:

– Legacy code is not (only) a technical problem. The authors mention that trust and politics are important factors. Like it or not, you can’t get around them, and very often “human factors” determine whether you succeed or not.

– If you want to build something better than the legacy system then you need to focus on the users. Find their current pains and solve their problems, provide business value. If you do your job well they’ll become the biggest advocates of your work.

– Take care of yourself and your team. Take breaks, don’t overwork yourself. Take time to bond. Take pride in your work. Work on interesting problems and have fun!

– If you rewrite legacy code, you’ll reproduce the legacy system. Since I’ve worked on a few systems that were rewrites of legacy ones, I know what they’re talking about here 😉 You can throw in shiny new technologies and come up with better architecture diagrams, but it still doesn’t feel quite right.

There is also a list of case studies of projects where this approach was used.

Dependency injection and code testability

I first came across dependency injection (DI) three years ago, when I found Mark Seemann’s excellent book “Dependency Injection in .NET“. Back then it seemed to me that DI was IT! The solution to a ton of problems we had in our code at the time.

Like many people, I naturally equated DI with the use of a suitable container, such as Autofac. After all, “poor-man’s DI” sounds somehow improper. Over the last few years certain concepts have settled better in my head, and in practice various pros and cons emerged that I had no idea about while reading the book.

One popular belief about dependency injection is that it makes your code easier to test. We obviously mean automated tests here, most often unit tests, written with tools like NUnit, MSTest, etc.

Sort of true, yet not quite.

The two reasons for code being hard to test that I encountered most often:

  • the use of static classes and methods inside the method I wanted to test;
  • creating an object instance (simply calling new MyClass()) directly inside the method I wanted to test.

Using a container solves both of the above problems, because:

  • you can’t inject a static class;
  • it’s the container that creates the object instance, and it does so outside the method you’re testing.

So you could say that using a container imposes certain rules which make the code easier to test.

The good news is that the same rules can be applied without a container. Simply don’t overuse static classes, and pass the dependencies a method needs as its parameters. If many methods in one class need the same dependency, you can also pass it as a constructor parameter.
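To sketch the rule, here’s a minimal, hypothetical example in Python (the same idea applies in C#; all names are made up): the dependency comes in through the constructor, so a test can hand in a fake without any container.

```python
class SmtpMailer:
    """The real dependency: talks to the outside world."""
    def send(self, to, body):
        raise NotImplementedError("would open an SMTP connection here")

class OrderService:
    # The dependency arrives through the constructor, so the method under
    # test never calls `new`/a static helper itself.
    def __init__(self, mailer):
        self.mailer = mailer

    def place_order(self, customer, item):
        # ... business logic would go here ...
        self.mailer.send(customer, f"Order confirmed: {item}")
        return True

# In a unit test we hand in a fake instead of SmtpMailer; no container needed.
class FakeMailer:
    def __init__(self):
        self.sent = []
    def send(self, to, body):
        self.sent.append((to, body))

fake = FakeMailer()
service = OrderService(fake)
assert service.place_order("anna@example.com", "book") is True
assert fake.sent == [("anna@example.com", "Order confirmed: book")]
```

The test never touches SMTP; it only checks what the service asked its collaborator to do.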

Simple, right?

On top of that, we avoid the problems that come with containers, such as mismanaging object lifetimes or overusing interfaces.

The bad news, though, is that programmers are people too. In my experience it’s much easier to introduce new rules if a new “toy” comes along with them. People are more motivated (because it’s new, because it sounds smart, because you can put it on your CV, because we’re cool, because everyone does it, and so on).

Using a container also provides quite handy arguments when enforcing the new rules. Take static classes (a real-life example). Explaining why static classes make our code untestable got tiresome after the tenth time, and saying “because I said so” wouldn’t do. But “because you can’t inject a static class” sounds smart and is convincing – after all, we had already decided to use a container – and on top of that it’s one short sentence.

Finally, the most important part. It’s not easy to change habits, especially if we’ve been writing untestable code for years and our architect loves static classes and always knows better. It’s easier to justify the time and effort needed to learn new rules when it comes with the introduction of a new tool. It makes no sense, but that’s how we work 🙂

It seems to me that it’s precisely these practical aspects that gave rise to the misunderstanding. In theory you don’t need dependency injection, let alone a fancy container, to make your code easier to test. In practice, though (paradoxically), it’s easier to introduce in a team that is only just learning how to write testable code.

After some time you may conclude that a container and DI are overkill, that you don’t need a cannon like that. But by then writing testable code will already be an ingrained habit.

Time for Polish

I’ve noticed that many Polish programmer-bloggers switch to writing in English at some point. It makes sense. That way they practice a foreign language and can reach a wider audience. It probably also helps when looking for a job at a foreign company.

For me the biggest advantage of writing in English is that you don’t have to wonder how to translate various terms. Many new terms either haven’t been translated into Polish or have several competing translations.

A somewhat sad side effect is that there are fewer and fewer interesting posts in Polish. The authors keep learning and evolving, but… they now share their knowledge in English. Most Polish programmers know English well, but it’s still a bit of a barrier.

That’s why, for me, it’s time for a change in the other direction. I hope that by writing in Polish I’ll reach a few people who will learn something new from these posts.

There’s also a second, less important, selfish reason. Since I communicate in English at work all the time, I’ve started having problems speaking (and writing) in Polish. I’ve regressed a lot over the last 2 years and have even started making silly spelling mistakes. Embarrassing. So the time has come to work on my Polish 😉

How to lose a client with a simple form

Banks in Poland are fully embracing the power of technology. You can open a new bank account, take out a loan, or do lots of other things without leaving home. You don’t need to send a photocopy of your ID or passport, so your personal data is safe. All you need to confirm your identity is an existing bank account.

The system is based on trust. One bank trusts the other to have checked your documents when you opened the first account. So all you have to do now is fill in the form online, then make a small payment (~30 cents) to a given account. That transfer is used to confirm your identity, and then the money is returned. The bank checks your personal details, including your address, date of birth, etc.

That’s really brilliant. I’ve used this system a few times before and it worked like a charm.

But not this time.

I come from a small village. Streets in my lovely village don’t have names. My village doesn’t have its own post code. That’s common in Poland; the national post closed many branches a few years ago. We use the post code of a nearby town, just like other villages in the area.

Depending on the design of a particular form, the name of my village goes into “Street”, “City” or “Name of the place”. The name of the town with the post code is either “City” or “Post”.

About 40% of Polish people live in villages, so I’m not alone. Yet it seems like many IT specialists have no idea about any of this. Maybe they all live in cities.


About a week ago I decided to exchange currency using an online platform. I filled in the form to open an account. But my identity couldn’t be confirmed.

Even though I provided the same values as for my other account, they ended up in different fields. Because of how the forms were designed, I couldn’t possibly fill them in the same way.

I understand it, the help-desk people understand it, but the system doesn’t. Apparently, humans can’t override the automatic match. Probably that’s some super-duper-smarter-than-human security policy. Oh, did I mention that both services (online banking and currency exchange) are provided by the same company?

I’ve been trying to untangle this mess for the last week and have lost hope. I switched to their competitors.

All their wonderful support people couldn’t make up for the wrong model. For a stupid address. I wonder how much money they’ve lost over this.

How is an address represented in your model? Would that model work for me?
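One way to model an address that would survive my village is to keep free-form lines instead of rigid Street/City boxes, and to compare addresses by content rather than by which box each word landed in. This is only a hypothetical Python sketch; the field and function names are made up:

```python
from dataclasses import dataclass, field

# A deliberately loose address model: free-form lines plus the post code,
# instead of forcing every address into Street/City fields.
@dataclass
class Address:
    lines: list = field(default_factory=list)  # e.g. ["Lipowo 12"] for a village
    post_code: str = ""
    post_town: str = ""  # the nearby town that owns the post code

    def normalized(self):
        """Compare addresses by content, not by which field each word landed in."""
        words = set()
        for line in self.lines + [self.post_town]:
            words.update(line.lower().split())
        return (self.post_code, frozenset(words))

# The same village entered in two different form layouts still matches:
a = Address(["Lipowo 12"], "11-500", "Gizycko")
b = Address(["Lipowo 12", "Gizycko"], "11-500")
assert a.normalized() == b.normalized()
```

A match rule like this would have let the human-understood equivalence override the field-by-field comparison that locked me out.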

Choosing the right tool is not enough

As a programmer I’ve been taught that picking the right tool for the job is THE THING. I’m thinking here about the kind of decisions that are based on a combination of extensive knowledge (not only of the tools, but also of the paradigms behind them and their limitations) and lots of thinking (most likely inspired by solving challenging, real-world problems).

But looking at the projects I’ve participated in, I realized that picking the tool is only one tiny step in a very long journey.

Here are a few challenges that I’ve observed so far.

The right tool won’t help you if you use it in a wrong way

I have very little experience with NoSQL databases so far, but a few things have surprised me already. In SQL we usually start modelling by thinking about what data we have and what would be the best way to store it. But with NoSQL databases we need to start by thinking about how we want to access the data, and optimize the structure for that goal. What is considered a dubious practice in the SQL world might be perfectly acceptable in the NoSQL one (e.g. data duplication is very common).

If we don’t know how we’re going to use the data, then I don’t think it’s possible to come up with a good model. You can’t just make it up as you go, because you’ll end up trying to build an SQL-like model for querying on top of a NoSQL database. That means you’ll have serious performance problems. For example, document databases work best if you access documents by their ids and if each document is an independent aggregate (meaning it contains all the data you need at that moment). If you create lots of tiny documents and top that off with many indexes, each index touching the majority of the documents to check a single field on them… well, it probably won’t work well in the long run.
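To illustrate the difference, here’s a hypothetical sketch with plain Python dicts standing in for documents; no real document database is involved, and the document shapes are made up:

```python
# Relational-style modelling: tiny normalized documents, so reading one order
# takes several lookups (and, in a real store, index-heavy queries).
normalized = {
    "order:1": {"customer_id": "cust:7", "line_ids": ["line:1", "line:2"]},
    "line:1": {"product": "book", "qty": 1},
    "line:2": {"product": "mug", "qty": 2},
    "cust:7": {"name": "Anna"},
}

# Document-style modelling: one aggregate per order, duplicated customer data
# and all, loaded in a single read by id.
aggregate = {
    "order:1": {
        "customer": {"name": "Anna"},
        "lines": [
            {"product": "book", "qty": 1},
            {"product": "mug", "qty": 2},
        ],
    }
}

def load_order_normalized(db, order_id):
    order = db[order_id]                        # read 1
    lines = [db[i] for i in order["line_ids"]]  # reads 2..n
    customer = db[order["customer_id"]]         # one more read
    return {"customer": customer, "lines": lines}

def load_order_aggregate(db, order_id):
    return db[order_id]                         # a single read by id

assert load_order_normalized(normalized, "order:1")["customer"]["name"] == "Anna"
assert load_order_aggregate(aggregate, "order:1")["customer"]["name"] == "Anna"
```

Both shapes answer the same question; the point is that the aggregate answers it in one id lookup, which is exactly what document stores are optimized for.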

I’ve come across similar situations a few times. Usually the problem was that the person who picked the tool understood it really well, but wasn’t involved in the implementation, or was involved only in a limited capacity. Think typical architects, a typical “enterprisey” software development process, or just not enough code reviews.


Image from http://www.omniref.com/blog/blog/page/2/

The right tool won’t help you if your problem changes

… and you don’t notice that the once perfect tool doesn’t fit the problem any more. Or worse, you notice but can’t switch tools because of other factors (like cost or time limits).

That change might be the result of new requirements, a growing understanding of the system, or using the system in a new context, e.g. when you gain access to a new type of client.

That situation can lead back to the previous point. It’s hard to decide when the line is crossed and it’s time to stop abusing our once-perfect tool to fit the new problem. It’s even harder when we have limited resources and are expected to keep delivering.

The right tool won’t help you if you solve a problem you don’t have

I’ve seen two types of situations where people solved problems they didn’t have. One is the infamous premature optimization. It’s very hard to resist the temptation to use your skills to the maximum, and just aim for the simplest thing that works. It’s also very hard to accept that you can’t predict future requirements, and that what seems like a great idea today might turn out to be a waste of time tomorrow.

The other is the equally popular communication issues, which result in misunderstanding what the actual problem is.

Image from https://medium.com/interactive-mind/the-evolution-of-ux-f27dd1ac56b4

The right tool won’t help you if you solve a problem that is not worth solving

A great example would be adding lots of features to the system while knowing full well that only a very tiny percentage of users will use them (or maybe none at all). It’s estimated that 30% of the features in our systems are never or rarely used, so don’t think your system is an exception.

Working on problems that are not worth solving not only generates maintenance costs in the future, it’s also a lost opportunity. We could’ve been working on something else instead.

Start at the end

Level 0: Commit message

Once upon a time somebody told me the commit message should be written before any code. Of course, I’ve ignored the rule many times since then. But every time I follow it I’m amazed that such a simple thing actually helps with focus. Plus it makes me feel super guilty if I decide to batch multiple unrelated changes into a single commit.

A tiny habit, but the results add up quickly and actually make a difference.

Level 1: ATDD, BDD, Specification by example, etc.

Some time later I noticed that even though we developers can be very good at writing code the right way, quite often we work on the wrong things. We make wrong assumptions that break everything at the last moment, we keep making business decisions disguised as technical ones, or we simply don’t care about solving the business problem as much as about playing with all the cool technology.

I was fascinated by the stories where asking a few extra questions saved hundreds of hours of work, made the existing model completely unsuitable, or led to a complete change in approach.

But after putting the ideas into practice I lost some of the early enthusiasm. Those methods work, and they certainly brought a lot of benefit to a few projects I worked on. Even though the actual acronyms were not always used, with those methods we found a few serious problems or misunderstandings early in the process. So I know they work.

The issue I have with them, though, is that there are so many tools and processes around them now that it’s hard not to get distracted. It’s so tempting to play with a new framework or argue about which tool is better instead of talking to the human being on the other side of our program. And there’s a lot one can play with.

Level 2: Sell it before you build it

At my current job we have a process that tries to force us to focus on users and business aspects before we even think about writing any code. We do a so-called “impact analysis” to determine the expected benefit of building a specific feature. Sometimes in the process we realize that there are in fact better things we could be working on at the moment, and the idea is dropped. Then we prepare announcement and documentation drafts to think about how we will communicate the feature to users. If we can’t explain it, or it doesn’t sound very attractive, then it’s probably not worth building.

To be honest, it’s not easy to work that way. We’re developers, not marketers, after all. It really takes some effort. It’s easier to just code, or to cheat the process.

The obvious benefit of this approach is not wasting time building things nobody would use. But there’s another one. A few times, by going back to the announcement, we realized we had made mistakes. The last time that happened to me was yesterday. Other times we came up with a few extra things to discuss, noticed holes in the initial approach to solving the problem, or identified new edge-case scenarios.

The main advantage of starting with customer communication over BDD is that it doesn’t involve any fancy technology. There are far fewer powerful distractions.

Next level: ???

I’m not sure what the next level is. I guess it would have something to do with metrics and verifying the assumptions we made using numbers. But it might be something completely different.

I hope to find out soon.

That’s just unacceptable

A few years ago I had a big challenge at work. Together with two or three other programmers, I spent almost 8 hours browsing through code, drawing diagrams, swearing, and thinking to the point where our heads started to hurt. The conclusion was clear – we can’t do it. It’s impossible.

A few months before that day, our batch job had started processing too much data to fit in a single processing window. Not a big deal. The business adapted. They started the job a few days earlier each month, to make sure all reports were ready before the deadline.

But now they came up with a new product, and one requirement didn’t fit our model. From then on, information that used to always be in one piece could be spread across multiple records. What’s worse, related records didn’t have to be located next to each other and could arrive in a random order.

We started analyzing edge cases. What if the processing window ends before all related records are processed? We’ll have incorrect results. We KNEW that was unacceptable. So we were thinking and thinking and thinking…

Eventually we realized that ensuring correctness at the end of each processing window was either impossible or extremely expensive. So on that very same day we set up a meeting and broke the bad news to the business. We felt uneasy admitting that our system couldn’t accommodate their business idea. We were sure they’d be extremely angry and unhappy.

Then somebody from the business asked whether we could ensure that the results are correct when the whole batch job completes. We could. So no big deal, they said. We never look at this data in the middle of the job anyway. The call ended there, and we just looked at each other in disbelief.

That day I realized that business indeed copes with eventual consistency better than most programmers do. I also learned that if we disguise our own technical biases as business requirements, at best we waste some time. If we’re less lucky, we can lose a lot of money.

It might seem like an isolated example, but the world is full of similar stories.

Amazon decided they could deal with the occasional “absolutely unacceptable” situation of selling a single paper book to two different customers. Fixing that and making the customers happy costs them some money, but they make much, much more thanks to the design that allows that situation to happen.

Gojko Adzic, in one of his books, mentioned a betting company that was discussing key examples for their specifications (i.e. test cases). They realized that allowing customers to spend more money than they had was actually a great idea, even though it was an “absolutely unacceptable” scenario that would make every programmer cringe. It turns out that a customer who overspent was more likely to come back and make another transaction. To do that, they would have to pay off their debt first, so the company would get its money back eventually… But even if not, it’s better to make 5 dollars instead of 10 than to make 0 instead of 10. Or at least that’s the logic business people apparently follow.

I try to keep those stories in mind every time I feel like saying that something is just unacceptable. Somebody else might already be making money on proving me wrong.

What is good for everything, is not really good for anything

Some time ago I worked on an e-commerce system which allowed for a very flexible promotion setup. If I remember correctly, it was possible to create roughly 30 types of promotions. As you might imagine, testing all of it was a pain, since for every single promotion type I would need at least 10 test cases. So instead I decided to find out which combinations were actually used and focus only on realistic scenarios.

One day later I learned that the business used only 5 fairly static kinds of promotions. And thank you very much for asking, they were happy with that number. They didn’t need more. But since we were already talking, would it be possible to have the promotion name in a dropdown and prepopulate the fields accordingly? Most of the setup was static per promotion type and there was sooo much clicking; wouldn’t it be easier to just select a name from the dropdown, enter one number, and voilà, a new promotion is ready? Knowing that promotions were much less complicated than we thought would also have simplified the implementation, not to mention testing…

I remembered that situation when Udi Dahan told us a universal reporting tool story during his ADSD course. He asked whether we had ever been told to create a universal reporting tool with all sorts of fancy filtering, sorting and presentation features. Did it allow saving all those uber-flexible setups and giving them names, so the business could run the same reports over and over on a regular basis? Maybe you also find it disturbingly familiar?

The problem with flexible tools is that building them is extremely complex: they need to perform well on huge amounts of data and with all possible combinations of the available options. That’s exactly what MS Excel does, and Microsoft has worked on it for many years now. Yet usually we’re asked to build a “mini-Excel” in much less time and with a much smaller team.

The sad part is that most of the time the flexibility is not really useful. Most users save their 5 or, in extreme cases, maybe 10 static setups and run reports on them for months. So first of all, it’s wasted effort. The exception is the small percentage of users who really don’t know what they’re looking for in the data until they actually see it. They play with models and look for patterns. But they’re a minority, and they would probably be better off using tools dedicated to that kind of exploration.

Then, knowing what information the user looks for, we could give them a much better experience, so it’s also a lost opportunity. In the case of promotions, we could save them typing and clicking. Given that they had to set up dozens of promotions each month, that would add up very fast and result in a much better experience. In the case of reporting, we could think about generating “real-time reports” and sending alerts. For example, if somebody needs to run a report every day to check whether there were any orders matching some parameters, wouldn’t it be great if they got an email every time it actually happened? I bet they would love you for that feature. Not to mention how much work you would save yourself by implementing a simpler, narrower solution.
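The alert idea fits in a few lines. A hypothetical Python sketch (the names, data shapes and threshold are made up): instead of a flexible report screen, a narrow daily job that notifies only when something matched.

```python
# A narrow alternative to a universal report builder: one fixed question,
# checked daily, with a notification only when there is something to report.
def orders_matching(orders, min_total):
    """The one fixed filter the user actually cares about."""
    return [o for o in orders if o["total"] >= min_total]

def daily_alert(orders, min_total, notify):
    """Run once a day; call notify() only if something matched."""
    hits = orders_matching(orders, min_total)
    if hits:
        notify(f"{len(hits)} order(s) over {min_total} today")
    return hits

# In production notify would send an email; here we just collect messages.
sent = []
orders = [{"id": 1, "total": 50}, {"id": 2, "total": 500}]
hits = daily_alert(orders, 100, sent.append)
assert [o["id"] for o in hits] == [2]
assert sent == ["1 order(s) over 100 today"]
```

The whole “report” is one filter and one message, which is the point: it answers the user’s actual question without any of the universal-tool machinery.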

So if ‘the other way’ is so bright, why do we keep working on universal, super flexible, yet underutilized features? In my experience the following factors contribute to the problem:

  • We don’t talk (enough) to business. The most obvious one. Maybe we’re not interested in business side of things, or maybe we don’t have opportunity to talk to the right people. Either way it’s obvious that something is missing here.
  • Business has to constantly fight for IT resources. So since we’re already doing something for them, they tend to ask for anything they can think of. After all nobody knows when they get hold of any geek the next time. By having a flexible tool they can do their job without needing us that often.
  • We (as an industry) taught the business to speak our language. We haven't done a very good job of learning how to get requirements out of business users. Not stories, not features, not "I want Y shown when I click X on the ABC screen", not CRUD. I'm talking about getting to the source, to the problem the business is trying to solve with the feature they dream about, about discovering their why. Only then can we help them come up with a good solution. Blindly following the specification is just not enough. Business users don't come up with the best solutions and designs. That's our job.
  • The more complicated the feature, the more fun it is to implement. Let’s face it, sometimes it’s just too tempting 🙂

Are there any more factors you've observed? Please share them in the comments.

What legacy projects can teach you?

Last week I attended Udi Dahan's ADSD course. At the moment it seems like I'll need a few years to process and truly understand all the information he packed in there. But surprisingly enough, a few things sounded familiar. More organized, more deeply analyzed, better phrased… but it still sounded like he had been on my previous projects.

I've never worked on a greenfield project (yet?). But I've worked on a few that were "new, shiny, better rewrites". Or at least that was the idea. In most cases the people who created them were not available or had forgotten a lot. So we had to do a lot of detective work, and navigate carefully whenever we tried to add a new feature. The challenge was not only figuring out how to get something done. The most fun part was trying to guess the most probable reason why something was done in a specific way. Was it laziness? An unknown requirement? The easiest approach? Personal preference? A great way to learn how humans think and work.

Udi has quite an interesting perspective on "maintenance" work. He disagrees with the popular belief that it's easy and requires less skill than working on a new project. He says that this belief most likely originates from the wrong metaphor. Maintaining a skyscraper or a house is indeed less challenging than designing and building one. But that's not true for software.

The most important distinction is that software is never finished. And it's not meant to be. It's finished only if it's dead. Not only is maintenance not that different from greenfield work, extending an existing system is even more complex. There are many more things you can break. Users know the system and work with it. Whatever you do impacts them. You have to take that into account.

After he said that, I realized there are a few things you are more likely to learn if you work on a legacy system. You might notice them on a greenfield project too, but I bet the lessons are more painful during maintenance. And pain boosts memory and pattern recognition.

Here’s my list:

1. Technology is just a tool. No matter how shiny, new, life-changing… It’s just a tool.

Every now and then people get enthusiastic about yet another fantastic idea. This time it'll be different. We've learned a lot from the previous project. Said somebody on every project. Yet you're working on the third reincarnation of the same system and you hate this codebase. And for good reasons. At some point you realize that it's more complicated than just picking the right technology.

2. The context and the why are more important than the what and the how.

Sometimes it's not that important what exactly you decided on, but why. Best practices are not silver bullets; they need to be applied in a specific context. Domain models are great, but they're overkill if simple CRUD would be good enough. In a long-lived project you have many opportunities to see how great ideas work out when used (with the best intentions) in the wrong context. You learn to think for yourself, even when you hear a recommendation from an expert you deeply respect. They make mistakes too.

3. Simplicity rules.

You realize how bad humans are at predicting the future. The system is easy to extend in places you never touch. Then you waste a week understanding an overcomplicated, overflexible, overgeneralized module just to add a mere two lines of code. You might even dare to think that the simplest, least flexible solution would have been better. It would be easier to replace it completely if necessary.
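A toy illustration of that trade-off, with entirely made-up names: the "flexible" version buries one multiplication under a registry of string-keyed strategies, while the simple version is just the function the business actually asked for.

```python
# Hypothetical example: an over-generalized discount engine built "for the
# future", versus the simple function the system actually needed.

# The "flexible" version: a registry of string-keyed rules.
class DiscountRuleRegistry:
    def __init__(self):
        self._rules = {}

    def register(self, name, rule):
        self._rules[name] = rule

    def apply(self, name, price):
        return self._rules[name](price)

registry = DiscountRuleRegistry()
registry.register("loyalty", lambda price: price * 0.95)

# The simple version: the only discount anyone ever asked for.
def loyalty_discount(price: float) -> float:
    return price * 0.95

print(registry.apply("loyalty", 100.0))  # 95.0
print(loyalty_discount(100.0))           # 95.0
```

Both produce the same number, but only one of them costs the next developer a week of archaeology.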

4. Consistency is worth more than having “the best” tool/design.

Said a person who’s had 4 different ORMs in one codebase… Just trust me on that one.

5. It’s all about habits.

A fair number of the issues we had could be, and eventually were, mitigated by developing good habits. Be a good scout, make a tiny improvement. Every day. It adds up.

The less intuitive part is that big leaps don’t work as well as small, boring changes, applied consistently on a daily basis. If you look carefully enough you might find remnants of a few revolutions in your codebase. And remember, you won’t have more time to fix it later. Either do it now or never. Sticky notes get lost.

6. Bad code is not (only) a technical issue.

Often this is a result of (mis)management, time pressure, bad planning, people not speaking up… Even a lack of sufficient technical skills can be traced back to the "soft side of things". Why can't you attract and/or keep more experienced people? Do you give them what they need to do their best work? Do you know how to judge people's skills? Do you keep each other accountable? Do you let the loudest person make significant decisions even if they don't have anything to back them up? Who deals with the consequences?

The less visible problem is a lack of trust between the "business" and "technical" parts of the organization. If the business believes you know what you're doing, they'll give you what you need when you ask for it. If not… well, life is hard.

7. It’s (almost?) never easier to “just rewrite it”.

You might discover that the "new feature" you've just estimated was implemented before you joined the company. For some reason it was never widely used, though. Another day you realize that the bug you're fixing has been in production for the last 5 years. How come nobody noticed? So much for "just make it work like the old system".


If you have some more lessons to add, please share them!