Author Archives: Weronika Łabaj

Technical debt or how to finance your real estate empire

Every now and then I see people questioning the usefulness of the technical debt metaphor. I think it’s brilliant though often misunderstood or not fully utilized by technical people. It may be helpful to see its potential by taking a closer look at one industry I’m more familiar with, i.e. real estate.

Let’s focus here only on people in a situation similar to mine, i.e. where real estate is a “side business”, mostly treated as a way to allocate your investments and as a retirement savings vehicle.

Real estate is one of those industries in which very few people can avoid any form of debt. Even if they can, they rarely want to do that.

Why?

Because even if you can afford it, building your Empire (or “empire”) using just cash is not very efficient, relatively slow and, counterintuitively, it’s not the least risky path. Even if you have enough cash to buy something, it may be better to take a mortgage anyway and keep the cash for a rainy day. For example, to help you survive a pandemic where your income may even drop to zero for a while.

On the other hand, maxing out your credit cards to get a downpayment is plain stupid in most cases, though it may sometimes be justified. It may allow you to secure a really awesome deal, so if you’re able to pay it off quickly and know it’s an investment with very low risk, it may be worth it. If the deal is time-sensitive, it’ll be gone before you even call the bank to ask how much they can lend you.

So that covers a couple of the edge cases (out of thousands) and only three options for financing the deal, i.e. your own cash, credit cards, and a mortgage. There are many more. For example, contrary to common knowledge, borrowing money from banks is often not the best and safest option. They may have stupid credit-risk scoring rules (did you know that not all banks understand how amortization works and don’t realize it’s a virtual cost?), offer worse conditions than friends who would rather invest in your business than keep their savings at 0.001% interest in the bank, typically require a lot of paperwork, are not very flexible in negotiating contract terms, etc. So the more “advanced” your business becomes, the more options you discover, and the more often you realize rules of thumb are not that helpful.

To make a long story short, good debt/bad debt is part of business 101, even for side businesses, yet it’s a very nuanced and complicated topic. There are no simple black and white answers; managing debt is a constant process of balancing risk with future potential while adapting to changing circumstances around you. Whether a decision is reasonable depends on your overall strategy, the current situation in your business, and overall market conditions.

I think at the end of the day, taking on technical debt is a business decision, not a technical one. The role of the technical team is to clearly articulate potential alternatives, associated costs, and risks, not to decide on a financing strategy for the business. It’s a very hard job and takes a lot of practice, but if you can learn it you’ll provide incredible value for the business. Especially if you learn to do that using the language and decision framework they use daily, giving them a chance to really understand the situation.

It may be a good rule of thumb for beginners to avoid debt altogether while they learn, but insisting on always producing perfect code or running a cash-only business won’t get you very far in the real world.

And let’s be clear, some situations are really off-limits, such as borrowing money from the mafia or building an MVP to validate ideas using microservices and Kubernetes 😉 But that’s a topic for another post.

The harsh reality of a domain breakthrough

One of my favorite stories from the Blue Book is the description of how changing the model of syndicated loans greatly simplified the code and resolved many bugs, not to mention that it made communication with business experts way easier.

I’ve read this story many times, and even though in my experience it’s quite accurate and helpful, every time I can’t help thinking that one aspect deserves more coverage: the harsh emotional reality of such a breakthrough.

So how do you know you came across something worth calling a breakthrough?

I think there are no strict rules, but I’ve noticed the following pattern in my experience:

  • Over time I have a growing sense that something is not quite right in a particular area of code. I find it really hard to get on the same page with business experts, the code is getting more and more complex, the bugs start piling up… But there’s no easy solution in sight for quite a while and I can’t put my finger on what exactly is wrong.
  • Then one moment it hits me. I look back, connect all the dots and suddenly can reduce the whole mess to something trivial like: “we should represent the loan as a pie chart”, “we only use these 5 types of promotions, we don’t need this uber-flexible form with 10 checkboxes”, “we should track the accounting period for each co-author of the book, rather than for the book” (see the sketch after this list).
  • I verify the new assumptions. I run a few scenarios in my head or rewrite a small piece of code and confirm that this change will likely resolve those few nasty bugs and make code simpler. Suddenly it seems like adding this long-awaited feature will become trivial, even though until this point it was estimated to be months of work.
  • Finally, I double-check with the business experts and they look at me like I’m an alien. Of course, that’s right, they’ve been telling me this for the last 6 months, why do I never listen? Is that my “big discovery”? Are you kidding me?
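To make the “represent the loan as a pie chart” kind of insight a bit more concrete, here is a minimal sketch of what such a model might look like after the breakthrough. It’s loosely inspired by the Blue Book’s share-pie example, but the names and details are hypothetical, not the actual code from that project:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical "share pie" model: each lender owns a fraction of the loan,
// and every payment or drawdown is divided according to those shares.
final class SharePie {
    private final Map<String, BigDecimal> sharesByLender; // lender id -> fraction, summing to 1

    SharePie(Map<String, BigDecimal> sharesByLender) {
        BigDecimal total = sharesByLender.values().stream()
                .reduce(BigDecimal.ZERO, BigDecimal::add);
        if (total.compareTo(BigDecimal.ONE) != 0) {
            throw new IllegalArgumentException("Shares must sum to 1, got " + total);
        }
        this.sharesByLender = Map.copyOf(sharesByLender);
    }

    // Splitting any amount becomes trivial: multiply it by each lender's share.
    // (Handling rounding remainders is left out to keep the sketch short.)
    Map<String, BigDecimal> split(BigDecimal amount) {
        return sharesByLender.entrySet().stream()
                .collect(Collectors.toMap(
                        Map.Entry::getKey,
                        e -> amount.multiply(e.getValue()).setScale(2, RoundingMode.HALF_EVEN)));
    }
}
```

Once the model is expressed this way, the long-awaited feature often reduces to calling split in one more place, which is exactly why the insight sounds so trivial in hindsight.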

And this is where the difficulty lies. The initial excitement of the discovery may become clouded by the fact that trivial as it sounds, in code it’s quite a large change. It may be so fundamental that you’ll also need to rewrite your tests because the model in code is significantly misaligned with how the expert sees it and that has ripple effects throughout various parts of the code. You may come to the conclusion that it’ll be easier to “run in red” for a while, instead of struggling to make the tests pass. In a complex codebase, this may be a scary thought. And good luck explaining this to your team of software craftsmen…

Eric hints at this when he says that upon this discovery we have two choices, both hard: either bite the bullet and change the code (it will probably be closer to weeks than hours and may be hard to estimate) or not take the risk and suffer the consequences, at least for a little longer.

I’ve seen it play both ways and I think both choices have merit, depending on the circumstances. For example, if the problem is in a rarely used feature, or the consequences of mistakes are very low in the business sense compared to the cost of fixing them, or there’s a bunch of things with a way bigger impact that need to be done sooner, then it may make sense to delay the change or just decide to live with the suboptimal model forever.

That was the case with the promotions code I’ve mentioned. The calculations were extremely complex and we knew there were mistakes in some scenarios when promotions overlapped, but on the rare occasion when a customer noticed the problem and complained, the helpdesk issued a small voucher for their next purchase.

On the other hand, in the case of two co-authors having separate accounting periods the issue was critical because any inaccuracy in royalty calculations would result in distrust that the system works correctly. It didn’t matter if the mistake was tiny in the monetary sense, even if it was just a couple of dollars, the consequences in reputation would be huge.

A few important lessons I’ve learned in practice:

  • The difficulty of a code change related to a domain breakthrough is not caused by a lack of code quality; it’s an orthogonal aspect. It’s really important to make this clear at the very beginning to avoid misunderstandings. You can have the cleanest and best-tested code in the world, but if the model is misaligned with the business reality then breaking changes or rewrites of some modules are unavoidable. On the other hand, having high-quality code makes the change easier. In fact, I personally think it’s a prerequisite to actually doing anything with your insight.
  • Domain breakthroughs are expected and a good sign of getting familiar with the domain. I’m not sure if it makes it any easier to handle, but I don’t think any amount of up-front design or analysis can prevent such events. They simply show that both developers and business stakeholders learn more about the system, priorities, and gradually come to a better alignment.
  • The so-called “soft” aspects of a change are way more important than code. It’s really hard to avoid falling into the trap of looking for people “responsible” for the misalignment because in hindsight the problem looks very obvious and business stakeholders may feel like they communicated it for a long time. It may take a conscious effort and a fair amount of explicit communication to focus everybody on moving forward instead of debating how this could have been avoided. Most likely it couldn’t, because this is how you learn anything complex, occasionally making mistakes.
  • This is a moment where you may need to use all the trust capital you’ve amassed with the business over time. If you’ve spent the last few weeks or months trying to tame the weird bugs and exploding code complexity that were very hard to explain, people may be skeptical about the potential of finally addressing the problem with such “trivial” change. If it was so obvious, why haven’t you done it months ago?
  • Talking about a breakthrough requires a lot of courage. It may be hard to even admit that the problem exists, it may feel embarrassing that we haven’t noticed it earlier, or you may unintentionally make your colleague feel like you’re attacking their design decisions. If the relationships in the project are not based on trust, mutual respect, and egoless programming principles, then instead of an opportunity for improvement you wanted to discuss, you may end up with lots of misunderstandings.
  • Last but not least, the hardest part for me was that I wasn’t certain whether the proposed changes would deliver the results I hoped for. Timeboxing, some initial analysis to estimate the work, or a small POC were quite helpful in minimizing the risk, but the only way to know for sure was to make the change and verify it in practice. On the one hand, I didn’t want to promise the impossible; on the other, I needed some enthusiasm for the change to get a chance to make it. The solution for me was to be very clear and explicit about assumptions and expectations and to provide frequent updates on the progress. It turns out that in practice visibility (regular updates) is more important and helpful than predictability (accurate estimates and detailed plans) or having direct control over the process (micromanagement).

Those are a few observations regarding domain breakthroughs based on my experience. What are yours? Have you ever come across a domain breakthrough? Which way did you go: make the change or ignore? What have you learned? Let me know in the comments, I’m curious.


So you want to be a tools developer?


A few years ago I joined a tools company, where I worked on a messaging framework. Without even realizing it, I had a lot of expectations and assumptions about what that kind of work looks like.

A few days ago @mathiasverraes reminded me of this with a Tweet:


I admit that I also used to believe that people working on infrastructure are smarter, have more interesting, fun, and valuable projects, etc. etc. etc. Some of that turned out to be true, but there were also a few surprises:

1. I spent (significantly) more time interacting with users and doing things other than coding than in any other previous job

That was to a large extent my choice and it probably varies a lot across companies… but when you think about it, it makes total sense!

As a user, you want documentation, blog posts, and educational content that is high-quality, informative, actually useful, and not trying to sell you anything. When you consider using some tool for your company, you want your technical questions answered ASAP and the ability to discuss the architecture of your project with people who understand your questions and, ideally, have done something similar before. When raising a support issue, you’d expect the other person to be able to put themselves in your shoes and help you solve the problem instead of doing the minimum to satisfy the SLA contract, even if it ultimately turns out that the issue was in your code, not the framework.

In some companies, programmers don’t participate in those kinds of activities. Based on my experience I believe in the long run it’s a huge advantage for both customers and for the development team when they do. The feedback and insights you get are hard to get otherwise and help you develop better tools.

2. Good API design is sooo hard and frustrating

With every iteration, you have better ideas for improving the codebase, making APIs easier to use and more elegant. On the other hand, nobody likes having to deal with breaking changes! I remember how it felt when Angular 2 was announced while we were in the middle of a huge project using AngularJS… Balancing those two forces requires a lot of skill and experience, and there’s no way to make everybody happy.

It’s hard to resist adding a feature that can help 2 customers, but what if it also makes life harder for another 20? Or what if you ship a feature that nobody uses, but you can’t just remove it without any notice and have to do it across 2 major versions, according to your support policy? Or what if somebody relies on an undocumented feature or uses it in an unintended way and your “fix” will break their solution?

Similar dilemmas are your daily reality as a tools developer. There were many days when I wished we could forget about all the history and start from scratch instead of making changes in a less-elegant backward-compatible manner and patching 5 versions that were still officially supported, but this is what provides huge value to your customers.
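As a hedged illustration of what a less-elegant but backward-compatible change often looks like in practice (the names below are made up for this sketch, not taken from any real framework): instead of changing a public method’s signature, you add the new shape alongside the old one, forward the old call to the new one, and deprecate the old path over a couple of major versions.

```java
public final class Sender {

    // New, preferred API: options are grouped into a dedicated object,
    // so future additions don't require yet another signature change.
    public void send(Message message, SendOptions options) {
        // ... actual sending logic would live here
    }

    /**
     * Old API kept for backward compatibility. Removing it outright would
     * break existing users, so it forwards to the new overload and is
     * scheduled for removal across two major versions.
     *
     * @deprecated use {@link #send(Message, SendOptions)} instead.
     */
    @Deprecated
    public void send(Message message, boolean durable, int timeToLiveSeconds) {
        send(message, new SendOptions(durable, timeToLiveSeconds));
    }
}

// Minimal placeholder types so the sketch compiles.
record Message(String body) {}
record SendOptions(boolean durable, int timeToLiveSeconds) {}
```

Not pretty, but it’s exactly the kind of “less elegant” change that keeps five supported versions working while the API evolves.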

3. Clarifying “business” requirements is even harder

When customers are highly technical it may be orders of magnitude harder to understand what problem they’re solving than when dealing with “business” users. What seems like a simple feature request on the surface may turn out to be a problem with the design, a rare edge-case scenario unique to the customer or a great suggestion for the next version. You never know until you dig deeper and get more context. You may also need to reach out to more customers to verify that the same solution won’t have a negative impact on them or will be beneficial for more than one team.

While doing so, you have to resist the urge to jump straight into the solution mode, ignore for a moment the first proposed solution, so conveniently offered by the customer, skillfully navigate through buzzwords and jargon to get to the essence of the underlying issue, and make an honest attempt to solve the problem without writing any code (at least at first :)).

Sometimes you just look dumb when you keep asking the same “please, can you explain what the problem is from the business perspective?”, other times people can’t provide that information simply because they don’t have the whole picture themselves. But often, without that context and understanding, the solution you offer will be suboptimal.

4. Abstractions have their downsides

Contrary to the popular saying, there are problems in software that can’t be solved with an additional layer of abstraction.

In fact, many problems are way easier to solve when you operate on a lower abstraction level and can safely make certain assumptions about the environment, expected load, usage characteristics, etc. When working on a general-purpose tool you have to balance the needs of many different users and their contexts.

What does that mean in practice?

For example, reaching impressive limits of optimization is not a great investment of your time and effort, if it’s only beneficial in a few narrow edge-cases. You optimize for the majority. It’s reasonable to be (overly?) defensive in your design, to minimize the risk that some features will be used in unintended ways. When experimenting, you have to keep in mind your support policy and ensure you’re not taking on a large maintenance burden that’s not likely to provide a lot of value. You do lots and lots and lots of testing. And then some more.
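For instance, being “defensive in your design” can be as mundane as validating inputs eagerly, keeping types closed for extension, and never leaking mutable internals, so a general-purpose component is harder to bend into unintended usage. A minimal sketch (the names are mine, not from any particular tool):

```java
import java.util.List;

// A deliberately closed-down configuration object for a hypothetical tool:
// final class, validated constructor, immutable internal state.
final class RetryPolicy {
    private final int maxAttempts;
    private final List<Long> delaysMillis;

    RetryPolicy(int maxAttempts, List<Long> delaysMillis) {
        if (maxAttempts < 1) {
            throw new IllegalArgumentException("maxAttempts must be at least 1");
        }
        this.maxAttempts = maxAttempts;
        // Defensive copy: callers can't mutate the policy after handing it over.
        this.delaysMillis = List.copyOf(delaysMillis);
    }

    int maxAttempts() { return maxAttempts; }

    List<Long> delaysMillis() { return delaysMillis; } // already unmodifiable
}
```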

In practice, sometimes it looks like it’s actually more “fun” to work on the business-facing app, where you have more context, can make educated assumptions and are free to experiment in a more aggressive way.

5. Value is harder to measure and often code is not the most valuable part

When you’re working on business-facing apps you can directly see the impact of many things you ship. You see that automating a process saved users 15 minutes here or increased sales by 5% over there. Even if you don’t have a specific number, the impact of a given feature is rather easy to verbalize and understand.

In the case of tools, it’s a bit more complicated, because you’re in the middle of the process, providing just one little piece. A lot of things are out of your control.

Sure, you can say that it’s better to process 10,000 messages per second than 100, but how much impact does that actually have for the end user who’s clicking a link to track their package? Would they be much worse off if the developer had used the tool provided by your competition? How much time did they save, or how much more money did they make, by using your tool?

Measuring the impact of just one of the thousands of moving pieces on the end result is way more complex than measuring the impact of a single end-to-end feature. And you’d be surprised (disappointed? :)) if you knew how often we concluded that the highest-impact thing we could provide was better documentation, a call with a customer, or a bug fix, rather than a new feature.

So, overall, is it better or not?

Don’t get me wrong, working for an infrastructure company is great! If you have an opportunity, give it a try and see for yourself how it is. For example, it indeed tends to attract ambitious, experienced, smart people. Every now and then you work on a problem that you’d likely not encounter in a typical BLOBA (Boring Line of Business App). Most of the time you talk to other technical people and can learn a ton of useful things from your customers.

However, at the end of the day, both kinds of jobs may be more similar than you’d expect. The grass is always greener on the other side of the fence etc., but the reality is a bit more complicated 🙂 Each line of work comes with its own unique set of challenges that the others don’t have.

So if you ever find yourself daydreaming about working on an infrastructure project in the hope it’d solve all your professional problems, pause for a moment before you start polishing your resume. Give your current system a closer look; maybe it’s more interesting than you think.

Code is the easy part

When I shared what I’ve learned about DDD from a “boring” domain some people wanted to see the code. Somebody even said “Show me the code or it didn’t happen!” 🙂

On the one hand I understand that for programmers code is the tool of our work, it’s how we solve problems. On the other I was disappointed, because in my mind code wasn’t the most interesting aspect of that project, and in the post I intentionally focused on those other aspects – how even in trivial projects learning more about the domain matters, how that knowledge is deepened and distilled over time, how business and technology interact.

But some time later I realized I was the same. For years I had been waiting for a “really interesting project”, for my holy grail of complex domain. Of course, I worked on interesting projects, occasionally solved really complicated problems… but nothing came close to the holy domain of cargo shipping.

Just looking at those large ships and how much stuff they move from one place to another, imagine planning routes, tracking where things are, troubleshooting when things get lost…

I wished I could work on a project like this.


However, at some point I realized that most software is probably the “boring line of business” kind. Valuable, with the potential to make somebody’s work life hard or easy, but generally not very exciting all the time.

It seems that part of the challenge is that as programmers we think about complexity in terms of the number of if statements and nested conditions (cyclomatic complexity). After getting some experience we learn that complexity is also in coupling and scale. After a few failed projects you also learn about the complexity of having too many moving parts and getting too excited about new technologies 😉

But if you pay close attention, you will eventually notice that in the end technology is the easy part. I often saw technology being made the hard part even though it didn’t have to be, because of overengineering, implementing features that nobody used, or premature optimizations. But the real complexity was elsewhere.

Take the “boring” domain of personal finance. There are literally hundreds of books on personal finance, thousands of blogs, and thousands of experts. Each says different things, claims to know best, and insists all the other guys are wrong. They make up their own vocabulary. They argue and challenge the “common wisdom”. They rarely agree with each other on anything. It’s easy to get overwhelmed and lost.


But after reading a few books you start noticing common themes, you distill the knowledge. You pick things that make sense for you, for your lifestyle, that fit your personal situation. At this stage you can start making decisions on what is important and should be explicitly modelled, and what can be safely ignored. You decide how optimal your solution should be, what are the trade-offs you’re willing to accept.

Often you ask experts for clarifications, but you know what you’re after, more or less. You’re not just a passive listener that waits for directions, you understand the assumptions and constraints, become an active participant in defining requirements and designing solutions.

In many projects the Domain is the hard part. If you solve a problem that has never been solved, or that hasn’t been solved in that specific way, then nobody is the expert in the traditional sense of the word. You have to come up with the “right” approach together with the domain experts, you need to collaborate. There are no “ready” requirements that you can just write down; it’s very creative work.

Even more interesting is the situation when you have multiple experts or stakeholders, who can’t agree with each other on the priorities. Yet, you need an agreement in order to write any code. As many observed before me, most people don’t really know what they want, but they’re absolutely sure it’s not the thing you created for them. Or you might discover that the “official” reason for a project is not what you thought, and you optimized the wrong criteria or got hurt by office politics. That’s the People complexity.

I could go on and on and on… but you get the idea. I agree that technology is important, and always will be. We need to learn our craft and be fluent with the tools. But that’s simply not enough. There’re other kinds of complexity – domain, people, and more.

If you want to create great software that matters you need to take into account all those kinds of complexity. To me this is what DDD is about. It’s a tool for tackling the ultimate complexity in the heart of software. Technology is just one aspect, and often the (relatively) easy one.


If you’re interested in hearing more, that’s one of the things I want to cover in my upcoming talk at DDD Europe.

Business or technical decision – which one is that?

It’s been a long, long time since my colleague said that “if the business can’t give us better requirements, it’s their problem, I’ll just do what I think they had in mind”. But I get frustrated every day with systems that apparently were created using that approach.

I think the essential problem is that we, programmers/designers/makers, very often don’t have enough “business” knowledge around the system we’re building.

I’m not talking about having more user stories or adding more detail to them.

I’m talking about the fact that it’s impossible to build a great product if you don’t understand the business around it.

It’s not just hard. In the long run it’s simply impossible. Because sooner or later you’ll make something that will ruin your system, that’s inevitable. Maybe you’ll be lucky and not many people will be affected. But if you don’t have enough context, you’ll keep making bad decisions over and over and over again. Interestingly enough, you might not even realize you make any decisions, because you make assumptions without noticing.

After a while your only hope will be lock-in and competition that is even worse, but who wants that?

If you don’t know how your target audience thinks, what they value, what mindset they have, then how can you make sensible technical trade-offs? Is speed the priority? Or rather ease of use? Or accuracy? Or pleasant design? If you do know that, then do you actually keep that in mind when you develop every new feature or fix every bug?

If you don’t know what’s the relative importance of the specific qualities of the system you’re building, then how can you judge if the design you selected is optimal? What criteria do you use to dismiss alternative approaches? Do you even bother to come up with a few alternatives?

Last but not least, how often and for how long do you think about business consequences of technical decisions you make?


On the one hand we brag that technology is making the world a better place, on the other it seems we don’t fully realize how big an impact it has on the daily lives of many people when it doesn’t deliver on the promises it makes.

A few small examples from my personal experience as a user:

For a long time I looked for an app to learn German vocabulary. Finally, I found something that looked like what I wanted. The content was interesting, there were lessons for every video, it was self-paced and had revisions. I was thrilled, and right after the trial I bought 1-year access… and soon afterwards I stopped using the app completely.

Turns out the “learning” part is very rigid and I have to work around it. The content is not ideal, so I have to learn words that I don’t even know in English after learning it for 10 years. And that’s at the newbie level (sic!). I have to misuse the app to dismiss the words I’m not interested in learning, because apparently nobody thought the ability to customize what I learn would be useful. Then on some days I suddenly have (literally!) hundreds of words for revision, even though I didn’t fall behind the schedule.

Talk about demotivating and overwhelming.

So my idea was to keep using the content (as it is really interesting) and use another app with more sensible revision part for learning vocabulary (by manually entering vocabulary I want to learn). Great idea, but I thought about doing it 2 months ago and still haven’t even started. I’m scared of opening the app, because probably all my 1000+ words are due for revision.


As a language learner I want an app that doesn’t force me to learn useless vocabulary, doesn’t torture me with boring content and doesn’t push me to work at too fast a pace, so that I don’t end up abandoning it and feeling miserable about not learning anything (again!).


Another example is generating invoices from the freelancing portal. For whatever reason (maybe local law?) the invoices are generated on a weekly basis, but the Polish tax authority requires accounting everything on a monthly basis.

Of course, I can manually reverse-engineer all the costs, all the data is there, no problem. Only it takes time and costs extra money for the accounting services, not to mention the “fun” of doing currency conversions and corrections five times more often than necessary every month (as I have to make that conversion for every invoice I submit, plus a correction when I transfer money to my account). On top of that, I can’t be sure how the tax inspector will judge my creativity when they analyze my documents.


As a freelancer I don’t want to spend hours manually crafting reports that could easily be generated automatically (or semi-automatically) from the data you already give me, in a format that my tax authority demands, so that I can sleep well and not look for the first opportunity to stop using your portal as soon as a better opportunity arises.


Of course, those are just two tiny examples, there are many, many, many more. Once you start noticing such issues, it’s impossible to unsee them. And it’s all too easy to put myself in the shoes of the people that created those features and wonder if I’d even think about scenarios that now drive me nuts.

It’s not about the need for better user stories or being a more responsible programmer.

It’s realizing that nowadays many businesses are shaped by technology. For better or worse.

It’s about considering our work not only in the nice, sterile, non-existing box of technical requirements, but also in the context of how it impacts business and users. It’s about considering the business impact of decisions disguised as purely technical.

Because it might be more efficient to generate reports on a weekly basis and your task might be to optimize DB performance, but tax law specialists might tell you that’s problematic in some countries. If you ask them.

Because the accuracy of the vocabulary revision algorithm may be ideal and that might be one of your app’s selling points, but the expert teacher might tell you that encouraging regularity and persistence is more important than precision. If you ask them.

At the end of the day, it all comes down to constantly asking: How will this technical decision impact the business and users? Who should decide if that impact is OK?

I know that many people think the PM/BA/everybody-but-them should worry about such things. In my experience, assigning such responsibility to a specific person or group is only useful when you look for somebody to blame for yet another disaster. If the “makers” don’t think about it, then they’ll make a lot of decisions that weren’t really theirs to make.

For better or worse.

Beware of proxy domain experts

Today I’ve watched Greg Young’s keynote from DDD eXchange 2016. The talk is really awesome, thought-provoking, turning everything upside down… I really recommend it.

However, the point that caught my attention most was the answer to the question from the audience (at the very end). Greg mentioned that we need to be very careful with who we consider domain experts. He even said that on one project their end-users hated the app, because their “domain experts” actually had no idea about the work that end-users were doing.

I’ve experienced that mistake myself, in one of my summer jobs. For a couple of months I worked in a call centre for a mobile operator.

When you call somebody in the middle of the day to talk about their soon-expiring-contract with a mobile operator, what do you think is the most common answer?

I need to think about it/I can’t talk right now, please call me at XYZ time.

Pretty obvious, right?

The only problem was that it was impossible to do, because the calls were assigned randomly to people and it was impossible to make sure the call was assigned at a specified time! You could specify the preferred time range, but more often than not it didn’t work as you’d expect.

The result?

People entered random information and dates into the system, wrote everything down in their paper calendars and handled the whole case outside the system. Some time later another unlucky operator who got that call assigned had a very bad day (you won’t believe how angry people are when you have no idea what they told your colleague a few months earlier and what they asked them to do back then).

In the end the system was an expensive phone book that pissed off a lot of people, both those working there and those who got the calls. The work was hard, stressful and annoying enough in itself, and the software we had to use only made it much worse instead of helping.

That’s what happens when you don’t realize you’re dealing with proxy domain experts.

Master paradigms, not syntax

Every now and then a programming newbie asks this question: what language should I learn? That of course is then followed by a long discussion with lots of strong opinions and no consensus.

That newbie decides to learn one of the suggested languages and gets the first job. Then they ask: what X frameworks should I learn? (where X refers to their chosen primary technology). A very similar discussion starts…

Then, around the intermediate level, the typical programmer starts freaking out about becoming obsolete. Somebody suggests learning a new language every year.

And it goes on and on and on…

I’ve asked those questions myself, multiple times, to various people. But at some point I realized that they don’t make much sense, and the answers even less so.

I think what matters is really just two things: to grow our skills over time and to not lose passion among daily struggles.

For the latter reason, I’d say it’s reasonable to just learn whatever you like and find interesting. Sometimes we have a particularly boring or depressing (think uber-legacy) project at work, or our coworkers drive us crazy. Having some pet project, reading an interesting book, watching a mind-bending presentation reminds us why we still keep doing it, despite all the challenges. I don’t think the importance of it can be stressed enough. In that case it doesn’t really matter if and how soon you’d be able to apply what you learnt at work. The purpose is different.

But there’s that other thing, more difficult, especially given that in most jobs we don’t have much support in planning our careers. Heck, most people in IT can’t even believe there are people who keep programming for 30 years, we ask where all the old programmers go, so why would you plan for staying that long?

While I think coming up with a 30-year learning plan is insane (or even a 5-year one for that matter), it makes sense to consciously think about which paradigms and principles we’d like to learn. They don’t change that often, they get refined over time, but the underlying ideas stay relevant for years and it’s quite easy to catch up with the newest developments. Mastering the syntax of a new tool is relatively easy if you understand what problem it solves and how it works on a high level. Of course, there’ll be quirks, gotchas and edge cases, but those you learn by doing, very often on a project you’re paid for (let’s be honest, most problems don’t manifest in pet projects).

To make the learning efficient it makes sense to use a technology that is restrictive and will guide our learning in terms of principles. Taking languages as an example, hybrid languages are not the best way to learn functional programming. I’ve learned from experience that it also makes sense to invest money and learn from the authorities on a given subject. They spent a lot of time distilling their knowledge and often package it in an easy-to-digest way. If they’re good teachers they will guide you instead of just talking (so you still get all the joy of discovery ;)) and will cover the most important principles, so you can continue learning on your own.

But to the point: for me the biggest jumps in skills so far came from learning about clean code, testing, requirements, DDD, functional programming, messaging, actors and event-driven architectures. Even when I didn’t use the relevant tools at work right away, my way of thinking changed. My designs got better. I was able to come up with a few completely different approaches to solving the same problem, and thus pick the better solution. I could foresee some challenges and prevent them.

To me the biggest benefits of learning about various paradigms is having more options and making better decisions. Because picking the best tool for the job shouldn’t be considered on the syntax level, it’s a paradigm-level decision.

Mechanical sympathy – not as low-level as you think

Last week I had the pleasure of attending a one-day workshop on “Understanding Mechanical Sympathy” by Martin Thompson.

The special thing about this workshop, and Martin’s work in general, is that he convinced me that performance optimizations are not black magic and assembly tricks. If you think otherwise, I encourage you to watch his “95% of performance is about clean representative models” presentation.

Besides, he’s a great teacher – the knowledge is distilled, complex things reduced to simple basics, explained in a straightforward way. Last but not least, even though the exercises and presentations are using Java, the lessons can be applied to any other programming language. For the workshop the basic knowledge of Java was sufficient.

So what did I learn?

Clean code leads to good performance

I did learn that at uni, and I’d seen it before, even in Martin’s presentations, but only now has it finally clicked for me. Martin spent a few hours (literally!) explaining how CPUs work and how they evolved across a few models. Before the workshop I believed him that clean code results in good performance; now I feel I also understand why.

It comes down to a few simple rules. For example, one must realize that a CPU is a mini distributed system. Every level down the architecture diagram (so registers, L1 cache, L2, L3, etc.) is more expensive in terms of communication latency. On the other hand, every level up can store less data. If your class doesn’t fit into that limited space, then you might be wasting a few cycles getting the necessary data to the registers when it’s needed. Also, processors typically have more ALUs than, for example, JMP units, and they have dedicated units for performing operations on matrices and vectors. All of that comes down to the fact that CPUs are better at arithmetic than at evaluating logical statements. All conditionals (loops, ifs, etc.) are expensive.

Another important point is that CPUs try to optimize the execution of our code and make a few bets: they assume that things which are close to each other will be used together, things which have been used recently will be used again, and that memory access will follow some kind of pattern. That improves the performance of “typical” (or should I rather say well-written?) code, but if your code doesn’t follow those rules you’ll pay for it. To be fair, the CPUs don’t expect anything wicked. On a higher level, hardware-friendly code can be translated to a few basic rules of clean code: small classes, high cohesion, short functions, low cyclomatic complexity, keeping loops very short and simple, etc.
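To make those bets a bit more tangible, here is a toy illustration of my own (not an exercise from the workshop): the same sum computed over data laid out in two different ways. Sequential access over a primitive array is exactly what the hardware bets on; chasing object references scattered across the heap defeats that bet.

```java
// Toy example: identical logic, very different memory access patterns.
final class LocalityDemo {

    // Contiguous memory, predictable stride: the prefetcher can stay ahead of us.
    static long sumPrimitives(long[] values) {
        long sum = 0;
        for (int i = 0; i < values.length; i++) {
            sum += values[i];
        }
        return sum;
    }

    static final class Node {
        long value;
        Node next;
    }

    // Pointer chasing: each hop may land anywhere on the heap, so every
    // iteration risks a cache miss, depending on how the nodes were allocated.
    static long sumLinkedNodes(Node head) {
        long sum = 0;
        for (Node n = head; n != null; n = n.next) {
            sum += n.value;
        }
        return sum;
    }
}
```

Both methods are “clean”, but only the first one plays along with the bets the hardware is making.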

I’ve never heard anybody else making the argument that clean code leads to good performance, and I think it’s a great shame. I regret now that I didn’t use this argument in my previous job when advocating for improving code quality. And that I didn’t know how to measure it and show its impact. Those are a few things I want to learn now, after the workshop.

Things can go wrong when you measure performance

This is one of the areas I’d like to learn more about. Martin mentioned a few things to consider when measuring performance. Micro-benchmarks are generally hard to get right, and it’s even harder to measure things that actually matter.

He suggested focusing on higher-level tests, kind of end-to-end ones, which use realistic data and realistic use cases. The last two are very important, because compilers and runtimes tend to be smart and will optimize our code (e.g. remove paths that never get executed for our crappy test data, even though they would be executed for realistic data).
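A classic way a naive micro-benchmark goes wrong (my own illustration, not one of Martin’s examples): if the result of the measured work is never used, the JIT compiler is free to remove the work entirely, and the “benchmark” may end up timing an almost empty loop. Dedicated harnesses such as JMH exist largely to guard against exactly this kind of trap.

```java
final class NaiveBenchmark {

    public static void main(String[] args) {
        long start = System.nanoTime();
        for (int i = 0; i < 1_000_000; i++) {
            compute(i); // result discarded: the JIT may prove this call has no
                        // observable effect and optimize most of the loop away
        }
        long elapsed = System.nanoTime() - start;
        System.out.println("Looks fast, but may have measured very little: " + elapsed + " ns");
    }

    static double compute(int i) {
        return Math.sqrt(i) * Math.log(i + 1.0);
    }
}
```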

Gil Tene talked more about such things in his “Priming Java for Speed at Market Open” presentation. Even if you don’t use Java it’s worth watching to get an idea what can be really happening when your code is executed, especially if you use JIT compilation.

Locks are evil and concurrency is hard

I knew it before, but I didn’t realize that communication between threads using locks might be slower than communication over the network. Apart from examples, we heard a few “tales from the trenches” which were both funny and scary at the same time. Realizing you got your basic data structure wrong after over a year in production is not a place anybody would like to find themselves one day.

Pro-tip for work: if somebody suggests using concurrency and claims it’s easy, then it’s time to run! They’re dangerous.

I think it’s worth a deeper thought that one of the best-known experts on performance optimization and concurrency in Java is saying that. If LMAX could do with just a single thread for business logic, then 90%+ of the systems in this world could probably also manage.
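A minimal sketch of the “single thread for business logic” idea, as I understand it (my own simplification, not LMAX’s actual Disruptor code): producers only enqueue commands, and exactly one thread consumes them, so the business logic itself touches its state without any locks.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// All state is owned by a single consumer thread; producers only enqueue work.
final class SingleThreadedProcessor {

    private final BlockingQueue<Runnable> commands = new ArrayBlockingQueue<>(1024);
    private long balance = 0; // example of state touched only by the worker thread

    void start() {
        Thread worker = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    commands.take().run(); // one command at a time, in order
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "business-logic");
        worker.setDaemon(true);
        worker.start();
    }

    // Called from any producer thread; no locks in the business logic itself.
    void deposit(long amount) throws InterruptedException {
        commands.put(() -> balance += amount);
    }
}
```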

Martin ends his “The Quest for Low-latency with Concurrent Java” with Albert Einstein’s quote:
“Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius, and a lot of courage, to move in the opposite direction.” I’ll let you guess where concurrency fits in.

It’s all about the basics… and great teachers

We have lots of hypes in programming. Every now and then somebody writes that everything has changed and people get worried how they can ever keep up with the pace of change.

After Martin’s workshop I have one more example that hypes come and go. Performance seems to be an evergreen hype: every year we have more data and want to process it faster, or we have more users and want to give them an even better experience. Luckily, it seems that the principles don’t change that often. If you understand the underlying principles then you can learn the syntax of a new language, master 95% of the API of a new framework, or understand the impact of new hardware features very quickly.

At the same time a lot of the complexity we deal with in our daily work is unnecessary and generated by ourselves. We overcomplicate things and copy-and-paste without understanding what happens under the hood; maybe we know what to do and how to do it, but don’t really understand why.

To be honest, for me hardware and all the “low-level stuff” was one of those things I could never fully grasp. It just didn’t seem relevant or as important as other things I could spend time on. Martin convinced me otherwise.

I’m not sure if performance will become my great passion, but even if I stop at the level of “20% of knowledge that gives 80% results” that will be great. At least now I understand where to start and “hardware and all this low-level stuff” seems less scary than ever before.

Summary

If you ever have the opportunity to attend Martin’s workshop, go! It doesn’t matter what language you use or how high the level of abstraction you operate on is – it all runs on the same hardware. Learning a little bit about it will make you a better developer.

Good models are discovered

A few weeks ago Philip Wadler said a few interesting things in his keynote Propositions as Types, which he delivered at the LambdaDays conference (really great conference, check it out!). The talk was about functional programming, and how its underlying principles were “discovered” independently by various people. I’m sure you know more examples of this “coincidence”; the history of science and human invention is full of them.

The most valuable takeaway for me was the distinction between things that are “discovered” vs things that are “invented”. Things are discovered when their time comes, when all the pre-requisite work is in place, when somebody finally connects the dots. The dots have always been there, though, just hidden. That’s why many people can come to similar conclusions independently using different tools. The representation might be different, but the conclusions, the essence of the discovery are the same.

That is very different from inventing things, which is unnatural, forced, and even if it solves the problem just feels wrong and nasty.

I think the same is true for models and domains.

It doesn’t matter how you reach your conclusions, how many iterations it took you, how many times you re-wrote that piece of code. Maybe you organized workshops, maybe you talked with experts over a cup of coffee, or maybe the crucial insight was inspired by an email exchange.

You look back and you’re wondering why you didn’t notice that from the very beginning. It’s just so obvious! Most of the time it’s not even a very complicated thing, it’s just “common sense”.

The main challenge is that things look obvious only when you look back. You can’t connect the dots looking forward. Only when you look back you KNOW that what you came up with is right. Maybe it’ll change in the future, who knows, but it’s right for the time being, for the current context, for the current problem.

For me the most interesting insights came when something didn’t feel right. Very often I couldn’t explain what was wrong with the existing model, at least not at first. But I didn’t let it go, I was a real PITA (usually just for myself) and looked around for answers.

After trying out a few different approaches (even if only on a piece of paper) the differences between models became noticeable. One was simpler and cleaner. The other didn’t explicitly represent an important concept. Yet another had too much noise, some concepts were unimportant in this context and should be dropped.

In software we can represent the same model in multiple ways in code. As long as the underlying concepts and business rules are the same, the invariants are honoured, and the business goal is achieved, I don’t see a problem with that. And those important insights could be discovered independently by various developers working on the same project. I know that many people feel differently, but I’m one of those people who simply don’t care about “tabs vs spaces” and don’t have a favourite text editor.

It’s been a while since my partner came up with the crazy idea of writing our own Expenses Tracking App. But now and then we still wonder how we could not have seen all those obvious things from the very beginning. Even worse, we’re not sure how exactly we did discover them. But we know they’re right, because they fit perfectly with what we want out of this app.

The good news is that if good models are discovered then eventually we’ll all find them if we persist in the search. All it takes is a lot of patience, good listening skills and never being satisfied with the first model that comes to mind… or second, or tenth 🙂

To DDD or not to DDD – Is my model right?

  • “I’m new to DDD, please, help! Can you review my design?”
  • “Can you recommend some DDD code samples?”
  • “Why aren’t there more DDD samples out there?”

Have you heard any of those before? Maybe it was you asking?

I know how it feels when you are trying to learn something that is really difficult (hello DDD!) and all you hear is “it depends” or “you’re asking the wrong question” (thanks for helping).

The challenge is that both of those answers are true. It’s impossible to give a good answer to any of those questions without having the full context. It’s easy to notice when something is obviously wrong (and you probably know that already anyway), but how do you decide if something is good? How do you know that there’s no better model out there? How do you even know your model is good enough? Is there such a thing as “good enough”? Does DDD come in grades or is it an all-or-nothing thing – either you do it right or not at all?

In that respect DDD is very much like investing.

Some time ago I attended a conference for real-estate investors (landlords). There were many interesting workshops, but my biggest lesson came from a lunch conversation.

I sat at a table with 3 investors. The first one introduced herself and said she invested in large flats for students and young professionals (flat sharing). The second said it was a really stupid idea. It’s obvious students would ruin your flat sooner or later (he ignored the first investor’s claim that it had never happened to her in the last 10 years). So he invested only in small condos and studios for young working couples. The third observed that small flats tend to have low ROI, so it’s better to buy bigger flats for families. Another advantage is that families change rented flats less often and you don’t have to look for new tenants every few months.

And so it went… For two hours!

It was fascinating. Each investor was making money. Each was happy (and comfortable) with their investment strategy. Each thought that not only is their strategy THE BEST ONE, but that it’s THE ONLY reasonable option. And it was true. Their strategy was the best and only option FOR THEMSELVES.

If you asked them what you should do with your $100 000, each would give you a different answer. All of them would be wrong. Because if you can’t make this decision for yourself, then the only thing you should invest in is education. Blindly following other people’s orders won’t make you money in the long term.

Don’t get me wrong. Inspiration is important. Learning from other people’s mistakes is invaluable. It’s OK to ask for advice and consider different viewpoints. The more various options you consider, the better! I’ve learned a lot just by listening to those investors.

But in the end there’s no single best investment strategy and there’s no “right” domain model. You can stop looking now.

The truth is that there are countless strategies and models available. Each has their advantages and disadvantages. Some might be completely wrong, but most would be quite good or good enough for the time being or maybe just satisfactory given the constraints we have. In the end it all depends on your context. And the very important (and mostly ignored) part of that context is… you.

You and everything you know about the business, company and people in there. Your context.

Somebody said that investments are not risky; investors are risky. It matters how much you know, what your past experiences are, how well you get along with your team and stakeholders, how well you deal with pressure, how much risk and overtime you are willing to accept before you talk about it with your PM…

Even if some DDD uber-expert comes to your team for a project and literally tells you what to type for a couple of months, after they leave you’re on your own. Back to square one. Or even worse, because they probably didn’t realize that the “business expert” they talked to is an ignorant know-it-all who has no idea what he’s talking about. They really should’ve talked to that shy, quiet guy in the darkest corner of the open space. Everybody goes to him when something goes wrong or when they encounter a new edge case.

So don’t fret and start where you are. You already have everything you need at the moment. The rest you can learn by doing and analyzing results. By all means, ask for advice but don’t mistake it with outsourcing thinking.

After doing it 10, 50, 100 times you’d be surprised how much progress you’ve made.

Trust me. Good investors know that there will always be another great opportunity. Good modelers know that there will always be some other good models they could come up with. Don’t get stuck looking for the “perfect solution”. It doesn’t exist.

The better you become, the better your results will be.