Every now and then a programming newbie asks this question: what language should I learn? That, of course, is followed by a long discussion with lots of strong opinions and no consensus.
That newbie decides to learn one of the suggested languages and gets their first job. Then they ask: which X frameworks should I learn? (where X refers to their chosen primary technology). A very similar discussion starts…
Then, around the intermediate level, the typical programmer starts freaking out about becoming obsolete. Somebody suggests learning a new language every year.
And it goes on and on and on…
I’ve asked those questions myself, multiple times, of various people. But at some point I realized that they don’t make much sense, and the answers even less so.
I think what really matters is just two things: growing our skills over time and not losing our passion amid the daily struggles.
For the latter reason, I’d say it’s reasonable to just learn whatever you like and find interesting. Sometimes we have a particularly boring or depressing (think uber-legacy) project at work, or our coworkers drive us crazy. Having a pet project, reading an interesting book, or watching a mind-bending presentation reminds us why we keep doing this, despite all the challenges. I don’t think its importance can be stressed enough. In that case it doesn’t really matter if, or how soon, you’d be able to apply what you learnt at work. The purpose is different.
But there’s that other thing, which is more difficult, especially given that in most jobs we don’t get much support in planning our careers. Heck, most people in IT can’t even believe there are people who keep programming for 30 years; we ask where all the old programmers go, so why would you plan for staying that long?
While I think coming up with a 30-year learning plan is insane (or even a 5-year one, for that matter), it makes sense to consciously think about which paradigms and principles we’d like to learn. They don’t change that often; they get refined over time, but the underlying ideas stay relevant for years, and it’s quite easy to catch up with the newest developments. Mastering the syntax of a new tool is relatively easy if you understand what problem it solves and how it works at a high level. Of course, there’ll be quirks, gotchas and edge cases, but those you learn by doing, very often on a project you’re paid for (let’s be honest, most problems don’t manifest in pet projects).
To make the learning efficient, it makes sense to use a technology that is restrictive and will guide our learning in terms of principles. Taking languages as an example, hybrid languages are not the best way to learn functional programming. I’ve learned from experience that it also makes sense to invest money and learn from the authorities on a given subject. They’ve spent a lot of time distilling their knowledge and often package it in an easy-to-digest way. If they’re good teachers, they will guide you instead of just talking (so you still get all the joy of discovery ;)) and will cover the most important principles, so you can continue learning on your own.
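A tiny, hypothetical illustration of why hybrid languages make poor teachers (the function names and the tax-like multiplier are mine, just for the sketch): in a hybrid language such as Python, nothing stops you from falling back into imperative habits, so the functional style never gets exercised; a restrictive language would force the second version on you.

```python
from functools import reduce

# In a hybrid language, the familiar imperative style is always available...
def total_imperative(prices):
    total = 0
    for p in prices:
        total += p * 1.23  # mutate an accumulator, as usual
    return total

# ...even though a functional formulation exists right next to it:
def total_functional(prices):
    return reduce(lambda acc, p: acc + p * 1.23, prices, 0)

# Both compute the same result, so the paradigm is optional here —
# which is exactly why it tends not to sink in.
assert total_imperative([10.0, 20.0]) == total_functional([10.0, 20.0])
```

The point isn’t that one version is wrong; it’s that when the old habit compiles and runs, you’ll reach for it under pressure, and the new paradigm stays theoretical.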
But to the point: for me, the biggest jumps in skill so far came from learning about clean code, testing, requirements, DDD, functional programming, messaging, actors and event-driven architectures. Even when I didn’t use the relevant tools at work right away, my way of thinking changed. My designs got better. I was able to come up with a few completely different approaches to solving the same problem, and thus pick the best one. I could foresee some challenges and prevent them.
To me, the biggest benefit of learning about various paradigms is having more options and making better decisions. Picking the best tool for the job shouldn’t be decided at the syntax level; it’s a paradigm-level decision.