Specification By Example applied

Humble beginnings

It was an ordinary day and an ordinary meeting when one of my colleagues came up with a proposition of writing tests using Specflow. If I remember correctly, he had watched a presentation from NDC and the idea appealed to him for some reason. We decided to give it a try and write tests for one of the old modules in our system.

We already had some existing tests in this module, but we wanted to increase the coverage. Since we didn’t start from scratch, Konrad managed to get the first tests running pretty quickly. A few days later he was showing us how to get started. Some time later we decided that we could use our slack time for writing Specflow tests.

A few months later I was thinking about giving a presentation at a local .NET user group. I mentioned it to my team lead in one of our one-on-one meetings. He suggested that I could investigate the concept of Specification By Example a bit deeper and share our practical experiences. I read a few books and made a presentation, then another; the idea has stayed with me for good, and I still keep learning about it.

What’s the big deal?

Soon enough we started noticing benefits. The separation of abstraction levels (pseudo-natural language vs code) influenced the way we were thinking about the problems under test. We started discovering more and more inconsistencies. We documented them and discussed them with the client. We asked more questions than ever. After a while we questioned everything, just on principle.

Then we applied the concept to other areas, where we didn’t introduce Specflow and didn’t even want to. When requirements were unclear, we used examples and tables, and we used natural language to provide specific scenarios, making sure that we and our business experts were on the same page. The side effect was that we started focusing more on the “why” part, on the big picture of the problem. We learned that some of our assumptions were incorrect, that we automatically added constraints that our experts didn’t come up with (e.g. they were fine with some kinds of temporal data inconsistency, which seemed crazy to us), and that sometimes we used the same words to describe different concepts (apparently a “future date” can sometimes mean “last week”).
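To give a flavour of what such examples look like, here is a hypothetical scenario outline in Gherkin, the format Specflow uses. The feature, steps, and data below are invented for illustration and are not taken from our actual system; the point is how example tables make assumptions like “what counts as a future date” explicit:

```gherkin
# Hypothetical feature file (Gherkin syntax, as consumed by SpecFlow).
Feature: Scheduling appointments
  In order to reserve a time slot
  As a clerk
  I want to book appointments for clients

  Scenario Outline: Booking validates the requested date
    Given today is <today>
    When I request an appointment for <requested date>
    Then the booking should be <result>

    Examples:
      | today      | requested date | result   |
      | 2014-06-10 | 2014-06-17     | accepted |
      | 2014-06-10 | 2014-06-03     | rejected |
```

Walking through concrete rows like these with a business expert is exactly the kind of conversation that surfaces hidden disagreements, such as whether the second row should really be rejected.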

The scenarios provide a very neat framework for communication with business users. Our experts were in fact quite “techy”, which made things more difficult. Sometimes they used technical concepts in a different way than we did. Sometimes they had the “how” ready before we even knew “what” we were supposed to do. Scenarios were a good entry point for getting conversations started. And those conversations were different from those we had before. They naturally kept us communicating at higher abstraction levels, talking about business, not ints and database indices. Thinking about the minimum set of key examples helped us spot potential issues even before we started implementation. We caught a few inconsistencies in business rules without writing a single line of code.

We got it all wrong! Yet… it worked

I find it ironic that when I started digging deeper, I learned that we got it all wrong: we made all the possible mistakes and started from completely false assumptions… and yet, it brought us a lot of value. How was that possible?

First of all, we had the “idea champion”. Konrad worked very hard to make it work: he started the whole process, provided some initial examples and occasional tweaks later, and taught other people on the team how to use it. I think that if the idea had been handed to us by a manager or some external team, we would not have cared that much. But it was ours. Konrad wanted it to succeed and we wanted him to succeed in his efforts. So we tried hard to make it work.

Then, the tool itself was interesting. It was something new, something different, so a few of us were actually happy to spend our slack time writing more tests (by “us” I mean developers!). That meant more tests were written than would have been if we had used, say, NUnit for the same thing.

Our Specflow scenarios were far from ideal. We tested at a lower abstraction level than we should have. We used too many examples. We didn’t share our scenarios with clients (sometimes we copied them into emails, but they didn’t know they were anything more than plain text). The test organization probably could have been better.

So we might say that even if we didn’t fail from the technical point of view, from the point of view of proper tool usage we didn’t have much success either. But that wasn’t that important. We could still get the benefits the tool brings. I think that in the end we could just throw away all the tests and never use Specflow again, and that wouldn’t matter that much. Even then we’d get some value from getting familiar with the tool.

To me the main benefit was that we learned to work differently in the process; now we don’t need tools and frameworks anymore to get the conversations started. I really experienced that I, as a developer, am also responsible for requirements and for understanding the business side of things. That I can (and should!) ask questions, clarify, propose alternatives and warn about difficulties as early as possible.

Of course, we could have used a different tool or no tool at all. Also, there are other benefits to BDD and Specification By Example (I’ll write more about them in a separate post). But I think it’s pretty cool that we didn’t have to fight for budget, attend any formal training or look for communication experts to learn all those things. If it worked for us, I guess it can also work that way for other teams.