NDC Oslo - Workshop days

I arrived in Oslo on Sunday and did the touristy walkabout through the city; Monday saw the start of the Designing Microservices workshop by Sam Newman. Personally, I've been waiting for a good opportunity to jump on the microservices bandwagon for some time now, and joining this workshop is definitely preparation for doing just that.

Of course, with microservices being, in the opinion of many (yours truly included), a bit of a bandwagon thing, it doesn't hurt to try and understand what they actually are.

Now, before I continue: these are some insights I've gained from the workshop, but I'm not Sam, and I highly recommend doing this workshop yourself if you have the opportunity. He's way better at explaining this (and arguably funnier).

With that out of the way, let's go!

So then, should you build microservices?

Well, the only answer to that is: it depends. (I have it on good authority that this is what a consultant would answer[1].) But in truth, it really does. In the two days of the workshop we kept coming back to this question, and I'll try to explain why it depends.

The "micro" in microservices is a bit of a misnomer because it implies a tiny, tiny service. How tiny? Nobody really knows. But never fear! There is a better definition. Sam gave us the following:

> Small Independently Deployable services that work together, modelled around a business domain

This reminded me a lot of Uncle Bob's definition of single responsibility, which he says to think of as a single reason for change[2]. One of the key points is that these services can be deployed without requiring another service or application to be deployed in lock-step with them. In a way, a monolithic system is also independently deployable: it's one unit of deployment.

But again, the definition mentions small. What is small? When is a service small enough? I think the answer is either 42[3] or that you need to define it for yourself. Size largely depends on what works for your team and organisation.

It is more important that the services you create are designed around business domains. This helps to isolate the functionality for that domain inside a single service (which can use many others!). Also, the responsibility for changing that service should lie with one team, not with many people making changes (and thus sharing responsibility, which doesn't work[4]).

When you start figuring out which parts of your application can be isolated and turned into microservices, it is a good idea to read up on Domain-Driven Design. Designing microservices is largely about identifying bounded contexts and defining the models you expose to your consumers.
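
To make that a little more concrete, here is a minimal sketch (the contexts and fields are my own hypothetical example, not from the workshop) of how the same real-world concept can be modelled differently in each bounded context:

```python
# A minimal sketch: the same real-world "customer" modelled differently
# in two bounded contexts. The contexts and fields are hypothetical.
from dataclasses import dataclass


@dataclass
class BillingCustomer:
    """The customer as the billing context sees it."""
    customer_id: str
    payment_method: str
    billing_address: str


@dataclass
class ShippingCustomer:
    """The same customer, as the shipping context sees it."""
    customer_id: str
    delivery_address: str
    delivery_instructions: str
```

Each context owns its own model and only exposes the fields its consumers actually need; there is deliberately no single shared Customer class.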

Defining the boundaries of your services will be the hardest part, and what you need to keep in mind is the cost of change. Changing the shared model of a service that's running in production comes at a pretty high cost; moving a sticky note on a wall during your design phase is pretty darn cheap. Also keep in mind that when your system is still new, it is perfectly fine to build it as a modular monolith and split it out into services later, when the domains and contexts are clearer and less likely to change.
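
As a rough illustration of the modular-monolith idea (the Orders/Inventory split is hypothetical), modules can talk to each other through narrow in-process interfaces, so a module can later be promoted to its own service without rippling through the codebase:

```python
# A minimal sketch of a modular monolith. Modules talk through narrow
# interfaces in-process today; a module can later become a service by
# swapping the implementation for an HTTP client. Orders/Inventory are
# hypothetical domains.
from typing import Protocol


class Inventory(Protocol):
    def reserve(self, item_id: str, quantity: int) -> bool: ...


class InProcessInventory:
    """Today: a plain module living inside the monolith."""

    def reserve(self, item_id: str, quantity: int) -> bool:
        return True  # stock bookkeeping elided for brevity


class OrderModule:
    def __init__(self, inventory: Inventory) -> None:
        self.inventory = inventory  # could later be an HTTP client

    def place_order(self, item_id: str, quantity: int) -> bool:
        return self.inventory.reserve(item_id, quantity)


orders = OrderModule(InProcessInventory())
print(orders.place_order("sku-123", 2))
```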

But wait! You are at a technical conference! What about the technology?

So far I've only talked about the design and businessy stuff, but obviously there is more to it than that. The truth of the matter is that all the technology to build microservices already exists, and has for a number of years. It's more a matter of using it in the right way and applying practices rigorously.

Now, you don't necessarily need to use REST to build microservices; XML-RPC or CORBA work just as well (and that's 1990s-era technology!). However, HTTP (and REST) gives you the advantage of a useful, widely supported protocol for communication between services. Just be aware that in some circumstances it might make more sense to communicate over UDP, for example when you need very high volume and losing some data is fine. Keep your options open.
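
To show how little ceremony HTTP needs, here is a minimal sketch of a service exposing an endpoint using only the Python standard library (the /stock endpoint and payload are made up for illustration):

```python
# A minimal sketch of a service exposing an HTTP endpoint with only the
# Python standard library. The /stock endpoint and payload are made up.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class StockHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /stock/42 -> {"item": "42", "in_stock": 7}
        item_id = self.path.rsplit("/", 1)[-1]
        body = json.dumps({"item": item_id, "in_stock": 7}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    HTTPServer(("localhost", 8080), StockHandler).serve_forever()
```

Another service can then call it with something as simple as `urllib.request.urlopen("http://localhost:8080/stock/42")`.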

With the introduction of more services into your landscape, you will need to focus more on how you deploy them; manual deployment doesn't cut it anymore. Invest in a proper Continuous Integration / Continuous Delivery pipeline into which you can integrate automated testing. This helps reduce the time it takes to get services into production and provides more confidence that what is deployed actually works.
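
As one small example of what such a pipeline can run, here is a sketch of a post-deployment smoke test (the /health endpoint and URL are assumptions; substitute whatever your service actually exposes):

```python
# A minimal post-deployment smoke test a pipeline stage could run.
# The /health endpoint and URL are assumptions; substitute whatever
# your service actually exposes.
import sys
import urllib.request

SERVICE_URL = "http://localhost:8080/health"  # hypothetical endpoint


def main() -> int:
    try:
        with urllib.request.urlopen(SERVICE_URL, timeout=5) as response:
            if response.status == 200:
                print("service is healthy")
                return 0
    except OSError as error:  # covers URLError, timeouts, HTTP errors
        print(f"health check failed: {error}")
    return 1  # a non-zero exit code fails the pipeline stage


if __name__ == "__main__":
    sys.exit(main())
```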

Once deployed to production, keeping track of how your landscape is performing becomes challenging. As with deployment, manually checking status is madness, and manual alerting is even worse (getting that call from a user is a poor alerting mechanism!). You should collect logs from your various services in a centralised location, using something like the ELK stack or Splunk, so you can diagnose failures better; hunting for log files across many services quickly becomes too painful.
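
A cheap way to make logs easy for that kind of tooling to collect and index is to emit them as structured JSON; here is a minimal sketch (the field names and service name are illustrative, not a prescribed schema):

```python
# A minimal sketch of structured (JSON) logging to stdout, where a log
# shipper can pick it up and forward it to ELK or Splunk. The field
# names and service name are illustrative, not a prescribed schema.
import json
import logging
import sys


class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "service": "order-service",  # hypothetical service name
            "message": record.getMessage(),
        })


handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("order-service")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("order accepted")  # -> one JSON object per log line
```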

Because an action performed by a user in a microservices landscape can cross multiple service boundaries, it can be challenging to stitch log entries together when something goes wrong. To improve this, it is a good idea to implement correlation IDs and attach them to your log entries. That makes it much easier to track calls across multiple services, and tools like Zipkin can leverage the IDs to help track down, for example, which service is taking a long time.
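
Here is a minimal sketch of that idea as WSGI middleware (the X-Correlation-ID header name is a common convention I'm assuming here, not a standard):

```python
# A minimal sketch of correlation-ID propagation as WSGI middleware.
# X-Correlation-ID is a common convention, not a standard; the point is
# to pick one header and use it consistently across all services.
import uuid


class CorrelationIdMiddleware:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        # Reuse the caller's ID if present, otherwise start a new trail.
        correlation_id = environ.get("HTTP_X_CORRELATION_ID") or str(uuid.uuid4())
        environ["HTTP_X_CORRELATION_ID"] = correlation_id

        def start_with_id(status, headers, exc_info=None):
            # Echo the ID back so callers and log scrapers can see it.
            headers.append(("X-Correlation-ID", correlation_id))
            return start_response(status, headers, exc_info)

        return self.app(environ, start_with_id)
```

Wrap any WSGI app with `app = CorrelationIdMiddleware(app)`, attach the ID to every log line, and pass the header along on outgoing calls.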

One bit of advice Sam gave us, and I agree with him on this one: build this stuff in from the start! It is so much more difficult to do afterwards (cost of change!). When you start building the first services and deploying them into production, you will find out what works for your organisation and what you actually need. Use that knowledge in the next services you build, so you improve your overall platform as you move forward instead of trying to retrofit logging and monitoring onto existing services.

Another important thing to consider is using Infrastructure-as-Code to help you automate the deployment of your infrastructure. It gives you more confidence that what is running in production is actually what you intended, instead of a collection of special snowflakes that nobody remembers how to rebuild. With this approach you can also more easily destroy and rebuild environments, which is really useful in testing scenarios or disaster recovery, to name a few (there are many more).
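
As a tiny sketch of the destroy-and-rebuild idea, assuming your environment is described in Terraform configuration in the current directory (the wrapper itself is hypothetical; only the Terraform commands are standard):

```python
# A tiny sketch of destroy-and-rebuild, assuming the environment is
# described in Terraform configuration in the current directory. The
# wrapper is hypothetical; the Terraform commands themselves are standard.
import subprocess


def rebuild_environment() -> None:
    # Tear everything down and build it back from the same code, so what
    # runs is exactly what the configuration says, not a snowflake.
    subprocess.run(["terraform", "destroy", "-auto-approve"], check=True)
    subprocess.run(["terraform", "apply", "-auto-approve"], check=True)


if __name__ == "__main__":
    rebuild_environment()
```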

Of course there is a lot more to talk about on the technology side, but I'll save that for the next blog post as this one has gotten long already. More from NDC to come, and I hope to do a lot more video Q&A sessions with some of the speakers, so watch this space.

[1] Yes, I gave this answer and was slammed for it...
[2] [https://8thlight.com/blog/uncle-bob/2014/05/08/SingleReponsibilityPrinciple.html](https://8thlight.com/blog/uncle-bob/2014/05/08/SingleReponsibilityPrinciple.html)
[3] You have no idea where your [towel is](https://en.wikipedia.org/wiki/The_Hitchhiker%27s_Guide_to_the_Galaxy).
[4] Personal observation and gut feel. Shared responsibility makes it unclear who is actually responsible and leads to deflection of work (no, you need to talk to that guy...).