9

In terms of software architecture and design, how do microservices "stack up" (pun intended) against middleware? I'm coming from Java, and it seems like as you move away from straight REST as an API, and abstract away different layers and connection parameters, at least in Java, you've almost come full circle back to some very old school ideas. We've come back to virtualization... whereas the JVM is already virtual.

In an agnostic way, you can abstract a RESTful API to CORBA, and I would argue there are advantages to doing so. Or, in a more Java-centric way, to JMS or MDB.

At one time EJB was a big deal in Java; then it was recognized to be a bit of a cluster eff. But now, are we back to the beginning?

Or do microservices offer something which CORBA, or even better, MDB, lacks? When I read (TL;DR) Martin Fowler explaining microservices, it strikes me as a good solution to a bad problem, if you will. Or rather, a closed-minded approach which introduces a level of complexity that only pushes the problem around. If the services truly are micro, and are numerous, then each has a dollar cost to run and maintain.

Furthermore, if one microservice amongst many changes its API, then everything depending on that service breaks. It doesn't seem loosely coupled; it seems the opposite of agile. Or am I misusing those words?

Of course, there are an indeterminate number of choices between these extremes.

Shark versus Gorilla...go! (For the pedantic, that's meant to be ironic, and isn't my intention at all. The question is meant to be taken at face value. If the question can be improved, please do so, or comment and I'll fix.)

Envision a multitude of microservices running in Docker, all on one machine, talking to each other... madness. Difficult to maintain or administer, and next to impossible to ever change anything, because any change will cascade and cause unforeseeable errors. How is it somehow better that these services are scattered across different machines? And if they're distributed, then surely some very, very old school techniques have solved distributed computing, at least to a degree.

Why is horizontal scaling so prevalent, or at least desirable?

Thufir
  • Voting to close. It's unclear what you're asking and why you're asking it. Microservice architecture is just another architecture. Nothing more, nothing less. –  Mar 11 '15 at 07:18
  • You might also find this article worthwhile: http://devweek.com/blog/microservices-the-good-the-bad-and-the-ugly –  Mar 11 '15 at 07:23
  • I prefer "So now they have hundreds of little faux services, and instead of a monolith, they have to worry about what happens when someone wants to change the contract of one of those services. A ball of yarn, by a different name. Instead of wondering whether the code will compile if they make a change, they wonder whether it will run, in practice." --[microservices for grumpy neckbeards](https://news.ycombinator.com/item?id=8227721) disclaimer: no, I'm not a neckbeard, it's just a humorous article. – Thufir Mar 11 '15 at 09:18
  • "Instead of wondering whether the code will compile... they wonder whether it will run..." In fact this is not a problem at all. Simply because if a developer changes a contract without notifying all involved parties - that developer should be spanked very hard. Literally. If we use term contract then imagine if your mobile provider changes terms of contract without asking/informing you? That why it is contract - all parties involved must be aware of/agree on contract changes and in when this change happens (assuming proper development flow) all should be tested and run smoothly. – Alexey Kamenskiy Mar 11 '15 at 17:26
  • @AlexKey I think the quote is in the context of **your** company having n MS, all invoking each other. When something breaks, where did it break? As to the contract, if **you** are just deploying a **single** MS, then set your contract in stone. But what if it's a **system** of n MS? You're going to set the contract in stone, for **each and every** MS, **first**? Upfront? Doesn't sound very agile; sounds like you're..going..over..a..water..fall....... – Thufir Mar 12 '15 at 09:52
  • @Thufir Not in stone. Nothing is permanent, not even real-life contracts. But whenever the contract changes, it is either backward compatible (say, extending existing functionality) or all participants should be aware of the changes. This is actually exactly the same concept as `contract first` in Java. The only difference is that the contract here is not an actual implementation, but an agreement on how processes communicate. – Alexey Kamenskiy Mar 12 '15 at 10:04
  • @Thufir While I agree that the `giant blob` approach can ensure that everything works at compile time, the MS approach also brings benefits. The first that comes to mind, from a team lead's perspective: the team that works on MS1 doesn't need to know anything about the other parts; it only has to know how to implement its interfaces, and it is its responsibility to ensure that its implementation follows those agreed interfaces (APIs, in fact) and that (before any implementation starts) it participates in the creation of the interface (read: contract). – Alexey Kamenskiy Mar 12 '15 at 10:08
  • Absolutely, the team working on MS1 only implements the contract (in theory) so that client MS's can use that API. That's fantastic, and works well if you're cranking out MS's **sequentially**. What if you're deploying a system? In relation to TDD and contracts, it **seems** inescapable that you've re-introduced the waterfall technique? It **seems** that you've traded one problem for another without gain. I don't know, I'm asking. I'll have to actually read Martin Fowler et al. when I have a chance, but his writing is very dense -- which is good. I was just skeptical. – Thufir Mar 12 '15 at 10:14
  • @Thufir As was said before, MS is just another approach; it will have its benefits and its disadvantages. I actually worked with this approach (even before I heard that it had a special name) on multiple projects. As a side note - it is not a waterfall, it is what you make it. When I worked on one project developing (in a team) part of a mobile OS, this approach was the only way, because an OS cannot be made as a `giant blob`; it has to have interfaces, so each part starting from the kernel is a sort of MS, and the first thing before any team started writing code was to agree on specifications v0.0.1. – Alexey Kamenskiy Mar 12 '15 at 10:23
  • see also http://www.infoq.com/news/2016/02/services-distributed-monolith?utm_campaign=infoq_content&utm_source=infoq&utm_medium=feed&utm_term=global – Thufir Feb 24 '16 at 11:51

3 Answers

5

Every software development technique we've ever invented has been about managing complexity somehow. A huge portion of them have been, and continue to be, about abstraction, encapsulation and loose coupling. Microservices are yet another way of doing those things, which is probably why they resemble a lot of older techniques at a high theoretical level, but that doesn't make them any less useful or relevant.

Regarding loose coupling, I think you've misunderstood the goal a bit. If task A needs to call task B, there's never going to be a way to make A and B 100% decoupled. Never going to happen. What you can do is ensure that, if task B calls task C, then task C should never have to worry about changes to A. If these three tasks are all linked together in one big blob, passing structs to each other, then there's a significant chance they'll all have to change if any one of them does. But if all three are microservices, then you're basically guaranteed that a change to A will only force B to update (unless it's such a huge change to A's core functionality that you probably should've made it a brand new service). This is especially true if all microservice updates are done in a backwards-compatible way, which they should be.
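To make that concrete, here's a minimal Java sketch (all class names are invented for illustration) contrasting the "one big blob passing structs" style with a service that owns its own contract type:

```java
// Blob style: A's internal type leaks all the way down to C.
class OrderRecord {            // owned by task A; add or rename a field here
    String customerId;         // and B *and* C must recompile and redeploy
}

class TaskCBlob {
    void archive(OrderRecord rec) { /* C depends directly on A's type */ }
}

class TaskBBlob {
    void process(OrderRecord rec, TaskCBlob c) { c.archive(rec); }
}

// Microservice style: B publishes its own contract; C never sees A's types.
class ArchiveRequest {         // owned by B's public API, versioned and stable
    final String customerId;
    ArchiveRequest(String customerId) { this.customerId = customerId; }
}

class TaskB {
    // B translates A's data into its own contract before calling C,
    // so a change to OrderRecord stops at this boundary.
    ArchiveRequest toContract(OrderRecord rec) {
        return new ArchiveRequest(rec.customerId);
    }
}
```

The point is that changes to `OrderRecord` stop at B's boundary, because C only ever sees `ArchiveRequest`.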

Regarding the agile comment, I can tell you from personal experience that our microservice-y code plays far better with agile than our "linked into a big blob" code. In the latter, whenever someone fixes a bug in a low-level function, he literally has to send the entire R&D department an e-mail saying "please relink your tasks or they'll all crash on Friday". We get a couple of these every week. If his code were in a microservice, we would all automagically benefit from the fix as soon as he deployed a new version.

I don't fully understand the comment about CORBA and MDB, as they don't seem to be software architectures but rather components of one; to my understanding, they're potential ways of defining your microservices' messaging protocols and/or implementing said microservices, not in and of themselves alternatives to microservices.

Ixrec
  • "If these three tasks are all linked together in one big blob..." I only have a Java perspective on this, but I'd say "no," bad idea, don't do that. Make a library, API#1, API#2, etc., to accomplish your exact point of "..task C should never have to worry about changes to A," because C is a client of B **only** and not of A at all. In that regard, I don't see that as new at all, pardon. I know that the question I asked was fuzzy. It's a fuzzy question because I'm fuzzy on what it's all about. Each and every answer has been useful for me, if only to help with my vocabulary. – Thufir Mar 12 '15 at 10:35
  • @Thufir If the libraries are all dynamically linked, and the tasks all run on exactly the same set of machines, then you're right, that would indeed allow for separate rollout. But microservices let you drop even those assumptions, if you want to go that far with decoupling. It's entirely reasonable not to. – Ixrec Mar 12 '15 at 10:40
  • CORBA is (was) a distributed technology that enabled distributed architectures at a time (the late 1990s) when there were not yet widespread names to define them. You were free to implement coarse-grained or fine-grained CORBA-based systems, ending up in what would later be called SOA or microservices. CORBA didn't survive, but we're doing it again, just with a different technology. The problem, though, was not technology. So yes, we're going full circle. I hope we learned something in the process. – xtian Jul 11 '19 at 09:06
4

How is it somehow better that these services are scattered across different machines?

Because of the cloud.

Done laughing yet? Seriously though - for many businesses, the biggest cost for software isn't the software anymore. It's the bandwidth, hardware, CDN costs, etc. Now that everyone has a mobile device, there's just that much more traffic. And that will only get worse as your toaster gets its own internet connectivity.

So businesses are looking to manage those costs. Specifically, they're trying to handle the business problem of "if this thing blows up, how can I serve millions of people getting/using my software - without paying ahead of time for the servers to serve millions of people getting/using my software?".

Why is horizontal scaling so prevalent, or at least desirable?

Because it answers this (huge and increasing) business problem.

When you have a dozen users, you can toss all of the services on one box. This is good, since you only want to pay for one box. And you also don't want to pay for changes to the app to split up the various services when your business scales. These days, you don't have time to do that before the mob of customers lights your servers on fire anyways.

It's also good because it allows you to juggle server allocations so that you can:

  1. use the most of the servers you have, leaving little to "waste".
  2. measure the performance of individual elements of your software.
  3. reduce deployment/down time caused by releases.

Having very granular deployments makes those things easier/better (in addition to helping to enforce better separation of concerns).

Telastyn
  • Ok, I can see the advantage. There's a low barrier to entry, but it **can** scale up. I suppose what throws me for a loop is when you scale horizontally n MS's, it just seems...very...retro? Something. I can't put words to why it just _seems_ wrong. – Thufir Mar 13 '15 at 11:10
  • Application scaling is a problem that does not necessarily need microservices: you can increase the power of a VM very easily on AWS (even on demand), or you could add more VMs behind a load balancer in a traditional architecture. – xtian Jul 11 '19 at 08:55
  • @xtian - sure, but you're often scaling the wrong things and thus spending more money than you need to. The idea behind microservices is you just scale what you need (cpu, memory, disk, throughput, gpu) – Telastyn Jul 11 '19 at 14:04
2

TL;DR: I have had the pleasure of drinking a lot of microservice-flavored Kool-Aid, so I can speak a bit to the reasons behind them.

Pros:

  • Services know that their dependencies are stable and have had time to bake in.
  • Allow rolling deployments of new versions.
  • Allow components to be reverted without affecting higher layers.

Cons:

  • You cannot use the new and shiny features of your dependencies.
  • You can never break API backwards compatibility (or at least not for many development cycles).

I think that you fundamentally misunderstand how a microservice architecture is supposed to work. The way it is supposed to be run is that every microservice (referred to from here on in as MS) has a rigid API that all of its clients agree upon. The MS is allowed to make any changes that it wants as long as the API is preserved. The MS can be thrown out and rewritten from scratch, as long as the API is preserved.
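As a rough sketch of that idea in Java (names hypothetical; in a real MS the contract would be a wire protocol rather than a language interface):

```java
// The rigid API that all clients agree upon; only this must stay stable.
interface InventoryApi {
    int stockLevel(String sku);
}

// First implementation: quick and simple.
class InMemoryInventory implements InventoryApi {
    private final java.util.Map<String, Integer> stock = new java.util.HashMap<>();

    @Override
    public int stockLevel(String sku) {
        return stock.getOrDefault(sku, 0);
    }
}

// Later thrown out and rewritten from scratch. Clients never notice,
// because they only ever coded against InventoryApi.
class WarehouseBackedInventory implements InventoryApi {
    @Override
    public int stockLevel(String sku) {
        // In a real MS this would be a remote call; stubbed out here.
        return 0;
    }
}
```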

To aid in loose coupling, every MS depends on version n-1 of its dependencies. This allows for the current version of the service to be less stable and a bit more risky. It also allows versions to come out in waves. First 1 server is upgraded, then half, and finally the rest. If the current version ever develops any serious issues, the MS can be rolled back to a previous version with no loss of functionality in other layers.

If the API needs to be changed, it must be changed in a way that is backwards compatible.
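Reusing the hypothetical `InventoryApi` from the sketch above, a backwards-compatible change is purely additive:

```java
// The original contract, which version n-1 clients were built against.
interface InventoryApiV1 {
    int stockLevel(String sku);
}

// Backwards compatible: the old method keeps its exact signature and
// meaning; the new capability is an additive overload that old clients
// can simply ignore.
interface InventoryApiV2 extends InventoryApiV1 {
    int stockLevel(String sku, String warehouseId);
}

// NOT allowed: renaming stockLevel, changing its return type, or removing
// it -- any of those breaks every existing client at once.
```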

  • "The MS can be thrown out and rewritten from scratch, as long as the API is preserved." --nothing new there, but ok. In terms of performance, at the opposite end of the spectrum, how do all these MS's compare to a monolithic app/service/system? In terms of distribution, it **sounds like**, and please correct me if wrong, that there's a potential performance gain from putting n MS's on a single machine...virtualized on a mainframe?? It's almost like the more you scale MS's **horizontally** the simpler it becomes to then scale them **vertically**...? Bonus points for not reading my question:) – Thufir Mar 12 '15 at 13:32
  • As with any layer of indirection, you are taking a performance hit as compared to a big ball of mud. In the case of MS, it is particularly expensive, since you are taking a network round-trip on every call. Using virtualization or containers makes this round trip significantly shorter, since the call never actually leaves the machine. It also means that you gain more isolation (a runaway service cannot hurt its peers) with a smaller hardware cost. – Konstantin Tarashchanskiy Mar 12 '15 at 13:46