Tuesday, January 28, 2014

Comparison of different concurrency models: Actors, CSP, Disruptor and Threads

In order to make use of modern multi-core/multi-socket hardware, Java offers threads and locks for concurrent programming. It is well known that this model suffers from a number of problems:
  • Deadlocks
  • Bad scaling due to "over-synchronization" or contention on common resources
  • Errors caused by invalid synchronization are timing-dependent and hard to reproduce. It is common for them to show up only under special conditions (load, hardware) in production.
  • As a software project grows, it gets hard to safely modify the application. A programmer needs global awareness of an application's control flow and data access patterns in order to change the software without introducing errors.
  • The subtleties of the Java Memory Model are not well known to most Java programmers. Even if the JMM is known, incorrect code isn't obvious and fails only rarely. It can be a nightmare to spot such bugs from the production logs of large applications.
So I am looking for alternative models to express concurrency. To that end, I ran a simple benchmark of the most popular "alternative" concurrency models: Actors, CSP and the Disruptor.



Actors


Real Life Analogy: People cooperating by sending mails to each other.

An Actor is like an object instance executed by a single thread. Instead of calling methods directly, messages are put into the Actor's "mailbox" (~queue). The Actor reads and processes messages from the queue sequentially on its single thread (with exceptions).
Internal state is exposed/shared by passing messages (with copied state) to other Actors.
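
To make the mailbox idea concrete, below is a minimal hand-rolled sketch (illustrative only; the class and method names are made up and this is not Akka's or any other library's API). An actor is just a behavior plus a queue drained by one dedicated thread, so the behavior never runs concurrently with itself and needs no locks on its own state.

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Minimal actor sketch: a mailbox (queue) drained by a single dedicated thread.
public class MiniActor<M> {

    public interface Behavior<T> {
        void onMessage(T msg);
    }

    // The executor's internal task queue would also do; the explicit mailbox
    // is kept to mirror the description above.
    private final Queue<M> mailbox = new ConcurrentLinkedQueue<M>();
    private final ExecutorService thread = Executors.newSingleThreadExecutor();
    private final Behavior<M> behavior;

    public MiniActor(Behavior<M> behavior) { this.behavior = behavior; }

    // "send"/"tell": enqueue the message and schedule its processing
    // on the actor's single thread. The caller never blocks.
    public void send(M msg) {
        mailbox.add(msg);
        thread.execute(new Runnable() {
            public void run() { behavior.onMessage(mailbox.poll()); }
        });
    }

    public void stop() { thread.shutdown(); }

    public static void main(String[] args) {
        MiniActor<String> printer = new MiniActor<String>(new Behavior<String>() {
            public void onMessage(String msg) { System.out.println("got: " + msg); }
        });
        printer.send("hello");  // returns immediately; printing happens on the actor's thread
        printer.send("world");
        printer.stop();         // processes already-queued messages, then terminates
    }
}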







CSP (Communicating Sequential Processes)


Real Life Analogy: People phoning each other.

Though a different terminology is used, a CSP system can be seen as a special kind of actor system with bounded mailboxes (queues) of size 0.

So if one process (~Actor) wants to pass a message to another process, the caller is blocked until the receiver accepts the message. As an alternative to blocking, a CSP process can choose to do other things, e.g. check for incoming messages (this introduces non-determinism into the order of outgoing messages). A receiver cannot accept incoming messages while it is occupied with processing a prior one.
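
A minimal sketch of this rendezvous behaviour in plain Java, using a SynchronousQueue as a zero-capacity channel (illustrative only; real CSP implementations such as Go channels or JCSP additionally offer selection over several channels):

import java.util.concurrent.SynchronousQueue;

// CSP-style rendezvous: put() blocks until another thread take()s the message.
public class CspSketch {
    public static void main(String[] args) throws InterruptedException {
        final SynchronousQueue<String> channel = new SynchronousQueue<String>();

        Thread receiver = new Thread(new Runnable() {
            public void run() {
                try {
                    // the receiver "answers the phone": it accepts one message at a time
                    System.out.println("received: " + channel.take());
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        receiver.start();

        // put() does not return until the receiver has taken the message
        channel.put("hello");
        receiver.join();
    }
}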





Disruptor


The Disruptor data structure came up some years ago; it was invented and pioneered by Martin Thompson, Mike Barker, and Dave Farley at the LMAX exchange.

Real Life Analogy: Assembly line




The Disruptor is a bounded queue (implemented as a ring buffer) where producers add to the head of the queue (if slots are available; otherwise the producer is blocked).
Consumers access the queue at different points, so each consumer has its own read position (cursor) inside the queue. Sequencing is used to manage queue access and processing order.
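
For illustration, this is roughly what wiring up a single producer and one consumer with the LMAX Disruptor 3.x DSL looks like (a simplified sketch, not the benchmark code from below; constructor overloads differ slightly between 3.x releases):

import com.lmax.disruptor.EventFactory;
import com.lmax.disruptor.EventHandler;
import com.lmax.disruptor.RingBuffer;
import com.lmax.disruptor.dsl.Disruptor;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class DisruptorSketch {

    // Event living in one ring buffer slot; slots are preallocated and reused,
    // so no garbage is produced per message.
    static class ValueEvent {
        long value;
    }

    public static void main(String[] args) {
        ExecutorService executor = Executors.newCachedThreadPool();

        // Ring buffer size must be a power of two.
        Disruptor<ValueEvent> disruptor = new Disruptor<ValueEvent>(
                new EventFactory<ValueEvent>() {
                    public ValueEvent newInstance() { return new ValueEvent(); }
                },
                1024,
                executor);

        // Consumer: invoked for each published event with its sequence number.
        disruptor.handleEventsWith(new EventHandler<ValueEvent>() {
            public void onEvent(ValueEvent event, long sequence, boolean endOfBatch) {
                System.out.println("consumed " + event.value + " @ " + sequence);
            }
        });

        RingBuffer<ValueEvent> ringBuffer = disruptor.start();

        // Producer: claim a slot (blocks if the ring is full), write into the
        // preallocated event, then publish the sequence so consumers may read it.
        for (long i = 0; i < 10; i++) {
            long seq = ringBuffer.next();
            try {
                ringBuffer.get(seq).value = i;
            } finally {
                ringBuffer.publish(seq);
            }
        }
        disruptor.shutdown();
        executor.shutdown();
    }
}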



Threads + locking of shared data


OK, everybody knows this one. It's Java's default way to model concurrent execution, and actually these are the primitives (+ CAS) used to build the higher-level concurrency models described above.
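
As a reminder of what this looks like in its most basic form, here is a sketch (not the benchmark implementation from the bottom of this post) of worker threads updating shared state under a common lock - exactly the place where contention arises:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Plain threads + a lock on shared data: every worker synchronizes on the same
// monitor to update the shared sum, so the lock becomes the point of contention.
public class SharedCounter {
    private double sum = 0;

    public synchronized void add(double v) { sum += v; }
    public synchronized double get() { return sum; }

    public static void main(String[] args) throws InterruptedException {
        final SharedCounter result = new SharedCounter();
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());

        for (int i = 0; i < 1000; i++) {
            pool.execute(new Runnable() {
                public void run() { result.add(1.0); }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println(result.get()); // 1000.0
    }
}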











Conclusion


While the CSP/Actor model realizes inter-thread communication by passing/copying thread-local data to some queue or process, the Disruptor keeps data in place and ensures that only one thread (consumer or producer) owns a data item (= queue slot) at a time.
This model seems to fit existing CPU, memory and VM architectures better, resulting in high throughput and superior performance. It avoids high allocation rates and reduces the probability of cache misses. Additionally, it implicitly balances the processing speed of interdependent consumers without queues growing somewhere in between.
There is no silver bullet for concurrent programming; one still has to plan carefully and know what's going on.

While writing this, I realize that each of these patterns has its own set of best-fit problem domains, so my initial intention of using a single benchmark to compare the performance of different concurrency models might be somewhat off :-).


 

The Benchmark Application


 

Update: Jason Koch implemented a custom executor performing significantly better than the stock JDK executors. See his blog post.

The benchmark approximates the value of Pi concurrently. N jobs are created, and the results of all jobs must be added up to finally get an approximation of Pi. For maximum performance one would create one large job per core of the CPU used. With this setup, all solutions would perform roughly the same, as there is no significant concurrent access or scheduling overhead.

However, if we compute Pi by creating 1 million jobs, each computing a 100-iteration slice of Pi (a sketch of such a slice follows the list below), we get an impression of:
  • the cost of accessing shared data when the result of each job is added to one single result value
  • the overhead created by the concurrency control mechanics (e.g. sending messages / scheduling Runnables to a thread pool, CAS, locks, volatile reads/writes, ...).
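For reference, a single pi-slice job boils down to the Gregory-Leibniz series (essentially the formula from Akka's Pi tutorial); the sketch below runs the slices sequentially just to show the arithmetic - the benchmarks distribute them across threads/actors/ring buffer slots.

public class PiSlice {

    // One slice of the Gregory-Leibniz series: pi = 4 * (1 - 1/3 + 1/5 - 1/7 + ...).
    // Each job sums 'sliceSize' terms starting at its own offset; the partial
    // results of all jobs are added up to approximate Pi.
    static double calculatePiSlice(int sliceNr, int sliceSize) {
        double acc = 0.0;
        for (int i = sliceNr * sliceSize; i < (sliceNr + 1) * sliceSize; i++) {
            acc += 4.0 * (1 - (i % 2) * 2) / (2 * i + 1);
        }
        return acc;
    }

    public static void main(String[] args) {
        double pi = 0;
        for (int job = 0; job < 1000000; job++) {   // 1 million jobs, 100 iterations each
            pi += calculatePiSlice(job, 100);
        }
        System.out.println(pi); // ~3.14159...
    }
}
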
The test is run with 2 configurations:
  • 100 iterations per pi-slice-job, 1,000,000 jobs
  • 1,000 iterations per pi-slice-job, 100,000 jobs
As hardware I use:
  • AMD Opteron 6274, dual socket, 2.2 GHz. Each socket has 8 cores + 8 hardware threads (so overall 16 cores + 16 HW threads)
  • Intel Xeon @ 3 GHz, dual socket, six cores each (12 cores, Hyper-Threading turned off; 2011 release)
I never use more threads than there are "real" cores.



Stop the blabber, gimme results!


See the bottom of this post for the benchmark source.
  • Threads - a somewhat naive thread-based implementation of the Pi computation. With enough effort invested, it is of course possible to match any of the other results. At the core of the VM, threads and locks (+ atomics, volatile r/w, CAS) are the only concurrency primitives. However, there is no point in creating an ad-hoc copy of the Disruptor or an Actor system just to compare concurrency approaches.
  • Akka - a popular Actor implementation on the JVM. The benchmark has been reviewed and revised by the Akka crew (especially the ActorSystem configuration can make a big difference). Threads are scheduled using Java 7's fork-join pool. Actually, the Pi computation is one of Akka's tutorial examples.
  • Abstraktor - my experimental Actor/CSP implementation. It uses short bounded queues (so it leans more to the CSP side) and avoids deadlocks by maintaining 2 queues per Actor (in and out). If the out-queue is blocked, it just reads from the in-queue.
    I am using Nitsan Wakart's excellent MPSC queue implementation (check his blog or jaq-in-a-box on GitHub), and that's the major reason it shows kind of competitive performance and scaling.
    I use this to get a rough baseline for comparison and to experiment with different flavours of Actors/CSP. Probably the only thing one can do with it is run the Pi bench ;-).
    Update: The experimental version benchmarked here has been consolidated and improved. You can find it at https://github.com/RuedigerMoeller/kontraktor
  • Disruptor - my naive approach to implementing the benchmark based on Disruptor 3.2. It turned out that I used a not-yet-polished utility class; however, I keep this benchmark to illustrate how smallish differences in implementation may have big consequences.
  • Disruptor2 - as Michael Barker would have implemented it (thanks :-) ). It's actually more than twice as fast in the 1-million-jobs test as the closest runner-up.
Intel (Xeon, 2 sockets with 6 cores each, Hyper-Threading off)

Well, it seems that with 100k jobs of 1,000 iterations each, the benchmark is dominated by computation, not concurrency control. Therefore I retry with 1 million jobs, each computing a 100-iteration slice.

OK, we can probably see differences here :-)



AMD Opteron 6274, dual socket, 2.2 GHz (= 16 real cores, 16 HW threads)


Again, the 1-million-tiny-jobs variant spreads out the differences amongst the approaches (and their implementations):


Note that there is around 5% run-to-run jitter (GC and stuff); however, that does not change the big picture.

Last but not least: Best results per CPU architecture per lib:




Discussion of Results


We see that scaling behaviour can be significantly different depending on the hardware platform used.

Akka loses a lot of CPU time doing GC and allocation. If I modify Abstraktor to use an unbounded ConcurrentLinkedQueue (Akka uses those), it performs similarly to Akka and builds up temporary queues of >100k elements. This is an inherent issue of the Actor model. By leaning towards CSP (using very short bounded blocking queues with a size <100), performance also suffers, as threads block/spin waiting for input too often. However, with a queue size of 5,000 elements things work out pretty well (introducing other problems such as deadlocks in the case of cyclic Actor graphs).
The Disruptor library is very well implemented, so a good part of the performance advantage could be attributed to the excellent quality of implementation.

A quote from the Disruptor documentation regarding queuing:
3.5 The Problems of Queues
[...]  If an in-memory queue is allowed to be unbounded then for many classes of problem it can grow unchecked until it reaches the point of catastrophic failure by exhausting memory. This happens when producers outpace the consumers. Unbounded queues can be useful in systems where the producers are guaranteed not to outpace the consumers and memory is a precious resource, but there is always a risk if this assumption doesn’t hold and queue grows without limit. [...]
When in use, queues are typically always close to full or close to empty due to the differences in pace between consumers and producers. They very rarely operate in a balanced middle ground where the rate of production and consumption is evenly matched. [...]
Couldn't have said it better myself =).



Conclusion


Although the Disruptor worked best for this example, I think looking for "the one concurrency model to go for" is the wrong approach. If we look at the real world, we see all 4 patterns in use, depending on the use case.
So a broad concurrency library would ideally integrate the assembly-line pattern (~Disruptor), queued messaging (~Actors) and unqueued communication (~CSP).



Benchmark source


AKKA




Multi Threading




Abstraktor




Disruptor naive




Disruptor optimized


7 comments:

  1. Rudiger, it's hard to arrive at a conclusion. There are too many variables and misconceptions. But are we looking at the same thing? CSP and Actors are both high-level abstractions, and the Disruptor seems to me a very sophisticated queue implementation. Couldn't CSP and/or Actors be implemented with ring queues and memory barriers under the hood? Clearly, platforms that rely too much on garbage collection may be at odds with the Disruptor approach. But we could eventually see CSP and/or Actors catch up.

  2. I don't think there is a general "best" model; it depends on the use case.

    CSP has inherent problems with blocking; if one wants to avoid that, one has to accept a non-deterministic order of processing, which is a no-go for many applications.
    Actors do not work well with any kind of bounded queue. As soon as you limit the queue size, there is a deadlock scenario if your actor graph contains cycles.

    However, the Disruptor is more than a "queue" implementation. While Actors need a queue for each processing unit, following "move data, not code", the Disruptor does it the other way round: "move code, not data". The analogy of a "mailbox" system vs. an "assembly line" illustrates the difference between the approaches pretty well. There are queues involved, but still the approach is quite different.

  3. And what about software transactional memory?

    Replies
    1. There are no serious pure-software implementations available for benchmarking. However, there are attempts to support this in hardware (Intel). I am not deep enough into the details to judge/guess the viability of this approach. I'd assume it performs well under low contention but not so well in high-contention cases. Additionally, it's not clear how and when the JVM will make this hardware feature available.
      At the time of writing I wasn't even aware of STM :-)

  4. A useful read for me. Apart from the throughput comparison, I am also interested in knowing which concurrency model you found easy to use and which one gives the best safety guarantees. I have yet to dig into concurrency models to reason about correctness and safety. Could you throw some light on this based on your experience?

    Replies
    1. I think there is no "one size fits all" model.

      I found the Disruptor quite usable for setting up high-speed event/request processing engines; however, it requires a different style of representing business logic and proper planning.
      As soon as you hit some blocking API, you'll need other ways to deal with concurrency.

      Actors: From my experience it's manageable if you use them very coarse-grained. As soon as you set up complex actor systems with cyclic messaging/dependencies across many actors, you'll run into serious trouble once real load is put on (queue overflows, or deadlocks with bounded actor queues). It's by far not the "silver bullet" some folks try to make it.

      Try to think and design "reactive", e.g. a server component should be seen as a component processing a stream of incoming events single-threaded (use NIO). If that is too slow, try to parallelize sub-tasks, not the complete application flow. If that's still too slow, try partitioning. Avoid synchronous stuff (e.g. synchronous web services or remote calls, synchronous database queries). There are tricks to wrap blocking operations in order to avoid multithreading. Get used to queuing in order to pass data from one processing unit to another.

      Unfortunately, many of the old APIs (e.g. Servlets, JDBC) are still hard to use with a reactive style of programming and encourage/require the creation of many threads without gaining anything but complexity.

  5. This comment has been removed by the author.
