Temporal Paradoxes: Multitasking at Its Best



When computing began, it was relatively simple to reason about problems as a single sequence of computations. It didn't take long, though, before we introduced the ability to compute more than one thing at a time. Today, we take computers' ability to multitask for granted. We know that this is because we have multiple cores, CPUs, and servers. But somehow, "single-threaded" things like JavaScript and Python are also able to "do more than one thing at a time."

How? There are two different concepts at play here, often at the same time, often conflated, yet completely distinct: parallelism and concurrency.

Quickly defined, they are:

  • Parallelism: More than one thing literally running at the exact same time.
  • Concurrency: The ability to handle multiple things in an undefined order.

Or, put differently, concurrency gives the illusion that multiple things are happening at the same time (thanks to how a program or runtime environment was designed). With parallelism, multiple things actually are happening simultaneously.


Let's take a deeper look. Then, I'm going to show you how Temporal gives you reliable concurrency and parallelism with a durable, distributed event loop.


Concurrency is most easily understood through a concept formally known as "time-division multiplexing," more commonly called "multitasking." I might spend a few minutes writing this post, then check Slack messages, then read some news, and then come back to writing. All of the tasks are making progress before any one of them is finished, but I can't write and do the dishes at the same time. That is concurrency: I am capable of pausing one thing and coming back to it later, after doing something else in the meantime.

But just because I can't check email while also writing a blog post doesn't mean you couldn't be checking email while I'm writing these words. (That'd be parallelism.) You and I are still exercising the same skills, though: writing a post and reading an email. Similarly, a well-decomposed program might have one function for calculating a Fibonacci number and a different one for a fractal sequence. Said program could be designed in such a way as to permit pausing one function and allowing the other to run for a while. That, again, is concurrency.
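That pause-and-resume design can be sketched with JavaScript generator functions, where each `yield` marks a point at which a function can be paused and another allowed to run. The step functions and the round-robin scheduler here are invented for illustration:

```javascript
// Two pausable functions: each `yield` hands control back to the scheduler.
function* fibonacciSteps(n) {
  let [a, b] = [0, 1];
  for (let i = 0; i < n; i++) {
    [a, b] = [b, a + b];
    yield `fib step ${i}: ${a}`; // pause here; something else may run now
  }
}

function* fractalSteps(n) {
  for (let i = 0; i < n; i++) {
    yield `fractal step ${i}`; // likewise yields control after each step
  }
}

// Round-robin scheduler: advance each task one step at a time.
function runInterleaved(...tasks) {
  const log = [];
  while (tasks.length > 0) {
    const task = tasks.shift();
    const { value, done } = task.next(); // resume the task until its next yield
    if (!done) {
      log.push(value);
      tasks.push(task); // re-queue it; it will be resumed again later
    }
  }
  return log;
}

const log = runInterleaved(fibonacciSteps(3), fractalSteps(3));
console.log(log);
```

The single thread never runs two functions at once, yet both make progress before either finishes: concurrency without parallelism.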

However, if we popped those functions into separate threads or forked one off to another process and ran on a multi-core machine, suddenly, we'd have parallelism, too!


Thus, parallelism may sometimes imply concurrency, but "concurrent" doesn't necessarily mean "parallel." The main difference is that parallelism has things running literally at the same time, while concurrency has them run interleaved.

Parallelism vs. concurrency

Parallelism is both trivially easy and wildly complex to achieve. Nearly every modern computing device has a multi-core processor, even extremely small and cheap ones. With that, we do get some parallelism for free: Slack really is running at the same time as the Google Docs tab I'm writing this in.

In one sense, if your whole computer were a single "program," then the work of building concurrency into it was done naturally as part of separately building out the different applications: chat, browser, terminal. It just happens to run in parallel because of the hardware it's running on.

Parallelism and Concurrency

As you might imagine, things get complicated from here. Adding threads, hyperthreads, co-processors (e.g., GPUs or VPUs), or entire other machines gives us multiple levels of parallelism and/or concurrency.

Either, both, or neither?

And so, which one is going on inside an overall system is often difficult to determine. (This is a big reason they're confusing and often mixed up or used interchangeably.)

Imagine a bunch of independent processes that you expect to take about the same amount of time to run (as in, they run in a reasonable amount of time and do, in fact, finish). You kick off the processes, they move on down to your multicore CPU, and, eventually, they complete and produce results.

[Diagram: processes → results]

Was the CPU running those processes concurrently, in parallel, both, or neither (serially and in a deterministically predictable order)? Does it even matter if everything completed successfully?

Imagine instead that those processes run on different CPUs in different servers but need to communicate with each other. We're now solidly in the realm of parallelism and concurrency in distributed systems.

If those processes can be run on any machine at any time, in any order, can communicate with each other (also at any time), and can access common resources (like a database), you're suddenly opening up a whole new world of complexity.


In a single system, those complexities are handled by the operating system, which acts as the mediator between a huge crowd of resource-hungry processes and the resources they're yearning for. This OS mediator was good enough for most of our computing needs for a long time (and is still absolutely necessary), giving us APIs for accessing shared resources (like the filesystem or network) from any process on the system.

But as our applications have grown to be bigger than a single machine, the single-system OS model breaks down: among other challenges, the kernel on machine A has no idea what resources are available on machine B. And yet, processes need some kind of compute resource; they may not care whether it's on machine A or B (as long as other properties like location, performance, etc. are maintained) and just need to be run on something.

Distributed Systems and Temporal

Therein lies the complication of a distributed system. Not only is it a highly parallel environment, there's nothing inherent in the system to mediate access to resources. Crucially, this matters because hardware, networks, and everything else fail eventually, and work gets lost unless it can be rescheduled on a "good" resource.

To mitigate this fact of distributed-system life, we inevitably end up needing to build out something that allows for recovery from failure or is tolerant of those failures in the first place. System design patterns like microservices vs. monoliths, event sourcing and CQRS, or message queues and producers/consumers arise in large part as a means to tolerate and recover from various failures.

Often, those patterns work by breaking an application down into smaller parts and persisting the state of what has already happened, thereby duplicating as little work as possible when something fails. (That is, albeit with a bit of a mental leap, a form of concurrency.)
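To make that concrete, here is a minimal sketch (all names invented) of the persist-as-you-go idea: each step records its result before moving on, so a re-run after a failure skips the work that already happened. An in-memory Map stands in for durable storage:

```javascript
const completed = new Map(); // stand-in for durable storage (e.g., a database)
let workDone = 0;            // counts how many times a step actually executes

function runStep(name, fn) {
  if (completed.has(name)) {
    return completed.get(name); // finished in an earlier run: skip, don't redo
  }
  workDone += 1;
  const result = fn();
  completed.set(name, result); // persist what happened before moving on
  return result;
}

function processOrder() {
  runStep('charge', () => 'charged $20');
  runStep('cook', () => 'food cooked');
  return runStep('deliver', () => 'delivered');
}

processOrder();                // first run executes all three steps
completed.delete('deliver');   // simulate a crash before 'deliver' was persisted
const result = processOrder(); // the re-run redoes only the unfinished step
console.log(workDone, result);
```

Real implementations add idempotency, retries, and transactional writes, but the skeleton — check the journal, do the step, record the step — is the same.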

Those patterns yield their own complications, though. As the system's complexity skyrockets, debugging problems, maintenance, and even just adding features all get difficult and error-prone.

Temporal as a Concurrent (and Parallel) Distributed System

While Temporal doesn't solve all problems inherent in highly parallel systems, it does mitigate most of them. By building an application with Temporal, you're getting parallelism, concurrency, and fault tolerance for free.

Imagine an online food ordering and delivery application where the main "business unit" is an order. It's mostly just a high-level conceptual view of the status of one item in the [distributed] system; each step in the order's lifecycle is independent of the others in that once it gets the work request, it can proceed without any further communication until it's finished. But the lifecycle as a whole is the fundamental concern of the customer ("Where's my order right now?").


If we treat the order itself as a single software entity, say, a function, it might look something like this pseudo-JavaScript:

async function orderStatus(order) {
    const paymentSuccessful = await validateAndPay(order);
    if (paymentSuccessful) {
        // [update status display for "✔︎ order placed"]
    } else {
        // [inform customer of failure]
    }

    const foodCooked = await sendToRestaurant(order);
    // [success/error handling as above]

    const pickedUp = await findDeliveryDriver(order);
    // [success/error handling as above]

    const delivered = await deliverOrder(order);
    // [success/error handling as above]

    return "Success!";
}
Assuming the OS-level process running this function is perfectly reliable and never goes down, JavaScript is able to concurrently run many, many instances of this function. JavaScript and many other languages operate on a single-threaded, non-blocking event loop model. Extremely briefly, this works as follows:

1. Code runs until finished or until it yields (via an await or a timer, e.g., setTimeout in JavaScript).

a. If yielding from an await or timer, the thing being awaited is added to a heap of tasks.

2. The main thread, aka the event loop, checks for ready-to-run tasks. These might be other functions, a timer whose time has come, or an awaited future/promise that has now been resolved.

a. If so, run them.

3. Repeat forever.
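The steps above can be modeled in a few lines of JavaScript. This is a deliberately tiny sketch (real event loops also handle I/O, promises, and microtasks), with all names invented:

```javascript
function runToyEventLoop(initialTasks) {
  const taskQueue = [...initialTasks]; // step 2's ready-to-run tasks
  const timers = [];                   // [dueAt, callback] pairs from step 1a
  let now = 0;
  const log = [];
  const setTimer = (cb, delay) => timers.push([now + delay, cb]);

  // Step 3: repeat forever (here: until there is nothing left to do).
  while (taskQueue.length > 0 || timers.length > 0) {
    // Step 1: run each ready task until it finishes or yields.
    while (taskQueue.length > 0) {
      const task = taskQueue.shift();
      task({ log, setTimer });
    }
    // Step 2: check for timers whose time has come; move them to the queue.
    now += 1;
    for (let i = timers.length - 1; i >= 0; i--) {
      if (timers[i][0] <= now) {
        taskQueue.push(timers[i][1]);
        timers.splice(i, 1);
      }
    }
  }
  return log;
}

const log = runToyEventLoop([
  ({ log, setTimer }) => {
    log.push('A: start');
    setTimer(() => log.push('A: resumed after timer'), 1);
  },
  ({ log }) => log.push('B: runs while A waits'),
]);
console.log(log);
```

Task B runs to completion in the gap where task A is waiting on its timer, which is exactly the interleaving the numbered steps describe.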

This allows two functions for, say, Order A and Order B, to run interleaved with each other and make independent progress, regardless of how long the other takes. Through this model of concurrency, you get a very strong illusion of parallelism: assuming the restaurants and delivery drivers take a random amount of time, sometimes A will make it through findDeliveryDriver before B even starts, and while A is waiting for deliverOrder to resolve, B completes. Other times, A will finish before B. These two functions appear to run in parallel.
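Here's a runnable version of that interleaving, with the real order steps replaced by plain delays (fixed rather than random timings, so the interleaving is visible and repeatable):

```javascript
const log = [];
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function orderStatus(name, stepMs) {
  log.push(`${name}: payment`);
  await delay(stepMs); // yields to the event loop; the other order can run
  log.push(`${name}: restaurant`);
  await delay(stepMs);
  log.push(`${name}: delivered`);
}

async function main() {
  // Start both orders without waiting for either one to finish first.
  await Promise.all([orderStatus('A', 10), orderStatus('B', 25)]);
  console.log(log);
}

const done = main();
```

Because Order A's steps are faster, A finishes entirely while B is still waiting on its restaurant, even though a single thread is doing all the work.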

Temporal works almost exactly like this. In fact, Temporal's SDKs always only run one thing at a time for a given Workflow, even when using ostensibly parallel things like the Go SDK's workflow.Go().

The fundamental difference, though, is that the queue of ready-to-run tasks is kept not in memory but in the Temporal Server. This way, you can have many, many "main threads, aka event loops," in the form of Temporal Workers.
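A toy model of that shape: one shared task queue (standing in for the Temporal Server) and several independent "worker" loops pulling from it. These names are illustrative, not the actual SDK API:

```javascript
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function runWorker(name, queue, results) {
  // Each worker is its own little event loop, claiming tasks as it goes.
  while (queue.length > 0) {
    const task = queue.shift(); // claim the next task from the shared queue
    results.push(`${name} ran ${task}`);
    await delay(1);             // yield so other workers can poll too
  }
}

async function main() {
  const queue = ['task-1', 'task-2', 'task-3', 'task-4'];
  const results = [];
  await Promise.all([
    runWorker('worker-A', queue, results),
    runWorker('worker-B', queue, results),
  ]);
  return results;
}

const done = main().then((results) => {
  console.log(results);
  return results;
});
```

In the real system, the queue outlives any one worker process: a worker can crash mid-run and another simply claims the next task, which is what the in-memory sketch cannot show.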

So, while the JavaScript event loop + task queue might look like this:

JavaScript Process

Temporal's version looks more like this:

An abstract cloud labeled "Worker Fleet" contains numerous boxes, each of which is labeled "Event Loop" and contains an arrow looping back on itself. Outside of the cloud is a box labeled "Temporal Server", which contains a representation of an infinitely-growing "Task Queue". There are bi-directional arrows between many of the "Event Loop" boxes and the "Temporal Server" box.


(Note: The Temporal Server is actually a cluster of several different services, not just a Task Queue.)

With this, Temporal gives you parallelism through being able to have, practically speaking, as many Workers as you need. It gives you concurrency through those Workers operating with a kind of event loop. It gives you distributed computing by supporting those Workers being run not just in one process but on whichever machines you want (as long as they can still reach the Temporal Server).

It also largely solves the problem of a single process crashing, losing connectivity, and losing work as a result: if the "tasks" created by an await are stored elsewhere, the processes running the event loop can come and go without any impact on the function's progress. As a result, the manual effort of implementing producers/consumers, message queues, event sourcing, or other architectural patterns is almost entirely removed.


Parallelism and concurrency are topics that are often conflated with each other. Most often, that's because if a single program is effectively and correctly designed to take advantage of parallelism, then it is also concurrent.

However, "only a single thing at a time" environments like JavaScript or Temporal Workers also have a high degree of concurrency, but without any internal parallelism when running a single program. (Parallelism is achieved only by running multiple JS VMs or Temporal Workers.)

Concurrency, therefore, is the ability of a program to handle different components, interleaving with each other as they run. Parallelism is the environment's ability to run more than one thing at a time.

Whether you were already comfortable with the distinction between these two concepts or not, hopefully this post has improved your sense of them and how they relate to Temporal applications.

