Logical Time has been re-worked, and is now VirtualTime (q.v.). This article is superseded.

Overview#

Logical time is a means for enforcing ordering between events in different threads based on a global logical clock.

  • Every event in an Orc program (meaning a site call or other observable behavior) has an associated logical time.
  • A thread may call the site Ltimer(n), which returns after exactly n units of logical time have elapsed globally (relative to the current logical time).
  • All events occurring at logical time n are guaranteed to happen before all events occurring at logical time n+1.
  • If an event occurring at time n was caused by an event at time m, then it is guaranteed that m is less than or equal to n.
  • Logical time is composable: Ltimer(1) >> Ltimer(1) is equivalent to Ltimer(2).
  • New logical timers can be created using the library function withLtimer. (For more on this, see "Multiple Timers" below.)

Ltimer is intuitively similar to Rtimer. The main difference is that Rtimer time advances in accordance with a physical clock, while Ltimer time advances only under specific conditions (detailed below).
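
For example, the following sketch (not part of the original article) runs three expressions in parallel. Because all events at time 1 happen before all events at time 2, "at T=1" is always published first, while the two branches that reach time 2 may publish in either order:

-- Sketch: ordering and composition of Ltimer delays
  Ltimer(1) >> "at T=1"
| Ltimer(2) >> "at T=2, one wait"
| Ltimer(1) >> Ltimer(1) >> "at T=2, two composed waits"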

Discrete event simulation is the prototypical application of logical time; if you are familiar with it, that intuition carries over directly. ChucK is another programming language which uses logical time, although its logical timers are coupled to real time, producing what they call "strong timing".

See the attachments for an initial attempt at a formal semantics of logical time.

How does time advance?#

The global logical clock will advance only when all threads are individually ready to advance.

Here is a diagram which illustrates three threads using logical time. Notice that when thread 2 calls Ltimer(2), it must block until T=2.

    THREAD 1    THREAD 2   THREAD 3
       |           |          |         T=0
       |           |       Ltimer(1)    BLOCK 3
       |        Ltimer(2)     .         BLOCK 2
       |           .          .
--- Ltimer(1) ---- . -------- v ------- T=1, RESUME 3
       |           .          |
       |           .          |
       |           .          |
--- Ltimer(1) ---- v ----- Ltimer(1) -- T=2, RESUME 2
       |           |          |
       v           v          v

When may any given thread advance? The difficulty, in general, is that before the clock can advance to time n+1, everything that should happen at time n must be finished, so a thread cannot advance to the next logical time until it "finishes what it is doing". Different versions of Orc have implemented different ways of measuring this.

Loose (versions <0.9.4)#

A thread may advance: during a call to Ltimer, or during any call which is non-immediate (i.e. one which does not produce an event "immediately", in some appropriate sense).

As a consequence, logical time is permitted to advance during, for example, Rtimer. There are significant flaws with this approach:

  1. It is impossible to predict the relative logical time at which events preceded by non-immediate site calls will occur. For example, in the program Rtimer(1) >> Ltimer(1) | Ltimer(1) >> Ltimer(1), the Ltimer call following Rtimer(1) may return at the same logical time as either of the other two Ltimer calls, or after both of them.
  2. You cannot change an implementation of a site from immediate to non-immediate without affecting its logical semantics. For example, say a single step of a simulation needs to do a currency conversion. You cannot use the Google web service for this purpose, even if you incorporate a timeout to ensure the call always returns, because the service would be non-immediate and would therefore cause the step to take more than one logical time unit (see the sketch after this list).
  3. It is not clear what a good definition of an "immediate" site would be.
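
For instance, the following sketch of flaw 2 assumes a hypothetical non-immediate web service site CurrencyRate (not part of any real library); the val declaration takes whichever branch publishes first, so the step always completes:

val rate = ( CurrencyRate("USD", "EUR")   -- hypothetical non-immediate external call
           | Rtimer(1000) >> 1.0 )        -- timeout so the step always completes
  rate >> Ltimer(1) >> "conversion step done"
| Ltimer(1) >> "other simulation thread at T=1"

Under the Loose rule, logical time may advance while CurrencyRate (or its Rtimer timeout) is pending, so the conversion step may finish at logical time 2 or later, even though it was intended to take a single unit.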

Here is a diagram which illustrates two threads using logical time. Notice that when thread 2 blocks, it is impossible to predict the logical time at which it will unblock -- this depends entirely on how fast the simulation engine runs.

    THREAD 1     THREAD 2
       |            |         T=0
       |         Rtimer(1)    BLOCK 2
       |            .
--- Ltimer(1) ----- . ------- T=1
       |            .
--- Ltimer(1) ----- . ------- T=2
       |            .
       |            v         UNBLOCK 2
       |            |
       v            v

Strict (version 0.9.4)#

A thread may advance: only during a call to Ltimer.

One consequence of this rule is that every thread must be able to determine in advance exactly how long any logical process will take (so that it knows the value to pass to Ltimer). While this is acceptable in many simulations, in others it is not. For example, in dining philosophers, when a thread attempts to take a fork, it does not know in advance how long that fork will be held (only the thread holding the fork knows this). There are inconvenient workarounds, such as polling for a signal every logical timestep, or asking the thread holding the fork how long a request will take. However, the obvious solution, using a blocking semaphore to communicate between the threads, is not permitted, since logical time could not advance while the semaphore is blocked.
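
To make the last point concrete, here is a sketch (not part of the original article) of the disallowed "obvious solution". Thread 2 blocks at s.acquire(), which is not an Ltimer call, so under the Strict rule the clock can never advance, Ltimer(1) never returns, and the program deadlocks:

val s = Semaphore(0)
  Ltimer(1) >> s.release()        -- wants to release the fork at T=1
| s.acquire() >> "got the fork"   -- blocks, and is never considered ready to advance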

Here is a diagram which illustrates two threads using logical time. Notice that logical time does not advance until thread 2's Rtimer has unblocked and thread 2 has called Ltimer.

  THREAD 1    THREAD 2
     |           |         T=0
     |        Rtimer(1)    BLOCK 2
     |           .
  Ltimer(1)      .         BLOCK 1
     .           .
     .           v         UNBLOCK 2
     .           |
---- v ------ Ltimer(1) -- T=1, UNBLOCK 1
     |           |
     v           v

Causal (versions >=0.9.5)#

A thread may advance:

  • during a call to Ltimer
  • during any site call whose return is causally dependent on another thread
  • while waiting for a future to receive a value (which also introduces a causal dependence between threads)

This concept is closely related to Lamport's logical clock, with a fundamental difference: Lamport's clock observes event ordering constraints arising from communication. Orc's logical clock imposes event ordering constraints above and beyond those arising from communication. In both cases, an effect's logical time is dictated by its cause. This adds expressive power because a thread no longer has to know in advance how much logical time a site call will take; instead some other thread causes the site call to return at the appropriate time.

Here is a diagram illustrating how one thread can use a semaphore to wait while some other thread advances logical time.

  THREAD 1      THREAD 2
     |             |         T=0
     |         s.acquire()   BLOCK 2
     |             .
-- Ltimer(1) ----- . ------- T=1
     |             .
-- Ltimer(1) ----- . ------- T=2
     |             .
  s.release() ---> v         UNBLOCK 2
     |             |
     v             v
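
The diagram corresponds roughly to the following sketch (not part of the original article). Under the Causal rule, thread 2 is quiescent while blocked at s.acquire(), so thread 1's Ltimer calls can advance the clock; the release at T=2 then causes the acquire to return at T=2:

val s = Semaphore(0)
  Ltimer(1) >> Ltimer(1) >> s.release() >> "released at T=2"
| s.acquire() >> "acquired at T=" + Ltimer.time()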

In practice it is impossible to enforce this rule exactly, because we may not be able to determine dynamically whether a thread is blocked at a site which is causally dependent on another thread. For example, two threads may communicate via email (one thread sends email and the other polls for it), but we don't know whether the polling thread will receive an email from the sending thread or from some external, causally-unrelated source. Or, to give a more complex example, two threads may attempt to use real time to introduce a causal dependence (via temporal ordering) -- this is especially problematic since we don't have a good semantics for real time.

Therefore we classify sites into those which introduce logical causal dependencies and those which do not. It is up to the implementor of these sites to inform the engine when a causal dependency is introduced, and it is up to the programmer to utilize the sites correctly.

For example, logical time can advance during calls to sites like:

  • Buffer.get
  • Semaphore.acquire
  • Counter.onZero

And logical time cannot advance during calls to sites like:

  • Rtimer
  • Prompt
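
For example, in the following sketch (not part of the original article) the consumer blocks at c.get(), which counts as quiescent, so the producer's Ltimer(3) can advance the clock. Had the consumer been waiting at Rtimer instead, the clock could not have advanced past it:

val c = Buffer()
  Ltimer(3) >> c.put("item")                            -- producer waits three logical units
| c.get() >x> (x + " consumed at T=" + Ltimer.time())   -- quiescent while the buffer is empty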

ChucK's timers have a similar notion: time can advance either during an explicit setting of now (analogous to our Ltimer site) or during an event handler. Events are used by threads to communicate with each other and introduce causal dependencies between threads. All other synchronization and communication is implemented in terms of events.

Multiple Timers#

Multiple logical timers in a single program sound reasonable, but what is the relationship between two timers? In real time, physics imposes some relationship. In logical time, the possible relationships are less clear. Two useful and easy-to-describe relationships are:

  • Run a simulation within another.
  • Run two independent simulations simultaneously.
(Aside: this sounds a lot like how state machines can be composed: two state machines can be run independently, in parallel, or one can be run within a single state of another. Can we exploit this analogy more formally?)

Logical timers with dynamic scope give rise to these relationships. Dynamic scope means that the timer is accessible only during the execution of the expression over which the timer is scoped. When a new timer is created within the dynamic scope of another, it is "nested" in that other timer. The nesting of timers gives rise to a tree structure, so that each execution belongs to exactly one immediate parent timer, and each timer belongs to at most one parent timer (the root timer has no parent).

To create a new timer, we use the library site withLtimer, as follows:

withLtimer(lambda () = F)

withLtimer creates a new timer which is a child of the current timer and executes the expression F in the context of that timer. During the execution of F, any call to Ltimer refers implicitly to the timer created by withLtimer. This indirection ensures that it is impossible to refer to a timer directly, so you cannot pass a timer to a site and allow it to escape its dynamic extent. It also ensures that we can easily tell which timer a given execution is running under.
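
For example, in this sketch (not part of the original article) the Ltimer call inside withLtimer refers to the newly created child timer, while the Ltimer call outside refers to the enclosing timer. Because a parent timer only advances once its child is quiescent (see "Time Dependencies" below), the inner publication always appears first:

  withLtimer(lambda () =
      Ltimer(1) >> "inner timer is at " + Ltimer.time()
  )
| Ltimer(2) >> "outer timer is at " + Ltimer.time()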

Time Dependencies#

What is the relationship between time in different logical clocks? We have considered two different alternatives.

Strict Nesting (versions <=0.9.6)#

A parent timer can only advance when its child timers have halted completely. This means that a child simulation always takes place during a single logical timestep of its parent. This rule has the advantage of being easy to describe and understand, but it has several disadvantages:

  • more complicated to implement
  • introduces an artificial distinction between tokens that are in a sub-clock and tokens that are not
  • less powerful (although we don't have any good examples of how nested clocks are used, so this is hard to argue)
  • easy to deadlock

Priority (version 0.9.7)#

A parent timer can advance when the inner timer becomes quiescent (i.e. incapable of advancing on its own, because all tokens are quiescent and there are no future events scheduled). Note that this rule is logically consistent: every event is assigned a logical time under every outer timer, and a cause is never assigned a logical time after any of its effects under any timer.

This rule means that clocks are transparent to their parents: as far as the parent is concerned, tokens involved in a child clock are no different from any other tokens. As a result, we get nice guarantees about when it is safe to add a child clock. Specifically, if two events are causally unordered, then it is always safe to add a child clock over any scope which orders the two events.

To see why that is not true of the "Strict Nesting" rule, consider the following program:

val s = Semaphore(0)
  M() >> s.acquire()
| N()
| Ltimer(1) >> s.release()

The site calls M() and N() are causally unordered in this program. Let's add a timer to enforce an ordering on them:

val s = Semaphore(0)
withLtimer(lambda() =
    M() >> s.acquire()
  | Ltimer(1) >> N()
)
| Ltimer(1) >> s.release()

The nested timer does not halt until s.acquire() returns. Under the "Strict Nesting" rule, therefore, the outer timer cannot advance and s.acquire() will never return. Under the looser "Priority" rule, however, once the nested timer becomes quiescent at s.acquire(), the outer timer is allowed to advance and the program completes.

Another nice feature of this rule is that it now becomes reasonable to allow a token to refer to any parent clock (not just its immediate local clock) and advance time on that clock. However, it is not clear how to express this syntactically: using ordinary variables to refer to clocks is problematic, since variables do not have dynamic scope (as clocks do), and a clock value stored in a variable may escape its dynamic scope by being passed to a site.

Causal Dependencies#

It would be nice to require that a site call which has a causal dependency on an independent (i.e. non-parent) logical timer takes zero logical time. This rule embodies the Lamportian view that causal relationships carry logical time: since a purely external event cannot carry any logical time, it should not advance the local logical clock.

This rule accords with our intuition that logical time should be self-contained: it should only advance in response to events internal to the simulation. As with our original Causal rule, this rule is impossible to enforce exactly, so we must approximate it by requiring that certain sites are only used to communicate within the same logical timer. Here is a diagram illustrating the rule in action:

  LTIMER 1      LTIMER 2
    T=0           T=0
     |             |
    T=1        s.acquire()   BLOCK 2
     |             .  
    T=2            .
     |             .
  s.release() ---> v         UNBLOCK 2
     |             |
     |            T=1
     v             v

In theory we should provide versions of Buffer, Semaphore, etc. which can be used between different logical timers. In practice, we expect that independent simulations are truly independent and that there is no reason to synchronize between them at all.

Implementation#

Each token is associated with a logical clock in the manner of a region. In fact, logical clocks are exactly analogous to regions in the sense that they have dynamic scope and form a tree structure. The only reason for not implementing logical clocks as regions is efficiency -- since most regions are not logical clocks, messages intended for a clock would have to traverse many other regions unnecessarily before reaching it.

Each logical clock has a counter which tracks the number of tokens that are not yet ready for logical time to advance. When a token blocks waiting for logical time to advance (either at a call to Ltimer or at some causally-dependent site), it becomes quiescent and decrements its clock's counter. When a token is created or leaves the quiescent state, it increments its clock's counter. When the counter reaches zero, all tokens are quiescent, and the clock advances the logical time to the next scheduled event. If there is no next event, the clock itself is quiescent.

Child clocks are counted similarly to tokens. When a child clock is created or becomes non-quiescent, it increments its parent's counter, and when the clock becomes quiescent, it decrements its parent's counter.

Examples#

Here are some example programs which explore various aspects of logical time.

This program runs three threads on two clocks.

  1. Outer clock, thread 1: repeatedly puts items into a buffer.
  2. Inner clock, thread 2: repeatedly pulls items from the buffer.
  3. Inner clock, thread 3: waits for one logical time unit and prints something.

val c = Buffer()
repeat(lambda () = c.put(signal)) >> stop
| withLtimer(lambda () =
      repeat(c.get) >> "Got"
    | repeat(lambda () = Ltimer(1) >> "Inner " + Ltimer.time())
  )

You might expect that this program never advances the logical clock, since thread 1 keeps the buffer full. However, the scheduler will sometimes run c.get before the corresponding c.put, so thread 2 becomes quiescent temporarily and the clock advances. This illustrates the danger of having events caused by threads outside the logical clock.

If we move thread 1 inside the clock (equivalently, run all three threads on a single clock, as below), then we get the expected behavior (the clock never advances), because even when thread 2 becomes quiescent, the clock observes that thread 1 is still busy:

val c = Buffer()
  repeat(lambda () = c.put(signal)) >> stop
| repeat(c.get) >> "Got"
| repeat(lambda () = Ltimer(1) >> "Inner " + Ltimer.time())

List of attachments

  • ltimer_semantics.pdf (69.7 kB, version 2, 08-May-2009, AdrianQuark): revised to explicitly signal site returns