Trio’s core functionality

Entering trio

If you want to use trio, then the first thing you have to do is call trio.run():

trio.run(async_fn, *args, clock=None, instruments=[])

Run a trio-flavored async function, and return the result.

Calling:

run(async_fn, *args)

is the equivalent of:

await async_fn(*args)

except that run() can (and must) be called from a synchronous context.

This is trio’s main entry point. Almost every other function in trio requires that you be inside a call to run().

Parameters:
  • async_fn – An async function.
  • args – Positional arguments to be passed to async_fn. If you need to pass keyword arguments, then use functools.partial().
  • clock – None to use the default system-specific monotonic clock; otherwise, an object implementing the trio.abc.Clock interface, like (for example) a trio.testing.MockClock instance.
  • instruments (list of trio.abc.Instrument objects) – Any instrumentation you want to apply to this run. This can also be modified during the run; see Debugging and instrumentation.
Returns:

Whatever async_fn returns.

Raises:
  • TrioInternalError – if an unexpected error is encountered inside trio’s internal machinery. This is a bug and you should let us know.
  • Anything else – if async_fn raises an exception, then we propagate it.
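
For example, here’s a minimal sketch of calling run() (the greet function is made up for illustration):

import functools
import trio

async def greet(name, punctuation="!"):
    await trio.sleep(1)
    return "Hello, " + name + punctuation

# Positional arguments are passed straight through:
print(trio.run(greet, "world"))                # prints "Hello, world!"
# Keyword arguments go through functools.partial():
print(trio.run(functools.partial(greet, "world", punctuation="?")))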

General notes

Yield points

When writing code using trio, it’s very important to understand the concept of a yield point.

A yield point is two things:

  1. It’s a point where we check for cancellation, and can potentially raise a Cancelled error. See Cancellation and timeouts below for more details.
  2. It’s a point where the trio scheduler has the option of switching to a different task. (Currently, this scheduler is very simple: it simply cycles through all the runnable tasks, running each one until it hits its first yield point. But this might change in the future.)

So you need to be aware of yield points for two reasons. First, you need to know where they are so you can make sure you’re prepared to handle a Cancelled error, or for another task to run and rearrange some state out from under you. And second, you need to know where they are so you can make sure your code hits them fairly frequently: if you go too long before executing a yield point, then you’ll be slow to notice cancellation and – much worse – you’ll adversely affect the response latency of all the other code running in the same process, because in a cooperative multi-tasking system like trio, nothing else can run until you hit a yield point.

To help you out here, Trio tries to make this as simple as possible. If an operation blocks (e.g., trying to read from a socket when there’s no data available), then that has to be a yield point. (Otherwise timeouts wouldn’t work, and we couldn’t get other stuff done while waiting for data to arrive.) Therefore, we have a policy for the public functions exposed by trio: if a function can block, then it is always a yield point. (So if you try to read from a socket and there is data available, that doesn’t block but it’s still a yield point.)

How do we know which functions can block? They’re the async functions, of course. So this gives us our general rule for identifying yield points: if you see an await, then it might be a yield point, and if it’s await <something in trio> then it’s definitely a yield point.
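
For example, here’s a minimal sketch with each case labelled:

import trio

async def example():
    await trio.sleep(1)        # await on a trio function: definitely a yield point
    total = sum(range(10**6))  # plain synchronous code: never a yield point
    await trio.sleep(0)        # an explicit yield point that doesn't block
    return total

trio.run(example)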

[Note: we really need to come up with a less confusing name for yield points]

Thread safety

The vast majority of trio’s API is not thread safe: it can only be used from inside a call to trio.run(). We don’t bother documenting this on individual calls; unless specifically noted otherwise, you should assume that it isn’t safe to call any trio functions from anywhere except the trio thread. (But see below if you really do need to work with threads.)

Time and clocks

Every call to run() has an associated clock.

By default, trio uses an unspecified monotonic clock, but this can be changed by passing a custom clock object to run() (e.g. for testing).

You should not assume that trio’s internal clock matches any other clock you have access to, including the clocks of other concurrent calls to trio.run()!

The default clock is currently implemented as time.monotonic() plus a large random offset. The idea here is to catch code that accidentally uses time.monotonic() early, which should help keep our options open for changing the clock implementation later, and (more importantly) make sure you can be confident that custom clocks like trio.testing.MockClock will work with third-party libraries you don’t control.

trio.current_time()

Returns the current time according to trio’s internal clock.

Returns: The current time.
Return type: float
Raises: RuntimeError – if not inside a call to trio.run().
await trio.sleep(seconds)

Pause execution of the current task for the given number of seconds.

Parameters: seconds (float) – The number of seconds to sleep. May be zero to insert a yield point without actually blocking.
Raises: ValueError – if seconds is negative.
await trio.sleep_until(deadline)

Pause execution of the current task until the given time.

The difference between sleep() and sleep_until() is that the former takes a relative time and the latter takes an absolute time.

Parameters: deadline (float) – The time at which we should wake up again. May be in the past, in which case this function yields but does not block.
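
As a sketch of how the two relate, these two lines each pause for (roughly) five seconds:

await trio.sleep(5)
await trio.sleep_until(trio.current_time() + 5)
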
await trio.sleep_forever()

Pause execution of the current task forever (or until cancelled).

Equivalent to calling await sleep(math.inf).

If you’re a mad scientist or otherwise feel the need to take direct control over the PASSAGE OF TIME ITSELF, then you can implement a custom Clock class:

class trio.abc.Clock

The interface for custom run loop clocks.

abstractmethod start_clock()

Do any setup this clock might need.

Called at the beginning of the run.

abstractmethod current_time()

Return the current time, according to this clock.

This is used to implement functions like trio.current_time() and trio.move_on_after().

Returns: The current time.
Return type: float
abstractmethod deadline_to_sleep_time(deadline)

Compute the real time until the given deadline.

This is called before we enter a system-specific wait function like select.select(), to get the timeout to pass.

For a clock using wall-time, this should be something like:

return deadline - self.current_time()

but of course it may be different if you’re implementing some kind of virtual clock.

Parameters: deadline (float) – The absolute time of the next deadline, according to this clock.
Returns: The number of real seconds to sleep until the given deadline. May be math.inf.
Return type: float
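
As an illustration, here’s a minimal sketch of a virtual-time-flavored clock where time passes faster than real time (ScaledClock, its speed parameter, and the main function in the usage note are invented for this example, not part of trio):

import time
import trio

class ScaledClock(trio.abc.Clock):
    def __init__(self, speed=10.0):
        self._speed = speed
        self._offset = None

    def start_clock(self):
        # Called once at the beginning of the run.
        self._offset = time.monotonic()

    def current_time(self):
        # Virtual seconds elapse `speed` times faster than real seconds.
        return (time.monotonic() - self._offset) * self._speed

    def deadline_to_sleep_time(self, deadline):
        # Convert a virtual deadline back into real seconds to wait.
        return (deadline - self.current_time()) / self._speed

# Usage: trio.run(main, clock=ScaledClock(20.0))
# => await trio.sleep(10) returns after half a real second.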

You can also fetch a reference to the current clock, which might be useful if you’re using a custom clock class:

trio.current_clock()

Returns the current Clock.

Cancellation and timeouts

Trio has a rich, composable system for cancelling work, either explicitly or when a timeout expires.

A simple timeout example

In the simplest case, you can apply a timeout to a block of code:

with trio.move_on_after(30):
    result = await do_http_get("https://...")
    print("result is", result)
print("with block finished")

We refer to move_on_after() as creating a “cancel scope” that contains all the code that runs inside the with block.

If the HTTP request finishes in less than 30 seconds, then result is set and the with block exits normally. If it does not finish within 30 seconds, then the cancel scope becomes “cancelled”, and the next time do_http_get attempts to call a blocking trio operation it will fail immediately with a Cancelled exception. This exception eventually propagates out of do_http_get, is caught by the with block, and execution then continues on the line print("with block finished").

Note

Note that this is a simple 30 second timeout for the entire body of the with statement. This is different from what you might have seen with other Python libraries, where timeouts often refer to something more complicated. We think this way is easier to reason about.

Handling cancellation

Pretty much any code you write using trio needs to have some strategy to handle Cancelled exceptions – even if you didn’t set a timeout, your caller might (and probably will).

You can catch Cancelled, but you shouldn’t! Or more precisely, if you do catch it, then you should do some cleanup and then re-raise it or otherwise let it continue propagating (unless you encounter an error, in which case it’s OK to let that propagate instead). To help remind you of this fact, Cancelled inherits from BaseException, like KeyboardInterrupt and SystemExit do, so that it won’t be caught by catch-all except Exception: blocks.

It’s also important in any long-running code to make sure that you regularly check for cancellation, because otherwise timeouts won’t work! This happens implicitly every time you call a cancellable operation; see below for details. If you have a task that has to do a lot of work without any IO, then you can use await sleep(0) to insert an explicit cancel+schedule point.

Here’s a rule of thumb for designing good trio-style (“trionic”?) APIs: if you’re writing a reusable function, then you shouldn’t take a timeout= parameter, and instead let your caller worry about it. This has several advantages. First, it leaves the caller’s options open for deciding how they prefer to handle timeouts – for example, they might find it easier to work with absolute deadlines instead of relative timeouts. If they’re the ones calling into the cancellation machinery, then they get to pick, and you don’t have to worry about it. Second, and more importantly, this makes it easier for others to re-use your code. If you write a http_get function, and then I come along later and write a log_in_to_twitter function that needs to internally make several http_get calls, I don’t want to have to figure out how to configure the individual timeouts on each of those calls – and with trio’s timeout system, it’s totally unnecessary.

Of course, this rule doesn’t apply to APIs that need to impose internal timeouts. For example, if you write a start_http_server function, then you probably should give your caller some way to configure timeouts on individual requests.
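
Concretely, the difference looks something like this (http_get here is hypothetical, as above):

# Instead of baking a timeout into the API...
#     await http_get("https://...", timeout=10)
# ...let the caller pick whichever cancel scope suits them:
with trio.move_on_after(10):
    await http_get("https://...")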

Cancellation semantics

You can freely nest cancellation blocks, and each Cancelled exception “knows” which block it belongs to. So long as you don’t stop it, the exception will keep propagating until it reaches the block that raised it, at which point it will stop automatically.

Here’s an example:

print("starting...")
with trio.move_on_after(5):
    with trio.move_on_after(10):
        await sleep(20)
        print("sleep finished without error")
    print("move_on_after(10) finished without error")
print("move_on_after(5) finished without error")

In this code, the outer scope will expire after 5 seconds, causing the sleep() call to return early with a Cancelled exception. Then this exception will propagate through the with move_on_after(10) line until it’s caught by the with move_on_after(5) context manager. So this code will print:

starting...
move_on_after(5) finished without error

The end result is that we have successfully cancelled exactly the work that was happening within the scope that was cancelled.

Looking at this, you might wonder how we can tell whether the inner block timed out – perhaps we want to do something different, like try a fallback procedure or report a failure to our caller. To make this easier, move_on_after()‘s __enter__ function returns an object representing this cancel scope, which we can use to check whether this scope caught a Cancelled exception:

with trio.move_on_after(5) as cancel_scope:
    await sleep(10)
print(cancel_scope.cancelled_caught)  # prints "True"

The cancel_scope object also allows you to check or adjust this scope’s deadline, explicitly trigger a cancellation without waiting for the deadline, check if the scope has already been cancelled, and so forth – see open_cancel_scope() below for the full details.

Cancellations in trio are “level triggered”, meaning that once a block has been cancelled, all cancellable operations in that block will keep raising Cancelled. This helps avoid some pitfalls around resource clean-up. For example, imagine that we have a function that connects to a remote server and sends some messages, and then cleans up on the way out:

with trio.move_on_after(TIMEOUT):
    conn = make_connection()
    try:
        await conn.send_hello_msg()
    finally:
        await conn.send_goodbye_msg()

Now suppose that the remote server stops responding, so our call to await conn.send_hello_msg() hangs forever. Fortunately, we were clever enough to put a timeout around this code, so eventually the timeout will expire and send_hello_msg will raise Cancelled. But then, in the finally block, we perform another blocking operation, which will also hang forever! At this point, if we were using asyncio or another library with “edge-triggered” cancellation, we’d be in trouble: since our timeout already fired, it wouldn’t fire again, and at this point our application would lock up. But in trio, this doesn’t happen: the await conn.send_goodbye_msg() call is still inside the cancelled block, so it will also raise Cancelled.

Of course, if you really want to make another blocking call in your cleanup handler, trio will let you; it’s just trying to prevent you from accidentally shooting yourself in the foot. Intentional foot-shooting is no problem (or at least – it’s not trio’s problem). To do this, create a new scope, and set its shield attribute to True:

with trio.move_on_after(TIMEOUT):
    conn = make_connection()
    try:
        await conn.send_hello_msg()
    finally:
        with move_on_after(CLEANUP_TIMEOUT) as cleanup_scope:
            cleanup_scope.shield = True
            await conn.send_goodbye_msg()

So long as you’re inside a scope with shield = True set, then you’ll be protected from outside cancellations. Note though that this only applies to outside cancellations: if CLEANUP_TIMEOUT expires then await conn.send_goodbye_msg() will still be cancelled, and if the await conn.send_goodbye_msg() call uses any timeouts internally, then those will continue to work normally as well. This is a pretty advanced feature that most people probably won’t use, but it’s there for the rare cases where you need it.

Cancellation and primitive operations

We’ve talked a lot about what happens when an operation is cancelled, and how you need to be prepared for this whenever calling a cancellable operation... but we haven’t said which operations are cancellable.

Here’s the rule: if it’s in the trio namespace, and you use await to call it, then it’s cancellable (see Yield points above). Cancellable means:

  • If you try to call it when inside a cancelled scope, then it will raise Cancelled.
  • If it blocks, and one of the scopes around it becomes cancelled while it’s blocked, then it will return early and raise Cancelled.
  • Raising Cancelled means that the operation did not happen. If a trio socket’s send method raises Cancelled, then no data was sent. If a trio socket’s recv method raises Cancelled then no data was lost – it’s still sitting in the socket receive buffer waiting for you to call recv again. And so forth.

There are a few idiosyncratic cases where external constraints make it impossible to fully implement these semantics. These are always documented. There is also one systematic exception:

  • Async cleanup operations – like __aexit__ methods or async close methods – are cancellable just like anything else except that if they are cancelled, they still perform a minimum level of cleanup before raising Cancelled.

For example, closing a TLS-wrapped socket normally involves sending a notification to the remote peer, so that they can be cryptographically assured that you really meant to close the socket, and your connection wasn’t just broken by a man-in-the-middle attacker. But handling this robustly is a bit tricky. Remember our example above where the blocking send_goodbye_msg caused problems? That’s exactly how closing a TLS socket works: if the remote peer has disappeared, then we may never be able to actually send our shutdown notification, and it would be nice if we didn’t block forever trying. Therefore, the close method on a TLS-wrapped socket will try to send that notification – but if it gets cancelled, then it will give up on sending the message but will still close the underlying socket before raising Cancelled, so at least you don’t leak that resource.

Cancellation API details

The primitive operation for creating a new cancellation scope is:

with trio.open_cancel_scope(*, deadline=inf, shield=False) as cancel_scope

Returns a context manager which creates a new cancellation scope.

Cancel scope objects provide the following interface:

deadline

Read-write, float. An absolute time on the current run’s clock at which this scope will automatically become cancelled. You can adjust the deadline by modifying this attribute, e.g.:

# I need a little more time!
cancel_scope.deadline += 30

Note that the core run loop alternates between running tasks and processing deadlines, so if the very first yield point after the deadline expires doesn’t actually block, then it may complete before we process deadlines:

with trio.open_cancel_scope(deadline=current_time()):
    # current_time() is now >= deadline, so cancel should fire,
    # at the next yield point. BUT, if the next yield point
    # completes instantly -- e.g., a recv on a socket that
    # already has data pending -- then the operation may
    # complete before we process deadlines, and then it's too
    # late to cancel (the data's already been read from the
    # socket):
    await sock.recv(1)

    # But the next call after that *is* guaranteed to raise
    # Cancelled:
    await sock.recv(1)

Defaults to math.inf, which means “no deadline”, though this can be overridden by the deadline= argument to open_cancel_scope().

shield

Read-write, bool, default False. So long as this is set to True, then the code inside this scope will not receive Cancelled exceptions from scopes that are outside this scope. They can still receive Cancelled exceptions from (1) this scope, or (2) scopes inside this scope. You can modify this attribute:

with trio.open_cancel_scope() as cancel_scope:
    cancel_scope.shield = True
    # This cannot be interrupted by any means short of
    # killing the process:
    await sleep(10)

    cancel_scope.shield = False
    # Now this can be cancelled normally:
    await sleep(10)

Defaults to False, though this can be overridden by the shield= argument to open_cancel_scope().

cancel()

Cancels this scope immediately.

This method is idempotent, i.e. if the scope was already cancelled then this method silently does nothing.

cancel_called

Readonly bool. Records whether this scope has been cancelled, either by an explicit call to cancel() or by the deadline expiring.

cancelled_caught

Readonly bool. Records whether this scope caught a Cancelled exception. This requires two things: (1) the with block exited with a Cancelled exception, and (2) this scope is the one that was responsible for triggering this Cancelled exception.

We also provide several convenience functions for the common situation of just wanting to impose a timeout on some code:

with trio.move_on_after(seconds) as cancel_scope

Use as a context manager to create a cancel scope whose deadline is set to now + seconds.

Parameters: seconds (float) – The timeout.
Raises: ValueError – if seconds is less than zero.
with trio.move_on_at(deadline) as cancel_scope

Use as a context manager to create a cancel scope with the given absolute deadline.

Parameters: deadline (float) – The deadline.
with trio.fail_after(seconds) as cancel_scope

Creates a cancel scope with the given timeout, and raises an error if it is actually cancelled.

This function and move_on_after() are similar in that both create a cancel scope with a given timeout, and if the timeout expires then both will cause Cancelled to be raised within the scope. The difference is that when the Cancelled exception reaches move_on_after(), it’s caught and discarded. When it reaches fail_after(), then it’s caught and TooSlowError is raised in its place.

Raises: TooSlowError – if a Cancelled exception is raised in this scope and caught by the context manager.
with trio.fail_at(deadline) as cancel_scope

Creates a cancel scope with the given deadline, and raises an error if it is actually cancelled.

This function and move_on_at() are similar in that both create a cancel scope with a given absolute deadline, and if the deadline expires then both will cause Cancelled to be raised within the scope. The difference is that when the Cancelled exception reaches move_on_at(), it’s caught and discarded. When it reaches fail_at(), then it’s caught and TooSlowError is raised in its place.

Raises: TooSlowError – if a Cancelled exception is raised in this scope and caught by the context manager.

Cheat sheet:

  • If you want to impose a timeout on a function, but you don’t care whether it timed out or not:

    with trio.move_on_after(TIMEOUT):
        await do_whatever()
    # carry on!
    
  • If you want to impose a timeout on a function, and then do some recovery if it timed out:

    with trio.move_on_after(TIMEOUT) as cancel_scope:
        await do_whatever()
    if cancel_scope.cancelled_caught:
        # The operation timed out, try something else
        try_to_recover()
    
  • If you want to impose a timeout on a function, and then if it times out then just give up and raise an error for your caller to deal with:

    with trio.fail_after(TIMEOUT):
        await do_whatever()
    

It’s also possible to check what the current effective deadline is, which is sometimes useful:

trio.current_effective_deadline()

Returns the current effective deadline for the current task.

This function examines all the cancellation scopes that are currently in effect (taking into account shielding), and returns the deadline that will expire first.

One example of where this might be useful is if your code is trying to decide whether to begin an expensive operation like an RPC call, but wants to skip it if it knows that it can’t possibly complete in the available time. Another example would be if you’re using a protocol like gRPC that propagates timeout information to the remote peer; this function gives a way to fetch that information so you can send it along.

If this is called in a context where a cancellation is currently active (i.e., a blocking call will immediately raise Cancelled), then the returned deadline is -inf. If it is called in a context where no scopes have a deadline set, it returns inf.

Returns: the effective deadline, as an absolute time.
Return type: float
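
For example, here’s a rough sketch of the skip-the-doomed-RPC case (expensive_rpc_call and the cost estimate are hypothetical):

async def maybe_expensive_rpc():
    ESTIMATED_RPC_SECONDS = 5  # assumed cost of the call
    remaining = trio.current_effective_deadline() - trio.current_time()
    if remaining < ESTIMATED_RPC_SECONDS:
        return None  # give up early instead of starting doomed work
    return await expensive_rpc_call()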

Tasks let you do multiple things at once

One of trio’s core design principles is: no implicit concurrency. Every function executes in a straightforward, top-to-bottom manner, finishing each operation before moving on to the next – like Guido intended.

But, of course, the entire point of an async library is to let you do multiple things at once. The one and only way to do that in trio is through the task spawning interface. So if you want your program to walk and chew gum, this is the section for you.

Nurseries and spawning

Most libraries for concurrent programming let you spawn new child tasks (or threads, or whatever) willy-nilly, whenever and wherever you feel like it. Trio is a bit different: you can’t spawn a child task unless you’re prepared to be a responsible parent. The way you demonstrate your responsibility is by creating a nursery:

async with trio.open_nursery() as nursery:
    ...

And once you have a reference to a nursery object, you can spawn children into that nursery:

async def child():
    ...

async def parent():
    async with trio.open_nursery() as nursery:
        # Make two concurrent calls to child()
        nursery.spawn(child)
        nursery.spawn(child)

This means that tasks form a tree: when you call run(), then this creates an initial task, and all your other tasks will be children, grandchildren, etc. of the initial task.

The crucial thing about this setup is that when we exit the async with block, then the nursery cleanup code runs. The nursery cleanup code does the following things:

  • If the body of the async with block raised an exception, then it cancels all remaining child tasks and saves the exception.
  • It watches for child tasks to exit. If a child task exits with an exception, then it cancels all remaining child tasks and saves the exception.
  • Once all child tasks have exited:
    • It marks the nursery as “closed”, so no new tasks can be spawned in it.
    • If there’s just one saved exception, it re-raises it, or
    • If there are multiple saved exceptions, it re-raises them as a MultiError, or
    • If there are no saved exceptions, it exits normally.

Since all tasks are descendants of the initial task, one consequence of this is that run() can’t finish until all tasks have finished.

Getting results from child tasks

The spawn method returns a Task object that can be used for various things – and in particular, for retrieving the task’s return value. Example:

async def child_fn(x):
    return 2 * x

async with trio.open_nursery() as nursery:
    child_task = nursery.spawn(child_fn, 3)
# We've left the nursery, so we know child_task has completed
assert child_task.result.unwrap() == 6

See Task.result and Result for more details.

Child tasks and cancellation

In trio, child tasks inherit the parent nursery’s cancel scopes. So in this example, both the child tasks will be cancelled when the timeout expires:

with move_on_after(TIMEOUT):
    async with trio.open_nursery() as nursery:
        nursery.spawn(child1)
        nursery.spawn(child2)

Note that what matters here is the scopes that were active when open_nursery() was called, not the scopes active when spawn is called. So for example, the timeout block below does nothing at all:

async with trio.open_nursery() as nursery:
    with move_on_after(TIMEOUT):  # don't do this!
        nursery.spawn(child)

Errors in multiple child tasks

Normally, in Python, only one thing happens at a time, which means that only one thing can go wrong at a time. Trio has no such limitation. Consider code like:

async def broken1():
    d = {}
    return d["missing"]

async def broken2():
    seq = range(10)
    return seq[20]

async def parent():
    async with trio.open_nursery() as nursery:
        nursery.spawn(broken1)
        nursery.spawn(broken2)

broken1 raises KeyError. broken2 raises IndexError. Obviously parent should raise some error, but what? In some sense, the answer should be “both of these at once”, but in Python there can only be one exception at a time.

Trio’s answer is that it raises a MultiError object. This is a special exception which encapsulates multiple exception objects – either regular exceptions or nested MultiErrors. To make these easier to work with, trio installs a custom sys.excepthook that knows how to print nice tracebacks for unhandled MultiErrors, and we also provide some helpful utilities like MultiError.catch(), which allows you to catch “part of” a MultiError.
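
As a sketch of what using MultiError.catch() might look like – assuming a handler-function convention where returning None swallows an exception and returning it lets it continue propagating:

def drop_keyerrors(exc):
    # Swallow KeyErrors; let everything else keep propagating.
    if isinstance(exc, KeyError):
        return None
    return exc

async def tolerant_parent():
    with trio.MultiError.catch(drop_keyerrors):
        async with trio.open_nursery() as nursery:
            nursery.spawn(broken1)
            nursery.spawn(broken2)
    # broken1's KeyError is discarded; broken2's IndexError still propagates.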

How to be a good parent task

Supervising child tasks is a full-time job. If you want your program to do two things at once, then don’t expect the parent task to do one while a child task does another – instead, spawn two children and let the parent focus on managing them.

So, don’t do this:

# bad idea!
async with trio.open_nursery() as nursery:
    nursery.spawn(walk)
    await chew_gum()

Instead, do this:

# good idea!
async with trio.open_nursery() as nursery:
    nursery.spawn(walk)
    nursery.spawn(chew_gum)
    # now parent task blocks in the nursery cleanup code

The difference between these is that in the first example, if walk crashes, the parent is off distracted chewing gum, and won’t notice. In the second example, the parent is watching both children, and will notice and respond appropriately if anything happens.

Spawning tasks without becoming a parent

Sometimes it doesn’t make sense for the task that spawns a child to take on responsibility for watching it. For example, a server task may want to spawn a new task for each connection, but it can’t listen for connections and supervise children at the same time.

The solution here is simple once you see it: there’s no requirement that a nursery object stay in the task that created it! We can write code like this:

async def new_connection_listener(handler, nursery):
    while True:
        conn = await get_new_connection()
        nursery.spawn(handler, conn)

async def server(handler):
    async with trio.open_nursery() as nursery:
        nursery.spawn(new_connection_listener, handler, nursery)

Now new_connection_listener can focus on handling new connections, while its parent focuses on supervising both it and all the individual connection handlers.

And remember that cancel scopes are inherited from the nursery, not from the task that calls spawn. So in this example, the timeout does not apply to child (or to anything else):

async def do_spawn(nursery):
    with move_on_after(TIMEOUT):  # don't do this, it has no effect
        nursery.spawn(child)

async with trio.open_nursery() as nursery:
    nursery.spawn(do_spawn, nursery)

Custom supervisors

The default cleanup logic is often sufficient for simple cases, but what if you want a more sophisticated supervisor? For example, maybe you have Erlang envy and want features like automatic restart of crashed tasks. Trio itself doesn’t provide such a feature, but the nursery interface is designed to give you all the tools you need to build such a thing, while enforcing basic hygiene (e.g., it’s not possible to build a supervisor that exits and leaves orphaned tasks behind). And then hopefully you’ll wrap your fancy supervisor up in a library and put it on PyPI, because building custom supervisors is a challenging task that most people don’t want to deal with!

For simple custom supervisors, it’s often possible to lean on the default nursery logic to take care of annoying details. For example, here’s a function that takes a list of functions, runs them all concurrently, and returns the result from the one that finishes first:

async def race(*async_fns):
    if not async_fns:
        raise ValueError("must pass at least one argument")
    async with trio.open_nursery() as nursery:
        for async_fn in async_fns:
            nursery.spawn(async_fn)
        # Wait until at least one task has finished
        task_batch = await nursery.monitor.get_batch()
        # Cancel everyone else
        nursery.cancel_scope.cancel()
        return task_batch[0].result.unwrap()

This works by waiting until at least one task has finished, then cancelling all remaining tasks and returning the result from the first one. Exiting the nursery block then implicitly invokes the default logic to take care of all the other tasks: we block waiting for the cancellation to finish, and if any of them raise errors in the process, we propagate those.

Task-local storage and run-local storage

Not implemented yet!

Synchronizing and communicating between tasks

Trio provides a standard set of synchronization and inter-task communication primitives. These objects’ APIs are generally modelled off of the analogous classes in the standard library, but with some differences.

Blocking and non-blocking methods

The standard library synchronization primitives have a variety of mechanisms for specifying timeouts and blocking behavior, and of signaling whether an operation returned due to success versus a timeout.

In trio, we standardize on the following conventions:

  • We don’t provide timeout arguments. If you want a timeout, then use a cancel scope.
  • For operations that have a non-blocking variant, the blocking and non-blocking variants are different methods with names like X and X_nowait, respectively. (This is similar to queue.Queue, but unlike most of the classes in threading.) We like this approach because it allows us to make the blocking version async and the non-blocking version sync.
  • When a non-blocking method cannot succeed (the queue is empty, the lock is already held, etc.), then it raises trio.WouldBlock. There’s no equivalent to the queue.Empty versus queue.Full distinction – we just have the one exception that we use consistently. (A short sketch of this convention in use follows below.)
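
For example, the two variants of Queue.put() are used like this (queue and obj assumed to exist):

try:
    queue.put_nowait(obj)   # synchronous; fails fast if the queue is full
except trio.WouldBlock:
    await queue.put(obj)    # asynchronous; blocks until there's room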

Fairness

These classes are all guaranteed to be “fair”, meaning that when it comes time to choose who will be next to acquire a lock, get an item from a queue, etc., then it always goes to the task which has been waiting longest. It’s not entirely clear whether this is the best choice, but for now that’s how it works.

As an example of what this means, here’s a small program in which two tasks compete for a lock. Notice that the task which releases the lock always immediately attempts to re-acquire it, before the other task has a chance to run. (And remember that we’re doing cooperative multi-tasking here, so it’s actually deterministic that the task releasing the lock will call acquire() before the other task wakes up; in trio releasing a lock is not a yield point.) With an unfair lock, this would result in the same task holding the lock forever and the other task being starved out. But if you run this, you’ll see that the two tasks politely take turns:

# fairness-demo.py

import trio

async def loopy_child(number, lock):
    while True:
        async with lock:
            print("Child {} has the lock!".format(number))
            await trio.sleep(0.5)

async def main():
    async with trio.open_nursery() as nursery:
        lock = trio.Lock()
        nursery.spawn(loopy_child, 1, lock)
        nursery.spawn(loopy_child, 2, lock)

trio.run(main)

Broadcasting an event with Event

class trio.Event

A waitable boolean value useful for inter-task synchronization, inspired by threading.Event.

An event object manages an internal boolean flag, which is initially False.

is_set()

Return the current value of the internal flag.

set()

Set the internal flag value to True, and wake any waiting tasks.

clear()

Set the internal flag value to False.

await wait()

Block until the internal flag value becomes True.

If it’s already True, then this method returns immediately, but it still counts as a yield point.

statistics()

Return an object containing debugging information.

Currently the following fields are defined:

  • tasks_waiting: The number of tasks blocked on this event’s wait() method.
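
For example, here’s a minimal sketch of one task waiting on an event that another task sets:

import trio

async def waiter(event):
    await event.wait()
    print("event was set!")

async def main():
    event = trio.Event()
    async with trio.open_nursery() as nursery:
        nursery.spawn(waiter, event)
        await trio.sleep(1)
        event.set()

trio.run(main)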

Passing messages with Queue and UnboundedQueue

Trio provides two types of queues suitable for different purposes. Where they differ is in their strategies for handling flow control. Here’s a toy example to demonstrate the problem. Suppose we have a queue with two producers and one consumer:

async def producer(queue):
    while True:
        await queue.put(1)

async def consumer(queue):
    while True:
        print(await queue.get())

async def main():
    # Trio's actual queue classes have countermeasures to prevent
    # this example from working, so imagine we have some sort of
    # platonic ideal of a queue here
    queue = trio.HypotheticalQueue()
    async with trio.open_nursery() as nursery:
        # Two producers
        nursery.spawn(producer, queue)
        nursery.spawn(producer, queue)
        # One consumer
        nursery.spawn(consumer, queue)

trio.run(main)

If we naively cycle between these three tasks in round-robin style, then we put an item, then put an item, then get an item, then put an item, then put an item, then get an item, ... and since on each cycle we add two items to the queue but only remove one, then over time the queue size grows arbitrarily large, our latency is terrible, we run out of memory, it’s just generally bad news all around.

There are two potential strategies for avoiding this problem.

The preferred solution is to apply backpressure. If our queue starts getting too big, then we can make the producers slow down by having put block until get has had a chance to remove an item. This is the strategy used by trio.Queue.

The other possibility is for the queue consumer to get greedy: each time it runs, it could eagerly consume all of the pending items before yielding and allowing another task to run. (In some other systems, this would happen automatically because their queue’s get method doesn’t yield so long as there’s data available. But in trio, get is always a yield point.) This would work, but it’s a bit risky: basically instead of applying backpressure to specifically the producer tasks, we’re applying it to all the tasks in our system. The danger here is that if enough items have built up in the queue, then “stopping the world” to process them all may cause unacceptable latency spikes in unrelated tasks. Nonetheless, this is still the right choice in situations where it’s impossible to apply backpressure more precisely. For example, when monitoring exiting tasks, blocking tasks from reporting their death doesn’t really accomplish anything – the tasks are taking up memory either way, etc. (In this particular case it might be possible to do better, but in general the principle holds.) So this is the strategy implemented by trio.UnboundedQueue.

tl;dr: use Queue if you can.
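
For example, here’s the sketch above redone with the real bounded Queue, so that put() itself applies the backpressure:

import trio

async def producer(queue):
    while True:
        await queue.put(1)  # blocks whenever the queue is full

async def consumer(queue):
    while True:
        print(await queue.get())

async def main():
    queue = trio.Queue(10)  # producers can only get 10 items ahead
    async with trio.open_nursery() as nursery:
        nursery.spawn(producer, queue)
        nursery.spawn(producer, queue)
        nursery.spawn(consumer, queue)

trio.run(main)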

class trio.Queue(capacity)

A bounded queue suitable for inter-task communication.

This class is generally modelled after queue.Queue, but with the major difference that it is always bounded. For an unbounded queue, see trio.UnboundedQueue.

A Queue object can be used as an asynchronous iterator that dequeues objects one at a time. I.e., these two loops are equivalent:

async for obj in queue:
    ...

while True:
    obj = await queue.get()
    ...
Parameters: capacity (int) – The maximum number of items allowed in the queue before put() blocks. Choosing a sensible value here is important to ensure that backpressure is communicated promptly and to avoid unnecessary latency. If in doubt, use 1.
qsize()

Returns the number of items currently in the queue.

There is some subtlety to interpreting this method’s return value: see issue #63.

full()

Returns True if the queue is at capacity, False otherwise.

There is some subtlety to interpreting this method’s return value: see issue #63.

empty()

Returns True if the queue is empty, False otherwise.

There is some subtlety to interpreting this method’s return value: see issue #63.

put_nowait(obj)

Attempt to put an object into the queue, without blocking.

Parameters: obj (object) – The object to enqueue.
Raises: WouldBlock – if the queue is full.
await put(obj)

Put an object into the queue, blocking if necessary.

Parameters: obj (object) – The object to enqueue.
get_nowait()

Attempt to get an object from the queue, without blocking.

Returns: The dequeued object.
Return type: object
Raises: WouldBlock – if the queue is empty.
await get()

Get an object from the queue, blocking if necessary.

Returns: The dequeued object.
Return type: object
task_done()

Decrement the count of unfinished work.

Each Queue object keeps a count of unfinished work, which starts at zero and is incremented after each successful put(). This method decrements it again. When the count reaches zero, any tasks blocked in join() are woken.

await join()

Block until the count of unfinished work reaches zero.

See task_done() for details.

statistics()

Returns an object containing debugging information.

Currently the following fields are defined:

  • qsize: The number of items currently in the queue.
  • capacity: The maximum number of items the queue can hold.
  • tasks_waiting_put: The number of tasks blocked on this queue’s put() method.
  • tasks_waiting_get: The number of tasks blocked on this queue’s get() method.
  • tasks_waiting_join: The number of tasks blocked on this queue’s join() method.
class trio.UnboundedQueue

An unbounded queue suitable for certain unusual forms of inter-task communication.

This class is designed for use as a queue in cases where the producer for some reason cannot be subjected to back-pressure, i.e., put_nowait() has to always succeed. In order to prevent the queue backlog from actually growing without bound, the consumer API is modified to dequeue items in “batches”. If a consumer task processes each batch without yielding, then this helps achieve (but does not guarantee) an effective bound on the queue’s memory use, at the cost of potentially increasing system latencies in general. You should generally prefer to use a Queue instead if you can.

Currently each batch completely empties the queue, but this may change in the future.

An UnboundedQueue object can be used as an asynchronous iterator, where each iteration returns a new batch of items. I.e., these two loops are equivalent:

async for batch in queue:
    ...

while True:
    batch = await queue.get_batch()
    ...
qsize()

Returns the number of items currently in the queue.

empty()

Returns True if the queue is empty, False otherwise.

There is some subtlety to interpreting this method’s return value: see issue #63.

put_nowait(obj)

Put an object into the queue, without blocking.

This always succeeds, because the queue is unbounded. We don’t provide a blocking put method, because it would never need to block.

Parameters: obj (object) – The object to enqueue.
get_batch_nowait()

Attempt to get the next batch from the queue, without blocking.

Returns: A list of dequeued items, in order. On a successful call this list is always non-empty; if it would be empty we raise WouldBlock instead.
Return type: list
Raises: WouldBlock – if the queue is empty.
await get_batch()

Get the next batch from the queue, blocking as necessary.

Returns: A list of dequeued items, in order. This list is always non-empty.
Return type: list
statistics()

Return an object containing debugging information.

Currently the following fields are defined:

  • qsize: The number of items currently in the queue.
  • tasks_waiting: The number of tasks blocked on this queue’s get_batch() method.
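
As a usage sketch, here’s a batch-oriented consumer that processes each batch without yielding in between (handle() is a hypothetical fast, synchronous function):

async def drain(queue):
    async for batch in queue:
        # Each iteration blocks until at least one item is available,
        # then hands us everything that has accumulated in the meantime.
        for item in batch:
            handle(item)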

Lower-level synchronization primitives

Personally, I find that events and queues are usually enough to implement most things I care about, and lead to easier-to-read code than the lower-level primitives discussed in this section. But if you need them, they’re here. (If you find yourself reaching for these because you’re trying to implement a new higher-level synchronization primitive, then you might also want to check out the facilities in trio.hazmat for a more direct exposure of trio’s underlying synchronization logic. All of the classes discussed in this section are implemented on top of the public APIs in trio.hazmat; they don’t have any special access to trio’s internals.)

class trio.Semaphore(initial_value, *, max_value=None)

A semaphore.

A semaphore holds an integer value, which can be incremented by calling release() and decremented by calling acquire() – but the value is never allowed to drop below zero. If the value is zero, then acquire() will block until someone calls release().

This is a very flexible synchronization object, but perhaps the most common use is to represent a resource with some bounded supply. For example, if you want to make sure that there are never more than four tasks simultaneously performing some operation, you could do something like:

# Allocate a shared Semaphore object, and somehow distribute it to all
# your tasks. NB: max_value=4 isn't technically necessary, but can
# help catch errors.
sem = trio.Semaphore(4, max_value=4)

# Then when you perform the operation:
async with sem:
    await perform_operation()

This object’s interface is similar to, but different from, that of threading.Semaphore.

A Semaphore object can be used as an async context manager; it blocks on entry but not on exit.

Parameters:
  • initial_value (int) – A non-negative integer giving the semaphore’s initial value.
  • max_value (int or None) – If given, makes this a “bounded” semaphore that raises an error if the value is about to exceed the given max_value.
value

The current value of the semaphore.

max_value

The maximum allowed value. May be None to indicate no limit.

acquire_nowait()

Attempt to decrement the semaphore value, without blocking.

Raises: WouldBlock – if the value is zero.
await acquire()

Decrement the semaphore value, blocking if necessary to avoid letting it drop below zero.

release()

Increment the semaphore value, possibly waking a task blocked in acquire().

Raises: ValueError – if incrementing the value would cause it to exceed max_value.
statistics()

Return an object containing debugging information.

Currently the following fields are defined:

  • tasks_waiting: The number of tasks blocked on this semaphore’s acquire() method.
class trio.Lock

A classic mutex.

This is a non-reentrant, single-owner lock. Unlike threading.Lock, only the owner of the lock is allowed to release it.

A Lock object can be used as an async context manager; it blocks on entry but not on exit.

locked()

Check whether the lock is currently held.

Returns: True if the lock is held, False otherwise.
Return type: bool
acquire_nowait()

Attempt to acquire the lock, without blocking.

Raises: WouldBlock – if the lock is held.
await acquire()

Acquire the lock, blocking if necessary.

release()

Release the lock.

Raises: RuntimeError – if the calling task does not hold the lock.
statistics()

Return an object containing debugging information.

Currently the following fields are defined:

  • locked: boolean indicating whether the lock is held.
  • owner: the Task currently holding the lock, or None if the lock is not held.
  • tasks_waiting: The number of tasks blocked on this lock’s acquire() method.
class trio.Condition(lock=None)

A classic condition variable, similar to threading.Condition.

A Condition object can be used as an async context manager to acquire the underlying lock; it blocks on entry but not on exit.

Parameters: lock (Lock) – the lock object to use. If given, must be a trio.Lock. If None, a new Lock will be allocated and used.
locked()

Check whether the underlying lock is currently held.

Returns: True if the lock is held, False otherwise.
Return type: bool
acquire_nowait()

Attempt to acquire the underlying lock, without blocking.

Raises: WouldBlock – if the lock is currently held.
await acquire()

Acquire the underlying lock, blocking if necessary.

release()

Release the underlying lock.

await wait()

Wait for another task to call notify() or notify_all().

When calling this method, you must hold the lock. It releases the lock while waiting, and then re-acquires it before waking up.

There is a subtlety with how this method interacts with cancellation: when cancelled it will block to re-acquire the lock before raising Cancelled. This may cause cancellation to be less prompt than expected. The advantage is that it makes code like this work:

async with condition:
    await condition.wait()

If we didn’t re-acquire the lock before waking up, and wait() were cancelled here, then we’d crash in condition.__aexit__ when we tried to release the lock we no longer held.

Raises: RuntimeError – if the calling task does not hold the lock.
notify(n=1)

Wake one or more tasks that are blocked in wait().

Parameters: n (int) – The number of tasks to wake.
Raises: RuntimeError – if the calling task does not hold the lock.
notify_all()

Wake all tasks that are currently blocked in wait().

Raises: RuntimeError – if the calling task does not hold the lock.
statistics()

Return an object containing debugging information.

Currently the following fields are defined:

  • tasks_waiting: The number of tasks blocked on this condition’s wait() method.
  • lock_statistics: The result of calling the underlying Lock’s statistics() method.
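
Putting the pieces together, here’s a sketch of the classic condition-variable pattern, assuming a shared mutable state object with a ready flag:

async def wait_for_ready(condition, state):
    async with condition:        # acquires the underlying lock
        while not state.ready:
            await condition.wait()
        # the lock is held again here, so it's safe to inspect state

async def mark_ready(condition, state):
    async with condition:
        state.ready = True
        condition.notify_all()   # wakes every task blocked in wait()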

Threads (if you must)

In a perfect world, all third-party libraries and low-level APIs would be natively async and integrated into Trio, and all would be happiness and rainbows.

That world, alas, does not (yet) exist. Until it does, you may find yourself needing to interact with non-Trio APIs that do rude things like “blocking”.

In acknowledgment of this reality, Trio provides two useful utilities for working with real, operating-system level, threading-module-style threads. First, if you’re in Trio but need to push some work into a thread, there’s run_in_worker_thread(). And if you’re in a thread and need to communicate back with trio, there are the closely related current_run_in_trio_thread() and current_await_in_trio_thread().

await trio.run_in_worker_thread(sync_fn, *args, cancellable=False)

Convert a blocking operation into an async operation, using a thread.

These two lines are equivalent:

sync_fn(*args)
await run_in_worker_thread(sync_fn, *args)

except that if sync_fn takes a long time, then the first line will block the Trio loop while it runs, while the second line allows other Trio tasks to continue working while sync_fn runs. This is accomplished by pushing the call to sync_fn(*args) off into a worker thread.

Cancellation handling: Cancellation is a tricky issue here, because neither Python nor the operating systems it runs on provide any general way to communicate with an arbitrary synchronous function running in a thread and tell it to stop. This function will always check for cancellation on entry, before starting the thread. But once the thread is running, there are two ways it can handle being cancelled:

  • If cancellable=False, the function ignores the cancellation and keeps going, just like if we had called sync_fn synchronously. This is the default behavior.
  • If cancellable=True, then run_in_worker_thread immediately raises Cancelled. In this case the thread keeps running in the background – we just abandon it to do whatever it’s going to do, and silently discard whatever value it eventually returns or exception it raises. Only use this if you know that the operation is safe and side-effect free. (For example: trio.socket.getaddrinfo is implemented using run_in_worker_thread(), and it sets cancellable=True because it doesn’t really matter if a stray hostname lookup keeps running in the background.)

Warning

You should not use run_in_worker_thread() to call CPU-bound functions! In addition to the usual GIL-related reasons why using threads for CPU-bound work is not very effective in Python, there is an additional problem: on CPython, CPU-bound threads tend to “starve out” IO-bound threads, so using run_in_worker_thread() for CPU-bound work is likely to adversely affect the main thread running trio. If you need to do this, you’re better off using a worker process, or perhaps PyPy (which still has a GIL, but may do a better job of fairly allocating CPU time between threads).

Parameters:
  • sync_fn – An arbitrary synchronous callable.
  • *args – Positional arguments to pass to sync_fn. If you need keyword arguments, use functools.partial().
  • cancellable (bool) – Whether to allow cancellation of this operation. See discussion above.
Returns:

Whatever sync_fn returns.

Raises:

Whatever sync_fn raises.
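
For example, a minimal sketch of pushing a blocking call into a thread:

import time
import trio

def slow_sync_fn(n):
    time.sleep(n)  # blocking! would freeze the run loop if called directly
    return n * 2

async def main():
    # Other trio tasks keep running while the worker thread sleeps:
    result = await trio.run_in_worker_thread(slow_sync_fn, 3)
    print(result)  # prints "6"

trio.run(main)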

trio.current_run_in_trio_thread()
trio.current_await_in_trio_thread()

Call these from inside a trio run to get a reference to the current run’s run_in_trio_thread() or await_in_trio_thread():

run_in_trio_thread(sync_fn, *args)
await_in_trio_thread(async_fn, *args)

These functions schedule a call to sync_fn(*args) or await async_fn(*args) to happen in the main trio thread, wait for it to complete, and then return the result or raise whatever exception it raised.

These are the only non-hazmat functions that interact with the trio run loop and that can safely be called from a different thread than the one that called trio.run(). Indeed, they must be called from a different thread; after all, they’re blocking functions!

Warning

If the relevant call to trio.run() finishes while a call to await_in_trio_thread is in progress, then the call to async_fn will be cancelled and the resulting Cancelled exception may propagate out of await_in_trio_thread and into the calling thread. You should be prepared for this.

Raises: RunFinishedError – If the corresponding call to trio.run() has already completed.

This will probably be clearer with an example. Here we demonstrate how to spawn a child thread, and then use a trio.Queue to send messages between the thread and a trio task:

import trio
import threading

def thread_fn(await_in_trio_thread, request_queue, response_queue):
    while True:
        # Since we're in a thread, we can't call trio.Queue methods
        # directly -- so we use await_in_trio_thread to call them.
        request = await_in_trio_thread(request_queue.get)
        # We use 'None' as a request to quit
        if request is not None:
            response = request + 1
            await_in_trio_thread(response_queue.put, response)
        else:
            # acknowledge that we're shutting down, and then do it
            await_in_trio_thread(response_queue.put, None)
            return

async def main():
    # Get a reference to the await_in_trio_thread function
    await_in_trio_thread = trio.current_await_in_trio_thread()
    request_queue = trio.Queue(1)
    response_queue = trio.Queue(1)
    thread = threading.Thread(
        target=thread_fn,
        args=(await_in_trio_thread, request_queue, response_queue))
    thread.start()

    # prints "1"
    await request_queue.put(0)
    print(await response_queue.get())

    # prints "2"
    await request_queue.put(1)
    print(await response_queue.get())

    # prints "None"
    await request_queue.put(None)
    print(await response_queue.get())
    thread.join()

trio.run(main)

Debugging and instrumentation

Trio tries hard to provide useful hooks for debugging and instrumentation. Some are documented above (Task.name, Queue.statistics(), etc.). Here are some more:

Global statistics

trio.current_statistics()

Returns an object containing run-loop-level debugging information.

Currently the following fields are defined:

  • tasks_living (int): The number of tasks that have been spawned and not yet exited.
  • tasks_runnable (int): The number of tasks that are currently queued on the run queue (as opposed to blocked waiting for something to happen).
  • seconds_to_next_deadline (float): The time until the next pending cancel scope deadline. May be negative if the deadline has expired but we haven’t yet processed cancellations. May be inf if there are no pending deadlines.
  • call_soon_queue_size (int): The number of unprocessed callbacks queued via trio.hazmat.current_call_soon_thread_and_signal_safe().
  • io_statistics (object): Some statistics from trio’s I/O backend. This always has an attribute backend which is a string naming which operating-system-specific I/O backend is in use; the other attributes vary between backends.

Instrument API

The instrument API provides a standard way to add custom instrumentation to the run loop. Want to make a histogram of scheduling latencies, log a stack trace of any task that blocks the run loop for >50 ms, or measure what percentage of your process’s running time is spent waiting for I/O? This is the place.

The general idea is that at any given moment, trio.run() maintains a set of “instruments”, which are objects that implement the trio.abc.Instrument interface. When an interesting event happens, it loops over these instruments and notifies them by calling an appropriate method. The tutorial has a simple example of using this for tracing.

Since this hooks into trio at a rather low level, you do have to be somewhat careful. The callbacks are run synchronously, and in many cases if they error out then we don’t have any plausible way to propagate this exception (for instance, we might be deep in the guts of the exception propagation machinery...). Therefore our current strategy for handling exceptions raised by instruments is to (a) dump a stack trace to stderr and (b) disable the offending instrument.

You can register an initial list of instruments by passing them to trio.run(). current_instruments() lets you introspect and modify this list at runtime from inside trio:

trio.current_instruments()

Returns the list of currently active instruments.

This list is live: if you mutate it, then trio.run() will stop calling the instruments you remove and start calling the ones you add.

And here’s the instrument API:

class trio.abc.Instrument

The interface for run loop instrumentation.

Instruments don’t have to inherit from this abstract base class, and all of these methods are optional. This class serves mostly as documentation.

before_run()

Called at the beginning of trio.run().

after_run()

Called just before trio.run() returns.

task_spawned(task)

Called when the given task is created.

Parameters: task (trio.Task) – The new task.
task_scheduled(task)

Called when the given task becomes runnable.

It may still be some time before it actually runs, if there are other runnable tasks ahead of it.

Parameters: task (trio.Task) – The task that became runnable.
before_task_step(task)

Called immediately before we resume running the given task.

Parameters: task (trio.Task) – The task that is about to run.
after_task_step(task)

Called when we return to the main run loop after a task has yielded.

Parameters: task (trio.Task) – The task that just ran.
task_exited(task)

Called when the given task exits.

Parameters: task (trio.Task) – The finished task.
before_io_wait(timeout)

Called before blocking to wait for I/O readiness.

Parameters: timeout (float) – The number of seconds we are willing to wait.
after_io_wait(timeout)

Called after handling pending I/O.

Parameters: timeout (float) – The number of seconds we were willing to wait. This much time may or may not have elapsed, depending on whether any I/O was ready.

The tutorial has a fully-worked example of defining a custom instrument to log trio’s internal scheduling decisions.
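
For a taste, here’s a minimal sketch of an instrument that logs scheduling events (main stands in for your existing entry point):

import trio

class SchedulerLogger(trio.abc.Instrument):
    # All Instrument methods are optional; we only implement two of them.
    def task_spawned(self, task):
        print("spawned:", task)

    def before_task_step(self, task):
        print("about to run:", task)

trio.run(main, instruments=[SchedulerLogger()])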

Exceptions

exception trio.TrioInternalError

Raised by run() if we encounter a bug in trio, or (possibly) a misuse of one of the low-level trio.hazmat APIs.

This should never happen! If you get this error, please file a bug.

Unfortunately, if you get this error it also means that all bets are off – trio doesn’t know what is going on and its normal invariants may be void. (For example, we might have “lost track” of a task. Or lost track of all tasks.) Again, though, this shouldn’t happen.

exception trio.Cancelled

Raised by blocking calls if the surrounding scope has been cancelled.

You should let this exception propagate, to be caught by the relevant cancel scope. To remind you of this, it inherits from BaseException, like KeyboardInterrupt and SystemExit.

Note

In the US it’s also common to see this word spelled “canceled”, with only one “l”. This is a recent and US-specific innovation, and even in the US both forms are still commonly used. So for consistency with the rest of the world and with “cancellation” (which always has two “l”s), trio uses the two “l” spelling everywhere.

exception trio.TooSlowError

Raised by fail_after() and fail_at() if the timeout expires.

exception trio.WouldBlock

Raised by X_nowait functions if X would block.

exception trio.RunFinishedError

Raised by run_in_trio_thread and similar functions if the corresponding call to trio.run() has already finished.