The AsyncResult object#

In non-blocking mode, apply(), map(), and friends submit the command to be executed and then return an AsyncResult object immediately. The AsyncResult object gives you a way of getting a result at a later time through its get() method, but it also collects metadata on execution.
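The same submit-now, get-later pattern exists in the standard library. As a local stand-in that runs without any engines, `multiprocessing.pool.apply_async` returns a stdlib AsyncResult whose interface ipyparallel's AsyncResult extends:

```python
from multiprocessing.pool import ThreadPool

# Stand-in sketch: the stdlib AsyncResult returned by apply_async shows
# the same submit-then-get pattern as ipyparallel's AsyncResult.
# No engine cluster is needed for this illustration.
pool = ThreadPool(2)
ar = pool.apply_async(pow, (2, 10))  # returns immediately; work runs in the background
# ... the caller is free to do other work here ...
print(ar.get())  # → 1024 (blocks until the result is ready)
pool.close()
pool.join()
```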

Beyond stdlib AsyncResult and Future#

ipyparallel's AsyncResult is a subclass of concurrent.futures.Future, which means it can be integrated into existing async workflows, e.g. via asyncio.wrap_future(). It also extends the multiprocessing.pool.AsyncResult API.

In addition to these common features, our AsyncResult objects add a number of convenient methods for working with parallel results, beyond what is provided by the standard library classes on which they are based.


AsyncResult.get_dict() pulls results as a dictionary, keyed by engine_id, rather than a flat list. This is useful for quickly coordinating or distributing information about all of the engines.

As an example, here is a quick call that gives every engine a dict showing the PID of every other engine:

In [10]: ar = rc[:].apply_async(os.getpid)
In [11]: pids = ar.get_dict()
In [12]: rc[:]['pid_map'] = pids

This trick is particularly useful when setting up inter-engine communication, as in IPython’s examples/parallel/interengine examples.


IPython Parallel tracks some metadata about the tasks, which is stored in the Client.metadata dict. The AsyncResult object gives you an interface to this information as well, including timestamps, stdout/err, and engine IDs.


IPython tracks various timestamps as datetime objects, and the AsyncResult object has a few properties that turn these into useful times (in seconds as floats).

For use while the tasks are still pending:

  • ar.elapsed is the elapsed seconds since submission, for use before the AsyncResult is complete.

  • ar.progress is the number of tasks that have completed. Fractional progress would be:

    1.0 * ar.progress / len(ar)
  • AsyncResult.wait_interactive() will wait for the result to finish, but print out status updates on progress and elapsed time while it waits.
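The same polling pattern can be sketched locally with stdlib futures standing in for a live AsyncResult; the completed-future count plays the role of ar.progress, and the elapsed clock the role of ar.elapsed:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Stand-in for polling ar.progress / ar.elapsed while tasks are pending:
# stdlib futures play the role of the engines' tasks.
with ThreadPoolExecutor(4) as pool:
    futs = [pool.submit(time.sleep, 0.02 * i) for i in range(8)]
    tic = time.time()
    while not all(f.done() for f in futs):
        progress = sum(f.done() for f in futs)
        frac = progress / len(futs)  # analogous to 1.0 * ar.progress / len(ar)
        elapsed = time.time() - tic  # analogous to ar.elapsed
        time.sleep(0.01)
print("finished %i tasks in %.3fs" % (len(futs), time.time() - tic))
```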

For use after the tasks are done:

  • ar.serial_time is the sum of the computation time of all of the tasks done in parallel.

  • ar.wall_time is the time between the first task submitted and last result received. This is the actual cost of computation, including IPython overhead.


wall_time is only precise if the Client is waiting for results when the task finishes, because the received timestamp is recorded when the result is unpacked by the Client, triggered by the spin() call. If you are doing work in the Client and not waiting/spinning, then received may be artificially late, which inflates wall_time.

An often interesting metric is the time it took to do the work in parallel relative to the serial computation, which can be computed as

speedup = ar.serial_time / ar.wall_time

Map results are iterable!#

When an AsyncResult object has multiple results (e.g. the AsyncMapResult object), you can iterate through the individual results and act on them as they arrive:

import time

import ipyparallel as ipp

# create client & view
rc = ipp.Client()
dv = rc[:]
v = rc.load_balanced_view()

# scatter 'id', so id=0,1,2 on engines 0,1,2
dv.scatter('id', rc.ids, flatten=True)
print("Engine IDs: ", dv['id'])

# create a Reference to `id`. This will be a different value on each engine
ref = ipp.Reference('id')
print("sleeping for `id` seconds on each engine")
tic = time.time()
ar = dv.apply(time.sleep, ref)
for i, r in enumerate(ar):
    print("%i: %.3f" % (i, time.time() - tic))

def sleep_here(t):
    import time

    time.sleep(t)
    return id, t

# one call per task
print("running with one call per task")
amr =, [0.01 * t for t in range(100)])
tic = time.time()
for i, r in enumerate(amr):
    print("task %i on engine %i: %.3f" % (i, r[0], time.time() - tic))

print("running with four calls per task")
# with chunksize, we can have four calls per task
amr =, [0.01 * t for t in range(100)], chunksize=4)
tic = time.time()
for i, r in enumerate(amr):
    print("task %i on engine %i: %.3f" % (i, r[0], time.time() - tic))

print("running with two calls per task, with unordered results")
# We can even iterate through faster results first, with ordered=False
amr =
    sleep_here, [0.01 * t for t in range(100, 0, -1)], ordered=False, chunksize=2
)
tic = time.time()
for i, r in enumerate(amr):
    print("slept %.2fs on engine %i: %.3f" % (r[1], r[0], time.time() - tic))

That is to say, if you treat an AsyncMapResult as if it were a list of your actual results, it should behave as you would expect, with the only difference being that you can start iterating through the results before they have even been computed.

This lets you do a simple version of map/reduce with the builtin Python functions, where the only difference between doing this locally and doing it remotely in parallel is using the asynchronous instead of the builtin map.

Here is a simple one-line RMS (root-mean-square) implemented with Python’s builtin map/reduce.

In [38]: X = np.linspace(0, 100)

In [39]: from math import sqrt

In [40]: from functools import reduce

In [41]: add = lambda a, b: a + b

In [42]: sq = lambda x: x * x

In [43]: sqrt(reduce(add, map(sq, X)) / len(X))
Out[43]: 58.028845747399714

In [44]: sqrt(reduce(add,, X)) / len(X))
Out[44]: 58.028845747399714

To break that down:

  1. map(sq, X) computes the square of each element in the list (locally, or in parallel)

  2. reduce(add, sqX) / len(X) computes the mean by summing over the list (or AsyncMapResult) and dividing by its size

  3. sqrt() takes the square root of the resulting number
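The serial half of that session can be reproduced without numpy; here X is a plain 50-point list standing in for np.linspace(0, 100):

```python
from functools import reduce
from math import sqrt

# 50 evenly spaced points from 0 to 100, like np.linspace(0, 100)
X = [100 * i / 49 for i in range(50)]
add = lambda a, b: a + b
sq = lambda x: x * x

# mean of the squares, then the square root: the RMS
rms = sqrt(reduce(add, map(sq, X)) / len(X))
print(rms)  # ≈ 58.0288, matching the session above
```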

See also

When AsyncResult or AsyncMapResult don't provide what you need (for instance, handling individual results as they arrive, but with metadata), you can always split the original result's msg_ids attribute and handle the individual messages as you like.

For an example of this, see examples/