Launchers#

The Launcher is the basic abstraction in IPython Parallel for starting and stopping processes.

A Launcher has two primary methods: start() and stop(), which should be async def coroutines.

There are two basic kinds of Launcher: ControllerLauncher and EngineLauncher. A ControllerLauncher should launch ipcontroller somewhere, and an EngineLauncher should start n engines somewhere. Shared configuration, principally profile_dir and cluster_id, is typically used to locate the connection files the two need to communicate, though explicit paths can also be passed as arguments.

Launchers are used through the Cluster API, which manages one ControllerLauncher and zero to many EngineLaunchers, each representing a set of engines.

Launchers are registered via entry points (more below), and can be selected via a short lowercase string naming the kind of launcher, e.g. ‘mpi’ or ‘local’:

import ipyparallel as ipp
c = ipp.Cluster(engines="mpi")

For the most part, Launchers are not interacted with directly, but they can be configured.

If you generate a config file with:

ipython profile create --parallel

you can check out the resulting ipcluster_config.py, which includes configuration options for all available Launcher classes.

You can also check ipcluster start --help-all to see them on the command-line.
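
For example, a few launcher options could be set in ipcluster_config.py like this (a minimal sketch: the Cluster launcher-class options and MPIEngineSetLauncher.mpi_args are taken from recent ipyparallel versions and the reference below, and "hosts.txt" is a hypothetical hostfile path):

# ipcluster_config.py -- illustrative launcher configuration
c = get_config()  # noqa

# select launchers by their short entry-point names
c.Cluster.controller_launcher_class = "local"
c.Cluster.engine_launcher_class = "mpi"

# configure a specific launcher class
# ("hosts.txt" is a hypothetical hostfile for mpiexec)
c.MPIEngineSetLauncher.mpi_args = ["--hostfile", "hosts.txt"]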

Debugging launchers#

If a launcher isn’t doing what you want, the first thing to do is probably to start your Cluster with log_level=logging.DEBUG.

You can also access the Launcher(s) on the Cluster object and call get_output() to retrieve the output from the process.
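
For example (a minimal sketch; start_cluster_sync and the cluster.controller attribute are based on recent ipyparallel versions and are assumptions that may differ in yours):

import logging
import ipyparallel as ipp

cluster = ipp.Cluster(engines="mpi", log_level=logging.DEBUG)
cluster.start_cluster_sync(n=4)

# retrieve whatever output the controller launcher has captured so far
print(cluster.controller.get_output())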

Writing your own Launcher(s)#

If you want to write your own launcher, the best place to start is to look at the Launcher classes that ship with IPython Parallel.

There are three key methods to implement:

Writing start#

A start method on a launcher should do the following:

  1. request the process(es) to be started

  2. begin monitoring, such that notify_stop() will be called when the process exits.

The command to launch should be the self.args list, inherited from the base class.

The default implementation for LocalProcessLauncher:

    def start(self):
        self.log.debug("Starting %s: %r", self.__class__.__name__, self.args)
        if self.state != 'before':
            raise ProcessStateError(
                f'The process was already started and has state: {self.state}'
            )
        self.log.debug(f"Sending output for {self.identifier} to {self.output_file}")

        env = os.environ.copy()
        env.update(self.get_env())
        self.log.debug(f"Setting environment: {','.join(self.get_env())}")

        with open(self.output_file, "ab") as f, open(os.devnull, "rb") as stdin:
            proc = self._popen_process = Popen(
                self.args,
                stdout=f.fileno(),
                stderr=STDOUT,
                stdin=stdin,
                env=env,
                cwd=self.work_dir,
                start_new_session=True,  # don't forward signals
            )
        self.pid = proc.pid
        # use psutil API for self.process
        self.process = psutil.Process(proc.pid)

        self.notify_start(self.process.pid)
        self._start_waiting()
        if 1 <= self.log.getEffectiveLevel() <= logging.DEBUG:
            self._start_streaming()

ControllerLauncher.start is always called with no arguments, whereas EngineLauncher.start is called with n, which is an integer or None. If n is an integer, this many engines should be started. If n is None, a ‘default’ number should be used, e.g. the number of CPUs on a host.
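
For example, an engine launcher’s start might begin like this (an illustrative sketch only: the actual work of requesting the engine processes and arranging for notify_stop() is elided):

import os

from ipyparallel.cluster.launcher import EngineLauncher

class MyEngineSetLauncher(EngineLauncher):
    async def start(self, n=None):
        # n=None means "use a default", e.g. one engine per CPU
        if n is None:
            n = os.cpu_count() or 1
        self.log.info("Requesting %i engines", n)
        # ...request the n engine processes here, then begin monitoring
        # so that notify_stop() is called when they exit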

Writing stop#

A stop method should request that the process(es) stop, and return only after everything is stopped and cleaned up. Exactly how to stop and clean up these resources will depend greatly on how they were requested in start.
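
A minimal sketch of a stop method for a signal-able process (illustrative only; for a batch job this would typically be a delete/cancel command instead, and the 'TERM' signal name is an assumption):

    async def stop(self):
        # ask the process to stop
        self.signal("TERM")
        # return only after the process has actually exited
        # (notify_stop() fires via the monitoring begun in start)
        await self.join(timeout=self.stop_timeout)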

Serializing Launchers#

Launchers are serialized to disk using JSON, via the .to_dict() method. The default .to_dict() method should rarely need to be overridden.

To declare a property of your launcher as one that should be included in serialization, register it as a traitlet tagged with to_dict=True. For example:

from traitlets import Integer
from ipyparallel.cluster.launcher import EngineLauncher
class MyLauncher(EngineLauncher):
    pid = Integer(
        help="The pid of the process",
    ).tag(to_dict=True)

This .tag(to_dict=True) ensures that the .pid property will be persisted to disk, and reloaded in the default .from_dict implementation. Typically, these are populated in .start():

def start(self):
    process = start_process(self.args, ...)
    self.pid = process.pid

Mark whatever properties are required to reconstruct your object from disk with this metadata.

Writing from_dict#

from_dict() should be a class method which returns an instance of your Launcher class, loaded from dict.

Most from_dict methods will look similar to this:

    @classmethod
    def from_dict(cls, d, **kwargs):
        self = super().from_dict(d, **kwargs)
        self._reconstruct_process(d)
        return self

where serializable state is loaded first, and then ‘live’ objects are reconstructed from it, as in the default LocalProcessLauncher:

    def _reconstruct_process(self, d):
        """Reconstruct our process"""
        if 'pid' in d and d['pid'] > 0:
            try:
                self.process = psutil.Process(d['pid'])
            except psutil.NoSuchProcess as e:
                raise NotRunning(f"Process {d['pid']}")
            self._start_waiting()

The local process case is the simplest, where the main thing that needs serialization is the PID of the process.

If reconstruction of the object fails because the resource is no longer running (e.g. the PID is checked and no longer exists, or a VM / batch job is gone), the NotRunning exception should be raised. This tells the Cluster that the object is gone and should be removed (handled the same as if it had stopped while we were watching). Raising other unhandled errors will be assumed to indicate a bug in the Launcher, and will not result in removing the resource from the cluster state.
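
The same pattern applies to non-process resources. For example, a batch-job launcher might ask the scheduler whether the job still exists (an illustrative sketch: it assumes a Slurm-like squeue command, a job_id value saved via to_dict, and that subprocess and NotRunning are imported at module level):

    def _reconstruct_job(self, d):
        """Illustrative: restore a saved job id and check the job still exists."""
        self.job_id = d['job_id']
        # 'squeue' is an assumption -- use qstat / condor_q / ... as appropriate
        result = subprocess.run(
            ['squeue', '-h', '-j', str(self.job_id)], capture_output=True
        )
        if result.returncode != 0 or not result.stdout.strip():
            raise NotRunning(f"Job {self.job_id} is not running")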

Additional methods#

Some useful additional methods to implement, if the base class implementations do not work for you:

TODO: write more docs on these

Registering your Launcher via entrypoints#

Once you have defined your launcher, you can ‘register’ it for discovery via entrypoints. In your setup.py:

setup(
    ...
    entry_points={
        'ipyparallel.controller_launchers': [
            'mine = mypackage:MyControllerLauncher',
        ],
        'ipyparallel.engine_launchers': [
            'mine = mypackage:MyEngineSetLauncher',
        ],
    },
)

This allows clusters to be created with the shortcut:

Cluster(engines="mine")

instead of the full import string

Cluster(engines="mypackage.MyEngineSetLauncher")

though the long form will always still work.

Launcher API reference#

Facilities for launching IPython Parallel processes asynchronously.

class ipyparallel.cluster.launcher.BaseLauncher(**kwargs: Any)#

An abstraction for starting, stopping and signaling a process.

property arg_str#

The string form of the program arguments.

property args#

A list of cmd and args that will be used to start the process.

This is what is passed to spawnProcess() and the first element will be the process name.

property cluster_args#

Common cluster arguments

property cluster_env#

Cluster-related env variables

property connection_files#

Dict of connection file paths

environment c.BaseLauncher.environment = Dict()#

Set environment variables for the launched process

New in version 8.0.

find_args()#

The .args property calls this to find the args list.

Subclasses should implement this to construct the cmd and args.

classmethod from_dict(d, *, config=None, parent=None, **kwargs)#

Restore a Launcher from a dict

Subclasses should always call launcher = super().from_dict(*args, **kwargs) and finish initialization after that.

After calling from_dict(), the launcher should be in the same state as after .start() (i.e. monitoring for exit, etc.)

Returns: Launcher

The instantiated and fully configured Launcher.

Raises: NotRunning

e.g. if the process has stopped and is no longer running.

get_env()#

Get the full environment for the process

merges different sources for environment variables

get_output(remove=False)#

Retrieve the output from the Launcher.

If remove: remove the file, if any, where it was being stored.

identifier Unicode('')#

Used for lookup in e.g. EngineSetLauncher during notify_stop and default log files

async join(timeout=None)#

Wait for the process to finish

notify_start(data)#

Call this to trigger startup actions.

This logs the process startup and sets the state to ‘running’. It is a pass-through so it can be used as a callback.

notify_stop(data)#

Call this to trigger process stop actions.

This logs the process stopping and sets the state to ‘after’. Call this to trigger callbacks registered via on_stop().

on_stop(f)#

Register a callback to be called with this Launcher’s stop_data when the process actually finishes.

output_limit c.BaseLauncher.output_limit = Int(100)#

When a process exits, display up to this many lines of output

property running#

Am I running.

signal(sig)#

Signal the process.

Parameters:

sig (str or int) – ‘KILL’, ‘INT’, etc., or any signal number

async start()#

Start the process.

Should be an async def coroutine.

When start completes, the process should be requested (it need not be running yet), and waiting should begin in the background such that notify_stop() will be called when the process finishes.

async stop()#

Stop the process and notify observers of stopping.

This method should be an async def coroutine, and return only after the process has stopped.

All resources should be cleaned up by the time this returns.

stop_timeout c.BaseLauncher.stop_timeout = Int(60)#

The number of seconds to wait for a process to exit before raising a TimeoutError in stop

to_dict()#

Serialize a Launcher to a dict, for later restoration

class ipyparallel.cluster.launcher.BatchControllerLauncher(**kwargs: Any)#

batch_file_name c.BatchControllerLauncher.batch_file_name = Unicode('batch_script')#

The filename of the instantiated batch script.

batch_template c.BatchControllerLauncher.batch_template = Unicode('')#

The string that is the batch script template itself.

batch_template_file c.BatchControllerLauncher.batch_template_file = Unicode('')#

The file that contains the batch template.

connection_info_timeout c.BatchControllerLauncher.connection_info_timeout = Float(60)#

Default timeout (in seconds) for get_connection_info

New in version 8.7.

controller_args c.BatchControllerLauncher.controller_args = List()#

command-line args to pass to ipcontroller

controller_cmd c.BatchControllerLauncher.controller_cmd = List()#

Popen command to launch ipcontroller.

delete_command c.BatchControllerLauncher.delete_command = List()#

The name of the command line program used to delete jobs.

environment c.BatchControllerLauncher.environment = Dict()#

Set environment variables for the launched process

New in version 8.0.

job_id_regexp c.BatchControllerLauncher.job_id_regexp = CRegExp('')#

A regular expression used to get the job id from the output of the submit_command.

job_id_regexp_group c.BatchControllerLauncher.job_id_regexp_group = Int(0)#

The group we wish to match in job_id_regexp (0 to match all)

namespace c.BatchControllerLauncher.namespace = Dict()#

Extra variables to pass to the template.

This lets you parameterize additional options, such as wall_time with a custom template.

output_file c.BatchControllerLauncher.output_file = Unicode('')#

File in which to store stdout/err of processes

output_limit c.BatchControllerLauncher.output_limit = Int(100)#

When a process exits, display up to this many lines of output

queue c.BatchControllerLauncher.queue = Unicode('')#

The batch queue.

signal_command c.BatchControllerLauncher.signal_command = List()#

The name of the command line program used to send signals to jobs.

start()#

Start n copies of the process using a batch system.

stop_timeout c.BatchControllerLauncher.stop_timeout = Int(60)#

The number of seconds to wait for a process to exit before raising a TimeoutError in stop

submit_command c.BatchControllerLauncher.submit_command = List()#

The name of the command line program used to submit jobs.

class ipyparallel.cluster.launcher.BatchEngineSetLauncher(**kwargs: Any)#

batch_file_name c.BatchEngineSetLauncher.batch_file_name = Unicode('batch_script')#

The filename of the instantiated batch script.

batch_template c.BatchEngineSetLauncher.batch_template = Unicode('')#

The string that is the batch script template itself.

batch_template_file c.BatchEngineSetLauncher.batch_template_file = Unicode('')#

The file that contains the batch template.

delete_command c.BatchEngineSetLauncher.delete_command = List()#

The name of the command line program used to delete jobs.

engine_args c.BatchEngineSetLauncher.engine_args = List()#

command-line arguments to pass to ipengine

engine_cmd c.BatchEngineSetLauncher.engine_cmd = List()#

command to launch the Engine.

environment c.BatchEngineSetLauncher.environment = Dict()#

Set environment variables for the launched process

New in version 8.0.

job_id_regexp c.BatchEngineSetLauncher.job_id_regexp = CRegExp('')#

A regular expression used to get the job id from the output of the submit_command.

job_id_regexp_group c.BatchEngineSetLauncher.job_id_regexp_group = Int(0)#

The group we wish to match in job_id_regexp (0 to match all)

namespace c.BatchEngineSetLauncher.namespace = Dict()#

Extra variables to pass to the template.

This lets you parameterize additional options, such as wall_time with a custom template.

output_file c.BatchEngineSetLauncher.output_file = Unicode('')#

File in which to store stdout/err of processes

output_limit c.BatchEngineSetLauncher.output_limit = Int(100)#

When a process exits, display up to this many lines of output

queue c.BatchEngineSetLauncher.queue = Unicode('')#

The batch queue.

signal_command c.BatchEngineSetLauncher.signal_command = List()#

The name of the command line program used to send signals to jobs.

stop_timeout c.BatchEngineSetLauncher.stop_timeout = Int(60)#

The number of seconds to wait for a process to exit before raising a TimeoutError in stop

submit_command c.BatchEngineSetLauncher.submit_command = List()#

The name of the command line program used to submit jobs.

class ipyparallel.cluster.launcher.BatchSystemLauncher(**kwargs: Any)#

Launch an external process using a batch system.

This class is designed to work with UNIX batch systems like PBS, LSF, GridEngine, etc. The overall model is that there are different commands like qsub, qdel, etc. that handle the starting and stopping of the process.

This class also has the notion of a batch script. The batch_template attribute can be set to a string that is a template for the batch script. This template is instantiated using string formatting. Thus the template can use {n} for the number of instances. Subclasses can add additional variables to the template dict.
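
For example, a custom engine template might be configured like this (an illustrative sketch for a PBS-style scheduler: the directives and the mpiexec line are assumptions, and only the {n} substitution is guaranteed by the description above):

# in ipcluster_config.py
c.PBSEngineSetLauncher.batch_template = """#!/bin/sh
#PBS -V
#PBS -N ipengine
#PBS -l nodes={n}
mpiexec -n {n} ipengine
"""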

batch_file_name c.BatchSystemLauncher.batch_file_name = Unicode('batch_script')#

The filename of the instantiated batch script.

batch_template c.BatchSystemLauncher.batch_template = Unicode('')#

The string that is the batch script template itself.

batch_template_file c.BatchSystemLauncher.batch_template_file = Unicode('')#

The file that contains the batch template.

delete_command c.BatchSystemLauncher.delete_command = List()#

The name of the command line program used to delete jobs.

environment c.BatchSystemLauncher.environment = Dict()#

Set environment variables for the launched process

New in version 8.0.

find_args()#

The .args property calls this to find the args list.

Subclasses should implement this to construct the cmd and args.

get_output(remove=True)#

Retrieve the output from the Launcher.

If remove: remove the file, if any, where it was being stored.

job_id_regexp c.BatchSystemLauncher.job_id_regexp = CRegExp('')#

A regular expression used to get the job id from the output of the submit_command.

job_id_regexp_group c.BatchSystemLauncher.job_id_regexp_group = Int(0)#

The group we wish to match in job_id_regexp (0 to match all)

namespace c.BatchSystemLauncher.namespace = Dict()#

Extra variables to pass to the template.

This lets you parameterize additional options, such as wall_time with a custom template.

output_file c.BatchSystemLauncher.output_file = Unicode('')#

File in which to store stdout/err of processes

output_limit c.BatchSystemLauncher.output_limit = Int(100)#

When a process exits, display up to this many lines of output

parse_job_id(output)#

Take the output of the submit command and return the job id.

poll()#

Poll not implemented

Need to use squeue and friends to check job status

queue c.BatchSystemLauncher.queue = Unicode('')#

The batch queue.

signal(sig)#

Signal the process.

Parameters:

sig (str or int) – ‘KILL’, ‘INT’, etc., or any signal number

signal_command c.BatchSystemLauncher.signal_command = List()#

The name of the command line program used to send signals to jobs.

start(n=1)#

Start n copies of the process using a batch system.

stop()#

Stop the process and notify observers of stopping.

This method should be an async def coroutine, and return only after the process has stopped.

All resources should be cleaned up by the time this returns.

stop_timeout c.BatchSystemLauncher.stop_timeout = Int(60)#

The number of seconds to wait for a process to exit before raising a TimeoutError in stop

submit_command c.BatchSystemLauncher.submit_command = List()#

The name of the command line program used to submit jobs.

write_batch_script(n=1)#

Instantiate and write the batch script to the work_dir.

class ipyparallel.cluster.launcher.ControllerLauncher(**kwargs: Any)#

Base class for launching ipcontroller

connection_info_timeout c.ControllerLauncher.connection_info_timeout = Float(60)#

Default timeout (in seconds) for get_connection_info

New in version 8.7.

controller_args c.ControllerLauncher.controller_args = List()#

command-line args to pass to ipcontroller

controller_cmd c.ControllerLauncher.controller_cmd = List()#

Popen command to launch ipcontroller.

environment c.ControllerLauncher.environment = Dict()#

Set environment variables for the launched process

New in version 8.0.

async get_connection_info(timeout=None)#

Retrieve connection info for the controller

Default implementation assumes profile_dir and cluster_id are local.

Changed in version 8.7: Accept timeout=None (default) to use .connection_info_timeout config.

output_limit c.ControllerLauncher.output_limit = Int(100)#

When a process exits, display up to this many lines of output

stop_timeout c.ControllerLauncher.stop_timeout = Int(60)#

The number of seconds to wait for a process to exit before raising a TimeoutError in stop

class ipyparallel.cluster.launcher.EngineLauncher(**kwargs: Any)#

Base class for launching one engine

engine_args c.EngineLauncher.engine_args = List()#

command-line arguments to pass to ipengine

engine_cmd c.EngineLauncher.engine_cmd = List()#

command to launch the Engine.

environment c.EngineLauncher.environment = Dict()#

Set environment variables for the launched process

New in version 8.0.

output_limit c.EngineLauncher.output_limit = Int(100)#

When a process exits, display up to this many lines of output

stop_timeout c.EngineLauncher.stop_timeout = Int(60)#

The number of seconds to wait for a process to exit before raising a TimeoutError in stop

class ipyparallel.cluster.launcher.HTCondorControllerLauncher(**kwargs: Any)#

Launch a controller using HTCondor.

batch_file_name c.HTCondorControllerLauncher.batch_file_name = Unicode('htcondor_controller')#

batch file name for the controller job.

batch_template c.HTCondorControllerLauncher.batch_template = Unicode('')#

The string that is the batch script template itself.

batch_template_file c.HTCondorControllerLauncher.batch_template_file = Unicode('')#

The file that contains the batch template.

connection_info_timeout c.HTCondorControllerLauncher.connection_info_timeout = Float(60)#

Default timeout (in seconds) for get_connection_info

New in version 8.7.

controller_args c.HTCondorControllerLauncher.controller_args = List()#

command-line args to pass to ipcontroller

controller_cmd c.HTCondorControllerLauncher.controller_cmd = List()#

Popen command to launch ipcontroller.

delete_command c.HTCondorControllerLauncher.delete_command = List()#

The HTCondor delete command [‘condor_rm’]

environment c.HTCondorControllerLauncher.environment = Dict()#

Set environment variables for the launched process

New in version 8.0.

job_id_regexp c.HTCondorControllerLauncher.job_id_regexp = CRegExp('(\\d+)\\.$')#

Regular expression for identifying the job ID [r'(\d+)\.$']

job_id_regexp_group c.HTCondorControllerLauncher.job_id_regexp_group = Int(1)#

The group we wish to match in job_id_regexp [1]

namespace c.HTCondorControllerLauncher.namespace = Dict()#

Extra variables to pass to the template.

This lets you parameterize additional options, such as wall_time with a custom template.

output_file c.HTCondorControllerLauncher.output_file = Unicode('')#

File in which to store stdout/err of processes

output_limit c.HTCondorControllerLauncher.output_limit = Int(100)#

When a process exits, display up to this many lines of output

queue c.HTCondorControllerLauncher.queue = Unicode('')#

The batch queue.

signal_command c.HTCondorControllerLauncher.signal_command = List()#

The name of the command line program used to send signals to jobs.

stop_timeout c.HTCondorControllerLauncher.stop_timeout = Int(60)#

The number of seconds to wait for a process to exit before raising a TimeoutError in stop

submit_command c.HTCondorControllerLauncher.submit_command = List()#

The HTCondor submit command [‘condor_submit’]

class ipyparallel.cluster.launcher.HTCondorEngineSetLauncher(**kwargs: Any)#

Launch Engines using HTCondor

batch_file_name c.HTCondorEngineSetLauncher.batch_file_name = Unicode('htcondor_engines')#

batch file name for the engine(s) job.

batch_template c.HTCondorEngineSetLauncher.batch_template = Unicode('')#

The string that is the batch script template itself.

batch_template_file c.HTCondorEngineSetLauncher.batch_template_file = Unicode('')#

The file that contains the batch template.

delete_command c.HTCondorEngineSetLauncher.delete_command = List()#

The HTCondor delete command [‘condor_rm’]

engine_args c.HTCondorEngineSetLauncher.engine_args = List()#

command-line arguments to pass to ipengine

engine_cmd c.HTCondorEngineSetLauncher.engine_cmd = List()#

command to launch the Engine.

environment c.HTCondorEngineSetLauncher.environment = Dict()#

Set environment variables for the launched process

New in version 8.0.

job_id_regexp c.HTCondorEngineSetLauncher.job_id_regexp = CRegExp('(\\d+)\\.$')#

Regular expression for identifying the job ID [r'(\d+)\.$']

job_id_regexp_group c.HTCondorEngineSetLauncher.job_id_regexp_group = Int(1)#

The group we wish to match in job_id_regexp [1]

namespace c.HTCondorEngineSetLauncher.namespace = Dict()#

Extra variables to pass to the template.

This lets you parameterize additional options, such as wall_time with a custom template.

output_file c.HTCondorEngineSetLauncher.output_file = Unicode('')#

File in which to store stdout/err of processes

output_limit c.HTCondorEngineSetLauncher.output_limit = Int(100)#

When a process exits, display up to this many lines of output

queue c.HTCondorEngineSetLauncher.queue = Unicode('')#

The batch queue.

signal_command c.HTCondorEngineSetLauncher.signal_command = List()#

The name of the command line program used to send signals to jobs.

stop_timeout c.HTCondorEngineSetLauncher.stop_timeout = Int(60)#

The number of seconds to wait for a process to exit before raising a TimeoutError in stop

submit_command c.HTCondorEngineSetLauncher.submit_command = List()#

The HTCondor submit command [‘condor_submit’]

class ipyparallel.cluster.launcher.HTCondorLauncher(**kwargs: Any)#

A BatchSystemLauncher subclass for HTCondor.

HTCondor requires that we launch the ipengine/ipcontroller scripts rather than the python instance, but otherwise is very similar to PBS. This is because HTCondor destroys sys.executable when launching remote processes - a launched python process depends on sys.executable to effectively evaluate its module search paths. Without it, regardless of which python interpreter you launch, you will get only the built-in module search paths.

We use the ip{cluster, engine, controller} scripts as our executable to circumvent this - the mechanism of shebanged scripts means that the python binary will be launched with argv[0] set to the location of the ip{cluster, engine, controller} scripts on the remote node. This means you need to take care that:

  1. Your remote nodes have their paths configured correctly, with the ipengine and ipcontroller of the python environment you wish to execute code in having top precedence.

  2. This functionality is untested on Windows.

If you need different behavior, consider making your own template.

batch_file_name c.HTCondorLauncher.batch_file_name = Unicode('batch_script')#

The filename of the instantiated batch script.

batch_template c.HTCondorLauncher.batch_template = Unicode('')#

The string that is the batch script template itself.

batch_template_file c.HTCondorLauncher.batch_template_file = Unicode('')#

The file that contains the batch template.

delete_command c.HTCondorLauncher.delete_command = List()#

The HTCondor delete command [‘condor_rm’]

environment c.HTCondorLauncher.environment = Dict()#

Set environment variables for the launched process

New in version 8.0.

job_id_regexp c.HTCondorLauncher.job_id_regexp = CRegExp('(\\d+)\\.$')#

Regular expression for identifying the job ID [r'(\d+)\.$']

job_id_regexp_group c.HTCondorLauncher.job_id_regexp_group = Int(1)#

The group we wish to match in job_id_regexp [1]

namespace c.HTCondorLauncher.namespace = Dict()#

Extra variables to pass to the template.

This lets you parameterize additional options, such as wall_time with a custom template.

output_file c.HTCondorLauncher.output_file = Unicode('')#

File in which to store stdout/err of processes

output_limit c.HTCondorLauncher.output_limit = Int(100)#

When a process exits, display up to this many lines of output

queue c.HTCondorLauncher.queue = Unicode('')#

The batch queue.

signal_command c.HTCondorLauncher.signal_command = List()#

The name of the command line program used to send signals to jobs.

stop_timeout c.HTCondorLauncher.stop_timeout = Int(60)#

The number of seconds to wait for a process to exit before raising a TimeoutError in stop

submit_command c.HTCondorLauncher.submit_command = List()#

The HTCondor submit command [‘condor_submit’]

class ipyparallel.cluster.launcher.LSFControllerLauncher(**kwargs: Any)#

Launch a controller using LSF.

batch_file_name c.LSFControllerLauncher.batch_file_name = Unicode('lsf_controller')#

batch file name for the controller job.

batch_template c.LSFControllerLauncher.batch_template = Unicode('')#

The string that is the batch script template itself.

batch_template_file c.LSFControllerLauncher.batch_template_file = Unicode('')#

The file that contains the batch template.

connection_info_timeout c.LSFControllerLauncher.connection_info_timeout = Float(60)#

Default timeout (in seconds) for get_connection_info

New in version 8.7.

controller_args c.LSFControllerLauncher.controller_args = List()#

command-line args to pass to ipcontroller

controller_cmd c.LSFControllerLauncher.controller_cmd = List()#

Popen command to launch ipcontroller.

delete_command c.LSFControllerLauncher.delete_command = List()#

The LSF delete command [‘bkill’]

environment c.LSFControllerLauncher.environment = Dict()#

Set environment variables for the launched process

New in version 8.0.

job_id_regexp c.LSFControllerLauncher.job_id_regexp = CRegExp('\\d+')#

Regular expression for identifying the job ID [r'\d+']

job_id_regexp_group c.LSFControllerLauncher.job_id_regexp_group = Int(0)#

The group we wish to match in job_id_regexp (0 to match all)

namespace c.LSFControllerLauncher.namespace = Dict()#

Extra variables to pass to the template.

This lets you parameterize additional options, such as wall_time with a custom template.

output_file c.LSFControllerLauncher.output_file = Unicode('')#

File in which to store stdout/err of processes

output_limit c.LSFControllerLauncher.output_limit = Int(100)#

When a process exits, display up to this many lines of output

queue c.LSFControllerLauncher.queue = Unicode('')#

The batch queue.

signal_command c.LSFControllerLauncher.signal_command = List()#

The LSF signal command [‘bkill’, ‘-s’]

stop_timeout c.LSFControllerLauncher.stop_timeout = Int(60)#

The number of seconds to wait for a process to exit before raising a TimeoutError in stop

submit_command c.LSFControllerLauncher.submit_command = List()#

The LSF submit command [‘bsub’]

class ipyparallel.cluster.launcher.LSFEngineSetLauncher(**kwargs: Any)#

Launch Engines using LSF

batch_file_name c.LSFEngineSetLauncher.batch_file_name = Unicode('lsf_engines')#

batch file name for the engine(s) job.

batch_template c.LSFEngineSetLauncher.batch_template = Unicode('')#

The string that is the batch script template itself.

batch_template_file c.LSFEngineSetLauncher.batch_template_file = Unicode('')#

The file that contains the batch template.

delete_command c.LSFEngineSetLauncher.delete_command = List()#

The LSF delete command [‘bkill’]

engine_args c.LSFEngineSetLauncher.engine_args = List()#

command-line arguments to pass to ipengine

engine_cmd c.LSFEngineSetLauncher.engine_cmd = List()#

command to launch the Engine.

environment c.LSFEngineSetLauncher.environment = Dict()#

Set environment variables for the launched process

New in version 8.0.

get_env()#

Get the full environment for the process

merges different sources for environment variables

job_id_regexp c.LSFEngineSetLauncher.job_id_regexp = CRegExp('\\d+')#

Regular expression for identifying the job ID [r'\d+']

job_id_regexp_group c.LSFEngineSetLauncher.job_id_regexp_group = Int(0)#

The group we wish to match in job_id_regexp (0 to match all)

namespace c.LSFEngineSetLauncher.namespace = Dict()#

Extra variables to pass to the template.

This lets you parameterize additional options, such as wall_time with a custom template.

output_file c.LSFEngineSetLauncher.output_file = Unicode('')#

File in which to store stdout/err of processes

output_limit c.LSFEngineSetLauncher.output_limit = Int(100)#

When a process exits, display up to this many lines of output

queue c.LSFEngineSetLauncher.queue = Unicode('')#

The batch queue.

signal_command c.LSFEngineSetLauncher.signal_command = List()#

The LSF signal command [‘bkill’, ‘-s’]

stop_timeout c.LSFEngineSetLauncher.stop_timeout = Int(60)#

The number of seconds to wait for a process to exit before raising a TimeoutError in stop

submit_command c.LSFEngineSetLauncher.submit_command = List()#

The LSF submit command [‘bsub’]

class ipyparallel.cluster.launcher.LSFLauncher(**kwargs: Any)#

A BatchSystemLauncher subclass for LSF.

batch_file_name c.LSFLauncher.batch_file_name = Unicode('batch_script')#

The filename of the instantiated batch script.

batch_template c.LSFLauncher.batch_template = Unicode('')#

The string that is the batch script template itself.

batch_template_file c.LSFLauncher.batch_template_file = Unicode('')#

The file that contains the batch template.

delete_command c.LSFLauncher.delete_command = List()#

The LSF delete command [‘bkill’]

environment c.LSFLauncher.environment = Dict()#

Set environment variables for the launched process

New in version 8.0.

job_id_regexp c.LSFLauncher.job_id_regexp = CRegExp('\\d+')#

Regular expression for identifying the job ID [r'\d+']

job_id_regexp_group c.LSFLauncher.job_id_regexp_group = Int(0)#

The group we wish to match in job_id_regexp (0 to match all)

namespace c.LSFLauncher.namespace = Dict()#

Extra variables to pass to the template.

This lets you parameterize additional options, such as wall_time with a custom template.

output_file c.LSFLauncher.output_file = Unicode('')#

File in which to store stdout/err of processes

output_limit c.LSFLauncher.output_limit = Int(100)#

When a process exits, display up to this many lines of output

queue c.LSFLauncher.queue = Unicode('')#

The batch queue.

signal_command c.LSFLauncher.signal_command = List()#

The LSF signal command [‘bkill’, ‘-s’]

start(n=1)#

Start n copies of the process using the LSF batch system. This can't inherit from the base class because bsub expects to be piped a shell script in order to honor the #BSUB directives: bsub < script

stop_timeout c.LSFLauncher.stop_timeout = Int(60)#

The number of seconds to wait for a process to exit before raising a TimeoutError in stop

submit_command c.LSFLauncher.submit_command = List()#

The LSF submit command [‘bsub’]

exception ipyparallel.cluster.launcher.LauncherError#

class ipyparallel.cluster.launcher.LocalControllerLauncher(**kwargs: Any)#

Launch a controller as a regular external process.

connection_info_timeout c.LocalControllerLauncher.connection_info_timeout = Float(60)#

Default timeout (in seconds) for get_connection_info

New in version 8.7.

controller_args c.LocalControllerLauncher.controller_args = List()#

command-line args to pass to ipcontroller

controller_cmd c.LocalControllerLauncher.controller_cmd = List()#

Popen command to launch ipcontroller.

environment c.LocalControllerLauncher.environment = Dict()#

Set environment variables for the launched process

New in version 8.0.

find_args()#

The .args property calls this to find the args list.

Subclasses should implement this to construct the cmd and args.

output_limit c.LocalControllerLauncher.output_limit = Int(100)#

When a process exits, display up to this many lines of output

poll_seconds c.LocalControllerLauncher.poll_seconds = Int(30)#

Interval on which to poll processes (in seconds).

Note: process exit should be noticed immediately, due to use of Process.wait(), but this interval should ensure we aren’t leaving threads running forever, as other signals/events are checked on this interval

start()#

Start the controller by profile_dir.

stop_seconds_until_kill c.LocalControllerLauncher.stop_seconds_until_kill = Int(5)#

The number of seconds to wait for a process to exit after sending SIGTERM before sending SIGKILL

stop_timeout c.LocalControllerLauncher.stop_timeout = Int(60)#

The number of seconds to wait for a process to exit before raising a TimeoutError in stop

class ipyparallel.cluster.launcher.LocalEngineLauncher(**kwargs: Any)#

Launch a single engine as a regular external process.

engine_args c.LocalEngineLauncher.engine_args = List()#

command-line arguments to pass to ipengine

engine_cmd c.LocalEngineLauncher.engine_cmd = List()#

command to launch the Engine.

environment c.LocalEngineLauncher.environment = Dict()#

Set environment variables for the launched process

New in version 8.0.

find_args()#

The .args property calls this to find the args list.

Subclasses should implement this to construct the cmd and args.

output_limit c.LocalEngineLauncher.output_limit = Int(100)#

When a process exits, display up to this many lines of output

poll_seconds c.LocalEngineLauncher.poll_seconds = Int(30)#

Interval on which to poll processes (in seconds).

Note: process exit should be noticed immediately, due to use of Process.wait(), but this interval should ensure we aren’t leaving threads running forever, as other signals/events are checked on this interval

stop_seconds_until_kill c.LocalEngineLauncher.stop_seconds_until_kill = Int(5)#

The number of seconds to wait for a process to exit after sending SIGTERM before sending SIGKILL

stop_timeout c.LocalEngineLauncher.stop_timeout = Int(60)#

The number of seconds to wait for a process to exit before raising a TimeoutError in stop

class ipyparallel.cluster.launcher.LocalEngineSetLauncher(**kwargs: Any)#

Launch a set of engines as regular external processes.

delay c.LocalEngineSetLauncher.delay = Float(0.1)#

delay (in seconds) between starting each engine after the first. This can help force the engines to get their ids in order, or limit process flood when starting many engines.

engine_args c.LocalEngineSetLauncher.engine_args = List()#

command-line arguments to pass to ipengine

engine_cmd c.LocalEngineSetLauncher.engine_cmd = List()#

command to launch the Engine.

environment c.LocalEngineSetLauncher.environment = Dict()#

Set environment variables for the launched process

New in version 8.0.

find_args()#

The .args property calls this to find the args list.

Subclasses should implement this to construct the cmd and args.

classmethod from_dict(d, **kwargs)#

Restore a Launcher from a dict

Subclasses should always call launcher = super().from_dict(*args, **kwargs) and finish initialization after that.

After calling from_dict(), the launcher should be in the same state as after .start() (i.e. monitoring for exit, etc.)

Returns: Launcher

The instantiated and fully configured Launcher.

Raises: NotRunning

e.g. if the process has stopped and is no longer running.

get_output(remove=False)#

Get the output of all my child Launchers

launcher_class#

alias of LocalEngineLauncher

output_limit c.LocalEngineSetLauncher.output_limit = Int(100)#

When a process exits, display up to this many lines of output

poll_seconds c.LocalEngineSetLauncher.poll_seconds = Int(30)#

Interval on which to poll processes (in seconds).

Note: process exit should be noticed immediately, due to use of Process.wait(), but this interval should ensure we aren’t leaving threads running forever, as other signals/events are checked on this interval

signal(sig)#

Signal the process.

Parameters:

sig (str or int) – ‘KILL’, ‘INT’, etc., or any signal number

start(n)#

Start n engines by profile or profile_dir.

async stop()#

Stop the process and notify observers of stopping.

This method should be an async def coroutine, and return only after the process has stopped.

All resources should be cleaned up by the time this returns.

stop_seconds_until_kill c.LocalEngineSetLauncher.stop_seconds_until_kill = Int(5)#

The number of seconds to wait for a process to exit after sending SIGTERM before sending SIGKILL

stop_timeout c.LocalEngineSetLauncher.stop_timeout = Int(60)#

The number of seconds to wait for a process to exit before raising a TimeoutError in stop

to_dict()#

Serialize a Launcher to a dict, for later restoration

class ipyparallel.cluster.launcher.LocalProcessLauncher(**kwargs: Any)#

Start and stop an external process in an asynchronous manner.

This will launch the external process with a working directory of self.work_dir.

environment c.LocalProcessLauncher.environment = Dict()#

Set environment variables for the launched process

New in version 8.0.

find_args()#

The .args property calls this to find the args list.

Subclasses should implement this to construct the cmd and args.

classmethod from_dict(d, **kwargs)#

Restore a Launcher from a dict

Subclasses should always call launcher = super().from_dict(*args, **kwargs) and finish initialization after that.

After calling from_dict(), the launcher should be in the same state as after .start() (i.e. monitoring for exit, etc.)

Returns: Launcher

The instantiated and fully configured Launcher.

Raises: NotRunning

e.g. if the process has stopped and is no longer running.

get_output(remove=False)#

Retrieve the output from the Launcher.

If remove: remove the file, if any, where it was being stored.

async join(timeout=None)#

Wait for the process to exit

output_limit c.LocalProcessLauncher.output_limit = Int(100)#

When a process exits, display up to this many lines of output

poll_seconds c.LocalProcessLauncher.poll_seconds = Int(30)#

Interval on which to poll processes (in seconds).

Note: process exit should be noticed immediately, due to use of Process.wait(), but this interval should ensure we aren’t leaving threads running forever, as other signals/events are checked on this interval

signal(sig)#

Signal the process.

Parameters:

sig (str or int) – ‘KILL’, ‘INT’, etc., or any signal number

start()#

Start the process.

Should be an async def coroutine.

When start completes, the process should be requested (it need not be running yet), and waiting should begin in the background such that notify_stop() will be called when the process finishes.

async stop()#

Stop the process and notify observers of stopping.

This method should be an async def coroutine, and return only after the process has stopped.

All resources should be cleaned up by the time this returns.

stop_seconds_until_kill c.LocalProcessLauncher.stop_seconds_until_kill = Int(5)#

The number of seconds to wait for a process to exit after sending SIGTERM before sending SIGKILL

stop_timeout c.LocalProcessLauncher.stop_timeout = Int(60)#

The number of seconds to wait for a process to exit before raising a TimeoutError in stop

class ipyparallel.cluster.launcher.MPIControllerLauncher(**kwargs: Any)#

Launch a controller using mpiexec.

connection_info_timeout c.MPIControllerLauncher.connection_info_timeout = Float(60)#

Default timeout (in seconds) for get_connection_info

New in version 8.7.

controller_args c.MPIControllerLauncher.controller_args = List()#

command-line args to pass to ipcontroller

controller_cmd c.MPIControllerLauncher.controller_cmd = List()#

Popen command to launch ipcontroller.

environment c.MPIControllerLauncher.environment = Dict()#

Set environment variables for the launched process

New in version 8.0.

mpi_args c.MPIControllerLauncher.mpi_args = List()#

The command line arguments to pass to mpiexec.

mpi_cmd c.MPIControllerLauncher.mpi_cmd = List()#

The mpiexec command to use in starting the process.

output_limit c.MPIControllerLauncher.output_limit = Int(100)#

When a process exits, display up to this many lines of output

poll_seconds c.MPIControllerLauncher.poll_seconds = Int(30)#

Interval on which to poll processes (in seconds).

Note: process exit should be noticed immediately, due to use of Process.wait(), but this interval should ensure we aren’t leaving threads running forever, as other signals/events are checked on this interval

property program#

The program to start via mpiexec.

property program_args#

The command line arguments to the program.

stop_seconds_until_kill c.MPIControllerLauncher.stop_seconds_until_kill = Int(5)#

The number of seconds to wait for a process to exit after sending SIGTERM before sending SIGKILL

stop_timeout c.MPIControllerLauncher.stop_timeout = Int(60)#

The number of seconds to wait for a process to exit before raising a TimeoutError in stop

class ipyparallel.cluster.launcher.MPIEngineSetLauncher(**kwargs: Any)#

Launch engines using mpiexec

engine_args c.MPIEngineSetLauncher.engine_args = List()#

command-line arguments to pass to ipengine

engine_cmd c.MPIEngineSetLauncher.engine_cmd = List()#

command to launch the Engine.

environment c.MPIEngineSetLauncher.environment = Dict()#

Set environment variables for the launched process

New in version 8.0.

mpi_args c.MPIEngineSetLauncher.mpi_args = List()#

The command line arguments to pass to mpiexec.

mpi_cmd c.MPIEngineSetLauncher.mpi_cmd = List()#

The mpiexec command to use in starting the process.

output_limit c.MPIEngineSetLauncher.output_limit = Int(100)#

When a process exits, display up to this many lines of output

poll_seconds c.MPIEngineSetLauncher.poll_seconds = Int(30)#

Interval on which to poll processes (in seconds).

Note: process exit should be noticed immediately, due to use of Process.wait(), but this interval should ensure we aren’t leaving threads running forever, as other signals/events are checked on this interval

property program#

The program to start via mpiexec.

property program_args#

The command line arguments to the program.

start(n)#

Start n engines by profile or profile_dir.

stop_seconds_until_kill c.MPIEngineSetLauncher.stop_seconds_until_kill = Int(5)#

The number of seconds to wait for a process to exit after sending SIGTERM before sending SIGKILL

stop_timeout c.MPIEngineSetLauncher.stop_timeout = Int(60)#

The number of seconds to wait for a process to exit before raising a TimeoutError in stop

class ipyparallel.cluster.launcher.MPIExecControllerLauncher(**kwargs: Any)#

Deprecated, use MPIControllerLauncher

connection_info_timeout c.MPIExecControllerLauncher.connection_info_timeout = Float(60)#

Default timeout (in seconds) for get_connection_info

New in version 8.7.

controller_args c.MPIExecControllerLauncher.controller_args = List()#

command-line args to pass to ipcontroller

controller_cmd c.MPIExecControllerLauncher.controller_cmd = List()#

Popen command to launch ipcontroller.

environment c.MPIExecControllerLauncher.environment = Dict()#

Set environment variables for the launched process

New in version 8.0.

mpi_args c.MPIExecControllerLauncher.mpi_args = List()#

The command line arguments to pass to mpiexec.

mpi_cmd c.MPIExecControllerLauncher.mpi_cmd = List()#

The mpiexec command to use in starting the process.

output_limit c.MPIExecControllerLauncher.output_limit = Int(100)#

When a process exits, display up to this many lines of output

poll_seconds c.MPIExecControllerLauncher.poll_seconds = Int(30)#

Interval on which to poll processes (in seconds).

Note: process exit should be noticed immediately, due to use of Process.wait(), but this interval should ensure we aren’t leaving threads running forever, as other signals/events are checked on this interval

stop_seconds_until_kill c.MPIExecControllerLauncher.stop_seconds_until_kill = Int(5)#

The number of seconds to wait for a process to exit after sending SIGTERM before sending SIGKILL

stop_timeout c.MPIExecControllerLauncher.stop_timeout = Int(60)#

The number of seconds to wait for a process to exit before raising a TimeoutError in stop

class ipyparallel.cluster.launcher.MPIExecEngineSetLauncher(**kwargs: Any)#

Deprecated, use MPIEngineSetLauncher

engine_args c.MPIExecEngineSetLauncher.engine_args = List()#

command-line arguments to pass to ipengine

engine_cmd c.MPIExecEngineSetLauncher.engine_cmd = List()#

command to launch the Engine.

environment c.MPIExecEngineSetLauncher.environment = Dict()#

Set environment variables for the launched process

New in version 8.0.

mpi_args c.MPIExecEngineSetLauncher.mpi_args = List()#

The command line arguments to pass to mpiexec.

mpi_cmd c.MPIExecEngineSetLauncher.mpi_cmd = List()#

The mpiexec command to use in starting the process.

output_limit c.MPIExecEngineSetLauncher.output_limit = Int(100)#

When a process exits, display up to this many lines of output

poll_seconds c.MPIExecEngineSetLauncher.poll_seconds = Int(30)#

Interval on which to poll processes (in seconds).

Note: process exit should be noticed immediately, due to use of Process.wait(), but this interval should ensure we aren’t leaving threads running forever, as other signals/events are checked on this interval

stop_seconds_until_kill c.MPIExecEngineSetLauncher.stop_seconds_until_kill = Int(5)#

The number of seconds to wait for a process to exit after sending SIGTERM before sending SIGKILL

stop_timeout c.MPIExecEngineSetLauncher.stop_timeout = Int(60)#

The number of seconds to wait for a process to exit before raising a TimeoutError in stop

class ipyparallel.cluster.launcher.MPIExecLauncher(**kwargs: Any)#

Deprecated, use MPILauncher

environment c.MPIExecLauncher.environment = Dict()#

Set environment variables for the launched process

New in version 8.0.

mpi_args c.MPIExecLauncher.mpi_args = List()#

The command line arguments to pass to mpiexec.

mpi_cmd c.MPIExecLauncher.mpi_cmd = List()#

The mpiexec command to use in starting the process.

output_limit c.MPIExecLauncher.output_limit = Int(100)#

When a process exits, display up to this many lines of output

poll_seconds c.MPIExecLauncher.poll_seconds = Int(30)#

Interval on which to poll processes (in seconds).

Note: process exit should be noticed immediately, due to use of Process.wait(), but this interval should ensure we aren’t leaving threads running forever, as other signals/events are checked on this interval

stop_seconds_until_kill c.MPIExecLauncher.stop_seconds_until_kill = Int(5)#

The number of seconds to wait for a process to exit after sending SIGTERM before sending SIGKILL

stop_timeout c.MPIExecLauncher.stop_timeout = Int(60)#

The number of seconds to wait for a process to exit before raising a TimeoutError in stop

class ipyparallel.cluster.launcher.MPILauncher(**kwargs: Any)#

Launch an external process using mpiexec.

environment c.MPILauncher.environment = Dict()#

Set environment variables for the launched process

New in version 8.0.

find_args()#

Build self.args using all the fields.

mpi_args c.MPILauncher.mpi_args = List()#

The command line arguments to pass to mpiexec.

mpi_cmd c.MPILauncher.mpi_cmd = List()#

The mpiexec command to use in starting the process.

output_limit c.MPILauncher.output_limit = Int(100)#

When a process exits, display up to this many lines of output

poll_seconds c.MPILauncher.poll_seconds = Int(30)#

Interval on which to poll processes (in seconds).

Note: process exit should be noticed immediately, due to use of Process.wait(), but this interval should ensure we aren’t leaving threads running forever, as other signals/events are checked on this interval

program List()#

The program to start via mpiexec.

program_args List()#

The command line arguments to the program.

start(n=1)#

Start n instances of the program using mpiexec.

stop_seconds_until_kill c.MPILauncher.stop_seconds_until_kill = Int(5)#

The number of seconds to wait for a process to exit after sending SIGTERM before sending SIGKILL

stop_timeout c.MPILauncher.stop_timeout = Int(60)#

The number of seconds to wait for a process to exit before raising a TimeoutError in stop

exception ipyparallel.cluster.launcher.NotRunning#

Raised when a launcher is no longer running

class ipyparallel.cluster.launcher.PBSControllerLauncher(**kwargs: Any)#

Launch a controller using PBS.

batch_file_name c.PBSControllerLauncher.batch_file_name = Unicode('pbs_controller')#

batch file name for the controller job.

batch_template c.PBSControllerLauncher.batch_template = Unicode('')#

The string that is the batch script template itself.

batch_template_file c.PBSControllerLauncher.batch_template_file = Unicode('')#

The file that contains the batch template.

connection_info_timeout c.PBSControllerLauncher.connection_info_timeout = Float(60)#

Default timeout (in seconds) for get_connection_info

New in version 8.7.

controller_args c.PBSControllerLauncher.controller_args = List()#

command-line args to pass to ipcontroller

controller_cmd c.PBSControllerLauncher.controller_cmd = List()#

Popen command to launch ipcontroller.

delete_command c.PBSControllerLauncher.delete_command = List()#

The PBS delete command [‘qdel’]

environment c.PBSControllerLauncher.environment = Dict()#

Set environment variables for the launched process

New in version 8.0.

job_id_regexp c.PBSControllerLauncher.job_id_regexp = CRegExp('\\d+')#

Regular expression for identifying the job ID [r'\d+']

job_id_regexp_group c.PBSControllerLauncher.job_id_regexp_group = Int(0)#

The group we wish to match in job_id_regexp (0 to match all)

namespace c.PBSControllerLauncher.namespace = Dict()#

Extra variables to pass to the template.

This lets you parameterize additional options, such as wall_time with a custom template.

output_file c.PBSControllerLauncher.output_file = Unicode('')#

File in which to store stdout/err of processes

output_limit c.PBSControllerLauncher.output_limit = Int(100)#

When a process exits, display up to this many lines of output

queue c.PBSControllerLauncher.queue = Unicode('')#

The batch queue.

signal_command c.PBSControllerLauncher.signal_command = List()#

The PBS signal command [‘qsig’]

stop_timeout c.PBSControllerLauncher.stop_timeout = Int(60)#

The number of seconds to wait for a process to exit before raising a TimeoutError in stop

submit_command c.PBSControllerLauncher.submit_command = List()#

The PBS submit command [‘qsub’]

class ipyparallel.cluster.launcher.PBSEngineSetLauncher(**kwargs: Any)#

Launch Engines using PBS

batch_file_name c.PBSEngineSetLauncher.batch_file_name = Unicode('pbs_engines')#

batch file name for the engine(s) job.

batch_template c.PBSEngineSetLauncher.batch_template = Unicode('')#

The string that is the batch script template itself.

batch_template_file c.PBSEngineSetLauncher.batch_template_file = Unicode('')#

The file that contains the batch template.

delete_command c.PBSEngineSetLauncher.delete_command = List()#

The PBS delete command [‘qdel’]

engine_args c.PBSEngineSetLauncher.engine_args = List()#

command-line arguments to pass to ipengine

engine_cmd c.PBSEngineSetLauncher.engine_cmd = List()#

command to launch the Engine.

environment c.PBSEngineSetLauncher.environment = Dict()#

Set environment variables for the launched process

New in version 8.0.

job_id_regexp c.PBSEngineSetLauncher.job_id_regexp = CRegExp('\\d+')#

Regular expression for identifying the job ID [r'\d+']

job_id_regexp_group c.PBSEngineSetLauncher.job_id_regexp_group = Int(0)#

The group we wish to match in job_id_regexp (0 to match all)

namespace c.PBSEngineSetLauncher.namespace = Dict()#

Extra variables to pass to the template.

This lets you parameterize additional options, such as wall_time with a custom template.

output_file c.PBSEngineSetLauncher.output_file = Unicode('')#

File in which to store stdout/err of processes

output_limit c.PBSEngineSetLauncher.output_limit = Int(100)#

When a process exits, display up to this many lines of output

queue c.PBSEngineSetLauncher.queue = Unicode('')#

The batch queue.

signal_command c.PBSEngineSetLauncher.signal_command = List()#

The PBS signal command [‘qsig’]

stop_timeout c.PBSEngineSetLauncher.stop_timeout = Int(60)#

The number of seconds to wait for a process to exit before raising a TimeoutError in stop

submit_command c.PBSEngineSetLauncher.submit_command = List()#

The PBS submit command [‘qsub’]

class ipyparallel.cluster.launcher.PBSLauncher(**kwargs: Any)#

A BatchSystemLauncher subclass for PBS.

batch_file_name c.PBSLauncher.batch_file_name = Unicode('batch_script')#

The filename of the instantiated batch script.

batch_template c.PBSLauncher.batch_template = Unicode('')#

The string that is the batch script template itself.

batch_template_file c.PBSLauncher.batch_template_file = Unicode('')#

The file that contains the batch template.

delete_command c.PBSLauncher.delete_command = List()#

The PBS delete command [‘qdel’]

environment c.PBSLauncher.environment = Dict()#

Set environment variables for the launched process

New in version 8.0.

job_id_regexp c.PBSLauncher.job_id_regexp = CRegExp('\\d+')#

Regular expression for identifying the job ID [r'\d+']

job_id_regexp_group c.PBSLauncher.job_id_regexp_group = Int(0)#

The group we wish to match in job_id_regexp (0 to match all)

namespace c.PBSLauncher.namespace = Dict()#

Extra variables to pass to the template.

This lets you parameterize additional options, such as wall_time with a custom template.

output_file c.PBSLauncher.output_file = Unicode('')#

File in which to store stdout/err of processes

output_limit c.PBSLauncher.output_limit = Int(100)#

When a process exits, display up to this many lines of output

queue c.PBSLauncher.queue = Unicode('')#

The batch queue.

signal_command c.PBSLauncher.signal_command = List()#

The PBS signal command [‘qsig’]

stop_timeout c.PBSLauncher.stop_timeout = Int(60)#

The number of seconds to wait for a process to exit before raising a TimeoutError in stop

submit_command c.PBSLauncher.submit_command = List()#

The PBS submit command [‘qsub’]

exception ipyparallel.cluster.launcher.ProcessStateError#

class ipyparallel.cluster.launcher.SGEControllerLauncher(**kwargs: Any)#

Launch a controller using SGE.

batch_file_name c.SGEControllerLauncher.batch_file_name = Unicode('sge_controller')#

batch file name for the controller job.

batch_template c.SGEControllerLauncher.batch_template = Unicode('')#

The string that is the batch script template itself.

batch_template_file c.SGEControllerLauncher.batch_template_file = Unicode('')#

The file that contains the batch template.

connection_info_timeout c.SGEControllerLauncher.connection_info_timeout = Float(60)#

Default timeout (in seconds) for get_connection_info

New in version 8.7.

controller_args c.SGEControllerLauncher.controller_args = List()#

command-line args to pass to ipcontroller

controller_cmd c.SGEControllerLauncher.controller_cmd = List()#

Popen command to launch ipcontroller.

delete_command c.SGEControllerLauncher.delete_command = List()#

The PBS delete command [‘qdel’]

environment c.SGEControllerLauncher.environment = Dict()#

Set environment variables for the launched process

New in version 8.0.

job_id_regexp c.SGEControllerLauncher.job_id_regexp = CRegExp('\\d+')#

Regular expression for identifying the job ID [r’\d+’]

job_id_regexp_group c.SGEControllerLauncher.job_id_regexp_group = Int(0)#

The group we wish to match in job_id_regexp (0 to match all)

namespace c.SGEControllerLauncher.namespace = Dict()#

Extra variables to pass to the template.

This lets you parameterize additional options, such as wall_time with a custom template.

output_file c.SGEControllerLauncher.output_file = Unicode('')#

File in which to store stdout/err of processes

output_limit c.SGEControllerLauncher.output_limit = Int(100)#

When a process exits, display up to this many lines of output

queue c.SGEControllerLauncher.queue = Unicode('')#

The batch queue.

signal_command c.SGEControllerLauncher.signal_command = List()#

The PBS signal command [‘qsig’]

stop_timeout c.SGEControllerLauncher.stop_timeout = Int(60)#

The number of seconds to wait for a process to exit before raising a TimeoutError in stop

submit_command c.SGEControllerLauncher.submit_command = List()#

The PBS submit command [‘qsub’]

class ipyparallel.cluster.launcher.SGEEngineSetLauncher(**kwargs: Any)#

Launch Engines with SGE

batch_file_name c.SGEEngineSetLauncher.batch_file_name = Unicode('sge_engines')#

batch file name for the engine(s) job.

batch_template c.SGEEngineSetLauncher.batch_template = Unicode('')#

The string that is the batch script template itself.

batch_template_file c.SGEEngineSetLauncher.batch_template_file = Unicode('')#

The file that contains the batch template.

delete_command c.SGEEngineSetLauncher.delete_command = List()#

The PBS delete command [‘qdel’]

engine_args c.SGEEngineSetLauncher.engine_args = List()#

command-line arguments to pass to ipengine

engine_cmd c.SGEEngineSetLauncher.engine_cmd = List()#

command to launch the Engine.

environment c.SGEEngineSetLauncher.environment = Dict()#

Set environment variables for the launched process

New in version 8.0.

job_id_regexp c.SGEEngineSetLauncher.job_id_regexp = CRegExp('\\d+')#

Regular expression for identifying the job ID [r’\d+’]

job_id_regexp_group c.SGEEngineSetLauncher.job_id_regexp_group = Int(0)#

The group we wish to match in job_id_regexp (0 to match all)

namespace c.SGEEngineSetLauncher.namespace = Dict()#

Extra variables to pass to the template.

This lets you parameterize additional options, such as wall_time with a custom template.

output_file c.SGEEngineSetLauncher.output_file = Unicode('')#

File in which to store stdout/err of processes

output_limit c.SGEEngineSetLauncher.output_limit = Int(100)#

When a process exits, display up to this many lines of output

queue c.SGEEngineSetLauncher.queue = Unicode('')#

The batch queue.

signal_command c.SGEEngineSetLauncher.signal_command = List()#

The PBS signal command [‘qsig’]

stop_timeout c.SGEEngineSetLauncher.stop_timeout = Int(60)#

The number of seconds to wait for a process to exit before raising a TimeoutError in stop

submit_command c.SGEEngineSetLauncher.submit_command = List()#

The PBS submit command [‘qsub’]

class ipyparallel.cluster.launcher.SGELauncher(**kwargs: Any)#

Sun GridEngine is a PBS clone with slightly different syntax

batch_file_name c.SGELauncher.batch_file_name = Unicode('batch_script')#

The filename of the instantiated batch script.

batch_template c.SGELauncher.batch_template = Unicode('')#

The string that is the batch script template itself.

batch_template_file c.SGELauncher.batch_template_file = Unicode('')#

The file that contains the batch template.

delete_command c.SGELauncher.delete_command = List()#

The PBS delete command [‘qdel’]

environment c.SGELauncher.environment = Dict()#

Set environment variables for the launched process

New in version 8.0.

job_id_regexp c.SGELauncher.job_id_regexp = CRegExp('\\d+')#

Regular expression for identifying the job ID [r’\d+’]

job_id_regexp_group c.SGELauncher.job_id_regexp_group = Int(0)#

The group we wish to match in job_id_regexp (0 to match all)

namespace c.SGELauncher.namespace = Dict()#

Extra variables to pass to the template.

This lets you parameterize additional options, such as wall_time with a custom template.

output_file c.SGELauncher.output_file = Unicode('')#

File in which to store stdout/err of processes

output_limit c.SGELauncher.output_limit = Int(100)#

When a process exits, display up to this many lines of output

queue c.SGELauncher.queue = Unicode('')#

The batch queue.

signal_command c.SGELauncher.signal_command = List()#

The PBS signal command [‘qsig’]

stop_timeout c.SGELauncher.stop_timeout = Int(60)#

The number of seconds to wait for a process to exit before raising a TimeoutError in stop

submit_command c.SGELauncher.submit_command = List()#

The PBS submit command [‘qsub’]

class ipyparallel.cluster.launcher.SSHControllerLauncher(**kwargs: Any)#
connection_info_timeout c.SSHControllerLauncher.connection_info_timeout = Float(60)#

Default timeout (in seconds) for get_connection_info

New in version 8.7.

controller_args c.SSHControllerLauncher.controller_args = List()#

command-line args to pass to ipcontroller

controller_cmd c.SSHControllerLauncher.controller_cmd = List()#

Popen command to launch ipcontroller.

environment c.SSHControllerLauncher.environment = Dict()#

Set environment variables for the launched process

New in version 8.0.

hostname c.SSHControllerLauncher.hostname = Unicode('')#

hostname on which to launch the program

location c.SSHControllerLauncher.location = Unicode('')#

user@hostname location for ssh in one setting

output_limit c.SSHControllerLauncher.output_limit = Int(100)#

When a process exits, display up to this many lines of output

poll_seconds c.SSHControllerLauncher.poll_seconds = Int(30)#

Interval (in seconds) on which to poll processes.

Note: process exit should be noticed immediately, due to use of Process.wait(), but this interval should ensure we aren’t leaving threads running forever, as other signals/events are checked on this interval

property program#

Program to launch via ssh

property program_args#

args to pass to remote program

remote_profile_dir c.SSHControllerLauncher.remote_profile_dir = Unicode('')#

The remote profile_dir to use.

If not specified, use calling profile, stripping out possible leading homedir.

remote_python c.SSHControllerLauncher.remote_python = Unicode('python3')#

Remote path to Python interpreter, if needed

scp_args c.SSHControllerLauncher.scp_args = List()#

args to pass to scp

scp_cmd c.SSHControllerLauncher.scp_cmd = List()#

command for sending files

ssh_args c.SSHControllerLauncher.ssh_args = List()#

args to pass to ssh

ssh_cmd c.SSHControllerLauncher.ssh_cmd = List()#

command for starting ssh

stop_seconds_until_kill c.SSHControllerLauncher.stop_seconds_until_kill = Int(5)#

The number of seconds to wait for a process to exit after sending SIGTERM before sending SIGKILL

stop_timeout c.SSHControllerLauncher.stop_timeout = Int(60)#

The number of seconds to wait for a process to exit before raising a TimeoutError in stop

to_fetch c.SSHControllerLauncher.to_fetch = List()#

List of (remote, local) files to fetch after starting

to_send c.SSHControllerLauncher.to_send = List()#

List of (local, remote) files to send before starting
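Both traits take pairs of paths: (remote, local) for to_fetch and (local, remote) for to_send. A minimal sketch with hypothetical filenames:

c.SSHControllerLauncher.to_send = [
    ("./site_setup.py", "site_setup.py"),  # hypothetical local file copied over before launch
]
c.SSHControllerLauncher.to_fetch = [
    ("controller.log", "./controller.log"),  # hypothetical remote file copied back after start
]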

user c.SSHControllerLauncher.user = Unicode('')#

username for ssh

class ipyparallel.cluster.launcher.SSHEngineLauncher(**kwargs: Any)#
engine_args c.SSHEngineLauncher.engine_args = List()#

command-line arguments to pass to ipengine

engine_cmd c.SSHEngineLauncher.engine_cmd = List()#

command to launch the Engine.

environment c.SSHEngineLauncher.environment = Dict()#

Set environment variables for the launched process

New in version 8.0.

hostname c.SSHEngineLauncher.hostname = Unicode('')#

hostname on which to launch the program

location c.SSHEngineLauncher.location = Unicode('')#

user@hostname location for ssh in one setting

output_limit c.SSHEngineLauncher.output_limit = Int(100)#

When a process exits, display up to this many lines of output

poll_seconds c.SSHEngineLauncher.poll_seconds = Int(30)#

Interval (in seconds) on which to poll processes.

Note: process exit should be noticed immediately, due to use of Process.wait(), but this interval should ensure we aren’t leaving threads running forever, as other signals/events are checked on this interval

property program#

Program to launch via ssh

property program_args#

args to pass to remote program

remote_profile_dir c.SSHEngineLauncher.remote_profile_dir = Unicode('')#

The remote profile_dir to use.

If not specified, use calling profile, stripping out possible leading homedir.

remote_python c.SSHEngineLauncher.remote_python = Unicode('python3')#

Remote path to Python interpreter, if needed

scp_args c.SSHEngineLauncher.scp_args = List()#

args to pass to scp

scp_cmd c.SSHEngineLauncher.scp_cmd = List()#

command for sending files

ssh_args c.SSHEngineLauncher.ssh_args = List()#

args to pass to ssh

ssh_cmd c.SSHEngineLauncher.ssh_cmd = List()#

command for starting ssh

stop_seconds_until_kill c.SSHEngineLauncher.stop_seconds_until_kill = Int(5)#

The number of seconds to wait for a process to exit after sending SIGTERM before sending SIGKILL

stop_timeout c.SSHEngineLauncher.stop_timeout = Int(60)#

The number of seconds to wait for a process to exit before raising a TimeoutError in stop

to_fetch c.SSHEngineLauncher.to_fetch = List()#

List of (remote, local) files to fetch after starting

to_send c.SSHEngineLauncher.to_send = List()#

List of (local, remote) files to send before starting

user c.SSHEngineLauncher.user = Unicode('')#

username for ssh

class ipyparallel.cluster.launcher.SSHEngineSetLauncher(**kwargs: Any)#
delay c.SSHEngineSetLauncher.delay = Float(0.1)#

delay (in seconds) between starting each engine after the first. This can help force the engines to get their ids in order, or limit process flood when starting many engines.

engine_args c.SSHEngineSetLauncher.engine_args = List()#

command-line arguments to pass to ipengine

engine_cmd c.SSHEngineSetLauncher.engine_cmd = List()#

command to launch the Engine.

engines c.SSHEngineSetLauncher.engines = Dict()#

dict of engines to launch. This is a dict by hostname of ints, corresponding to the number of engines to start on that host.
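A minimal sketch (hostnames are placeholders):

c.SSHEngineSetLauncher.engines = {
    "node1.example.com": 4,  # start 4 engines on node1
    "node2.example.com": 2,  # and 2 on node2
}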

environment c.SSHEngineSetLauncher.environment = Dict()#

Set environment variables for the launched process

New in version 8.0.

hostname c.SSHEngineSetLauncher.hostname = Unicode('')#

hostname on which to launch the program

launcher_class#

alias of SSHEngineLauncher

location c.SSHEngineSetLauncher.location = Unicode('')#

user@hostname location for ssh in one setting

output_limit c.SSHEngineSetLauncher.output_limit = Int(100)#

When a process exits, display up to this many lines of output

poll_seconds c.SSHEngineSetLauncher.poll_seconds = Int(30)#

Interval (in seconds) on which to poll processes.

Note: process exit should be noticed immediately, due to use of Process.wait(), but this interval should ensure we aren’t leaving threads running forever, as other signals/events are checked on this interval

remote_profile_dir c.SSHEngineSetLauncher.remote_profile_dir = Unicode('')#

The remote profile_dir to use.

If not specified, use calling profile, stripping out possible leading homedir.

remote_python c.SSHEngineSetLauncher.remote_python = Unicode('python3')#

Remote path to Python interpreter, if needed

scp_args c.SSHEngineSetLauncher.scp_args = List()#

args to pass to scp

scp_cmd c.SSHEngineSetLauncher.scp_cmd = List()#

command for sending files

ssh_args c.SSHEngineSetLauncher.ssh_args = List()#

args to pass to ssh

ssh_cmd c.SSHEngineSetLauncher.ssh_cmd = List()#

command for starting ssh

start(n)#

Start engines by profile or profile_dir. n is an upper limit of engines. The engines config property is used to assign slots to hosts.

stop_seconds_until_kill c.SSHEngineSetLauncher.stop_seconds_until_kill = Int(5)#

The number of seconds to wait for a process to exit after sending SIGTERM before sending SIGKILL

stop_timeout c.SSHEngineSetLauncher.stop_timeout = Int(60)#

The number of seconds to wait for a process to exit before raising a TimeoutError in stop

to_fetch c.SSHEngineSetLauncher.to_fetch = List()#

List of (remote, local) files to fetch after starting

to_send c.SSHEngineSetLauncher.to_send = List()#

List of (local, remote) files to send before starting

user c.SSHEngineSetLauncher.user = Unicode('')#

username for ssh

class ipyparallel.cluster.launcher.SSHLauncher(**kwargs: Any)#

A minimal launcher for ssh.

To be useful this will probably have to be extended to use the sshx idea for environment variables. There could be other things this needs as well.

property cluster_env#

Cluster-related env variables

environment c.SSHLauncher.environment = Dict()#

Set environment variables for the launched process

New in version 8.0.

fetch_files()#

fetch remote files (called after start)

find_args()#

The .args property calls this to find the args list.

Subclasses should implement this to construct the cmd and args.

get_output(remove=False)#

Retrieve engine output from the remote file

hostname c.SSHLauncher.hostname = Unicode('')#

hostname on which to launch the program

async join(timeout=None)#

Wait for the process to exit

location c.SSHLauncher.location = Unicode('')#

user@hostname location for ssh in one setting

output_limit c.SSHLauncher.output_limit = Int(100)#

When a process exits, display up to this many lines of output

poll()#

Override poll

poll_seconds c.SSHLauncher.poll_seconds = Int(30)#

Interval (in seconds) on which to poll processes.

Note: process exit should be noticed immediately, due to use of Process.wait(), but this interval should ensure we aren’t leaving threads running forever, as other signals/events are checked on this interval

program List()#

Program to launch via ssh

program_args List()#

args to pass to remote program

property remote_connection_files#

Return remote paths for connection files

remote_output_file Unicode('')#

The remote file to store output

remote_profile_dir c.SSHLauncher.remote_profile_dir = Unicode('')#

The remote profile_dir to use.

If not specified, use calling profile, stripping out possible leading homedir.

remote_python c.SSHLauncher.remote_python = Unicode('python3')#

Remote path to Python interpreter, if needed

scp_args c.SSHLauncher.scp_args = List()#

args to pass to scp

scp_cmd c.SSHLauncher.scp_cmd = List()#

command for sending files

send_files()#

send our files (called before start)

signal(sig)#

Signal the process.

Parameters:

sig (str or int) – ‘KILL’, ‘INT’, etc., or any signal number
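A hedged sketch, assuming launcher is an already-started SSHLauncher (or subclass) instance:

import signal

launcher.signal("INT")           # by name
launcher.signal(signal.SIGTERM)  # or by signal number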

ssh_args c.SSHLauncher.ssh_args = List()#

args to pass to ssh

ssh_cmd c.SSHLauncher.ssh_cmd = List()#

command for starting ssh

start(hostname=None, user=None, port=None)#

Start the process.

Should be an async def coroutine.

When start completes, the process should be requested (it need not be running yet), and waiting should begin in the background such that notify_stop() will be called when the process finishes.

stop_seconds_until_kill c.SSHLauncher.stop_seconds_until_kill = Int(5)#

The number of seconds to wait for a process to exit after sending SIGTERM before sending SIGKILL

stop_timeout c.SSHLauncher.stop_timeout = Int(60)#

The number of seconds to wait for a process to exit before raising a TimeoutError in stop

to_fetch c.SSHLauncher.to_fetch = List()#

List of (remote, local) files to fetch after starting

to_send c.SSHLauncher.to_send = List()#

List of (local, remote) files to send before starting

user c.SSHLauncher.user = Unicode('')#

username for ssh

class ipyparallel.cluster.launcher.SSHProxyEngineSetLauncher(**kwargs: Any)#

Launcher for calling ipcluster engines on a remote machine.

Requires that remote profile is already configured.
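If this launcher is registered under the short name sshproxy (check ipcluster start --help-all for the names available in your install), selecting it might look like:

import ipyparallel as ipp

# hedged sketch: "sshproxy" is assumed to be the entry-point name for this launcher
cluster = ipp.Cluster(engines="sshproxy")
# the remote host would typically be configured in ipcluster_config.py, e.g.
# c.SSHProxyEngineSetLauncher.hostname = "login.example.com"  # hypothetical host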

engine_args c.SSHProxyEngineSetLauncher.engine_args = List()#

command-line arguments to pass to ipengine

engine_cmd c.SSHProxyEngineSetLauncher.engine_cmd = List()#

command to launch the Engine.

environment c.SSHProxyEngineSetLauncher.environment = Dict()#

Set environment variables for the launched process

New in version 8.0.

hostname c.SSHProxyEngineSetLauncher.hostname = Unicode('')#

hostname on which to launch the program

ipcluster_args c.SSHProxyEngineSetLauncher.ipcluster_args = List()#

Extra CLI arguments to pass to ipcluster engines

ipcluster_cmd c.SSHProxyEngineSetLauncher.ipcluster_cmd = List()#

No help string is provided.

location c.SSHProxyEngineSetLauncher.location = Unicode('')#

user@hostname location for ssh in one setting

output_limit c.SSHProxyEngineSetLauncher.output_limit = Int(100)#

When a process exits, display up to this many lines of output

poll_seconds c.SSHProxyEngineSetLauncher.poll_seconds = Int(30)#

Interval (in seconds) on which to poll processes.

Note: process exit should be noticed immediately, due to use of Process.wait(), but this interval should ensure we aren’t leaving threads running forever, as other signals/events are checked on this interval

property program#

Program to launch via ssh

property program_args#

args to pass to remote program

remote_profile_dir c.SSHProxyEngineSetLauncher.remote_profile_dir = Unicode('')#

The remote profile_dir to use.

If not specified, use calling profile, stripping out possible leading homedir.

remote_python c.SSHProxyEngineSetLauncher.remote_python = Unicode('python3')#

Remote path to Python interpreter, if needed

scp_args c.SSHProxyEngineSetLauncher.scp_args = List()#

args to pass to scp

scp_cmd c.SSHProxyEngineSetLauncher.scp_cmd = List()#

command for sending files

ssh_args c.SSHProxyEngineSetLauncher.ssh_args = List()#

args to pass to ssh

ssh_cmd c.SSHProxyEngineSetLauncher.ssh_cmd = List()#

command for starting ssh

start(n)#

Start the process.

Should be an async def coroutine.

When start completes, the process should be requested (it need not be running yet), and waiting should begin in the background such that notify_stop() will be called when the process finishes.

stop_seconds_until_kill c.SSHProxyEngineSetLauncher.stop_seconds_until_kill = Int(5)#

The number of seconds to wait for a process to exit after sending SIGTERM before sending SIGKILL

stop_timeout c.SSHProxyEngineSetLauncher.stop_timeout = Int(60)#

The number of seconds to wait for a process to exit before raising a TimeoutError in stop

to_fetch c.SSHProxyEngineSetLauncher.to_fetch = List()#

List of (remote, local) files to fetch after starting

to_send c.SSHProxyEngineSetLauncher.to_send = List()#

List of (local, remote) files to send before starting

user c.SSHProxyEngineSetLauncher.user = Unicode('')#

username for ssh

class ipyparallel.cluster.launcher.SlurmControllerLauncher(**kwargs: Any)#

Launch a controller using Slurm.

account c.SlurmControllerLauncher.account = Unicode('')#

Slurm account to be used

batch_file_name c.SlurmControllerLauncher.batch_file_name = Unicode('slurm_controller.sbatch')#

batch file name for the controller job.

batch_template c.SlurmControllerLauncher.batch_template = Unicode('')#

The string that is the batch script template itself.

batch_template_file c.SlurmControllerLauncher.batch_template_file = Unicode('')#

The file that contains the batch template.

connection_info_timeout c.SlurmControllerLauncher.connection_info_timeout = Float(60)#

Default timeout (in seconds) for get_connection_info

New in version 8.7.

controller_args c.SlurmControllerLauncher.controller_args = List()#

command-line args to pass to ipcontroller

controller_cmd c.SlurmControllerLauncher.controller_cmd = List()#

Popen command to launch ipcontroller.

delete_command c.SlurmControllerLauncher.delete_command = List()#

The slurm delete command [‘scancel’]

environment c.SlurmControllerLauncher.environment = Dict()#

Set environment variables for the launched process

New in version 8.0.

job_id_regexp c.SlurmControllerLauncher.job_id_regexp = CRegExp('\\d+')#

Regular expression for identifying the job ID [r’\d+’]

job_id_regexp_group c.SlurmControllerLauncher.job_id_regexp_group = Int(0)#

The group we wish to match in job_id_regexp (0 to match all)

namespace c.SlurmControllerLauncher.namespace = Dict()#

Extra variables to pass to the template.

This lets you parameterize additional options, such as wall_time with a custom template.

options c.SlurmControllerLauncher.options = Unicode('')#

Extra Slurm options

output_file c.SlurmControllerLauncher.output_file = Unicode('')#

File in which to store stdout/err of processes

output_limit c.SlurmControllerLauncher.output_limit = Int(100)#

When a process exits, display up to this many lines of output

qos c.SlurmControllerLauncher.qos = Unicode('')#

Slurm QoS to be used

queue c.SlurmControllerLauncher.queue = Unicode('')#

The batch queue.

signal_command c.SlurmControllerLauncher.signal_command = List()#

The slurm signal command [‘scancel’, ‘-s’]

stop_timeout c.SlurmControllerLauncher.stop_timeout = Int(60)#

The number of seconds to wait for a process to exit before raising a TimeoutError in stop

submit_command c.SlurmControllerLauncher.submit_command = List()#

The slurm submit command [‘sbatch’]

timelimit c.SlurmControllerLauncher.timelimit = Any('')#

Slurm timelimit to be used

class ipyparallel.cluster.launcher.SlurmEngineSetLauncher(**kwargs: Any)#

Launch Engines using Slurm

account c.SlurmEngineSetLauncher.account = Unicode('')#

Slurm account to be used

batch_file_name c.SlurmEngineSetLauncher.batch_file_name = Unicode('slurm_engine.sbatch')#

batch file name for the engine(s) job.

batch_template c.SlurmEngineSetLauncher.batch_template = Unicode('')#

The string that is the batch script template itself.

batch_template_file c.SlurmEngineSetLauncher.batch_template_file = Unicode('')#

The file that contains the batch template.

delete_command c.SlurmEngineSetLauncher.delete_command = List()#

The slurm delete command [‘scancel’]

engine_args c.SlurmEngineSetLauncher.engine_args = List()#

command-line arguments to pass to ipengine

engine_cmd c.SlurmEngineSetLauncher.engine_cmd = List()#

command to launch the Engine.

environment c.SlurmEngineSetLauncher.environment = Dict()#

Set environment variables for the launched process

New in version 8.0.

job_id_regexp c.SlurmEngineSetLauncher.job_id_regexp = CRegExp('\\d+')#

Regular expression for identifying the job ID [r’\d+’]

job_id_regexp_group c.SlurmEngineSetLauncher.job_id_regexp_group = Int(0)#

The group we wish to match in job_id_regexp (0 to match all)

namespace c.SlurmEngineSetLauncher.namespace = Dict()#

Extra variables to pass to the template.

This lets you parameterize additional options, such as wall_time with a custom template.

options c.SlurmEngineSetLauncher.options = Unicode('')#

Extra Slurm options

output_file c.SlurmEngineSetLauncher.output_file = Unicode('')#

File in which to store stdout/err of processes

output_limit c.SlurmEngineSetLauncher.output_limit = Int(100)#

When a process exits, display up to this many lines of output

qos c.SlurmEngineSetLauncher.qos = Unicode('')#

Slurm QoS to be used

queue c.SlurmEngineSetLauncher.queue = Unicode('')#

The batch queue.

signal_command c.SlurmEngineSetLauncher.signal_command = List()#

The slurm signal command [‘scancel’, ‘-s’]

stop_timeout c.SlurmEngineSetLauncher.stop_timeout = Int(60)#

The number of seconds to wait for a process to exit before raising a TimeoutError in stop

submit_command c.SlurmEngineSetLauncher.submit_command = List()#

The slurm submit command [‘sbatch’]

timelimit c.SlurmEngineSetLauncher.timelimit = Any('')#

Slurm timelimit to be used
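Putting a few of these together in ipcluster_config.py (all values are illustrative, not recommendations):

c.SlurmEngineSetLauncher.account = "my-account"    # hypothetical Slurm account
c.SlurmEngineSetLauncher.queue = "batch"           # partition to submit to
c.SlurmEngineSetLauncher.qos = "normal"
c.SlurmEngineSetLauncher.timelimit = "01:00:00"
c.SlurmEngineSetLauncher.options = "--exclusive"   # extra sbatch option(s)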

class ipyparallel.cluster.launcher.SlurmLauncher(**kwargs: Any)#

A BatchSystemLauncher subclass for slurm.

account c.SlurmLauncher.account = Unicode('')#

Slurm account to be used

batch_file_name c.SlurmLauncher.batch_file_name = Unicode('batch_script')#

The filename of the instantiated batch script.

batch_template c.SlurmLauncher.batch_template = Unicode('')#

The string that is the batch script template itself.

batch_template_file c.SlurmLauncher.batch_template_file = Unicode('')#

The file that contains the batch template.

delete_command c.SlurmLauncher.delete_command = List()#

The slurm delete command [‘scancel’]

environment c.SlurmLauncher.environment = Dict()#

Set environment variables for the launched process

New in version 8.0.

job_id_regexp c.SlurmLauncher.job_id_regexp = CRegExp('\\d+')#

Regular expression for identifying the job ID [r’\d+’]

job_id_regexp_group c.SlurmLauncher.job_id_regexp_group = Int(0)#

The group we wish to match in job_id_regexp (0 to match all)

namespace c.SlurmLauncher.namespace = Dict()#

Extra variables to pass to the template.

This lets you parameterize additional options, such as wall_time with a custom template.

options c.SlurmLauncher.options = Unicode('')#

Extra Slurm options

output_file c.SlurmLauncher.output_file = Unicode('')#

File in which to store stdout/err of processes

output_limit c.SlurmLauncher.output_limit = Int(100)#

When a process exits, display up to this many lines of output

qos c.SlurmLauncher.qos = Unicode('')#

Slurm QoS to be used

queue c.SlurmLauncher.queue = Unicode('')#

The batch queue.

signal_command c.SlurmLauncher.signal_command = List()#

The slurm signal command [‘scancel’, ‘-s’]

stop_timeout c.SlurmLauncher.stop_timeout = Int(60)#

The number of seconds to wait for a process to exit before raising a TimeoutError in stop

submit_command c.SlurmLauncher.submit_command = List()#

The slurm submit command [‘sbatch’]

timelimit c.SlurmLauncher.timelimit = Any('')#

Slurm timelimit to be used

exception ipyparallel.cluster.launcher.UnknownStatus#
class ipyparallel.cluster.launcher.WindowsHPCControllerLauncher(**kwargs: Any)#
controller_args List()#

extra args to pass to ipcontroller

environment c.WindowsHPCControllerLauncher.environment = Dict()#

Set environment variables for the launched process

New in version 8.0.

job_cmd c.WindowsHPCControllerLauncher.job_cmd = Unicode('')#

The command for submitting jobs.

job_file_name c.WindowsHPCControllerLauncher.job_file_name = Unicode('ipcontroller_job.xml')#

WinHPC xml job file.

job_id_regexp c.WindowsHPCControllerLauncher.job_id_regexp = CRegExp('\\d+')#

A regular expression used to get the job id from the output of the submit_command.

output_limit c.WindowsHPCControllerLauncher.output_limit = Int(100)#

When a process exits, display up to this many lines of output

scheduler c.WindowsHPCControllerLauncher.scheduler = Unicode('')#

The hostname of the scheduler to submit the job to.

stop_timeout c.WindowsHPCControllerLauncher.stop_timeout = Int(60)#

The number of seconds to wait for a process to exit before raising a TimeoutError in stop

class ipyparallel.cluster.launcher.WindowsHPCEngineSetLauncher(**kwargs: Any)#
engine_args List()#

extra args to pass to ipengine

environment c.WindowsHPCEngineSetLauncher.environment = Dict()#

Set environment variables for the launched process

New in version 8.0.

job_cmd c.WindowsHPCEngineSetLauncher.job_cmd = Unicode('')#

The command for submitting jobs.

job_file_name c.WindowsHPCEngineSetLauncher.job_file_name = Unicode('ipengineset_job.xml')#

jobfile for ipengines job

job_id_regexp c.WindowsHPCEngineSetLauncher.job_id_regexp = CRegExp('\\d+')#

A regular expression used to get the job id from the output of the submit_command.

output_limit c.WindowsHPCEngineSetLauncher.output_limit = Int(100)#

When a process exits, display up to this many lines of output

scheduler c.WindowsHPCEngineSetLauncher.scheduler = Unicode('')#

The hostname of the scheduler to submit the job to.

start(n)#

Start n engines by profile_dir.

stop_timeout c.WindowsHPCEngineSetLauncher.stop_timeout = Int(60)#

The number of seconds to wait for a process to exit before raising a TimeoutError in stop

class ipyparallel.cluster.launcher.WindowsHPCLauncher(**kwargs: Any)#
environment c.WindowsHPCLauncher.environment = Dict()#

Set environment variables for the launched process

New in version 8.0.

find_args()#

The .args property calls this to find the args list.

Subclasses should implement this to construct the cmd and args.

job_cmd c.WindowsHPCLauncher.job_cmd = Unicode('')#

The command for submitting jobs.

job_file_name c.WindowsHPCLauncher.job_file_name = Unicode('ipython_job.xml')#

The filename of the instantiated job script.

job_id_regexp c.WindowsHPCLauncher.job_id_regexp = CRegExp('\\d+')#

A regular expression used to get the job id from the output of the submit_command.

output_limit c.WindowsHPCLauncher.output_limit = Int(100)#

When a process exits, display up to this many lines of output

parse_job_id(output)#

Take the output of the submit command and return the job id.

scheduler c.WindowsHPCLauncher.scheduler = Unicode('')#

The hostname of the scheduler to submit the job to.

start(n)#

Start n copies of the process using the Win HPC job scheduler.

stop()#

Stop the process and notify observers of stopping.

This method should be an async def coroutine, and return only after the process has stopped.

All resources should be cleaned up by the time this returns.

stop_timeout c.WindowsHPCLauncher.stop_timeout = Int(60)#

The number of seconds to wait for a process to exit before raising a TimeoutError in stop

ipyparallel.cluster.launcher.abbreviate_launcher_class(cls)#

Abbreviate a launcher class back to its entrypoint name

ipyparallel.cluster.launcher.find_launcher_class(name, kind)#

Return a launcher class for a given name and kind.

Parameters:
  • name (str) – The full name of the launcher class, either with or without the module path, or an abbreviation (MPI, SSH, SGE, PBS, LSF, HTCondor, Slurm, WindowsHPC).

  • kind (str) – Either ‘EngineSet’ or ‘Controller’.
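A minimal usage sketch, using one of the abbreviations listed above:

from ipyparallel.cluster.launcher import find_launcher_class

cls = find_launcher_class("SSH", kind="EngineSet")
# cls should now be SSHEngineSetLauncher (or whichever class the entry point resolves to)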

ipyparallel.cluster.launcher.ssh_waitpid(pid, timeout=None)#

To be called on a remote host, waiting on a pid

ipyparallel.cluster.launcher.sshx(ssh_cmd, cmd, env, remote_output_file, log=None)#

Launch a remote process, returning its remote pid

Uses nohup and pipes to put it in the background