Changes in IPython Parallel
- Workaround a setuptools issue preventing installation from sdist on Windows
- Drop support for Python 3.3. IPython Parallel now requires Python 2.7 or >= 3.4.
- Further fixes for compatibility with tornado 5 when run with asyncio (Python 3)
- Fix for enabling clusters tab via nbextension
- Multiple fixes for handling when engines stop unexpectedly
- Installing IPython Parallel enables the Clusters tab extension by default, without any additional commands.
- Fix regression in 6.1.0 preventing BatchSpawners (PBS, etc.) from launching with ipcluster.
Compatibility fixes with related packages:
- Fix compatibility with pyzmq 17 and tornado 5.
- Fix compatibility with IPython ≥ 6.
- Improve compatibility with dask.distributed ≥ 1.18.
- Add `namespace` to BatchSpawners for easier extensibility.
- Support serializing partial functions.
- Support hostnames for machine location, not just IP addresses.
- Add `--location` argument to `ipcluster` for setting the controller location. It can be a hostname or IP.
- Engine rank matches MPI rank if engines are started with
- Avoid duplicate pickling of the same object in maps, etc.
Documentation has been improved significantly.
Upload fixed sdist for 6.0.1.
Small encoding fix for Python 2.
Due to a compatibility change and semver, this is a major release. However, it is not a big release. The main compatibility change is that all timestamps are now timezone-aware UTC timestamps. This means you may see comparison errors if you have code that uses datetime objects without timezone info (so-called naïve datetime objects).
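The comparison error in question looks like the following; this is a minimal stdlib `datetime` sketch, with illustrative timestamps rather than ipyparallel's own:

```python
from datetime import datetime, timezone

aware = datetime(2017, 1, 1, tzinfo=timezone.utc)  # timezone-aware, as ipyparallel now produces
naive = datetime(2017, 1, 1)                       # no tzinfo

try:
    naive < aware
except TypeError as e:
    # e.g. "can't compare offset-naive and offset-aware datetimes"
    print(e)

# Fix: attach UTC to naive datetimes before comparing.
assert naive.replace(tzinfo=timezone.utc) == aware
```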
`become_distributed()` remains as an alias.
- Import joblib from a public API instead of a private one when using IPython Parallel as a joblib backend.
- Compatibility fix in extensions for security changes in notebook 4.3
- Fix compatibility with changes in ipykernel 4.3, 4.4
- Improve inspection of
- `Client.wait()` accepts any Future.
- Add `--user` flag to `ipcluster nbextension`.
- Default to one core per worker in `Client.become_distributed()`. Override by specifying the `ncores` keyword argument.
- Subprocess logs are no longer sent to files by default in ipcluster.
To turn an IPython cluster into a dask.distributed cluster, call:

```python
executor = client.become_distributed(ncores=1)
```

which returns a distributed executor.
To register IPython Parallel as the backend for joblib:
```python
import ipyparallel as ipp

ipp.register_joblib_backend()
```
IPython Parallel now supports the notebook 4.2 API for enabling server extensions, to provide the IPython Clusters tab:
```shell
jupyter serverextension enable --py ipyparallel
jupyter nbextension install --py ipyparallel
jupyter nbextension enable --py ipyparallel
```
though you can still use the more convenient single call:
```shell
ipcluster nbextension enable
```
which does all three steps above.
- Fix imports in
- Various typos and documentation updates to catch up with 5.0.
The highlight of ipyparallel 5.0 is that the Client has been reorganized a bit to use Futures. AsyncResults are now a Future subclass, so they can be yielded in coroutines, etc. Views have also received an Executor interface. This rewrite better connects results to their handles, so the `Client.results` cache should no longer grow unbounded.
Part of the Future refactor is that Client IO is now handled in a background thread, which means that `Client.spin_thread()` is obsolete and deprecated.
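Because AsyncResults now implement the standard Future interface, the usual `concurrent.futures` idioms apply to them. As a self-contained sketch, here is the same pattern with a stdlib thread pool standing in for an IPython view (on a real cluster, `view.apply_async(...)` returns AsyncResults with this API):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def square(x):
    return x * x

with ThreadPoolExecutor(max_workers=2) as pool:
    # Submit several tasks and gather results as each one completes.
    futures = [pool.submit(square, n) for n in range(4)]
    results = sorted(f.result() for f in as_completed(futures))

print(results)  # -> [0, 1, 4, 9]
```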
- Add `ipcluster nbextension enable|disable` to toggle the Clusters tab in the Jupyter Notebook.
Less interesting development changes for users:
Some IPython-parallel extensions to the IPython kernel have been moved to the ipyparallel package:
- ipykernel Python serialization is now in
- apply_request message handling is implemented in a Kernel subclass, rather than in the base ipykernel Kernel.
- Improvements for specifying engines with SSH launcher.