0.7.0 / 2019-10-09

  • Base Dask-Jobqueue on top of the core dask.distributed.SpecCluster class (GH#307)

    This is a nearly complete reimplementation of the dask-jobqueue logic on top of more centralized logic. It improves standardization and adds new features, but includes the following breaking changes:

    • The cluster.stop_all_jobs() method has been removed. Please use cluster.scale(0) instead.

    • The running_jobs, pending_jobs, and cancelled_jobs attributes have been removed. Their functionality has moved upstream to the dask.distributed.SpecCluster class as the workers and worker_spec attributes, along with .plan, .requested, and .observed.

    • The name attribute has been moved to job_name.

  • Update .scale() and .adapt() docstrings (GH#346)

  • Update interactive docs (GH#340)

  • Improve error message when cores or memory is not specified (GH#331)

  • Fix Python 3.5.0 support (GH#317)

0.6.3 / 2019-08-18

  • Compatibility with Dask 2.3.0: add scheduler_info from local_cluster (GH#313)

  • Remove lingering Python 2 specific code (GH#308)

  • Remove __future__ imports since we depend on Python 3 (GH#311)

  • Remove Python 3 check for black in CI (GH#315)

0.6.2 / 2019-07-31

  • Ensure compatibility with Dask 2.2 (GH#303)

  • Update documentation

0.6.1 / 2019-07-25

  • More fixes related to distributed >= 2 changes (GH#278, GH#291)

  • distributed >= 2.1 is now required (GH#295)

  • Remove the deprecated threads parameter from all the Cluster classes (GH#297)

  • Documentation improvements (GH#290, GH#294, GH#296)

0.6.0 / 2019-07-06

  • Drop Python 2 support (GH#284)

  • Fix adaptive compatibility with SpecificationCluster in Distributed 2.0 (GH#282)

0.5.0 / 2019-06-20

0.4.1 / 2018-10-18

  • Handle worker restart with clearer message (GH#138)

  • Better error handling on job submission failure (GH#146)

  • Fixed Python 2.7 error when starting workers (GH#155)

  • Better handling of extra scheduler options (GH#160)

  • Correct testing of Python 2.7 compatibility (GH#154)

  • Add the ability to override the Python interpreter used to start workers (GH#167)

  • Internal improvements and edge-case handling (GH#97)

  • Add the ability to specify a folder in which to store each job's log file (GH#145)

  • Require all cores on the same node for LSF (GH#177)

0.4.0 / 2018-09-06

  • Use the number of worker processes, rather than the number of jobs, as the argument to scale().

  • Bind the scheduler bokeh UI to all network interfaces by default.

  • Adds an OAR job queue system implementation.

  • Adds an LSF job queue system implementation.

  • Adds convenience methods to JobQueueCluster objects: __repr__, stop_jobs(), close().