Changelog
Development version
0.9.0 / 2024-08-22
Add SLURMRunner from jacobtomlinson/dask-hpc-runners (GH#659)
Bump minimum Python to 3.10 (GH#662)
Fixed LSFCluster stdin job setup not being run in a shell (GH#661)
Remove unused lsf_version function for Python 3.12 and fix never awaited error for LSFCluster (GH#646)
Handle outdated root certificates (GH#651)
Migrate Slurm tests to use rockylinux (GH#650)
Migrate PBS tests to rockylinux 8 and openpbs 23.06 (GH#649)
Fix bug in OARJob where Job._call is not awaited (GH#642)
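For the SLURMRunner addition above (GH#659), a minimal hedged sketch of the runner pattern in Python; it assumes the script is launched across several SLURM tasks (e.g. srun -n 12 python script.py), that one task becomes the scheduler and the rest become workers, and that the runner exposes an n_workers attribute as shown.

    from dask.distributed import Client
    from dask_jobqueue.slurm import SLURMRunner

    # Assumption: each SLURM task becomes the scheduler, a worker, or this client process.
    with SLURMRunner() as runner:
        with Client(runner) as client:
            client.wait_for_workers(runner.n_workers)  # assumed attribute on the runner
            print(client.submit(sum, range(100)).result())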
0.8.5 / 2024-02-22
Update versioneer for 3.12 compatibility (GH#618)
Make CLI worker parameter flexible (GH#606)
Asynchronous job submission and removal (GH#610)
Python executable from config file (GH#623)
Thanks to @hmacdope, @jrueb, @Andrew-S-Rosen, @fnattino, @eckhrd, @cbouss and @jacobtomlinson.
0.8.2 / 2023-06-15
0.8.1 / 2022-10-04
Fix the multiple --interface CLI argument bug (GH#591)
Change project to account where appropriate (PBS, Slurm) (GH#586)
Do not skip job_extra_directives with header_skip values and rename header_skip to job_directives_skip (GH#584)
Various CI updates, HTCondor Docker build (GH#588, GH#587, GH#583, GH#582, GH#581, GH#580)
Thanks to @jolange and @guillaumeeb.
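To make the 0.8.1 renames above concrete, a small hedged sketch using SLURMCluster; the account name and skipped directive are placeholders, not values from this changelog.

    from dask_jobqueue import SLURMCluster

    # account replaces the former project kwarg; job_directives_skip replaces header_skip.
    cluster = SLURMCluster(
        cores=8,
        memory="16GB",
        account="my-allocation",        # placeholder account/allocation name
        job_directives_skip=["--mem"],  # skip generated #SBATCH lines containing this string
    )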
0.8.0 / 2022-08-29
Use --nworkers instead of deprecated --nprocs in the generated job scripts (GH#560)
Drop support for Python 3.7 (GH#562)
Rename env_extra kwarg to job_script_prologue (GH#575)
Rename extra kwarg to worker_extra_args (GH#576)
Rename job_extra kwarg to job_extra_directives (GH#577)
Fix behaviour of env_extra for HTCondor and other related fixes (GH#563, GH#570, GH#572)
Add batch_name to match the name of the Dask worker in HTCondor (GH#571)
Thanks to @jolange, @ikabadzhov and @guillaumeeb.
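A hedged sketch of the renamed keyword arguments from 0.8.0 in use, again with SLURMCluster and placeholder values.

    from dask_jobqueue import SLURMCluster

    cluster = SLURMCluster(
        cores=4,
        memory="8GB",
        job_script_prologue=["module load python"],  # formerly env_extra
        worker_extra_args=["--lifetime", "55m"],      # formerly extra
        job_extra_directives=["--exclusive"],         # formerly job_extra
    )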
0.7.4 / 2022-07-13
Testing fixes (GH#538, GH#537, GH#533, GH#532, GH#531, GH#523, GH#511)
Drop support for Python 3.6 (GH#279)
Fix docstring as HTCondor needs no shared filesystem (GH#536)
Fix some utils deprecations (GH#529)
Add the possibility to use TLS and auto-generate certificates (GH#519, GH#524, GH#526, GH#527)
Adding extra argument to condor_submit (GH#411)
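For the TLS change above, a sketch under the assumption that passing security=True asks dask-jobqueue to generate temporary certificates automatically; a distributed Security object can be passed instead for full control. The cluster class and resources are placeholders.

    from dask_jobqueue import PBSCluster

    # Assumption: security=True triggers automatic generation of temporary TLS
    # certificates for scheduler/worker/client communication.
    cluster = PBSCluster(cores=4, memory="8GB", security=True)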
0.7.3 / 2021-07-22
0.7.2 / 2020-12-07
Use Status enum (GH#476)
Bump GHA setup-miniconda version (GH#474)
Build docker images for scheduled runs (GH#468)
Blacken the code base after a likely black version change
Add GH action to periodically build docker images (GH#455)
Fix link format inside an italicised text (GH#460)
MoabCluster: fix bug where MoabCluster was using the jobqueue.pbs config section rather than the jobqueue.moab section (GH#450)
Updating start_workers to scale in examples (GH#453)
Fixing typo in core.py (GH#454)
Improve doc about GiB vs GB
Fix math typo in GiB definition (GH#445)
Improve doc about customising dashboard link
Remove Travis mentions following Github Actions switch (GH#444)
Improve error message.
Tweak name in cluster.job_script() (GH#439)
Switch from Travis to GitHub Actions (GH#435)
All cluster classes: fix a bug that allowed any named parameter to be passed without an error (GH#398)
Use pre-built docker images to speed up CI (GH#432)
Rename common work-arounds section.
Kick-off doc section about common work-arounds (GH#430)
Clean up parametrized tests (GH#429)
All cluster classes: scheduler_options parameter can be set through the config file in the scheduler-options section (GH#405)
Add minimal HTCondor CI support (GH#420)
Add content about the python executable used by workers in SLURM (GH#409)
Remove config_name from cluster classes (GH#426)
Fix mysql version to get Slurm CI green (GH#423)
Fix URL for miniconda download (GH#412)
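A hedged sketch of setting scheduler_options through the configuration, as allowed by GH#405 above; it assumes the jobqueue.slurm config section and that the scheduler-options key is read exactly like the scheduler_options constructor parameter.

    import dask
    from dask_jobqueue import SLURMCluster

    # Assumption: values under the scheduler-options section are forwarded to the
    # Dask scheduler just as scheduler_options passed to the constructor would be.
    dask.config.set({"jobqueue.slurm.scheduler-options": {"dashboard_address": ":8787"}})

    cluster = SLURMCluster(cores=4, memory="8GB")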
0.7.1 / 2020-03-26
all cluster classes: add scheduler_options parameter, which allows passing parameters to the Dask scheduler. For example scheduler_options={'interface': 'eth0', 'dashboard_addresses': ':12435'} (GH#384). Breaking change: using port or dashboard_addresses arguments raises an error. They have to be passed through scheduler_options.
all cluster classes: processes parameter default has changed. By default, processes ~= sqrt(cores) so that the number of processes and the number of threads per process are roughly the same. The old default was to use one process and only threads, i.e. processes=1, threads_per_process=cores. (GH#375)
all cluster classes: interface was ignored when set in a config file. (GH#366)
LSFCluster: switch to use_stdin=True by default (GH#388).
LSFCluster: add use_stdin to LSFCluster. This switches between bsub < job_script and bsub job_script to launch an LSF job (GH#360).
HTCondorCluster: support older HTCondor versions without the -file argument (GH#351).
OARCluster: fix bug (forgotten async def) in OARCluster._submit_job (GH#380).
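A hedged example of the 0.7.1 scheduler_options and use_stdin changes, using LSFCluster with placeholder resources; the interface name and port are illustrative, and dashboard_address is the standard Dask scheduler key for the dashboard port.

    from dask_jobqueue import LSFCluster

    cluster = LSFCluster(
        cores=8,
        memory="16GB",
        # Scheduler settings now go through scheduler_options rather than
        # dedicated constructor arguments.
        scheduler_options={"interface": "eth0", "dashboard_address": ":12435"},
        use_stdin=True,  # submit with `bsub < job_script` (the new default)
    )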
0.7.0 / 2019-10-09
Base Dask-Jobqueue on top of the core dask.distributed.SpecCluster class (GH#307). This is a nearly complete reimplementation of the dask-jobqueue logic on top of more centralized logic. This improves standardization and adds new features, but does include the following breaking changes:
The cluster.start_workers method has been removed. Use cluster.scale instead.
The cluster.stop_all_jobs() method has been removed. Please use cluster.scale(0) instead.
The attributes running_jobs, pending_jobs, and cancelled_jobs have been removed. These have been moved upstream to the dask.distributed.SpecCluster class as workers and worker_spec, as well as .plan, .requested, and .observed.
The name attribute has been moved to job_name.
You can now specify jobs in .scale and .adapt: for example cluster.scale(jobs=2) and cluster.adapt(minimum_jobs=0, maximum_jobs=10). Specifying scaling in terms of jobs is generally more intuitive than in terms of Dask workers. This was part of GH#307.
Update .scale() and .adapt() docstrings (GH#346)
Update interactive docs (GH#340)
Improve error message when cores or memory is not specified (GH#331)
Fix Python 3.5.0 support in setup.py (GH#317)
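A short sketch of the job-based scaling introduced in 0.7.0, assuming PBSCluster with placeholder resources; with processes=4, two jobs correspond to eight Dask workers.

    from dask_jobqueue import PBSCluster

    cluster = PBSCluster(cores=24, memory="100GB", processes=4)

    cluster.scale(jobs=2)                           # ask for 2 batch jobs (8 workers here)
    cluster.adapt(minimum_jobs=0, maximum_jobs=10)  # adaptive scaling between 0 and 10 jobs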
0.6.3 / 2019-08-18
0.6.1 / 2019-07-25
0.6.0 / 2019-07-06
0.5.0 / 2019-06-20
Keeping up to date with Dask and Distributed (GH#268)
Improve some batch scheduler integration (GH#274, GH#256, GH#232)
Add HTCondor compatibility (GH#245)
Add the possibility to specify a named configuration (GH#204)
Allow free configuration of Dask diagnostic_port (GH#192)
Start work on ClusterManager, see https://github.com/dask/distributed/issues/2235 (GH#187, GH#184, GH#183)
A lot of other tiny fixes and improvements (GH#277, GH#261, GH#260, GH#250, GH#244, GH#200, GH#189)
0.4.1 / 2018-10-18
Handle worker restart with clearer message (GH#138)
Better error handling on job submission failure (GH#146)
Fixed Python 2.7 error when starting workers (GH#155)
Better handling of extra scheduler options (GH#160)
Correct testing of Python 2.7 compatibility (GH#154)
Add ability to override python used to start workers (GH#167)
Internal improvements and edge cases handling (GH#97)
Possibility to specify a folder to store all job log files (GH#145)
Require all cores on the same node for LSF (GH#177)
0.4.0 / 2018-09-06
Use number of worker processes as an argument to scale instead of number of jobs.
Bind scheduler bokeh UI to every network interface by default.
Adds an OAR job queue system implementation.
Adds an LSF job queue system implementation.
Adds some convenient methods to JobQueueCluster objects: __repr__, stop_jobs(), close().
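To illustrate scaling by worker processes as introduced in this release (the call works the same way today), a sketch with PBSCluster and placeholder resources.

    from dask_jobqueue import PBSCluster

    cluster = PBSCluster(cores=8, memory="24GB", processes=2)

    # scale() takes a number of worker processes, not a number of jobs:
    # asking for 4 workers here submits 2 jobs of 2 processes each.
    cluster.scale(4)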