LSFCluster(queue=None, project=None, ncpus=None, mem=None, walltime=None, job_extra=None, lsf_units=None, config_name='lsf', **kwargs)¶
Launch Dask on an LSF cluster
Parameters
----------
queue : str
    Destination queue for each worker job. Passed to #BSUB -q option.
project : str
    Accounting string associated with each worker job. Passed to #BSUB -P option.
ncpus : int
    Number of cpus. Passed to #BSUB -n option.
mem : int
    Request memory in bytes. Passed to #BSUB -M option.
walltime : str
    Walltime for each worker job in HH:MM. Passed to #BSUB -W option.
job_extra : list
    List of other LSF options, for example -u. Each option will be prepended with the #BSUB prefix.
lsf_units : str
    Unit system for large units in resource usage set by the LSF_UNIT_FOR_LIMITS in the lsf.conf file of a cluster.
name : str
    Name of Dask workers.
cores : int
    Total number of cores per job.
memory : str
    Total amount of memory per job.
processes : int
    Number of processes per job.
interface : str
    Network interface like 'eth0' or 'ib0'.
death_timeout : float
    Seconds to wait for a scheduler before closing workers.
local_directory : str
    Dask worker local directory for file spilling.
extra : list
    Additional arguments to pass to dask-worker.
env_extra : list
    Other commands to add to script before launching worker.
python : str
    Python executable used to launch Dask workers.
shebang : str
    Path to desired interpreter for your batch submission script.
kwargs : dict
    Additional keyword arguments to pass to LocalCluster.
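Note that `mem` takes a plain byte count, while `memory` takes a human-readable string such as '25GB'. A minimal sketch of that conversion follows; `to_bytes` is a hypothetical helper written for illustration (Dask itself provides `dask.utils.parse_bytes` for this purpose):

```python
def to_bytes(s):
    """Hypothetical helper: convert a human-readable size string
    (as accepted by `memory`) into the byte count `mem` expects.
    Assumes decimal SI units."""
    units = {"kB": 10**3, "MB": 10**6, "GB": 10**9, "TB": 10**12}
    for suffix, factor in units.items():
        if s.endswith(suffix):
            return int(float(s[: -len(suffix)]) * factor)
    return int(s)  # assume a plain byte count

print(to_bytes("25GB"))  # 25000000000
```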
>>> from dask_jobqueue import LSFCluster
>>> cluster = LSFCluster(queue='general', project='DaskonLSF',
...                      cores=15, memory='25GB')
>>> cluster.scale(10)  # this may take a few seconds to launch

>>> from dask.distributed import Client
>>> client = Client(cluster)
This also works with adaptive clusters, which automatically launch and kill workers based on load.
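For the example values above, the job script submitted for each worker would carry a header of #BSUB directives along these lines. This is a sketch only; the exact directives and values depend on your configuration, and the real script is produced by the job_script() method:

```shell
#!/usr/bin/env bash
#BSUB -J dask-worker     # name of Dask workers
#BSUB -q general         # destination queue
#BSUB -P DaskonLSF       # accounting string
#BSUB -n 15              # number of cpus
#BSUB -M 25000000000     # requested memory in bytes
#BSUB -W 00:30           # walltime in HH:MM (illustrative value)
```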
__init__(self, queue=None, project=None, ncpus=None, mem=None, walltime=None, job_extra=None, lsf_units=None, config_name='lsf', **kwargs)¶
Methods
-------
__init__(self[, queue, project, ncpus, mem, …])
adapt(self[, minimum_cores, maximum_cores, …])
Turn on adaptivity. For keyword arguments see dask.distributed.Adaptive. Instead of the minimum and maximum parameters, which apply to the number of workers, if the Cluster object implements a worker_spec attribute one can use the following parameters:

Parameters
----------
minimum_cores : int
    Minimum number of cores for the cluster
maximum_cores : int
    Maximum number of cores for the cluster
minimum_memory : str
    Minimum amount of memory for the cluster
maximum_memory : str
    Maximum amount of memory for the cluster

Examples
--------
>>> cluster.adapt(minimum=0, maximum=10, interval='500ms')
>>> cluster.adapt(minimum_cores=24, maximum_cores=96)
>>> cluster.adapt(minimum_memory='60 GB', maximum_memory='1 TB')
close(self)
Stops all running and pending jobs and stops scheduler
job_file(self)
Write job submission script to temporary file
job_script(self)
Construct a job submission script
scale(self[, n, cores, memory])
Scale cluster to n workers, or to the given number of cores or memory; cores and memory are converted into a number of workers using the worker_spec attribute.
scale_down(self, workers[, n])
Close the workers with the given addresses
scale_up(self, n, **kwargs)
Brings total worker count up to n
start_workers(self[, n])
Start workers and point them to our local scheduler
stop_all_jobs(self)
Stops all running and pending jobs
stop_jobs(self, jobs)
Stop a list of jobs
stop_workers(self, workers)
Stop a list of workers
Attributes
----------
finished_jobs
Jobs that have finished
pending_jobs
Jobs pending in the queue
running_jobs
Jobs with currently active workers
scheduler
The scheduler of this cluster
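The cores/memory-to-workers conversion that scale() performs via the worker_spec attribute can be sketched as follows. This is a hypothetical standalone illustration, not dask-jobqueue's actual code; worker_spec here mirrors the cores and memory of a single worker job:

```python
import math

def workers_needed(worker_spec, n=None, cores=None, memory_gb=None):
    """Convert a target expressed as workers, cores, or memory into a
    worker count, rounding up so the request is always satisfied."""
    if n is not None:
        return n
    if cores is not None:
        return math.ceil(cores / worker_spec["cores"])
    if memory_gb is not None:
        return math.ceil(memory_gb / worker_spec["memory_gb"])
    raise ValueError("specify n, cores, or memory_gb")

spec = {"cores": 15, "memory_gb": 25}   # one job: 15 cores, 25 GB
print(workers_needed(spec, cores=48))   # 4 workers cover 48 cores
```

Rounding up rather than down means a request is never under-provisioned: asking for 48 cores with 15-core workers yields 4 workers (60 cores), not 3 (45 cores).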