SGECluster(queue=None, project=None, resource_spec=None, walltime=None, job_extra=None, config_name='sge', **kwargs)
Launch Dask on an SGE cluster
If you want a specific amount of RAM, both memory and resource_spec must be specified. The exact syntax of resource_spec is defined by your GridEngine system administrator. The amount of memory requested should match the resource_spec, so that Dask's memory management system can perform accurately.
Parameters
----------
- queue: str
    Destination queue for each worker job. Passed to #$ -q option.
- project: str
    Accounting string associated with each worker job. Passed to #$ -A option.
- resource_spec: str
    Request resources and specify job placement. Passed to #$ -l option.
- walltime: str
    Walltime for each worker job.
- job_extra: list
    List of other SGE options, for example -w e. Each option will be prepended with the #$ prefix.
- name: str
    Name of Dask workers.
- cores: int
    Total number of cores per job
- memory: str
    Total amount of memory per job
- processes: int
    Number of processes per job
- interface: str
    Network interface like 'eth0' or 'ib0'.
- death_timeout: float
    Seconds to wait for a scheduler before closing workers
- local_directory: str
    Dask worker local directory for file spilling.
- extra: str
    Additional arguments to pass to dask-worker
- env_extra: list
    Other commands to add to script before launching worker.
- python: str
    Python executable used to launch Dask workers.
- shebang: str
    Path to desired interpreter for your batch submission script.
- kwargs: dict
    Additional keyword arguments to pass to LocalCluster
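As a rough illustration of how the scheduler-facing options above end up in the batch script, the sketch below assembles the "#$" directive header that SGE reads. This is not dask-jobqueue's actual implementation; the helper name `sge_header`, the `h_rt` walltime resource, and the `m_mem_free` resource name are assumptions (resource names in particular are site-specific, per the note about your GridEngine administrator).

```python
# Hypothetical sketch: build the "#$" directive header of an SGE submission
# script from the constructor options.  Not dask-jobqueue's real code.

def sge_header(queue=None, project=None, resource_spec=None,
               walltime=None, job_extra=None):
    lines = []
    if queue:
        lines.append(f"#$ -q {queue}")          # destination queue
    if project:
        lines.append(f"#$ -A {project}")        # accounting string
    if resource_spec:
        lines.append(f"#$ -l {resource_spec}")  # resource request / placement
    if walltime:
        lines.append(f"#$ -l h_rt={walltime}")  # walltime (resource name varies by site)
    for opt in (job_extra or []):
        lines.append(f"#$ {opt}")               # extra options, "#$"-prefixed
    return "\n".join(lines)

print(sge_header(queue="regular", project="myproj",
                 resource_spec="m_mem_free=4G", job_extra=["-w e"]))
```

Note how a memory request travels twice: once in resource_spec for the SGE scheduler, and once in memory for Dask's own memory management, which is why the two should agree.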
Examples
--------
>>> from dask_jobqueue import SGECluster
>>> cluster = SGECluster(queue='regular')
>>> cluster.scale(10)  # this may take a few seconds to launch

>>> from dask.distributed import Client
>>> client = Client(cluster)
This also works with adaptive clusters. This automatically launches and kills workers based on load.
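The adaptive behaviour can be pictured as a simple feedback rule: grow toward the pending load, shrink when idle, clamped to a [minimum, maximum] range. The toy model below is only an illustration of that idea; the real policy lives in dask.distributed.Adaptive, and the function `adapt_target` here is hypothetical.

```python
# Toy model of adaptivity: pick a worker-count target from the current
# backlog, clamped to [minimum, maximum].  Not dask's actual algorithm.

def adapt_target(current, pending_tasks, minimum=0, maximum=10):
    if pending_tasks > current:      # backlog: scale up toward the load
        target = pending_tasks
    elif pending_tasks == 0:         # idle: scale down
        target = minimum
    else:                            # keeping up: hold steady
        target = current
    return max(minimum, min(maximum, target))

print(adapt_target(current=2, pending_tasks=7))   # grows toward the load
print(adapt_target(current=5, pending_tasks=0))   # idle, falls to minimum
```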
__init__(self, queue=None, project=None, resource_spec=None, walltime=None, job_extra=None, config_name='sge', **kwargs)

Methods
-------
__init__(self[, queue, project, …])
adapt(self[, minimum_cores, maximum_cores, …])
Turn on adaptivity. For keyword arguments see dask.distributed.Adaptive. Instead of the minimum and maximum parameters, which apply to the number of workers, if the Cluster object implements a worker_spec attribute one can use the following parameters:

Parameters
----------
- minimum_cores: int
    Minimum number of cores for the cluster
- maximum_cores: int
    Maximum number of cores for the cluster
- minimum_memory: str
    Minimum amount of memory for the cluster
- maximum_memory: str
    Maximum amount of memory for the cluster

Examples
--------
>>> cluster.adapt(minimum=0, maximum=10, interval='500ms')
>>> cluster.adapt(minimum_cores=24, maximum_cores=96)
>>> cluster.adapt(minimum_memory='60 GB', maximum_memory='1 TB')
close(self)
Stops all running and pending jobs and stops scheduler
job_file(self)
Write job submission script to temporary file
job_script(self)
Construct a job submission script
scale(self[, n, cores, memory])
Scale cluster to n workers or to the given number of cores or memory number of cores and memory are converted into number of workers using worker_spec attribute.
scale_down(self, workers[, n])
Close the workers with the given addresses
scale_up(self, n, **kwargs)
Brings total worker count up to n
start_workers(self[, n])
Start workers and point them to our local scheduler
stop_all_jobs(self)
Stops all running and pending jobs
stop_jobs(self, jobs)
Stop a list of jobs
stop_workers(self, workers)
Stop a list of workers
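The conversion scale() performs, turning a cores or memory target into a worker count using the per-worker spec, can be sketched as below. This is a toy illustration under assumed names: the `worker_spec` layout and the `workers_for` helper are not dask-jobqueue's internals.

```python
import math

# Toy illustration of scale(cores=...) / scale(memory=...): divide the
# requested total by what one worker job provides, rounding up.

worker_spec = {"cores": 4, "memory": 16e9}  # assumed: one job = 4 cores, 16 GB

def workers_for(cores=None, memory=None):
    counts = []
    if cores is not None:
        counts.append(math.ceil(cores / worker_spec["cores"]))
    if memory is not None:
        counts.append(math.ceil(memory / worker_spec["memory"]))
    return max(counts) if counts else 0

print(workers_for(cores=10))     # 10 cores at 4 per job -> 3 jobs
print(workers_for(memory=40e9))  # 40 GB at 16 GB per job -> 3 jobs
```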
Attributes
----------
finished_jobs
Jobs that have finished
pending_jobs
Jobs pending in the queue
running_jobs
Jobs with currently active workers
scheduler
The scheduler of this cluster