The Teoroo2 Cluster

Note on scheduling GPU jobs on Teoroo2

The official Nextflow does not support scheduling GPU resources with the local executor. On Teoroo2, a custom Nextflow build is available: use /sw/nf instead of nextflow, and set the GPUs available to a workflow with CUDA_VISIBLE_DEVICES, for example:

export CUDA_VISIBLE_DEVICES=1,2,3,4
/sw/nf main.nf -entry h2o_demo -profile teoroo2

Like the standard profile, the teoroo2 profile uses containerized runtimes; the difference is that local copies of the images are available and used directly. The local Singularity images (in SIF format) are stored in the /sw/pinnacle folder. All processes are configured to run on a single thread, and the pinn-labelled jobs are configured to run with one GPU.
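
For illustration, here is a minimal sketch of a process that would pick up these settings under the teoroo2 profile shown below; the process and workflow names are hypothetical placeholders, not part of the pipeline:

// hypothetical sanity check: the 'pinn' label maps this process to
// /sw/pinnacle/pinn.sif with accelerator = 1 under the teoroo2 profile
process gpu_check {
  label 'pinn'

  output:
  stdout

  script:
  """
  nvidia-smi -L
  """
}

workflow gpu_demo {
  // print the GPU(s) visible inside the container
  gpu_check() | view
}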

Profile

profiles {
  teoroo2 {
    executor {
      name = 'local'
      cpus = 32
    }

    params {
      cp2k_cmd = 'source /opt/cp2k/prod_entrypoint.sh local popt; cp2k.popt'
      lmp_cmd = 'lmp_mpi'
    }

    env {
      OMP_NUM_THREADS='1'
    }

    process {
      cpus = 1
      accelerator = 0
      withLabel: pinn        {accelerator = 1}
      withLabel: "pinn|tips" {container='/sw/pinnacle/pinn.sif'}
      withLabel: cp2k        {container='/sw/pinnacle/cp2k.sif'}
      withLabel: dftb        {container='/sw/pinnacle/dftb.sif'}
      withLabel: lammps      {container='/sw/pinnacle/lammps.sif'}
      withLabel: molutils    {container='/sw/pinnacle/molutils.sif'}
    }

    singularity {
      enabled = true
      runOptions = '--nv'
    }
  }
}
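
Assuming the hypothetical gpu_demo workflow above is saved in main.nf, it can be launched the same way as the earlier example:

export CUDA_VISIBLE_DEVICES=0
/sw/nf main.nf -entry gpu_demo -profile teoroo2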