High Energy Physics CMS Tier-2 Facilities

Overview of Resources

Access to the computing cluster is available through SSH:

  • SSH login access: login.hep.wisc.edu
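For example (the username here is a placeholder for your own login):

ssh username@login.hep.wisc.edu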

Batch Processing using Condor

Condor may be used directly for running CMS jobs on our cluster. Once submitted, jobs flock across the UW campus grid; in particular, they may run on the Grid Laboratory of Wisconsin (GLOW) or on Wisconsin’s Center for High Throughput Computing (CHTC) cluster. The only storage areas common to this entire domain are AFS and HDFS: AFS is used for software, and HDFS is used for CMS data files.

Abridged instructions for CMS Users

Batch jobs are started by submitting a submit description file to Condor using: condor_submit
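As a minimal sketch, a submit description file might look like the following (the script and file names are hypothetical; adapt them to your job):

# job.sub -- minimal HTCondor submit description (hypothetical example)
universe   = vanilla
executable = run_cms_job.sh
output     = job_$(Cluster).$(Process).out
error      = job_$(Cluster).$(Process).err
log        = job_$(Cluster).$(Process).log
queue 1

It is then submitted with:

condor_submit job.sub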

You can watch the progress of your jobs using: condor_q -nobatch

You can check the status of the compute pools using:

condor_status -pool condor.hep.wisc.edu
condor_status -pool cm.chtc.wisc.edu

You can SSH to a running job to debug it interactively using:

condor_ssh_to_job <job-id>
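For example, if condor_q reports a job with ID 12345.0 (a hypothetical ID), you can attach to it with:

condor_ssh_to_job 12345.0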

“farmout” shell script for submitting batches of CMSSW jobs

You may use my sample shell script, “farmout”, which submits CMSSW jobs to fully process a dataset. It automatically submits the jobs from an appropriate /scratch directory, where logs will accumulate; the output data files are copied to HDFS.

Example simulation submission:

farmoutRandomSeedJobs \
    dataset-name \
    total-events \
    events-per-job \
    /path/to/CMSSW \
    /path/to/configTemplate.py
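For instance, a hypothetical submission that generates 10000 events in jobs of 1000 events each (the dataset name, event counts, and paths are illustrative only):

farmoutRandomSeedJobs \
    MyGenSample \
    10000 \
    1000 \
    ~/CMSSW_X_Y_Z \
    ~/gen_cfg.py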

Example analysis submission:

farmoutAnalysisJobs \
    jobName \
    /path/to/CMSSW \
    /path/to/configTemplate.py
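Similarly, a hypothetical analysis submission (the job name and paths are illustrative only):

farmoutAnalysisJobs \
    myAnalysis \
    ~/CMSSW_X_Y_Z \
    ~/analysis_cfg.py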

Use the --help option to see the full set of options supported by these scripts.

Your data files will be stored in HDFS under /hdfs/store/user/username. See the FAQ for more information on how to manage your files, how to use farmout, and how to use Condor.
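For example, assuming HDFS is mounted under /hdfs as the path above suggests, you can browse your output with ordinary shell commands:

ls -lh /hdfs/store/user/$USER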