NAMD2
Amber (Assisted Model Building with Energy Refinement) is a family of force fields and an associated suite of molecular simulation programs. Amber sander and the MPI build of Amber pmemd (academic version) are available on duhpc. Amber 24 with pmemd.cuda achieves 622 ns/day on the V100 GPU, making it the fastest MD engine available on the cluster.
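To translate a throughput figure like 622 ns/day into wall-clock time, divide the target trajectory length by the ns/day rate. A quick back-of-the-envelope check (the 622 ns/day figure is from the paragraph above; the helper function and target lengths are illustrative):

```python
def days_needed(target_ns, ns_per_day):
    """Wall-clock days required to simulate target_ns at the given throughput."""
    return target_ns / ns_per_day

# At 622 ns/day on the V100:
print(f"100 ns run      : {days_needed(100, 622):.2f} days")
print(f"1 microsecond   : {days_needed(1000, 622):.2f} days")
```

This is why requesting `--time=2-00:00:00` comfortably covers even microsecond-scale pmemd.cuda runs, while the same request bounds NAMD2 (at roughly 39 ns/day, per the job template below) to under ~80 ns per job.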
Template for NAMD2 MD Configuration (namd2.conf):
#============================================================
# NAMD Production MD Configuration Template - duhpc Cluster
# For CHARMM force field simulations
# Usage: reference this file in namd_gpu.sh
#============================================================
# ── Input files ───────────────────────────────────────────
structure system.psf ;# CHARMM PSF topology
coordinates system.pdb ;# Initial coordinates
# ── Force field parameters ────────────────────────────────
paraTypeCharmm on
parameters par_all36_prot.prm
parameters par_all36_lipid.prm
parameters toppar_water_ions.str
# Add more parameter files as needed
# ── Output files ──────────────────────────────────────────
outputName production
restartfreq 5000 ;# Save restart every 10 ps
dcdfreq 5000 ;# Save trajectory every 10 ps
outputEnergies 500 ;# Print energies every 1 ps
outputPressure 500
# ── Restart (comment out for new simulation) ──────────────
# bincoordinates equil.restart.coor
# binvelocities equil.restart.vel
# extendedSystem equil.restart.xsc
# ── Initial temperature (comment out if using restart) ────
temperature 300
# ── Basic MD settings ─────────────────────────────────────
exclude scaled1-4
1-4scaling 1.0
cutoff 12.0
switching on
switchdist 10.0
pairlistdist 14.0
# ── Integrator ────────────────────────────────────────────
timestep 2.0 ;# 2 fs timestep
rigidBonds all ;# Constrain bonds to hydrogen (required for 2 fs timestep)
nonbondedFreq 1
fullElectFrequency 2
# ── Electrostatics (PME) ──────────────────────────────────
PME yes
PMEGridSpacing 1.0
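# Note: NAMD derives the PME grid from the cell -- roughly
# box_length / PMEGridSpacing points per dimension, rounded up to a
# size with small prime factors (an 80 A box at 1.0 A spacing gives
# ~80 points). To pin the grid explicitly, replace PMEGridSpacing with:
# PMEGridSizeX 80
# PMEGridSizeY 80
# PMEGridSizeZ 80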
# ── Periodic boundary conditions ─────────────────────────
# Set these to match your equilibrated box dimensions
cellBasisVector1 80.0 0 0
cellBasisVector2 0 80.0 0
cellBasisVector3 0 0 80.0
cellOrigin 0 0 0
wrapAll on
# ── Temperature control (Langevin) ────────────────────────
langevin on
langevinDamping 1.0
langevinTemp 300
langevinHydrogen no
# ── Pressure control (Langevin piston) ───────────────────
LangevinPiston on
LangevinPistonTarget 1.01325 ;# 1 atm in bar
LangevinPistonPeriod 100
LangevinPistonDecay 50
LangevinPistonTemp 300
# ── Run ───────────────────────────────────────────────────
# 50000000 steps x 2 fs = 100 ns
run 50000000
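The run length follows from steps × timestep, and the output frequencies convert to real time the same way. A quick sanity check of the template's numbers (constants copied from the template above; variable names are illustrative):

```python
# Values from namd2.conf above
TIMESTEP_FS = 2.0      # "timestep 2.0"
STEPS = 50_000_000     # "run 50000000"
DCD_FREQ = 5000        # "dcdfreq 5000"

sim_ns = STEPS * TIMESTEP_FS / 1e6                 # fs -> ns
frame_interval_ps = DCD_FREQ * TIMESTEP_FS / 1e3   # fs -> ps
n_frames = STEPS // DCD_FREQ

print(f"Simulated time : {sim_ns:.0f} ns")
print(f"Frame interval : {frame_interval_ps:.0f} ps")
print(f"DCD frames     : {n_frames}")
```

At 5000-step intervals this run writes 10,000 trajectory frames; scale `dcdfreq` up if disk quota on duhpc is a concern.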
Template for NAMD2 GPU Job (namd_gpu.sh):
#!/bin/bash
#============================================================
# NAMD2 GPU Job Template - duhpc Cluster
# For CHARMM force field simulations
# NAMD2 version: 2.14 with CUDA 10 support
# Expected performance: ~39 ns/day on V100
# Usage: sbatch namd_gpu.sh
#============================================================
#SBATCH --job-name=namd_gpu # Job name (change this)
#SBATCH --partition=gpu # GPU partition - DO NOT CHANGE
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --gres=gpu:1 # Request 1 GPU - DO NOT CHANGE
#SBATCH --mem=16G
#SBATCH --time=2-00:00:00 # Max time (days-hours:min:sec)
#SBATCH --output=%x_%j.out
#SBATCH --error=%x_%j.err
#SBATCH --mail-type=BEGIN,END,FAIL
#SBATCH --mail-user=your@email.com # Change to your email
#------------------------------------------------------------
# USER SETTINGS - Edit these for your simulation
#------------------------------------------------------------
CONF_FILE="namd2.conf" # NAMD configuration file
#------------------------------------------------------------
echo "============================================="
echo "NAMD2 GPU Job - duhpc Cluster"
echo "Job ID : $SLURM_JOBID"
echo "User : $USER"
echo "Node : $SLURMD_NODENAME"
echo "CPUs : $SLURM_CPUS_PER_TASK"
echo "Start : $(date)"
echo "============================================="
# ── NAMD2 path ────────────────────────────────────────────
NAMD2=/scratch/apps/NAMD/namd2/namd2
# namd2 has no --version flag; grab the version line from its startup banner
echo "NAMD2 : $($NAMD2 2>&1 | grep -m1 'Info: NAMD')"
echo "GPU : $(nvidia-smi --query-gpu=name,memory.total --format=csv,noheader)"
echo ""
# ── Check input file ──────────────────────────────────────
if [ ! -f "$CONF_FILE" ]; then
echo "ERROR: NAMD config file not found: $CONF_FILE"
exit 1
fi
echo "Config : $CONF_FILE (OK)"
echo ""
# ── Run NAMD2 GPU ─────────────────────────────────────────
echo "--- Starting NAMD2 GPU MD ---"
START=$(date +%s)
# +p = worker threads, +devices = GPU index; tee a copy of the output
# to a log file so the benchmark lines can be parsed after the run
$NAMD2 +p$SLURM_CPUS_PER_TASK +devices 0 $CONF_FILE | tee "${CONF_FILE%.conf}.log"
EXIT=${PIPESTATUS[0]}   # exit status of namd2, not tee
END=$(date +%s)
# ── Performance Report ────────────────────────────────────
echo ""
echo "============================================="
echo "PERFORMANCE REPORT"
echo "============================================="
if [ $EXIT -eq 0 ]; then
echo "Status : SUCCESS"
echo "GPU : $(nvidia-smi --query-gpu=name --format=csv,noheader)"
echo "Wall time: $((END-START)) seconds"
echo ""
echo "Performance (NAMD reports days/ns on its 'Benchmark time' lines):"
grep "Benchmark time" "${CONF_FILE%.conf}.log" 2>/dev/null | tail -3
# Also check the SLURM output file
grep "Benchmark time" "${SLURM_JOB_NAME}_${SLURM_JOBID}.out" 2>/dev/null | tail -3
else
echo "Status : FAILED (exit code $EXIT)"
echo "Check output file for details"
fi
echo "End time : $(date)"
echo "============================================="
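NAMD 2.x reports throughput as days/ns on its "Benchmark time" lines, so to compare against the ns/day figures quoted in this guide you invert the number. A minimal parsing sketch (the sample line mimics the layout of a typical NAMD 2.14 log line; check your own log for the exact fields):

```python
import re

# Sample "Benchmark time" line in the style of a NAMD 2.14 log
sample = "Info: Benchmark time: 8 CPUs 0.0441 s/step 0.02551 days/ns 2107.8 MB memory"

m = re.search(r"([\d.]+) days/ns", sample)
if m:
    days_per_ns = float(m.group(1))
    # Invert days/ns to get ns/day
    print(f"{1.0 / days_per_ns:.1f} ns/day")  # 39.2 ns/day
```

The inverted figure here matches the ~39 ns/day V100 performance noted in the job template header.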
Getting Support
- Email Mr. Imran Ghani at ighani[at]ducc[dot]du[dot]ac[dot]in for any duhpc-related information.
