ORCA6 - BNNLab/BN_Group_Wiki GitHub Wiki
The zip file for ORCA6 is stored on the BN Group SharePoint at `/Documents/General/ORCA/`; the latest version can also be downloaded from https://orcaforum.kofo.mpg.de/app.php/portal. Once the ORCA6 archive is downloaded, transfer it to your `$HOME` directory on AIRE. Use the command `tar -xf orca_6_0_0_linux_x86-64_shared_openmpi416.tar.xz` to extract it. This creates a new directory called `orca_6_0_0_shared_openmpi416` containing roughly 16 GB worth of computational chemistry tools, including the main executable `orca`.
Below is a template for a `.sh` submission file for ORCA6; using Python to mass-produce these files for AIRE is recommended. In a separate `.sh` file, write the `sbatch` commands so that all of the submission files can be submitted at once. Note that ORCA6 needs `openmpi` to run on multiple CPUs.
#!/bin/bash
#SBATCH --job-name={JOB_NAME}
#SBATCH --time={HH:MM:SS} # Wallclock time
#SBATCH --mem={RAM_PER_CPU*CPU_NUMBER}G # RAM requested to be shared between all CPUs
#SBATCH --ntasks={CPU_NUMBER}
#SBATCH --cpus-per-task=1
module load openmpi
# Create a scratch directory and navigate to it
SCRATCH_DIR=/scratch/$USER/$SLURM_JOB_ID
mkdir -p "$SCRATCH_DIR"
cd "$SCRATCH_DIR" || exit 1
# Copy the input file to the scratch directory
cp "$HOME"/{FOLDER_CONTAINING_INPUT_FILE}/{JOB_NAME}.inp "$SCRATCH_DIR"
# Run ORCA via its full path (required for the OpenMPI parallelisation to work)
"$HOME"/{FOLDER_CONTAINING_ORCA6}/orca {JOB_NAME}.inp > {JOB_NAME}.out
# Copy results back to permanent storage
cp {JOB_NAME}.out "$HOME"/{FOLDER_CONTAINING_INPUT_FILE}/
# Clean up the scratch directory
cd "$HOME" && rm -rf "$SCRATCH_DIR"
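The Python mass-production mentioned above could be sketched as follows. This is a minimal sketch, not an established group script: the `write_submission_scripts` helper, its default resource values, and the assumption that each `{JOB_NAME}.inp` lives in one input folder are all illustrative.

```python
from pathlib import Path

# Submission-file template mirroring the .sh file above.
# {mem} is total RAM in GB, i.e. RAM_PER_CPU * CPU_NUMBER.
TEMPLATE = """#!/bin/bash
#SBATCH --job-name={job}
#SBATCH --time={time}
#SBATCH --mem={mem}G
#SBATCH --ntasks={cpus}
#SBATCH --cpus-per-task=1
module load openmpi
SCRATCH_DIR=/scratch/$USER/$SLURM_JOB_ID
mkdir -p "$SCRATCH_DIR"
cd "$SCRATCH_DIR" || exit 1
cp "$HOME"/{inp_dir}/{job}.inp "$SCRATCH_DIR"
"$HOME"/{orca_dir}/orca {job}.inp > {job}.out
cp {job}.out "$HOME"/{inp_dir}/
cd "$HOME" && rm -rf "$SCRATCH_DIR"
"""

def write_submission_scripts(inp_dir, orca_dir="orca_6_0_0_shared_openmpi416",
                             time="48:00:00", ram_per_cpu=4, cpus=8):
    """Write one .sh per .inp file in inp_dir, plus a submit_all.sh
    containing the sbatch commands to submit them all at once."""
    inp_dir = Path(inp_dir)
    jobs = sorted(p.stem for p in inp_dir.glob("*.inp"))
    for job in jobs:
        (inp_dir / f"{job}.sh").write_text(TEMPLATE.format(
            job=job, time=time, mem=ram_per_cpu * cpus, cpus=cpus,
            inp_dir=inp_dir.name, orca_dir=orca_dir))
    lines = ["#!/bin/bash"] + [f"sbatch {job}.sh" for job in jobs]
    (inp_dir / "submit_all.sh").write_text("\n".join(lines) + "\n")
    return jobs
```

Running `write_submission_scripts("my_inputs")` then `bash submit_all.sh` inside that folder submits every job in one go.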
Only use a basis set that is at least triple-zeta, such as cc-pVTZ, for decent results. Please note that this input file only applies to molecules with a multiplicity of 1. The `%maxcore` setting tells ORCA6 the maximum amount of RAM per CPU it can use; it is recommended to set it to 75 % of the RAM actually allocated per CPU. `%maxcore` is read in MB, not GB: for example, with 4 GB of RAM allocated per CPU, set `%maxcore 3000` (0.75 × 4000 MB).
! DLPNO-CCSD(T) cc-pVTZ cc-pVTZ/C
%maxcore {RAM_PER_CPU*750}
%pal
nprocs {CPU_NUMBER}
end
*xyz {FormalChargeOfMolecule} {MultiplicityOfMolecule}
{XYZ_BLOCK}
*
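As a concrete illustration, here is the template above filled in for a water molecule, assuming 8 CPUs and 4 GB of RAM per CPU (the geometry and resource values are placeholders for your own system):

```
! DLPNO-CCSD(T) cc-pVTZ cc-pVTZ/C
%maxcore 3000
%pal
nprocs 8
end
*xyz 0 1
O    0.00000    0.00000    0.11730
H    0.00000    0.75720   -0.46920
H    0.00000   -0.75720   -0.46920
*
```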
The amount of resources requested in the `.sh` submission file determines how much priority the job will have. Coupled cluster calculations are the most expensive type of calculation but also the most chemically accurate. Fortunately, ORCA has the cheapest variant, DLPNO-CCSD(T), in its arsenal of methods, which scales near-linearly with system size in terms of CPU time and RAM required.
The calculation needs a minimum of 10 GB of RAM, plus 400 MB of RAM for every additional basis set function. It is recommended to request 30 minutes of CPU time (3.75 minutes wallclock time over 8 CPUs) for up to 120 basis set functions. Beyond 120 basis set functions, add 142.18 seconds of CPU time (17.77 seconds wallclock time over 8 CPUs) to the requested CPU time for every additional basis set function. Information on all the different basis sets can be found at https://www.basissetexchange.org/.
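The rules of thumb above can be turned into a quick calculator. This is a sketch: `estimate_resources` is a hypothetical helper, and it assumes the 400 MB-per-function RAM increment applies from the first basis set function.

```python
def estimate_resources(n_basis, n_cpus=8):
    """Rough DLPNO-CCSD(T) resource estimate from the rules of thumb above.
    Returns (ram_gb, cpu_time_s, wallclock_s)."""
    # 10 GB base RAM plus 400 MB (0.4 GB) per basis set function
    ram_gb = 10 + 0.4 * n_basis
    # 30 minutes of CPU time covers up to 120 basis set functions
    cpu_s = 30 * 60
    if n_basis > 120:
        # add 142.18 s of CPU time per function beyond 120
        cpu_s += 142.18 * (n_basis - 120)
    return ram_gb, cpu_s, cpu_s / n_cpus
```

For example, a molecule with 130 basis set functions would need about 62 GB of RAM and roughly 54 minutes of CPU time (under 7 minutes wallclock over 8 CPUs).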
The command `du -sh *` can be used to check how much disk space is being used by each job in the `$SCRATCH` directory. This is important because each user has a maximum scratch-space allowance of 1000 GB; too many coupled cluster calculations at once can exceed this limit and cause calculations to fail. The output will look like this:
[<USERNAME>@login1[aire] <USERNAME>]$ du -sh *
50G 32005
49G 32006
48G 32007
45G 32008
45G 32009
52G 32010
51G 32011
Below is an example input file for a nudged elastic band transition state (NEB-TS) search using the GFN2-xTB semi-empirical method:
! XTB2 NEB-TS Freq PAL8 # Uses 8 CPUs for calculation
%NEB
NImages 8 # 8 images between the product and reactant
# images are taken to map reaction coordinate
PREOPT TRUE # Optimise the products and reactants before NEB-TS calculation
NEB_END_XYZFILE "Product.xyz" # Provide coordinates of the products and the TS guess
NEB_TS_XYZFILE "TS_Guess.xyz"
END
*xyz {formal_charge} {multiplicity}
{Reactants_xyz_block}
*