MATLAB on Ganymede
MATLAB jobs can be run in serial or parallel mode, either in the background with sbatch or interactively with srun or salloc, on all CIRC HPC systems except Europa. For more information about interactive jobs using srun or salloc, please refer to the interactive jobs page.
MATLAB in Interactive Mode
To run MATLAB interactively on Ganymede, first start an interactive session on a compute node, then load the MATLAB module and launch MATLAB:
srun -N 1 -n 1 --cpus-per-task=16 --pty /bin/bash
or
salloc -p debug -N 1 -n 4 --time=00:30:00
module load matlab
matlab -nodisplay -nosplash
Do not run MATLAB on the login node. Doing so may exhaust the login node's resources and impact other users.
MATLAB startup takes a few moments. Once the >> prompt appears, you can execute MATLAB commands interactively.
[@compute-7-6-0 ~]$ ml matlab/R2020b
[@compute-7-6-0 ~]$ matlab -nodisplay -nosplash
MATLAB is selecting SOFTWARE OPENGL rendering.
< M A T L A B (R) >
Copyright 1984-2020 The MathWorks, Inc.
R2020b Update 1 (9.9.0.1495850) 64-bit (glnxa64)
September 30, 2020
To get started, type doc.
For product information, visit www.mathworks.com.
>>
>> ver   % "ver" lists MATLAB, Simulink, and all installed toolboxes with their version information
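For example, you can run a quick computation at the prompt (an arbitrary illustration, not one of the CIRC example files):
>> A = rand(1000);            % 1000 x 1000 matrix of uniform random numbers
>> r = max(abs(eig(A)));      % spectral radius of A
>> disp(r)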
When finished, exit from the MATLAB terminal and the compute node.
>> exit
[@compute-7-6-0 ~]$ exit
exit
[@ganymede ~]$
Running MATLAB Batch Jobs
Create a new folder called matlab in your scratch directory.
cd $SCRATCH && mkdir matlab && cd matlab
Copy the submission script matlab-job-script and the MATLAB file matlabTest.m from /opt/ohpc/pub/examples/matlab/ to the matlab directory you just created. The submission script (see /opt/ohpc/pub/examples/matlab/matlab-job-script) looks like the following; replace <NetID> with your username, since Slurm does not expand backquoted shell commands such as `whoami` inside #SBATCH directives:
#!/bin/bash
#SBATCH --partition=normal
#SBATCH --job-name=Matlab-Test
#SBATCH --mail-user=<NetID>@utdallas.edu
#SBATCH --mail-type=ALL
#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH --output=r.result
#SBATCH --account=<NetID>
module load matlab
matlab -nodisplay -nosplash < matlabTest.m > myResult
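The contents of matlabTest.m are not reproduced here; any script that runs without user interaction will work with this pattern. A minimal sketch of such a script (an illustrative assumption, not the actual file from the examples directory) could be:
% matlabTest.m - minimal non-interactive test script (illustrative sketch only)
n = 500;                            % matrix size
A = rand(n);                        % n-by-n matrix of uniform random numbers
r = max(abs(eig(A)));               % spectral radius of A
fprintf('Spectral radius of a %d x %d random matrix: %f\n', n, n, r);
Submit the job with sbatch matlab-job-script; the MATLAB output will appear in myResult.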
Running MATLAB Serial Jobs
For an example of a serial job, look in /opt/ohpc/pub/examples/matlab/serial/ and follow the instructions in the README file. Copy slurm-matlab-serial.sh and spectral_radius-serial.m (listed as Example II below) to your scratch directory. The Slurm script looks like this:
#!/bin/bash
#SBATCH --job-name=sr-serial
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=16
#SBATCH --output=sr-serial.out
#SBATCH --partition=normal
#SBATCH --mail-type=ALL
#SBATCH --mail-user=<NetID>@utdallas.edu
module load matlab
matlab -nodisplay -nosplash < spectral_radius-serial.m
Now submit the batch script with the sbatch command:
sbatch slurm-matlab-serial.sh
When the run is complete, check the output file sr-serial.out:
$ cat sr-serial.out
MATLAB is selecting SOFTWARE OPENGL rendering.
< M A T L A B (R) >
Copyright 1984-2020 The MathWorks, Inc.
R2020b Update 1 (9.9.0.1495850) 64-bit (glnxa64)
September 30, 2020
To get started, type doc.
For product information, visit www.mathworks.com.
>>
LASTN =
16
>> >> >> >> >> >> Elapsed time is 36.537003 seconds.
MATLAB Parallel Computing Toolbox (PCT) Jobs
The Parallel Computing Toolbox (PCT) lets you parallelize MATLAB applications without writing MPI or CUDA code. MATLAB supports both process-based and thread-based parallelization through parpool. Thread-based pools can be more efficient when workers operate on a large, shared dataset (see Examples III and IV below).
parpool(); % Default parpool
parpool(16); % Process parpool with 16 workers
parpool("threads"); % Threads parpool
delete(gcp('nocreate')) % deleting a parallel pool
parpool(16) spawns 16 separate MATLAB worker processes, whereas parpool("threads") runs all workers as threads inside a single MATLAB process, which typically starts faster and has less overhead. You can verify the difference with the top -u <userid> command on the compute node.
>> parpool(16)
Starting parallel pool (parpool) using the 'local' profile ...
Connected to the parallel pool (number of workers: 16).
ans =
ProcessPool with properties:
Connected: true
NumWorkers: 16
Cluster: local
AttachedFiles: {}
AutoAddClientPath: true
IdleTimeout: 30 minutes (30 minutes remaining)
SpmdEnabled: true
>>
>> delete(gcp('nocreate'))
Parallel pool using the 'local' profile is shutting down.
>> parpool("threads")
Starting parallel pool (parpool) ...
Connected to the parallel pool (number of workers: 40).
ans =
ThreadPool with properties:
NumWorkers: 40
>>
For an example of a parallel job, look in the directory /opt/ohpc/pub/examples/matlab/parallel/ and follow the instructions in the README file. Copy both the Slurm script slurm-matlab-parfor.sh and the MATLAB script spectral_radius-parfor.m to your scratch directory. The Slurm script looks like this:
#!/bin/bash
#SBATCH --job-name=sr-parfor
#SBATCH --nodes=1
#SBATCH --ntasks=8
#SBATCH --output=sr-parfor.out
#SBATCH --partition=normal
module load matlab
matlab -nodisplay -nosplash < spectral_radius-parfor.m
Now submit the job with the sbatch command:
$ sbatch slurm-matlab-parfor.sh
Submitted batch job 600
When the run is complete, check the output file sr-parfor.out:
$ cat sr-parfor.out
MATLAB is selecting SOFTWARE OPENGL rendering.
< M A T L A B (R) >
Copyright 1984-2020 The MathWorks, Inc.
R2020b Update 1 (9.9.0.1495850) 64-bit (glnxa64)
September 30, 2020
To get started, type doc.
For product information, visit www.mathworks.com.
>>
LASTN =
16
>> >> Starting parallel pool (parpool) using the 'local' profile ...
connected to 8 workers
>> >> >> >> >> >> Elapsed time is 6.484026 seconds.
Example II : <spectral_radius-serial.m>
LASTN = maxNumCompThreads(1)        % limit MATLAB to a single computational thread
tic
n = 200                             % number of random matrices to process
A = 500                             % size of each random matrix (A x A)
a = zeros(n);
for i = 1:n
    a(i) = max(abs(eig(rand(A))));  % spectral radius of a random A x A matrix
end
toc
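The parfor version used by the parallel job above is not listed here; as a sketch (an assumption based on the serial script above, the actual spectral_radius-parfor.m in the examples directory may differ), it opens a process pool and replaces the for loop with parfor:
parpool(8);                          % process pool sized to match --ntasks=8 in the Slurm script
tic
n = 200;
A = 500;
a = zeros(1, n);
parfor i = 1:n
    a(i) = max(abs(eig(rand(A))));   % spectral radius of a random A x A matrix, computed on a worker
end
toc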
Example III : LARGE ARRAY EXAMPLE: PROCESS PARALLELIZATION
parpool(16);
n = 1000000000;
a = zeros(1,n);
tic
parfor i = 1:n
    a(i) = -log(1 - rand());         % draw an exponentially distributed sample via inverse transform
end
toc
The output:
>> >> Starting parallel pool (parpool) using the 'local' profile ...
Connected to the parallel pool (number of workers: 16).
>> >> >> >> >> >> Elapsed time is 15.580450 seconds.
Example IV : LARGE ARRAY EXAMPLE: THREAD PARALLELIZATION
parpool("threads");
n = 1000000000;
a = zeros(1,n);
tic
parfor i = 1:n
    a(i) = -log(1 - rand());
end
toc
The output:
>> >> Starting parallel pool (parpool) using the 'local' profile ...
Connected to the parallel pool (number of workers: 16).
>> >> >> >> >> >> Elapsed time is 10.882831 seconds.