CUDA GPU Computing Servers
CIMS has a pair of Tesla S1070 systems that can run CUDA programs. Each Tesla S1070 system contains 4 Tesla T10 GPUs. Each GPU has 240 streaming cores and 4GB of memory, for a total of 960 processor cores and 16GB of memory per system. These systems are capable of both single- and double-precision floating point calculations.
The Tesla units are connected to:
These machines accept logins only from within the Courant network, so if you are connecting from outside CIMS, you will have to first log in to access.cims.nyu.edu and then use ssh to reach them.
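One way to automate the two-hop login is an entry in your ~/.ssh/config. This is a sketch: "netid" is a placeholder for your CIMS username, and cuda1 is one of the CentOS 7 machines mentioned below; adjust the Host line for whichever machine you use.

```
# ~/.ssh/config fragment (sketch) for reaching a CUDA machine from
# outside the Courant network via the access node.
# "netid" is a placeholder -- substitute your CIMS username.
Host cuda1
    User netid
    ProxyJump netid@access.cims.nyu.edu
```

With this in place, `ssh cuda1` connects to access.cims.nyu.edu first and then hops to cuda1, whose short host name resolves from inside the network.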
Once logged in, you can get set up to run CUDA code by following these instructions:
module load mpich2
cp -r /usr/local/pkg/cuda/current/sdk ~/nvidia_sdk
See the documentation in /usr/local/pkg/cuda/current/cuda/doc/ for more information.
An equivalent sequence of commands that can be run on CentOS 7 (e.g. cuda1, cuda2, cuda5) is as follows:
module load mpi/mpich-x86_64
cp -r /usr/local/cuda/samples ~/samples
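As a quick sanity check that the toolchain is working, you can compile and run a small CUDA program by hand. The vector-add sketch below is illustrative and is not one of the SDK samples; it assumes nvcc is on your PATH after loading the module.

```cuda
// vecadd.cu -- minimal CUDA sanity check (a sketch, not an official sample).
// Build and run:  nvcc vecadd.cu -o vecadd && ./vecadd
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Host allocations and initialization.
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device allocations and host-to-device copies.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes);
    cudaMalloc(&db, bytes);
    cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);

    // Copy the result back and spot-check one element.
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

If the toolchain and GPU are healthy, each element of c should come out as 3.0. Note that single precision works on any of these GPUs by default; to use double precision on the Tesla hardware you may need to target an appropriate architecture with nvcc's -arch flag.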
NVIDIA provides online GPU computing seminars.
If you run into problems, please let us know; we'd like to try to get all the kinks worked out.
Thanks go to Professor Petter Kolm, whose Professor Partnership with NVIDIA provided this hardware.