
DGX (Group)

DGX-Lei Workstation

The DGX-Lei Workstation (DGX) in Dr. Lei Xie's group is the NVIDIA® DGX Station™, which is the world’s first personal supercomputer for leading-edge AI development.

Features of DGX Station

Deep learning platforms require software engineering expertise to keep today's frameworks optimized for maximum performance, and time is easily lost waiting on stable versions of open-source software. This can mean hundreds of thousands of dollars in lost productivity, dwarfing the initial hardware cost.

NVIDIA DGX Station includes the same software stack found in all DGX solutions. This integrated system provides access to popular deep learning frameworks, updated monthly, each optimized by NVIDIA engineers for maximum performance. It also includes the NVIDIA DIGITS™ deep learning training application, third-party accelerated solutions, the NVIDIA Deep Learning SDK (e.g., cuDNN, cuBLAS, NCCL), the CUDA® Toolkit, and NVIDIA drivers.

Built on container technology powered by NVIDIA Docker, this unified deep learning software stack simplifies the workflow, saving you days of re-compilation time when you need to scale your work and deploy your models in the data center or cloud.
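
The wiki's Docker tutorials cover container usage in detail, but as a quick orientation, the sketch below launches an NGC framework container and runs `nvidia-smi` inside it to confirm the GPUs are exposed. The image tag and the `--gpus all` flag are assumptions about this machine's Docker setup (older NVIDIA Docker installations expose GPUs with `--runtime=nvidia` instead); adjust to whatever the Docker tutorial recommends.

```python
# Hypothetical sketch: launch an NGC container with GPU access and run
# nvidia-smi inside it. The image tag is an assumption, not the image
# actually installed on the DGX; "--gpus all" requires Docker 19.03+ with
# the NVIDIA container toolkit (older setups use "--runtime=nvidia").
import subprocess

image = "nvcr.io/nvidia/pytorch:23.10-py3"  # placeholder NGC image tag

result = subprocess.run(
    ["docker", "run", "--rm", "--gpus", "all", image, "nvidia-smi"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)
```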

Hardware Summary

Processors

| Component | Qty | Description |
| --- | --- | --- |
| CPU | 2 | Intel Xeon E5-2698 v4, 2.2 GHz, 20-core (40 threads total) |
| GPU | 4 | NVIDIA Tesla® V100, 16 GB per GPU (64 GB total GPU memory) |
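
A quick way to confirm these numbers from inside a framework container (or any environment with a CUDA-enabled framework) is a short Python check; PyTorch is used here only as an example and is an assumption, not a required setup.

```python
# Minimal sketch: verify the visible CPU threads and GPUs against the table above.
# Assumes a CUDA-enabled PyTorch build is available (e.g. inside an NGC container).
import os

import torch

print(f"Logical CPUs (threads): {os.cpu_count()}")             # expected: 40
print(f"Visible CUDA devices:   {torch.cuda.device_count()}")  # expected: 4

for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.1f} GiB")  # ~16 GiB each
```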

System Memory and Storage

| Component | Qty | Unit Capacity | Total Capacity | Description |
| --- | --- | --- | --- | --- |
| System memory | 8 | 32 GB | 256 GB | ECC registered LRDIMM DDR4 SDRAM |
| Data storage | 3 | 1.92 TB | 5.76 TB | 2.5" 6 Gb/s SATA III SSD, RAID 0 configuration |
| OS storage | 1 | 1.92 TB | 1.92 TB | 2.5" 6 Gb/s SATA III SSD |
| External data storage for backup | 1 | 8 TB | 8 TB | Hard disk |
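
Note that the data SSDs are in RAID 0, which stripes data with no redundancy, so large results should also be copied to the external backup disk (see the Data Backup page). The sketch below checks free space with Python's standard library; the mount points are placeholders, and the actual paths on the DGX should be confirmed with `df -h`.

```python
# Report free space on the (assumed) data RAID and backup mounts before
# writing large datasets. The paths are placeholders, not confirmed mounts.
import shutil

for label, path in [("data RAID", "/raid"), ("backup disk", "/backup")]:
    try:
        usage = shutil.disk_usage(path)
        print(f"{label} ({path}): {usage.free / 1024**4:.2f} TiB free "
              f"of {usage.total / 1024**4:.2f} TiB")
    except FileNotFoundError:
        print(f"{label} ({path}): not found on this machine")
```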

Administrators

If you have any problems using the DGX, please contact the administrators of the DGX-Lei workstation:

| Name | Email |
| --- | --- |
| Shuo Zhang | szhang4@gradcenter.cuny.edu |
| Violent Hajdini | vh542@hunter.cuny.edu |

Read more at: http://docs.nvidia.com/dgx/dgx-station-user-guide/index.html