
# Overall Workflow

The HPCs all run Ubuntu, which is a Linux operating system. Users should have basic knowledge of working with a Linux system; an introduction can be found on the Linux Tutorial page.

To get an overall understanding of how to run jobs on the HPCs, the typical workflow is summarized below:

| Step | Work | Reference |
| --- | --- | --- |
| 1 | Connect to the HPC | Connect to HPCs |
| 2 | Create a work directory for the job under `/raid/home/user_name` with the needed code and datasets | User Directories |
| 3 | Set up the computing environment (usually with a Docker container image; Jupyter Notebook can also be run inside Docker) | Docker Tutorial, Jupyter Notebook Tutorial |
| 4 | Check the status of the HPC to decide which resources to request for the job | Run Jobs |
| 5 | Submit the job (usually with specific GPUs or CPUs requested) | Run Jobs |
| 6 | Monitor the job to avoid invalid runs | Run Jobs |
| 7 | Back up your data if necessary | Data Backup |
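
For a first pass through steps 1–4, 6, and 7, the sequence below sketches typical shell commands. It is only an illustration: the hostname, user name, image tag, and destination paths are placeholders, and the exact options for each HPC are described on the pages referenced above.

```bash
# Step 1: connect to the HPC (hostname and user name are placeholders)
ssh your_user_name@hpc.example.edu

# Step 2: create a work directory with the code and data for the job
mkdir -p /raid/home/your_user_name/my_project
cd /raid/home/your_user_name/my_project

# Step 3: set up the environment, e.g. by pulling a Docker image
# (the image tag here is only an example)
docker pull pytorch/pytorch:latest

# Step 4: check which GPUs are free before deciding what to request
nvidia-smi

# Step 6: monitor running containers and processes to avoid invalid runs
docker ps
top

# Step 7: back up results to another location if necessary
# (the destination host and path are placeholders)
rsync -avz results/ your_user_name@backup.example.edu:/backup/my_project/
```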
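
How step 5 is done depends on how jobs are scheduled on the particular HPC; see the Run Jobs page and the HTCondor Tutorial. As a minimal sketch, assuming HTCondor is the scheduler, a submit file and the commands to submit and monitor the job might look like the following (the script name, image, and resource requests are assumptions, not recommended values):

```bash
# Minimal HTCondor submit file (job.sub); all values are examples only
cat > job.sub <<'EOF'
universe       = docker
docker_image   = pytorch/pytorch:latest
executable     = train.sh
output         = job.out
error          = job.err
log            = job.log
request_gpus   = 1
request_cpus   = 4
request_memory = 16GB
queue
EOF

# Step 5: submit the job to the scheduler
condor_submit job.sub

# Step 6: monitor the queue and check machine status
condor_q
condor_status
```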
