Better Generalization

Penalize large weights with weight regularization
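
The wiki does not prescribe a specific framework, so the sketch below assumes TensorFlow/Keras, an arbitrary penalty strength of 0.01, and arbitrary layer sizes. An L2 kernel regularizer adds the sum of squared weights of a layer to the training loss, so large weights are penalized during optimization.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

# L2 weight regularization: adds 0.01 * sum(w^2) over this layer's kernel
# to the training loss, discouraging large weights.
model = tf.keras.Sequential([
    layers.Dense(128, activation="relu", input_shape=(20,),
                 kernel_regularizer=regularizers.l2(0.01)),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```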

Encourage sparse representations with activity regularization

  1. Large activations may indicate an over-fit model.

  2. There is a tension between the expressiveness and the generalization of the learned features.

  3. Encourage small activations with an additional penalty on the layer outputs (a sketch follows this list).

  4. Track the mean activation value to verify the penalty is taking effect.
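
A minimal sketch of the penalty from item 3, again assuming TensorFlow/Keras with an arbitrary 1e-4 coefficient and layer sizes: the activity regularizer penalizes the layer's outputs (activations) rather than its weights, pushing them toward zero and yielding a sparser representation. The probe model at the end is one way to track the mean activation from item 4.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

# L1 activity regularization: penalizes the layer's *outputs*, not its
# weights, so large activations increase the training loss.
model = tf.keras.Sequential([
    layers.Dense(128, activation="relu", input_shape=(20,),
                 activity_regularizer=regularizers.l1(1e-4)),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Track the mean activation of the hidden layer on a batch x_batch
# (x_batch is a placeholder for your own data).
probe = tf.keras.Model(inputs=model.inputs, outputs=model.layers[0].output)
# mean_activation = probe(x_batch).numpy().mean()
```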

Force small weights with weight constraints
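
A minimal sketch, assuming TensorFlow/Keras and an arbitrary limit of 3.0: a max-norm constraint rescales the kernel after each gradient update so the weights can never grow beyond the chosen norm.

```python
import tensorflow as tf
from tensorflow.keras import layers, constraints

# Max-norm constraint: after every update, each incoming-weight vector of
# this layer is rescaled so that its L2 norm stays <= 3.0.
model = tf.keras.Sequential([
    layers.Dense(128, activation="relu", input_shape=(20,),
                 kernel_constraint=constraints.MaxNorm(3.0)),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```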

Decouple layers with dropout
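
A minimal sketch, assuming TensorFlow/Keras and an arbitrary 50% drop rate: each Dropout layer zeroes a random subset of the preceding layer's outputs at every training step, so downstream layers cannot rely on any single unit.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Dropout: during training, each unit's output is zeroed with probability
# 0.5 (and the remaining outputs are rescaled); at inference time the
# layer passes inputs through unchanged.
model = tf.keras.Sequential([
    layers.Dense(128, activation="relu", input_shape=(20,)),
    layers.Dropout(0.5),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```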

Promote robustness with noise
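
A minimal sketch, assuming TensorFlow/Keras and an arbitrary standard deviation of 0.1: GaussianNoise adds zero-mean noise to the inputs during training only, which acts as a cheap form of data augmentation and makes the model less sensitive to small input perturbations.

```python
import tensorflow as tf
from tensorflow.keras import layers

# GaussianNoise: adds zero-mean Gaussian noise (stddev=0.1) to the inputs
# during training; the layer is inactive at inference time.
model = tf.keras.Sequential([
    layers.GaussianNoise(0.1, input_shape=(20,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```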

Halt training at the right time with early stopping
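
A minimal sketch, assuming TensorFlow/Keras, a held-out validation split, and an arbitrary patience of 10 epochs: the EarlyStopping callback halts training once the validation loss stops improving and rolls the model back to the best weights it has seen.

```python
import tensorflow as tf

# Stop once val_loss has not improved for 10 consecutive epochs and
# restore the weights from the best epoch.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=10,
    restore_best_weights=True,
)

# x_train / y_train are placeholders for your own data.
# model.fit(x_train, y_train,
#           validation_split=0.2,
#           epochs=200,
#           callbacks=[early_stop])
```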

Issues Log
