What is transfer learning?
Transfer learning is the process of transferring features learned by a deep neural network on one task to another. A model trained on one task is given supplemental training that adapts it to a different but related task. This approach is often effective because many of the early and middle layers of a neural network learn representations that are useful across many final applications. For example, the early layers of a convolutional neural network used for computer vision primarily identify basic low-level features such as edges, corners, curves, and textures in an image. These learned features feed into later layers that become increasingly application specific, so the same early layers can support different prediction goals, especially after a regimen of fine-tuning on examples from the target application.
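The layer reuse described above can be sketched in miniature. In the hypothetical example below, a weight matrix standing in for pretrained early layers is frozen, and only a newly initialized task-specific head is trained on a small synthetic target task; all data, dimensions, and names are illustrative assumptions, not part of any real pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(feats, W, y):
    p = softmax(feats @ W)
    return -np.log(p[np.arange(len(y)), y]).mean()

# Stand-in for pretrained early layers: pretend W1 was learned on a large
# source dataset and already encodes generic low-level features.
W1 = rng.normal(size=(4, 8))

# New task-specific head, initialized from scratch for the target task.
W2 = rng.normal(size=(8, 3)) * 0.01

# A tiny synthetic target task with only a few labeled examples.
X = rng.normal(size=(32, 4))
y = rng.integers(0, 3, size=32)

feats = relu(X @ W1)            # W1 is frozen, so features are computed once
loss_before = cross_entropy(feats, W2, y)

lr = 0.1
for _ in range(500):
    probs = softmax(feats @ W2)
    grad = probs
    grad[np.arange(len(y)), y] -= 1.0
    W2 -= lr * feats.T @ grad / len(y)   # only the head W2 is updated

loss_after = cross_entropy(feats, W2, y)
```

Because the frozen layer's output can be computed once and cached, only the small head is optimized, which is why transfer learning typically needs far less data and compute than training the full network end to end.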
Transfer learning has another advantage worth noting. Because the early and intermediate representations learned by the network can be reused for a different final goal, fewer training examples from the target application are needed to fully train an end-to-end network. You can reach the targeted accuracy faster, and with less new data, than by training the same model architecture from scratch. This is crucial when data for the final application is limited or expensive to collect (see the NVIDIA TAO Toolkit Overview for more information).