Bootstrap custom image classification using transfer learning

| Talk Title | Bootstrap custom image classification using transfer learning |
| --- | --- |
| Speakers | Danielle Dean (iRobot), Wee Hyong Tok (Microsoft) |
| Conference | Strata + Hadoop World |
| Conf Tag | Make Data Work |
| Location | Singapore |
| Date | December 6-8, 2016 |
| URL | Talk Page |
| Slides | Talk Slides |
| Video | |
Convolutional neural networks (CNNs) have been used in many image classification tasks and are usually trained on large image datasets, such as ImageNet and CIFAR. CNNs have been shown to be very effective at extracting image features across diverse domains. In practice, most people do not train a CNN from scratch due to time constraints. Transfer learning enables you to use pretrained deep neural networks (e.g., AlexNet, ResNet, and Inception V3) and adapt them for custom image classification tasks. Danielle Dean and Wee Hyong Tok walk you through the basics of transfer learning and demonstrate how you can use the technique to bootstrap the building of custom image classifiers using pretrained CNNs available in various deep learning toolkits (e.g., pretrained CNTK models, the Caffe Model Zoo, and pretrained TensorFlow models). Topics include:
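To make the core idea concrete, here is a minimal NumPy sketch of the transfer-learning recipe the talk describes: keep a pretrained feature extractor frozen and train only a new classification head on your own data. This is not code from the talk; the random-projection "backbone" stands in for a real pretrained CNN (e.g., ResNet from CNTK, Caffe, or TensorFlow), and the data and labels are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a frozen pretrained backbone. In a real workflow this
# would be a CNN such as ResNet with its convolutional weights held fixed;
# here it is just a fixed random projection followed by a ReLU.
W_backbone = rng.normal(size=(64, 128)) / 8.0  # frozen, never updated

def extract_features(x):
    # Frozen feature extractor: maps raw inputs to a feature vector.
    return np.maximum(x @ W_backbone, 0.0)

# Synthetic stand-in for a small custom dataset: 200 flattened "images"
# with a two-class label.
X = rng.normal(size=(200, 64))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# The new classification head is the only part we train: a logistic
# regression on top of the frozen features.
F = extract_features(X)
w, b = np.zeros(128), 0.0
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))  # sigmoid
    w -= 0.5 * F.T @ (p - y) / len(y)       # gradient step on log loss
    b -= 0.5 * np.mean(p - y)

pred = (F @ w + b > 0).astype(float)
accuracy = np.mean(pred == y)
```

Because only the small head is trained, this converges quickly even on modest custom datasets, which is exactly why transfer learning is a practical way to bootstrap a custom image classifier instead of training a CNN from scratch.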