Deep learning and AI benefit greatly from very large training sets. However, the software needed to use them remains complex, and few companies or academic institutions are currently able to work at that scale. This project aims to develop simple, easy-to-use, efficient tools that let deep learning and machine learning scale to petabyte-sized training datasets without requiring an entire IT staff.
Thomas Breuel works on deep learning and computer vision at NVIDIA Research. Prior to NVIDIA, he was a full professor of computer science at the University of Kaiserslautern (Germany) and worked as a researcher at Google, Xerox PARC, the IBM Almaden Research Center, and IDIAP (Switzerland), as well as a consultant to the US Bureau of the Census. He is an alumnus of the Massachusetts Institute of Technology and Harvard University.
The roots of the deep learning renaissance can be traced to the tremendous success of GPU-based object recognition in 2012, which substantially outperformed all previous methods. Those earlier methods were rooted in sound statistical and computational models, whereas deep learning methods remain largely heuristic.