Performed within MCP Data Science

What was once mere science fiction, such as self-driving cars and phones that unlock when you show them your face, is now reality. Machine learning, and its extension deep learning, are key enabling technologies: they provide techniques for learning to interpret gathered information from data, using large (artificial) neural networks trained, in turn, on large volumes of data points. Deep learning has transformed the fields of artificial intelligence (AI) and data science.

But does every specific application of deep learning require building up its own “knowledge” within the neural network? A SeRC researcher has found that this is not necessarily so.

In our original research, an AI model was first trained with a simple visual recognition task to assign an image to one of 1,000 available categories, covering a wide range of general visual classes such as man-made objects, animals, plants, and scenes. We observed the surprising result that the AI model also learned to solve other visual recognition tasks, such as finding smiling faces or specific activities in the images. The model could even retrieve other instances of a query object (such as the Statue of Liberty) across a large dataset of images.
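The retrieval result above can be pictured with a minimal sketch: once a pretrained network has mapped each image to a feature vector, finding other instances of a query object reduces to a nearest-neighbor search in feature space. The synthetic vectors below stand in for real CNN activations, which are not reproduced here; the function names and dimensions are illustrative assumptions, not the original implementation.

```python
import numpy as np

# Stand-in feature vectors: in the actual research these would be
# activations from a network pretrained on 1,000-class classification.
rng = np.random.default_rng(0)
database = rng.normal(size=(100, 512))   # features of 100 database images
# A query depicting the same object as database image 42, so its
# features nearly coincide with that image's features.
query = database[42] + 0.01 * rng.normal(size=512)

def retrieve(query, database, k=5):
    """Return indices of the k database images whose features are
    most similar to the query, by cosine similarity."""
    q = query / np.linalg.norm(query)
    d = database / np.linalg.norm(database, axis=1, keepdims=True)
    similarities = d @ q
    return np.argsort(-similarities)[:k]

top = retrieve(query, database)
# Image 42 ranks first, since its features almost match the query.
```

The key point is that the network is never retrained for retrieval: the features learned for classification already place visually similar images close together.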

The insight that what a model learns in one task can be highly useful for other tasks operating on similar sensory information has proven extremely valuable in computer vision (the automatic understanding of visual data). In fact, the original research paper marked a key turning point for the entire field of computer vision, convincing prominent researchers to accept deep learning as the go-to method for image analysis. This was noted by the journal Nature in May 2015 [1]:

“This success has brought about a revolution in computer vision; ConvNets are now the dominant approach for almost all recognition and detection tasks and approach human performance on some tasks.”

This piece of research is listed among the top 100 most influential deep-learning works published between 2012 and 2017 [2], based on citation counts.

[1] pubmed.ncbi.nlm.nih.gov/26017442
[2] github.com/terryum/awesome-deep-learning-papers