Learning Words with Pictures

Natural language processing (NLP) machines have made great progress by learning to recognize complex statistical patterns in sentences and paragraphs. Work with modern deep learning models such as the transformer has shown that sufficiently large networks (hundreds of millions of parameters) can do a good job processing language (e.g., translation) without having any information about what the words mean.

We humans make good use of meaning when we process language. We understand how the things, actions, and ideas described by language relate to each other. This gives us a big advantage over NLP machines – we don’t need the billions of examples these machines need to learn language.

NLP researchers have asked the question, “Is there some way to teach machines something about the meaning of words, and will that improve their performance?” This has led to the development of NLP systems that learn not just from samples of text, but also from digital images associated with the text, such as the captioned images in the COCO dataset. In my latest iMerit blog I describe such a system – the Vokenizer!

Machines Learning From Machines

‘If I have seen further, it is by standing on the shoulders of giants’

Sir Isaac Newton, 1675

Technical disciplines have always progressed by researchers building on past work, but the deep learning research community does this in spades with transfer learning. Transfer learning builds new deep learning systems on top of previously developed ones.

For example, in my recent iMerit blog, I describe a system to detect Alzheimer’s disease from MRI scans. It was built using a very large convolutional neural network (VGG16) that had been previously trained on 14 million visual images. The Alzheimer’s detection system replaced the last few layers of VGG16 with custom fully-connected layers. These custom layers were trained on 6,400 MRI images, while the parameters of the convolutional layers were ‘frozen’ at their previously trained values.

This approach works because VGG16 had already ‘learned’ some general ‘skills’, like how shape, contrast, and texture contribute to recognizing image differences. Applying this ‘knowledge’ allowed the Alzheimer’s detection system to be trained using a relatively small number of MRI images.
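Here is a minimal sketch of this pattern in TensorFlow/Keras. The input size, layer widths, and two-class output are illustrative assumptions, not the actual configuration of the Alzheimer’s system:

```python
# Transfer learning sketch: frozen VGG16 base + trainable custom layers.
import tensorflow as tf

# Load VGG16 pre-trained on ImageNet, dropping its original classifier layers
base = tf.keras.applications.VGG16(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3)
)
base.trainable = False  # 'freeze' the convolutional layers at their trained values

# Stack custom fully-connected layers on top of the frozen base
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g., Alzheimer's / healthy
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(mri_images, labels, ...)  # only the Dense layers are updated
```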

Transfer learning is remarkably easy to implement. The deep learning community has many open source repositories, such as the ONNX Model Zoo, which provide downloadable, pre-trained ML systems. In addition, ML system development environments such as TensorFlow make it easy to load previously trained systems and modify and train custom final layers.

To learn more about how transfer learning works, and how new research is extending the ability of previously trained ML systems to tackle new problems, read my iMerit blog.

Navigating the Cost Terrain with Minibatches

Training a Machine Learning system requires a journey through the cost terrain, where each location in the terrain represents particular values for all ML system parameters, and the height of the terrain is the cost, a mathematical value that reflects how well the ML system is performing for that parameter set (smaller cost means better performance). For a very simple ML system with only two parameters, we can visualize the cost terrain as a mountainous territory with peaks and valleys, plateaus and saddles. (Deep learning cost terrains are a lot like this, only instead of three dimensions they can have millions!)

Training mathematically explores the cost terrain, taking steps in promising directions, hoping not to fall off a cliff or get lost on a plateau. Our guide on this journey is gradient descent, which calculates the most promising next step in the search for the best ML system parameters – the ones that lie in the lowest valley of the cost terrain.

Gradient descent can be very cautious and look at all the training samples before taking a step. This makes each step a very good one, but progress is slow because looking at every training sample takes a long time. Or it can make a snap decision and take a step after every training sample it looks at. These snap decisions produce rapid steps through the cost terrain, but a lot of motion with little progress, because each step is all about one training sample – and we want the ML system to give good average performance across all the training samples.

The best way to efficiently navigate the cost terrain is a compromise between slow deliberation and snap judgement, called minibatching. This approach takes each step using a small subset of the training set – enough to get a pretty good idea of where to go, but a small enough sample size that the calculations can be done quickly using modern vector processors.
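As a flavor of the idea, here is a minimal NumPy sketch of minibatch gradient descent on a linear model with a squared-error cost. The model, batch size, and learning rate are illustrative assumptions:

```python
# Minibatch gradient descent on a linear model with squared-error cost.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 5))                     # training inputs
w_true = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
y = X @ w_true + rng.normal(scale=0.1, size=10_000)  # noisy targets

w = np.zeros(5)            # the parameters: our position in the cost terrain
lr, batch_size = 0.1, 64

for epoch in range(20):
    order = rng.permutation(len(X))    # shuffle the samples each pass
    for start in range(0, len(X), batch_size):
        idx = order[start:start + batch_size]
        Xb, yb = X[idx], y[idx]
        # gradient of the squared-error cost, estimated on the minibatch only
        grad = 2 * Xb.T @ (Xb @ w - yb) / len(idx)
        w -= lr * grad                 # step 'downhill' in the cost terrain

print(w)  # should land close to w_true
```

Each step here looks at 64 samples: noisier than using all 10,000, but far cheaper, and accurate enough on average to make steady downhill progress.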

Read my latest iMerit blog to get a better idea of how minibatching works.

Learning Without a Teacher

Machine learning applications generally rely on supervised learning – learning from training samples that have been labeled by a human ‘teacher’. Unsupervised learning, by contrast, learns what it can from unlabeled training samples. What can be learned this way are the basic structural characteristics of the training data, and this information can be a useful aid to supervised learning.

In my latest iMerit blog I describe how the long-used technique of clustering has been incorporated into deep learning systems, to provide a useful starting point for supervised learning and to extrapolate what is learned from labeled training data.
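As a simple illustration of clusters extrapolating scarce labels, here is a minimal scikit-learn sketch. The dataset, cluster count, and one-label-per-cluster budget are illustrative assumptions, not the specific approach described in the blog:

```python
# Clustering as an aid to supervised learning: label a few samples,
# then extrapolate those labels to whole clusters.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits

X, y = load_digits(return_X_y=True)  # y is used only to simulate a tiny label budget

# Step 1 (unsupervised): find structure in the unlabeled data
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X)

# Step 2 (scarce supervision): label just one sample per cluster -
# the sample nearest each cluster center
dists = np.linalg.norm(X[:, None, :] - kmeans.cluster_centers_[None, :, :], axis=2)
representative = dists.argmin(axis=0)   # index of the nearest sample per cluster
cluster_label = y[representative]       # 10 human-provided labels in total

# Step 3 (extrapolation): spread each label to its whole cluster
pseudo_labels = cluster_label[kmeans.labels_]
print((pseudo_labels == y).mean())      # accuracy of the labels we never paid for
```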

The Road to Human-Level Natural Language Processing

Language is a hallmark of human intelligence, and Natural Language Processing (NLP) has long been a goal of Artificial Intelligence. The ability of early computers to process rules and look up definitions made machine translation seem right around the corner. However, language proved to be more complicated than rules and definitions.

The observation that humans use practical knowledge of the world to interpret language set off a quest to create vast databases of human knowledge to apply to NLP. But it wasn’t until deep learning became available that human-level NLP was achieved, using an approach quite unlike human language understanding.

In my latest iMerit blog I trace the path that led to modern NLP systems, which leave meaning to humans and let machines do what they are good at – finding patterns in data.

Encoding Human and Machine Knowledge for Machine Learning

iMerit is a remarkable company of over 4000 people that specializes in annotating the data needed to train machine learning systems.

I am writing a series of blogs for them on various aspects of machine learning. In my latest blog I explain how ML systems embody both human intelligence and a form of machine ‘intelligence’.

Just as our biology provides the basis for human learning, human-provided ML system designs provide frameworks that enable machine learning. Through human engineering, these designs bring ML systems to the point where everything they need to ‘know’ about the world can be reflected in their parameters.

Analogous to the role of our parents and teachers, training data annotation drives the learning process toward competent action. Annotation is the crucial link between the ML system and its operational world, and accurate and complete annotation is the only way an ML system can learn to perform well.