Reconstruct inputs to detect anomalies, remove noise, and generate images and text

An autoencoder is a type of deep learning network that is trained to replicate its input data. Autoencoders have surpassed traditional engineering techniques in accuracy and performance in many applications, including anomaly detection, text generation, image generation, image denoising, and digital communications.

The MATLAB Deep Learning Toolbox™ provides a number of autoencoder application examples, which are referenced below.

How Do Autoencoders Work?

Autoencoders output a reconstruction of their input. An autoencoder consists of two smaller networks: an encoder and a decoder. During training, the encoder learns a compact set of features, known as a latent representation, from the input data. At the same time, the decoder learns to reconstruct the data from these features. The trained autoencoder can then be applied to new, previously unseen inputs. Autoencoders generalize well and can be used with different data types, including images, time series, and text.

Figure 1: An autoencoder consists of an encoder and a decoder.
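The encoder–decoder structure described above can be sketched in a few lines. The following is a minimal, hypothetical NumPy example (not the MATLAB Deep Learning Toolbox API): a one-layer linear encoder compresses 8-dimensional inputs to a 2-dimensional latent representation, and a one-layer decoder reconstructs the input from it, with both trained jointly by gradient descent on the reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples, n_features, n_latent = 200, 8, 2

# Synthetic data that lies near a 2-D subspace, so a 2-D latent
# representation can capture its structure.
basis = rng.normal(size=(n_latent, n_features))
X = rng.normal(size=(n_samples, n_latent)) @ basis

W_enc = rng.normal(scale=0.1, size=(n_features, n_latent))  # encoder weights
W_dec = rng.normal(scale=0.1, size=(n_latent, n_features))  # decoder weights
lr = 0.01

for _ in range(500):
    Z = X @ W_enc           # encoder: latent representation
    X_hat = Z @ W_dec       # decoder: reconstruction of the input
    err = X_hat - X         # reconstruction error
    # Gradient descent on the mean squared reconstruction error
    W_dec -= lr * Z.T @ err / n_samples
    W_enc -= lr * X.T @ (err @ W_dec.T) / n_samples

mse = np.mean((X - (X @ W_enc) @ W_dec) ** 2)
```

A real autoencoder would add nonlinear activations and more layers, but the training objective is the same: make the decoder's output match the encoder's input.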

What Applications Use Autoencoders?

Because the latent representation is compact, the encoder learns to keep the dominant structure of the input data and discard noise during training. This makes autoencoders well suited to removing noise or detecting anomalies by comparing the inputs with their reconstructions (see Figures 2 and 3).

Figure 2: Noise removal from images.

Figure 3: Image-based anomaly detection.
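The comparison of input and reconstruction can be made concrete with reconstruction error. This hedged sketch fits the optimal linear autoencoder in closed form via SVD (equivalent to PCA) rather than training by backpropagation, but the thresholding logic is the same for any autoencoder: inputs that the model cannot reconstruct well are flagged as anomalies.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Normal" data lies near a 2-D subspace of an 8-D space, plus small noise.
basis = rng.normal(size=(2, 8))
X_normal = rng.normal(size=(300, 2)) @ basis + 0.05 * rng.normal(size=(300, 8))

# Encoder/decoder from the top-2 right singular vectors of the normal data.
_, _, Vt = np.linalg.svd(X_normal, full_matrices=False)
V = Vt[:2].T                        # columns span the learned latent space

def reconstruction_error(X):
    X_hat = (X @ V) @ V.T           # encode, then decode
    return np.mean((X - X_hat) ** 2, axis=1)

# Threshold chosen from the errors on normal data (an assumed heuristic).
threshold = np.percentile(reconstruction_error(X_normal), 99)

x_anomalous = rng.normal(size=(1, 8)) * 3.0   # does not fit the subspace
is_anomaly = reconstruction_error(x_anomalous)[0] > threshold
```

The same scoring applies to images: pixels (or regions) with large reconstruction error localize the anomaly, as in Figure 3.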

The latent representation can also be used to generate synthetic data. For example, you can automatically create realistic-looking handwriting or phrases of text (Figure 4).

Figure 4: Generating phrases of new text from existing text.
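Generation works by sampling new latent vectors and passing them through the decoder. The following hypothetical sketch (numeric data rather than a text or handwriting model) fits a linear decoder to training data, then decodes freshly sampled latent codes into novel samples with the same structure as the data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Training data near a 2-D subspace of a 16-D space.
basis = rng.normal(size=(2, 16))
X = rng.normal(size=(500, 2)) @ basis

_, S, Vt = np.linalg.svd(X, full_matrices=False)
V = Vt[:2].T                        # decoder directions (latent -> data)
scale = S[:2] / np.sqrt(len(X))     # std of each latent coordinate

# Sample new latent codes from the latent distribution and decode them
# into synthetic data points the model has never seen.
z_new = rng.normal(size=(10, 2)) * scale
X_synth = z_new @ V.T
```

A variational autoencoder makes this principled by explicitly shaping the latent distribution during training, so sampled codes reliably decode to realistic outputs.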

Time series-based autoencoders can also be used to detect anomalies in signal data. For example, in predictive maintenance, an autoencoder can be trained on normal operating data from an industrial machine (Figure 5).

Figure 5: Training on normal operating data for predictive maintenance.

The trained autoencoder is then tested on new incoming data. A large deviation between the incoming data and the autoencoder’s reconstruction indicates abnormal operation, which could require investigation (Figure 6).

Figure 6: A large error indicating abnormalities in the input data, which may be a sign that maintenance is needed.
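The predictive-maintenance workflow above can be sketched on a signal. In this illustrative example (hypothetical data, not Toolbox code), windows of a "normal" machine signal are reconstructed by a linear autoencoder fitted with SVD; an injected fault raises the windowed reconstruction error above the threshold learned on normal data.

```python
import numpy as np

rng = np.random.default_rng(2)

def windows(signal, width=32):
    """Slice a 1-D signal into overlapping fixed-width windows."""
    return np.stack([signal[i:i + width] for i in range(len(signal) - width)])

t = np.arange(4000) * 0.05
normal = np.sin(t) + 0.02 * rng.normal(size=t.size)   # healthy operation
W = windows(normal)

# Fit the autoencoder on normal windows only (4-D latent space).
_, _, Vt = np.linalg.svd(W - W.mean(0), full_matrices=False)
V = Vt[:4].T

def error(signal):
    Xw = windows(signal) - W.mean(0)
    return np.mean((Xw - (Xw @ V) @ V.T) ** 2, axis=1)

# Alarm threshold set with headroom above the worst normal window.
threshold = error(normal).max() * 1.5

faulty = normal.copy()
faulty[2000:2100] += np.sin(7 * t[2000:2100])         # abnormal vibration
alarm = error(faulty).max() > threshold
```

Because the model was trained only on healthy data, the fault segment reconstructs poorly and trips the alarm, mirroring Figure 6.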

Key Points

  • Autoencoders do not require labeled input data for training: they are unsupervised
  • There are several varieties of autoencoders built for different engineering tasks, including:
    • Convolutional autoencoders – These use convolutional layers in the encoder and decoder; the decoder output attempts to mirror the encoder input, which is useful for image denoising
    • Variational autoencoders – These learn a probabilistic latent space that forms a generative model, useful for anomaly detection
    • LSTM autoencoders – These create a generative model for time series applications

See also: what is deep learning, long short-term memory networks, what is a convolutional neural network?, anomaly detection