Speech Recognition using Deep Learning
This example shows how to train a simple deep learning model that detects the presence of speech commands in audio. The example uses the Speech Commands Dataset [1] to train a convolutional neural network to recognize a given set of commands.
To run the whole example, you must first download the data set. If you do not want to download the data set or train the network, then you can load a pretrained network by typing load('commandNet.mat') at the command line. Then, go directly to the Detect Commands Using Streaming Audio from Microphone section at the end of the example.
Load Speech Commands Data Set
Download the data set from http://download.tensorflow.org/data/speech_commands_v0.01.tar.gz and untar the downloaded file. Set datafolder to the location of the data. Use audioexample.Datastore to create a datastore that contains the file names and the corresponding labels. Use the folder names as the label source. Specify the read method to read the entire audio file. Create a copy of the datastore for future use.
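A sketch of this setup, assuming the data was extracted to a folder under tempdir (adjust datafolder to your location); audioexample.Datastore is a supporting class shipped with the example, so its name-value pairs may differ between releases:

% Location of the extracted data set (assumed path; adjust as needed)
datafolder = fullfile(tempdir,'speech_commands_v0.01');

% Create a datastore of .wav files, using folder names as labels and
% reading each file whole
ads = audioexample.Datastore(datafolder, ...
    'IncludeSubfolders',true, ...
    'FileExtensions','.wav', ...
    'LabelSource','foldernames', ...
    'ReadMethod','File');

ads0 = copy(ads); % keep a copy for the background-noise step later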
Choose Words to Recognize
Specify which words are the commands that you want your model to recognize. Label all words that are not among the commands as unknown. The idea is that these words should approximate the distribution of all words other than the commands. To reduce the class imbalance between the known and unknown words, include each unknown word only with a certain probability. Do not include the longer files with background noise in the _background_noise_ folder.
Use getSubsetDatastore(ads,indices) to create a datastore that only contains the files and labels indexed by indices. Reduce the datastore ads to only contain the commands and the subset of unknown words. Count the number of examples belonging to each class.
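A sketch of this selection, assuming an inclusion probability of 0.2 for the unknown words (the exact fraction is a tuning choice):

commands = ["yes","no","up","down","left","right","on","off","stop","go"];

isCommand = ismember(ads.Labels,categorical(commands));
% Everything that is neither a command nor background noise is "unknown"
isUnknown = ~ismember(ads.Labels,categorical([commands,"_background_noise_"]));

% Keep each unknown file only with a certain probability to reduce
% class imbalance (0.2 is an assumed value)
includeFraction = 0.2;
mask = rand(numel(ads.Labels),1) < includeFraction;
isUnknown = isUnknown & mask;
ads.Labels(isUnknown) = categorical("unknown");

ads = getSubsetDatastore(ads,isCommand|isUnknown);
countEachLabel(ads)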
Split Data into Training, Validation, and Test Sets
The data set folder contains text files with lists of sound files to use as the validation and test sets. Because the data set contains multiple utterances of the same word by the same person, it is better to use these predefined sets than to select a random subset of the whole data set. Use the supporting function splitData to split the datastore into training, validation, and test datastores based on the lists of validation and test files in datafolder.
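The split itself reduces to a single call to the supporting function:

% Split according to the validation and test file lists in the data set
[adsTrain,adsValidation,adsTest] = splitData(ads,datafolder);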
Compute Speech Spectrograms
To prepare the data for efficient training of a convolutional neural network, convert the speech waveforms to log-bark auditory spectrograms.
Define the parameters of the spectrogram calculation.
segmentDuration is the duration of each speech clip (in seconds). frameDuration is the duration of each frame for spectrogram calculation. hopDuration is the time step between each column of the spectrogram. numBands is the number of log-bark filters and equals the height of each spectrogram.
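One plausible choice of values, assuming one-second clips analyzed with 25 ms frames, a 10 ms hop, and 40 bands:

segmentDuration = 1;      % duration of each speech clip (s)
frameDuration = 0.025;    % duration of each spectrogram frame (s)
hopDuration = 0.010;      % time step between spectrogram columns (s)
numBands = 40;            % number of log-bark filters (spectrogram height)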
Compute the spectrograms for all the training, validation, and test sets by using the supporting function speechSpectrograms. The speechSpectrograms function uses auditorySpectrogram for the spectrogram calculations. To obtain data with a smoother distribution, take the logarithm of the spectrograms using a small offset epsil.
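A sketch of the computation, with epsil assumed to be 1e-6:

epsil = 1e-6; % small offset so that log10 never sees zero

XTrain = speechSpectrograms(adsTrain,segmentDuration,frameDuration,hopDuration,numBands);
XTrain = log10(XTrain + epsil);

XValidation = speechSpectrograms(adsValidation,segmentDuration,frameDuration,hopDuration,numBands);
XValidation = log10(XValidation + epsil);

XTest = speechSpectrograms(adsTest,segmentDuration,frameDuration,hopDuration,numBands);
XTest = log10(XTest + epsil);

YTrain = adsTrain.Labels;
YValidation = adsValidation.Labels;
YTest = adsTest.Labels;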
Visualize Data
Plot the waveforms and spectrograms of a few training examples. Play the corresponding audio clips.
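A minimal sketch of such a visualization, assuming the datastore exposes Files and Labels properties and XTrain stores the spectrograms as a 4-D array with examples along the last dimension:

idx = randperm(numel(adsTrain.Files),3); % pick three random training examples
figure
for i = 1:3
    [x,fs] = audioread(adsTrain.Files{idx(i)});
    subplot(2,3,i)
    plot(x)
    title(string(adsTrain.Labels(idx(i))))

    subplot(2,3,i+3)
    pcolor(XTrain(:,:,1,idx(i))) % spectrogram of the same clip
    shading flat

    sound(x,fs)
    pause(2) % leave time for each clip to play
end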
Neural networks train most easily when their inputs have a reasonably smooth distribution and are normalized. To check that the data distribution is smooth, plot a histogram of the pixel values of the training data.
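For example:

figure
histogram(XTrain,'EdgeColor','none','Normalization','pdf')
axis tight
ax = gca;
ax.YScale = 'log'; % log scale makes the tails of the distribution visible
xlabel("Input Pixel Value")
ylabel("Probability Density")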
Add Background Noise Data
The network should not only be able to recognize different spoken words. It should also be able to detect whether a word is spoken at all, or whether the input contains only background noise.
Use the audio files in the _background_noise_ folder to create samples of one-second clips of background noise. Create an equal number of background clips from each background noise file. You can also create your own recordings of background noise and add them to the _background_noise_ folder. To calculate numBkgClips spectrograms of background clips taken from the audio files in the adsBkg datastore, use the supporting function backgroundSpectrograms. Before calculating spectrograms, the function rescales each audio clip by a factor sampled from a log-uniform distribution in the range given by volumeRange.
Create 4000 background clips and rescale each of them by a number between 1e-4 and 1. XBkg contains spectrograms of background noise with volumes ranging from practically silent to loud.
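A sketch using the supporting function, assuming its argument order mirrors speechSpectrograms:

adsBkg = getSubsetDatastore(ads0,ads0.Labels=="_background_noise_");

numBkgClips = 4000;
volumeRange = [1e-4,1]; % rescaling factors sampled log-uniformly in this range

XBkg = backgroundSpectrograms(adsBkg,numBkgClips,volumeRange, ...
    segmentDuration,frameDuration,hopDuration,numBands);
XBkg = log10(XBkg + epsil);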
Split the spectrograms of background noise over the training, validation, and test sets. Because the _background_noise_ folder only contains about five and a half minutes of background noise, the background samples in the different data sets are highly correlated. To increase the variation in the background noise, you can create your own background files and add them to the folder. To increase the robustness to noise, you can also try mixing background noise into the speech files.
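A sketch of the split, assuming an 80/10/10 division and a new background class label:

numTrainBkg = floor(0.8*numBkgClips);
numValidationBkg = floor(0.1*numBkgClips);
numTestBkg = floor(0.1*numBkgClips);

% Append the background spectrograms and label them "background"
XTrain(:,:,:,end+1:end+numTrainBkg) = XBkg(:,:,:,1:numTrainBkg);
YTrain(end+1:end+numTrainBkg) = "background";

XValidation(:,:,:,end+1:end+numValidationBkg) = XBkg(:,:,:,numTrainBkg+1:numTrainBkg+numValidationBkg);
YValidation(end+1:end+numValidationBkg) = "background";

XTest(:,:,:,end+1:end+numTestBkg) = XBkg(:,:,:,numTrainBkg+numValidationBkg+1:end);
YTest(end+1:end+numTestBkg) = "background";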
Plot the distribution of the different class labels in the training and validation sets. The test set has a very similar distribution to the validation set.
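For example:

figure
subplot(2,1,1)
histogram(YTrain)
title("Training Label Distribution")
subplot(2,1,2)
histogram(YValidation)
title("Validation Label Distribution")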
Add Data Augmentation
Create an augmented image datastore for automatic augmentation and resizing of the spectrograms. Translate the spectrogram randomly up to 10 frames (100 ms) forwards or backwards in time, and scale the spectrograms along the time axis up or down by 20 percent. Augmenting the data somewhat increases the effective size of the training data and helps prevent the network from overfitting. The augmented image datastore creates augmented images in real time and inputs these to the network. No augmented spectrograms are saved in memory.
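A sketch of the augmentation setup; filling translated-away regions with the log of the offset (that is, "silence") is an assumption:

sz = size(XTrain);
specSize = sz(1:2);
imageSize = [specSize 1];

augmenter = imageDataAugmenter( ...
    'RandXTranslation',[-10 10], ...  % up to 10 frames (100 ms) in time
    'RandXScale',[0.8 1.2], ...       % +/- 20 percent along the time axis
    'FillValue',log10(epsil));        % pad with "silence" after translation

augimdsTrain = augmentedImageDatastore(imageSize,XTrain,YTrain, ...
    'DataAugmentation',augmenter);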
Define Neural Network Architecture
Create a simple network architecture as an array of layers. Use convolutional and batch normalization layers, and downsample the feature maps "spatially" (that is, in time and frequency) using max pooling layers. Add a final max pooling layer that pools the input feature map globally over time. This enforces (approximate) time-translation invariance in the input spectrograms, which seems reasonable if we expect the network to perform the same classification independent of the exact position of the speech in time. This global pooling also significantly reduces the number of parameters of the final fully connected layer. To reduce the chance of the network memorizing specific features of the training data, add a small amount of dropout to the inputs to the layers with the largest number of parameters. These layers are the convolutional layers with the largest number of filters. The final convolutional layers have 64*64*3*3 = 36864 weights each (plus biases). The final fully connected layer has 12*5*64 = 3840 weights.
Use a weighted cross entropy classification loss. weightedCrossEntropyLayer(classNames,classWeights) creates a custom layer that calculates the weighted cross entropy loss for the classes in classNames using the weights in classWeights. To give each class equal weight in the loss, use class weights that are inversely proportional to the number of training examples of each class. When using the Adam optimizer to train the network, training should be independent of the overall normalization of the class weights.
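The following sketch is one architecture consistent with the description above: three stride-2 max pooling stages reduce the 40-band spectrogram (about 98 frames, given the parameters assumed earlier) to 5-by-13, a final [1 13] pooling collapses time, and the fully connected layer then has 12*5*64 = 3840 weights. The filter counts, dropout probability, and weight normalization are assumptions:

% Class weights inversely proportional to the class counts,
% normalized so that they average to 1
classNames = categories(YTrain);
classWeights = 1./countcats(YTrain);
classWeights = classWeights/mean(classWeights);
numClasses = numel(classNames); % 10 commands + unknown + background = 12

dropoutProb = 0.2; % assumed dropout probability
layers = [
    imageInputLayer(imageSize)

    convolution2dLayer(3,16,'Padding','same')
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(3,'Stride',2,'Padding','same')

    convolution2dLayer(3,32,'Padding','same')
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(3,'Stride',2,'Padding','same')

    convolution2dLayer(3,64,'Padding','same')
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(3,'Stride',2,'Padding','same')

    % Dropout on the inputs to the largest convolutional layers
    dropoutLayer(dropoutProb)
    convolution2dLayer(3,64,'Padding','same')
    batchNormalizationLayer
    reluLayer
    dropoutLayer(dropoutProb)
    convolution2dLayer(3,64,'Padding','same')
    batchNormalizationLayer
    reluLayer

    % Pool globally over time: 13 remaining frames -> 1
    maxPooling2dLayer([1 13])

    fullyConnectedLayer(numClasses)
    softmaxLayer
    weightedCrossEntropyLayer(classNames,classWeights)];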
Train Network
Specify the training options. Use the Adam optimizer with a mini-batch size of 128 and an initial learning rate of 5e-4. Train for 25 epochs and reduce the learning rate by a factor of 10 after 20 epochs.
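A sketch of the options; validating once per epoch is an assumption:

miniBatchSize = 128;
validationFrequency = floor(numel(YTrain)/miniBatchSize); % validate once per epoch

options = trainingOptions('adam', ...
    'InitialLearnRate',5e-4, ...
    'MaxEpochs',25, ...
    'MiniBatchSize',miniBatchSize, ...
    'Shuffle','every-epoch', ...
    'Plots','training-progress', ...
    'Verbose',false, ...
    'ValidationData',{XValidation,YValidation}, ...
    'ValidationFrequency',validationFrequency, ...
    'LearnRateSchedule','piecewise', ...
    'LearnRateDropFactor',0.1, ...
    'LearnRateDropPeriod',20);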
Train the network. If you do not have a GPU, then training the network can take some time. To load a pretrained network instead of training a network from scratch, set doTraining to false.
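For example:

doTraining = false; % set to true to train from scratch

if doTraining
    trainedNet = trainNetwork(augimdsTrain,layers,options);
else
    load('commandNet.mat','trainedNet'); % assumes the variable is named trainedNet
end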
Evaluate Trained Network
Calculate the final accuracy of the network on the training set (without data augmentation) and the validation set. Plot the confusion matrix. The network is very accurate on this data set. However, the training, validation, and test data all come from similar distributions that do not necessarily reflect real-world environments. This applies in particular to the unknown category, which contains utterances of only a small number of words.
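A sketch of the evaluation; confusionchart stands in here for whichever confusion matrix plot your release provides:

YValPred = classify(trainedNet,XValidation);
validationError = mean(YValPred ~= YValidation);
YTrainPred = classify(trainedNet,XTrain);
trainError = mean(YTrainPred ~= YTrain);

disp("Training error: " + trainError*100 + "%")
disp("Validation error: " + validationError*100 + "%")

figure
confusionchart(YValidation,YValPred);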
In applications with constrained hardware resources, such as mobile applications, it is important to respect limitations on available memory and computational resources. Compute the total size of the network in kilobytes, and test its prediction speed when using the CPU. The prediction time is the time for classifying a single input image. If you input multiple images to the network, these can be classified simultaneously, leading to shorter prediction times per image. For this application, however, the single-image prediction time is the most relevant.
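A sketch of these measurements; averaging over repeated calls after a short warm-up is an assumption:

% Approximate size of the network in memory
info = whos('trainedNet');
disp("Network size: " + info.bytes/1024 + " kB")

% Time single-image prediction on the CPU, ignoring the first runs
% while the network warms up
time = zeros(100,1);
for i = 1:100
    x = randn(imageSize);
    tic
    [YPredicted,probs] = classify(trainedNet,x,'ExecutionEnvironment','cpu');
    time(i) = toc;
end
disp("Single-image prediction time on CPU: " + mean(time(11:end))*1000 + " ms")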
Detect Commands Using Streaming Audio from Microphone
Test your newly trained command detection network on streaming audio from your microphone. If you have not trained a network, then type load('commandNet.mat') at the command line to load a pretrained network and the parameters required to classify live, streaming audio. Try speaking one of the speech commands, for example, 'yes', 'no', or 'stop'. Then, try one of the unknown words, such as 'marvin', 'sheila', 'bed', 'house', 'cat', 'bird', or any number from zero to nine.
Specify the audio sampling rate and classification rate in Hz and create an audio device reader to read audio from your microphone.
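For example, assuming the 16 kHz sample rate of the data set and 20 classifications per second:

fs = 16e3;               % sample rate of the data set (Hz)
classificationRate = 20; % number of classifications per second

audioIn = audioDeviceReader('SampleRate',fs, ...
    'SamplesPerFrame',floor(fs/classificationRate));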
Specify parameters for the streaming spectrogram computations and initialize a buffer for the audio. Extract the classification labels of the network and initialize buffers of half a second for the labels and classification probabilities of the streaming audio. Use these buffers to build agreement over when a command is detected, based on multiple frames over half a second.
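A sketch of this initialization; reading the class names from the output layer's ClassNames property is an assumption about the custom layer:

frameLength = frameDuration*fs;
hopLength = hopDuration*fs;
waveBuffer = zeros([fs,1]); % one second of audio

labels = trainedNet.Layers(end).ClassNames; % assumed property of the custom layer
YBuffer(1:classificationRate/2) = categorical("background");
probBuffer = zeros([numel(labels),classificationRate/2]);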
Create a figure and detect commands as long as the created figure exists. To stop the live detection, simply close the figure. Add the path of the auditorySpectrogram function that calculates the spectrograms.
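A condensed sketch of the detection loop. The auditorySpectrogram call is schematic (its arguments must match those used during training), and the 0.7 agreement thresholds are assumed values:

h = figure('Units','normalized','Position',[0.2 0.1 0.6 0.8]);

while ishandle(h)
    % Shift new audio samples into the one-second buffer
    x = audioIn();
    waveBuffer(1:end-numel(x)) = waveBuffer(numel(x)+1:end);
    waveBuffer(end-numel(x)+1:end) = x;

    % Compute the log-bark spectrogram of the latest second of audio
    % (schematic call; match the training-time parameters)
    spec = auditorySpectrogram(waveBuffer,fs,frameLength,hopLength,numBands);
    spec = log10(spec + epsil);

    % Classify and update the half-second label and probability buffers
    [YPredicted,probs] = classify(trainedNet,spec,'ExecutionEnvironment','cpu');
    YBuffer = [YBuffer(2:end),YPredicted];
    probBuffer = [probBuffer(:,2:end),probs(:)];

    % Plot the current waveform and spectrogram
    subplot(2,1,1)
    plot(waveBuffer)
    axis tight
    subplot(2,1,2)
    pcolor(spec)
    shading flat

    % Declare a command only when the buffers agree over half a second
    [YMode,count] = mode(YBuffer);
    maxProb = max(probBuffer(:));
    if YMode ~= "background" && count >= 0.7*numel(YBuffer) && maxProb >= 0.7
        title(string(YMode),'FontSize',20)
    end
    drawnow
end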