Parallel Prediction Retrains Neural Networks

Here’s how it usually happens: neural networks are trained, and an application uses them to make decisions. Unfortunately, this overlooks the real-life fact that circumstances change, invalidating the network’s training. What makes the situation particularly serious is that retraining a network takes time and may not be practical for real-time applications. But parallel processing technology offers a solution that makes retraining neural networks in real time a practical option.

Two examples:

Imagine a robot scanning the ocean floor to gather geological data. It makes movement and navigation decisions based on trained neural networks. But if it encounters situations for which it was not trained, the robot’s behavior may be unpredictable, or downright wrong. If such a robot could retrain in real time, as flesh-and-blood animals do, it would have a chance of surviving and completing its mission.

Let’s say that a financial trading program has neural networks that have been trained on all known trends. It makes investment predictions based on the trained neural networks it contains. Here again, if a new trend surfaces, the predictions may be inaccurate and may result in a loss of investment capital. What this program needs is the ability to respond to new trends and retrain itself so that its predictions remain accurate.

Demo Program for a Smarter Solution

To illustrate a possible solution, I wrote a demonstration program using Intel parallel processing technology. The program generates four pre-calculated data sets that can be used to train and test neural networks, along with four neural network objects designed to dovetail with the parallel processing technology. The following methodology is used whenever a new data set is selected.

1. A derived data set for training is created. This is important because neural networks trained on sequential data do not learn as well as those trained on data that has been reordered. A network trained on sequential (or ordered) data tends to over-fit, and the results are usually less than optimal. (A simple shuffling sketch follows this list.)

2. Three neural networks are trained. The first is trained with only 100 training epochs so that it will be ready to use quickly. The reason for this is that in a real situation, even a poorly trained network is better than one that was trained for another purpose.

3. The other two networks are then trained progressively more thoroughly, each with a greater number of training epochs. In this way, a network will eventually emerge that makes good decisions.
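
To make step 1 concrete, here is a minimal sketch of what building the derived, reordered training set might look like. The Sample type and the MakeDerivedTrainingSet name are my own stand-ins; the article doesn’t show the demo’s actual data structures.

#include <algorithm>
#include <random>
#include <vector>

// Hypothetical sample type -- the demo's real training record isn't shown.
struct Sample { std::vector<double> inputs; std::vector<double> targets; };

// Build the derived training set by shuffling the samples so the network
// never sees them in their original, sequential order.
void MakeDerivedTrainingSet(std::vector<Sample>& samples)
{
    std::mt19937 rng(std::random_device{}());
    std::shuffle(samples.begin(), samples.end(), rng);
}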

The Magic Sauce

The way to train the three networks without stalling the program is to spin up parallel threads. The program uses cilk_spawn, and using it could not be easier; the following code shows how to spin up the three training methods.

cilk_spawn TrainFirst();   // 100 epochs -- ready quickly
cilk_spawn TrainSecond();  // 200 epochs
cilk_spawn TrainThird();   // 300 epochs -- most thorough

The three functions are extremely simple. Each trains a neural network object and then assigns the resulting values to the neural network object that the application uses to draw its predictions. The first trains with 100 epochs, the second with 200, and the third with 300. When you run the application, you can easily see the user interface reflect the status of each neural network.
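
Here is a minimal sketch of what one of these functions might look like. NeuralNetwork, Train(), and the global names are assumptions on my part; the article doesn’t show the demo’s actual code.

#include <atomic>

// Hypothetical stand-in for the demo's network class.
struct NeuralNetwork {
    void Train(int epochs) { /* back-propagation loop would go here */ }
};

NeuralNetwork g_firstNet;      // network owned by this training function
NeuralNetwork g_predictionNet; // network the application predicts with
std::atomic<bool> g_firstReady(false);

// Train quickly, publish the result, and set a flag rather than
// touching the user interface directly (see the gotchas below).
void TrainFirst()
{
    g_firstNet.Train(100);          // only 100 epochs, so it's usable fast
    g_predictionNet = g_firstNet;   // hand the trained values to the predictor
    g_firstReady = true;            // polled by an independent thread
}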

Gotchas and Caveats

I discovered a few gotchas and caveats when writing the app. The major one is that you cannot put any user interface code inside the function that cilk_spawn references. If you do, the function call will block, meaning that the calling code won’t get control back until the spawned method (or I should say the allegedly spawned method) finishes. What I ended up doing was creating some semaphore variables that are examined from an independent thread. When the semaphore variables are set, the independent thread makes the user interface calls.
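
A minimal sketch of that arrangement, assuming std::atomic flags as the semaphore variables and a hypothetical UpdateStatusDisplay() standing in for the app’s real UI calls:

#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

std::atomic<bool> g_firstReady(false), g_secondReady(false), g_thirdReady(false);

// Hypothetical stand-in for the app's real user-interface update.
void UpdateStatusDisplay(int network) { std::printf("network %d ready\n", network); }

// Independent thread that owns all user-interface calls. The spawned
// training functions only set the flags; this loop notices and reacts.
void StatusPollingThread()
{
    bool shown[3] = { false, false, false };
    while (!(shown[0] && shown[1] && shown[2])) {
        if (g_firstReady  && !shown[0]) { UpdateStatusDisplay(1); shown[0] = true; }
        if (g_secondReady && !shown[1]) { UpdateStatusDisplay(2); shown[1] = true; }
        if (g_thirdReady  && !shown[2]) { UpdateStatusDisplay(3); shown[2] = true; }
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
    }
}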

Another warning is in order. I originally wrote the application with three consecutive uses of cilk_spawn. While this worked, the CPU usage on my six-core development machine climbed to the point that the machine became sluggish and almost non-responsive. I would suggest that you experiment with spawning simultaneous threads that perform CPU-intensive tasks. You might find that it is not an issue, but you should make sure you have thoroughly tested your application. My final solution was to wait until each training function finished before launching the next one.
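
A sketch of that final arrangement: a single spawned worker runs the three training passes back-to-back, so only one extra core stays busy while the main thread services the UI. TrainAllStaged and RunMessageLoop are my own stand-ins, not the demo’s actual names.

#include <cilk/cilk.h>

void TrainFirst()  { /* 100-epoch pass */ }
void TrainSecond() { /* 200-epoch pass */ }
void TrainThird()  { /* 300-epoch pass */ }

// Run the three passes in order; each finishes before the next begins.
void TrainAllStaged()
{
    TrainFirst();
    TrainSecond();
    TrainThird();
}

void RunMessageLoop() { /* hypothetical UI loop */ }

void BeginRetraining()
{
    cilk_spawn TrainAllStaged();  // training proceeds on a worker...
    RunMessageLoop();             // ...while this continuation keeps the UI alive
    // Note: Cilk inserts an implicit sync at the end of any function
    // that spawns, so BeginRetraining returns only after training is done.
}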

In my tests, using cilk_spawn from within an already-spawned thread gained nothing; it behaved as if I had simply performed a normal function call from within the spawned thread.

The Results

The application (which can be downloaded here) lets users select a data set, watch the application retrain without missing a beat, and see a graphical representation of the results, as shown in Figure 1.

The application uses whichever neural network has the most thorough training to make predictions.
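
Conceptually, that selection looks something like the following sketch (the names are my stand-ins, not the demo’s actual code):

#include <atomic>

struct NeuralNetwork { /* weights, Predict(), etc. */ };

NeuralNetwork g_firstNet, g_secondNet, g_thirdNet;
std::atomic<bool> g_firstReady(false), g_secondReady(false), g_thirdReady(false);

// Return the most thoroughly trained network that has finished so far.
NeuralNetwork* SelectPredictor()
{
    if (g_thirdReady)  return &g_thirdNet;   // 300 epochs
    if (g_secondReady) return &g_secondNet;  // 200 epochs
    if (g_firstReady)  return &g_firstNet;   // 100 epochs
    return nullptr;                          // nothing trained yet
}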

Now that I have explored neural networks in the parallel processing world, I am ready to look for new and innovative applications of them. With real-time retraining an option, the potential applications for neural networks have expanded dramatically.

Posted on May 1, 2013 by Rick Leinecker, Slashdot Media Contributing Editor