LVQ Selection of Appropriate Networks

Description

One of the problems with neural networks is that a single network handles only one type of problem well. For some problems, it may be difficult to make the network general enough; in general, the one-size-fits-all approach does not work very well. For example, consider the case of driving a car: driving in snow is different from driving on a freeway, which is different from driving on surface streets, and so on.

Instead of having one backprop network try to accomplish everything, my project will use an LVQ network to identify the situation and then apply a backprop network. Instead of selecting a final class, the LVQ will select a network and pass the input to that backprop network. If necessary, the input may be transformed between the two networks.
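The routing idea above can be sketched in a few lines. This is a minimal illustration, not the project's implementation: the prototype values and the expert "networks" (stubbed as plain functions) are placeholders I invented for the example.

```python
import math

# Hypothetical sketch: an LVQ layer routes each input to one of several
# expert networks, instead of emitting a final class itself.

def euclidean(a, b):
    # Straight-line distance between an input and a prototype vector.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# One prototype vector per driving situation (illustrative values only).
prototypes = {
    "snow":    [0.9, 0.1],
    "freeway": [0.1, 0.9],
}

# Stand-ins for trained backprop networks, one per situation.
experts = {
    "snow":    lambda x: "cautious control",
    "freeway": lambda x: "cruise control",
}

def route(x):
    # LVQ step: pick the class whose prototype is nearest to the input...
    winner = min(prototypes, key=lambda c: euclidean(x, prototypes[c]))
    # ...then hand the input to that class's backprop network.
    return winner, experts[winner](x)

print(route([0.8, 0.2]))  # nearest prototype here is "snow"
```

Any transformation of the input between the two networks would slot in just before the `experts[winner](x)` call.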

One of the complications is that instead of simply picking the class and learning from that, the LVQ must run the input through each of the networks to determine the error. Therefore, one could either use the typical distance measures (Euclidean distance, Hamming distance, etc.) associated with LVQ, or one could use the MSR of each network to form the clusters.
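The second option can be sketched as follows, assuming "MSR" denotes each network's mean squared residual on the current input: rather than picking the winner by prototype distance, run the input through every candidate network and take the one with the least error. The two toy networks below are illustrative stand-ins, not trained models.

```python
# Cluster-by-error sketch: the "winning" class is the network that best
# reproduces the target, not the nearest prototype.

def msr(predicted, target):
    # Mean squared residual between a network's output and the target.
    return sum((p - t) ** 2 for p, t in zip(predicted, target)) / len(target)

# Toy stand-ins for two trained backprop networks.
networks = {
    "net_a": lambda x: [v * 0.5 for v in x],
    "net_b": lambda x: [v * 2.0 for v in x],
}

def winner_by_error(x, target):
    # Run the input through every network and keep the least-error one.
    errors = {name: net(x) for name, net in networks.items()}
    return min(errors, key=lambda name: msr(errors[name], target))

print(winner_by_error([1.0, 1.0], [2.0, 2.0]))  # net_b matches the target
```

The cost of this scheme is visible in the code: every network must be evaluated on every input before the winner is known.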

Training

A network of this type will be somewhat more difficult to train than some other networks. There are several ways of approaching the training, with each approach varying in the amount of supervision.

  1. Manual Construction: In this method, each backpropagation network would be trained individually for a given situation (e.g., snow driving). Then the LVQ would be trained to recognize each of the different situations. Finally, the BP networks would simply be plugged in. This is the first solution I will pursue because it is perhaps the simplest. However, it is not the most appealing because it requires a large amount of human interaction.
  2. Two-Stage Learning: In this method, the LVQ will be allowed to classify the inputs into classes. Once this has been accomplished, each BP network will be trained with the data from a single class and then linked to that class. This technique has the advantage of requiring fewer decisions on the part of a human.
  3. Simultaneous Construction: This method will train the BP networks and the LVQ at the same time. For each cycle, the weights of a BP network will be adjusted for each item on which its associated class wins. However, the winner will be determined by seeing which network produces the best result. The weights of that BP network will then be adjusted to bring its output closer to the desired output, and the LVQ will be adjusted so that, next time, similar inputs map to that class. This is the last technique that will be investigated; it is possible that it will not even converge to a stable solution.
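A single step of the simultaneous scheme (method 3) can be sketched by shrinking each backprop network to one linear unit, so both update rules fit in a few lines. The weights, prototypes, and learning rates below are illustrative assumptions, not values from the project.

```python
# Minimal sketch of simultaneous construction: on each step, the network
# with the best output wins, its weight moves toward the desired output,
# and its LVQ prototype moves toward the input.

lr_net, lr_lvq = 0.1, 0.05  # illustrative learning rates

weights    = [0.5, 1.5]       # two "networks", each a single scalar weight
prototypes = [[0.0], [1.0]]   # one LVQ prototype per network

def train_step(x, target):
    # Winner = the network whose output is closest to the desired output.
    errors = [abs(w * x[0] - target) for w in weights]
    win = errors.index(min(errors))
    # Adjust the winning network toward the desired output (gradient step)...
    err = weights[win] * x[0] - target
    weights[win] -= lr_net * err * x[0]
    # ...and pull the winning prototype toward the input (LVQ update),
    # so that similar inputs map to this network next time.
    prototypes[win][0] += lr_lvq * (x[0] - prototypes[win][0])
    return win

win = train_step([1.0], 1.6)
print(win, round(weights[win], 3))  # network 1 wins and moves toward 1.6
```

Because the winner is chosen by output error while the LVQ is trained on input similarity, the two updates can disagree, which is exactly why convergence of this method is uncertain.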

Project Goals

The general idea is that a network will perform better if it is allowed to break the overall task into sub-problems. This idea will be tested by implementing the network described above. In summary:

  1. Each of the training methods will be compared with the others.
  2. The whole network will be compared against a single BP network trained on the entire dataset.


James Benham
Last modified: Fri Dec 17 04:20:06 PST 1999