Can a computer read words aloud, recognize faces, make a medical diagnosis, drive a car, play a game, balance a pole, or predict physical phenomena? Can it be taught to do these things?
The answer to all of these questions is yes. Each of these applications, and many others, has been demonstrated using varieties of the computational model known as "neural networks," the subject of this course.
The course will develop the theory of a number of neural network models. Participants will exercise the theory through both pre-developed computer programs and ones of their own design.
Modeling, simulation, and analysis of artificial neural networks. Relationship to biological neural networks. Design and optimization of discrete and continuous neural networks. Backpropagation and other gradient descent methods. Hopfield and Boltzmann networks. Unsupervised learning. Self-organizing feature maps. Applications chosen from function approximation, signal processing, control, computer graphics, pattern recognition, time-series analysis. Relationship to fuzzy logic, genetic algorithms, and artificial life.
CS 60 (Principles of Computer Science) and Mathematics 12 (Multi-variate Calculus and Linear Algebra), or permission of the instructor.
o Main Textbook (abbreviated NND):
Neural Network Design by Martin T. Hagan, Howard B. Demuth, and Mark Beale, available from Huntley Bookstore or the University of Colorado Bookstore (303-492-3648). ISBN 0-9717321-0-8. (The original version is out of print, so it may be available only as a used copy.)
o Software: Matlab (help)
There will be some homework, mostly programming assignments, but no exams. The assignments will constitute about 40% of your grade. Another 40% of your grade comes from a substantial final project, involving either the creation of a working neural network application or a research paper; the project grade will be determined by its comprehensiveness and by the degree to which you explored competing approaches. Projects will be presented orally. 10% of your grade will be based on a preliminary presentation you make, ideally on material related to your project. Finally, 10% will be based on general participation, which includes attendance.
Week 1:
Introduction and Perceptrons
(read NND chapters 1 to 4)
[Ch 1: Introduction, Ch 2: Neuron models and network architecture, Ch 3: Illustrative Example, Ch 4: Perceptron learning rule]
Contexts for Neural Networks
Artificial Neural Network Overview
Perceptrons
o Perceptron learning rule
o Perceptron convergence theorem
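The perceptron learning rule listed above can be sketched in a few lines. Here is an illustrative Python version (the course software is Matlab); the AND-gate training set and all parameter values are chosen for this example and are not taken from the text.

```python
def hardlim(n):
    # hard-limit transfer function used by the perceptron
    return 1 if n >= 0 else 0

# training pairs (p, t): the AND gate, a linearly separable problem
pairs = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

w = [0.0, 0.0]  # weights
b = 0.0         # bias

for epoch in range(20):
    errors = 0
    for p, t in pairs:
        a = hardlim(w[0]*p[0] + w[1]*p[1] + b)
        e = t - a  # the error drives the update
        if e != 0:
            errors += 1
            # perceptron rule: w_new = w_old + e*p, b_new = b_old + e
            w = [w[i] + e * p[i] for i in range(2)]
            b += e
    if errors == 0:  # a full pass with no mistakes: converged
        break
```

Because the AND function is linearly separable, the convergence theorem guarantees this loop terminates with zero errors after finitely many updates.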
Week 2: Widrow-Hoff Learning
(review chapters 5 and 6, read NND chapter 10)
[Ch 10: Widrow-Hoff Learning]
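The Widrow-Hoff (LMS) rule trains a linear neuron by approximate gradient descent on the mean squared error. A minimal Python sketch, with an illustrative data set (samples of t = 2p) and a learning rate chosen for this example:

```python
# one linear neuron: a = w*p + b
pairs = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # samples of t = 2p (illustrative)

w, b = 0.0, 0.0
lr = 0.04  # learning rate; must be small enough for stability

for epoch in range(200):
    for p, t in pairs:
        a = w * p + b        # linear output
        e = t - a            # error
        w += 2 * lr * e * p  # LMS update: w <- w + 2*lr*e*p
        b += 2 * lr * e      # LMS update for the bias
```

Since an exact linear fit exists here (w = 2, b = 0), the error is driven toward zero; with noisy data the rule instead settles near the least-squares solution.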
Week 3: Backpropagation
(read NND chapter 8)
[Ch 8: Performance surfaces and optimum points]
Learning for Multi-Level Perceptrons (MLP)
o Variations on backpropagation
· Batching
· Momentum
· Variable learning rate
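The momentum variation listed above can be illustrated on a one-dimensional problem. This is a sketch, not code from the text: minimizing F(x) = x², with learning rate and momentum coefficient values chosen only for the example. Momentum keeps a fraction gamma of the previous weight change, which low-pass filters the trajectory and damps oscillations.

```python
def grad(x):
    return 2.0 * x  # gradient of F(x) = x**2

x = 5.0       # starting point
lr = 0.1      # learning rate
gamma = 0.9   # momentum coefficient
dx = 0.0      # previous change in x

for _ in range(300):
    # momentum update: keep a fraction gamma of the last step,
    # blend in a fraction (1 - gamma) of the plain gradient step
    dx = gamma * dx - (1 - gamma) * lr * grad(x)
    x += dx
```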
Week 4: Performance optimization
(read NND chapter 9)
[Ch 9: Performance optimization]
Week 5: Sequential networks
(supplementary material not in text)
o Time series
o Backpropagation through time
o Real-time recurrent learning
o Control applications
o Finite Impulse Response (FIR) MLP
o Temporal difference method
Week 6: Kernel networks
(supplementary material)
Week 7: Supervised and unsupervised Hebbian learning
(read NND chapter 7)
[Ch 7: Supervised Hebbian Learning]
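The supervised Hebb rule for a linear associator sets the weight matrix to a sum of outer products of target and input prototypes. A small Python sketch (the prototype pairs below are invented for illustration); because the input prototypes are orthonormal, recall is exact:

```python
p1 = [1, 0, 0]; t1 = [1, -1]
p2 = [0, 1, 0]; t2 = [-1, 1]   # illustrative orthonormal input prototypes

# Hebb rule: W = t1*p1' + t2*p2' (sum of outer products)
W = [[t1[i]*p1[j] + t2[i]*p2[j] for j in range(3)] for i in range(2)]

def recall(p):
    # linear associator output a = W*p
    return [sum(W[i][j]*p[j] for j in range(3)) for i in range(2)]
```

With non-orthogonal prototypes the recalled outputs pick up cross-talk terms, which motivates the pseudoinverse variation covered in the chapter.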
Week 8: Associative learning
(read NND chapter 13)
[Ch 13: Associative Learning]
Week 9: Competitive networks
(read NND chapter 14)
[Ch 14: Competitive Networks]
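Competitive learning can be sketched compactly: the neuron whose weight vector is closest to the input wins, and only the winner moves toward the input (the Kohonen rule). The two clusters, initial weights, and learning rate below are invented for this Python illustration:

```python
import math, random

random.seed(0)
# two illustrative clusters of 2-D inputs, near (1, 1) and (-1, -1)
data = [(1 + random.uniform(-.1, .1), 1 + random.uniform(-.1, .1)) for _ in range(20)] \
     + [(-1 + random.uniform(-.1, .1), -1 + random.uniform(-.1, .1)) for _ in range(20)]
random.shuffle(data)

w = [[0.5, 0.0], [-0.5, 0.0]]   # two competing neurons
lr = 0.1

for _ in range(10):
    for p in data:
        # competition: the nearest weight vector wins
        d = [math.dist(w[i], p) for i in range(2)]
        i = d.index(min(d))
        # Kohonen rule: move only the winner toward the input
        w[i] = [w[i][k] + lr * (p[k] - w[i][k]) for k in range(2)]
```

Each neuron's weight vector drifts toward the center of the cluster it wins, so the network ends up clustering the inputs without any target outputs.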
Week 10: Adaptive resonance theory
(read NND chapter 16)
[Ch 16: Adaptive Resonance Theory]
Week 11: Physics-based networks
(read NND chapters 17 and 18)
[Ch 17: Stability, Ch 18: Hopfield network]
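A discrete Hopfield network can be sketched in a few lines of Python (the stored pattern below is invented for illustration): Hebbian weights with a zero diagonal, then asynchronous threshold updates that drive a corrupted pattern back to the stored one, since each update can only lower the network energy.

```python
pattern = [1, -1, 1, -1, 1, -1]          # one stored bipolar pattern (illustrative)
n = len(pattern)

# Hebbian weights W[i][j] = x_i * x_j, with no self-connections
W = [[0 if i == j else pattern[i]*pattern[j] for j in range(n)] for i in range(n)]

state = [-1, -1, 1, -1, 1, -1]           # stored pattern with one bit flipped

for _ in range(3):                       # a few asynchronous sweeps
    for i in range(n):
        net = sum(W[i][j]*state[j] for j in range(n))
        state[i] = 1 if net >= 0 else -1  # threshold (signum) update
```

The stored pattern is a fixed point of the dynamics, so once the flipped bit is repaired the state no longer changes.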
Week 12: Other soft computing concepts
(supplementary material)