156: Parallel and Real-Time Computation
What this course is about
As the speed of processing elements heads toward an ultimate
brick wall (as determined by the speed of light), it becomes
increasingly important to be able to get multiple computers, or
multiple processors within a single computer, to work together on
common problems, for purposes of speedup and reliability. We
study this issue from three angles: algorithms, architecture, and
programming languages. We will construct some programs on actual
parallel machines.
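The speedup mentioned above has a classical limit: Amdahl's law bounds the speedup achievable on any number of processors by the program's serial fraction. A minimal sketch (my own illustration, not taken from the course texts):

```python
# Amdahl's law: if a fraction s of a program is inherently serial,
# the speedup on p processors is at most 1 / (s + (1 - s) / p),
# which approaches 1/s no matter how large p becomes.
def amdahl_speedup(serial_fraction: float, processors: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

# Even with 1000 processors, a 5% serial fraction caps speedup below 20x.
print(round(amdahl_speedup(0.05, 1000), 1))  # -> 19.6
```

This is one reason the course emphasizes algorithm design: reducing the serial fraction matters more than adding processors.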
Robert Keller, 242 Olin (4-5 p.m. MTuW or whenever), keller@muddcs, x18483
Characteristics and applications of parallel and real-time
systems. Specification techniques, architectures, languages,
design, and implementation. 3 credit hours.
Prerequisites: CS 124 and 131.
Requirements and Grading
There are three parts, conducted roughly in sequence, although there may be some overlap:
- Tutorial part: Problems (including programming, intended to be individual)
- Reporting part: Students will give talks on topics or papers (teamwork is acceptable)
- Project part: Development of a substantial parallel or real-time
computing application or system component (teamwork is acceptable)
For the parallel computation part (about 85% of the course):
DBPP: Designing and Building Parallel Programs, by Ian Foster,
Addison-Wesley, 1995, ISBN 0-201-57594-9.
For the real-time part (about 15%):
RTS: Real-Time Systems, by C.M. Krishna and Kang G. Shin,
McGraw-Hill, 1997, ISBN 0-07-057043-4.
You may wish to consider purchasing only the parallel computation text (DBPP).
(Numbered titles refer to
sections in the books; un-numbered titles refer to auxiliary material.)
- DBPP 1: Parallel Computers and Computation
- Varieties of parallel computation
- 1.1 Parallelism and Computing
- Models and machines: cellular automata, SIMD machines, PRAMs, function composition and graphs, neural networks, dataflow, graph reduction, systolic arrays, MIMD machines, Holland machine, pipeline machines, multiplexed machines
- 1.2 A Parallel Machine Model
- 1.3 A Parallel Programming Model
- 1.4 Parallel Algorithm Examples
- Examples of current and defunct parallel processors
- DBPP 2: Designing Parallel Algorithms
- DBPP 3: A Quantitative Basis for Design
- DBPP 4: Putting Components Together
- DBPP 5: Compositional C++
- DBPP 6: Fortran M
(I am considering skipping this chapter, replacing it with other material, such as that listed later.)
- DBPP 7: High Performance Fortran
- DBPP 8: Message Passing Interface
- Parallel Languages and APIs
- DBPP 9: Performance Tools
- DBPP 10: Random Numbers
- DBPP 11: Hypercube Algorithms
- RTS 2: Characterizing Real-Time Systems and Tasks
- 2.2 Performance Measures
- 2.3 Estimating Program Run Times
- RTS 3: Task Assignment and Scheduling
- 3.2 Classical Uniprocessor Scheduling Algorithms
- 3.3 Uniprocessor Scheduling of IRIS Tasks
- 3.4 Task Assignment
- 3.5 Mode Changes
- 3.6 Fault-Tolerant Scheduling
- RTS 4: Programming Languages and Tools for Real-Time Processing
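As a taste of the scheduling material in RTS 3, the classical Liu and Layland utilization bound gives a sufficient (not necessary) test for rate-monotonic schedulability on a uniprocessor. A minimal sketch (my own illustration, not from the texts):

```python
# Liu-Layland bound for rate-monotonic scheduling: n periodic tasks,
# each a pair (C, T) of worst-case execution time and period, are
# guaranteed schedulable if total utilization sum(C/T) does not
# exceed n * (2**(1/n) - 1). This test is sufficient, not necessary:
# task sets failing it may still be schedulable.
def rm_schedulable(tasks):
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization <= bound

# For three tasks the bound is about 0.780.
print(rm_schedulable([(1, 4), (1, 5), (2, 10)]))  # utilization 0.65 -> True
print(rm_schedulable([(2, 4), (2, 5), (2, 10)]))  # utilization 1.10 -> False
```

The bound falls toward ln 2 (about 0.693) as the number of tasks grows, which is why exact response-time analysis is also covered in the text.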
Some additional references
- Michael J. Quinn, Parallel computing: Theory and practice, Second Edition, McGraw-Hill, 1994.
- Joseph JáJá, An introduction to parallel algorithms, Addison-Wesley, 1992.
- Guy E. Blelloch, Vector models for data-parallel computing, MIT Press, 1990.
- Vipin Kumar, et al., Introduction to parallel computing: Design and analysis of algorithms, Benjamin/Cummings, 1994.
- F. Thomas Leighton, Introduction to Parallel Algorithms and Architectures: Arrays, Trees, and Hypercubes, Morgan Kaufmann, 1992.
- Geoffrey Fox, et al., Parallel computing works!, Morgan Kaufmann, 1994. (On-Line Version)
- Dimitri P. Bertsekas and John N. Tsitsiklis, Parallel and Distributed Computation: Numerical Methods, Athena Scientific, 1997.
- Michael Wolfe, High Performance Compilers for Parallel Computing, Addison-Wesley, 1996.
Worldwide Web Links