CSC 415 PARALLEL COMPUTING
Department of Computer Science and Statistics
The University of Rhode Island - Fall 1998

Thursday 5 - 7:45 BALL 110


Instructor:


B. Ravikumar, Department of Computer Science, 257 Tyler Hall.
E-mail: ravi@cs.uri.edu


Outline of the Course:

Parallel computing is sometimes defined as a ``collection of processing elements that communicate and cooperate to solve large problems fast''. It is different from (though related to) concurrent, multi-user, or distributed computation. Parallel computation is rapidly evolving from a research area into one with real practical applications. This transformation was brought about by advances in architecture, algorithms, and software tools. In this course, we will study all of these aspects of parallel computing: hardware, software, and algorithms.

Hardware: There are many ways to connect processors together to create a parallel computer. Some of the common models are the cross-bar switch, bus, array, tree, hypercube, etc. We will examine the interconnection models used by real-world parallel computers and understand how the hardware structure affects the performance characteristics. We will also study the suitability of specific hardware configurations for special classes of problems.

Algorithms: For some problems, a minor modification of an efficient sequential algorithm yields an efficient parallel algorithm. For others, new ideas and techniques are needed to design a good parallel algorithm. To guide such designs, measures of the performance of a parallel algorithm have been devised, such as speed-up, efficiency, and scalability. We will study some frequently occurring computational problems (such as sorting, matrix computations, and the Fourier transform) from the point of view of parallel algorithm design.

Software and Programming Tools: In the sequential setting, converting an algorithm (expressed as pseudo-code) to the syntax of a real programming language is reasonably easy. In the context of parallel computing, however, this conversion can be challenging. Programming models are abstractions that aid this conversion by hiding the specifics of the hardware and providing high-level primitives, either in the form of communication libraries or as instructions for computation over aggregates of data. You will learn different programming models (e.g. data-parallel and data-flow programming, message passing, etc.). You will design, implement, and test parallel programs on a supercomputer.
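As a taste of the message-passing model, here is a minimal sketch using MPI, the communication library covered in the Pacheco text. It shows one point-to-point exchange: process 0 sends an integer, process 1 receives it. (The compile/run commands in the comment are typical names and may differ on our machine; this must be launched under an MPI runtime with at least 2 processes.)

```c
#include <stdio.h>
#include <mpi.h>

/* Minimal message-passing sketch: process 0 sends one integer to
   process 1.  Typical usage (command names assumed):
       mpicc send.c -o send
       mpirun -np 2 ./send                                        */
int main(int argc, char *argv[])
{
    int rank, value;
    MPI_Status status;

    MPI_Init(&argc, &argv);                    /* start MPI           */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);      /* which process am I? */

    if (rank == 0) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("process 1 received %d\n", value);
    }

    MPI_Finalize();                            /* shut down MPI       */
    return 0;
}
```

Every process runs the same program; the rank returned by MPI_Comm_rank is what lets different processes take different roles.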


Background Required:

The students taking this course are required to have CSC 301 (and therefore CSC 212 as well). Since the architecture will be discussed at the functional level, no detailed knowledge of digital design will be required although some basics will be desirable. Knowledge of data structures and algorithms at the level of CSC 331 is desirable. Programming skills in c and c++ at the level of CSC 301 will be assumed.


Text Books:

Designing and Building Parallel Programs by I. Foster, Addison-Wesley Inc. (1995). An on-line version of this book is also available.
Parallel Programming with MPI by P. S. Pacheco, Morgan Kaufmann Publishers, Inc. (1997).


Course Work and Grading

You will be assigned homework problems, some of which will involve programming and implementation. There will be two or three mid-semester tests and short quizzes, but NO FINAL EXAM. Instead, you will be required to complete a course project. Typically, this will involve program design, implementation, and testing. You will also be required to write a report describing your work. A list of potential project problems will be assigned early in the semester. Each of you will select one problem and work on it for the whole semester. Depending on the enrollment, the project may be made into a group task.


Topics Covered:


On-line Sources


Miscellaneous Information