In the last few years, courses on parallel computation have been developed and offered at many institutions in the U.K., Europe, and the U.S. in recognition of the growing significance of this topic in mathematics and computer science. There is a clear need for texts that meet the needs of students and lecturers, and this book, based on the authors' lectures at ETH Zurich, is an ideal practical guide to scientific computing on parallel computers, working up from the hardware instruction level, to shared memory machines, and finally to distributed memory machines. Aimed at advanced undergraduate and graduate students in applied mathematics, computer science, and engineering, the book covers linear algebra, the fast Fourier transform, and Monte Carlo simulations, with examples in C and, in some cases, Fortran. It is also ideal for practitioners and programmers.
"This book is unique in thta it provides a balanced treatment of the concepts of parallelism on all levels...For computer science undergraduates learning about parallelism and for those who develop programs and systems that exploit as much parallelism as possible so as to maximize the desired
performance."--Choice
"This book is unique in thta it provides a balanced treatment of the concepts of parallelism on all levels...For computer science undergraduates learning about parallelism and for those who develop programs and systems that exploit as much parallelism as possible so as to maximize the desired
performance."--Choice
"This book is unique in thta it provides a balanced treatment of the concepts of parallelism on all levels...For computer science undergraduates learning about parallelism and for those who develop programs and systems that exploit as much parallelism as possible so as to maximize the desired performance."--Choice
"This book is unique in thta it provides a balanced treatment of the concepts of parallelism on all levels...For computer science undergraduates learning about parallelism and for those who develop programs and systems that exploit as much parallelism as possible so as to maximize the desired performance."--Choice