Parallel I/O for High Performance Computing
May, John M.
Sold by dsmbooks, Liverpool, United Kingdom
AbeBooks Seller since 28 September 2015
Used - Hardcover
Condition: Very Good
Quantity: 1 available
Scientific and technical programmers can no longer afford to treat I/O as an afterthought. The speed, memory size, and disk capacity of parallel computers continue to grow rapidly, but the rate at which disk drives can read and write data is improving far less quickly. As a result, the performance of carefully tuned parallel programs can slow dramatically when they read or write files, and the problem is likely to get far worse.
Parallel input and output techniques can help solve this problem by creating multiple data paths between memory and disks. However, simply adding disk drives to an I/O system without considering the overall software design will not significantly improve performance. To reap the full benefits of a parallel I/O system, application programmers must understand how parallel I/O systems work and where the performance pitfalls lie.
Parallel I/O for High Performance Computing directly addresses this critical need by examining parallel I/O from the bottom up. This important new book is recommended to anyone writing scientific application codes as the best single source on I/O techniques and to computer scientists as a solid up-to-date introduction to parallel I/O research.
* An overview of key I/O issues at all levels of abstraction, including hardware, through the OS and file systems, up to very high-level scientific libraries.
* Describes the important features of MPI-IO, netCDF, and HDF-5 and presents numerous examples illustrating how to use each of these I/O interfaces (see the sketch after this list).
* Addresses the basic question of how to read and write data efficiently in HPC applications.
* An explanation of various layers of storage, and techniques for using disks (and sometimes tapes) effectively in HPC applications.
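To give a flavor of the MPI-IO interface covered in those examples, here is a minimal sketch (not taken from the book) of a collective parallel write in C: each rank writes its own contiguous block of doubles to a shared file at a rank-dependent offset. The file name, block size, and data values are illustrative assumptions.

/* Minimal MPI-IO sketch: each rank writes one contiguous block of
 * doubles to a shared file with a collective call.  The file name
 * and block size are illustrative, not taken from the book. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int count = 1024;                  /* doubles per rank (assumed) */
    double *buf = malloc(count * sizeof(double));
    for (int i = 0; i < count; i++)
        buf[i] = rank + i * 1e-6;            /* dummy data */

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "output.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Each rank writes at its own byte offset; the collective _all
     * variant lets the MPI library coordinate the ranks. */
    MPI_Offset offset = (MPI_Offset)rank * count * sizeof(double);
    MPI_File_write_at_all(fh, offset, buf, count, MPI_DOUBLE,
                          MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    free(buf);
    MPI_Finalize();
    return 0;
}

Collective calls such as MPI_File_write_at_all give the MPI implementation a chance to aggregate many small per-rank requests into fewer, larger disk operations, which is the kind of performance consideration the book examines.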
John May is the Group Leader for Computer Science in the Center for Applied Scientific Computing (CASC) at the Lawrence Livermore National Laboratory. His interests include parallel programming models, performance analysis, parallel I/O, and parallel programming tools. He has served on the MPI-2 Forum, the High Performance Debugger Forum, and the Steering Committee of the Parallel Tools Consortium. Currently, he works on the Parallel Performance Improvement project, where he is investigating performance analysis techniques for massively parallel computers.
Dr. May joined LLNL in 1994 after receiving his Ph.D. in Computer Science from the University of California, San Diego. He also holds a BA in Physics from Dartmouth College. Prior to entering graduate school, he worked at AT&T (now Lucent) Bell Laboratories on optoelectronic device technology.
"About this title" may belong to another edition of this title.
We ship all orders within the specified shipping time.
We use an international air service to deliver products.