Paperback. Pub date: 2012-12-01. Pages: 252. Publisher: Machinery Industry Press. Title: Introduction to Parallel Programming. List price: 49 yuan. Author: Peter S. Pacheco (US); translated by Deng Qianni. ISBN: 9787111392842. Edition: 1. Binding: paperback, 16mo. Synopsis: Part of the Introduction to Computer Science series, this book comprehensively covers parallel hardware and software, teaching readers how to write efficient parallel programs with MPI (distributed-memory programming) and with Pthreads and OpenMP (shared-memory programming). Each chapter includes programming exercises of varying difficulty. The book is suitable as a textbook for lower-division undergraduate computer science courses and as a reference for software developers learning how to write parallel programs.
"synopsis" may belong to another edition of this title.
Seller: liu xing, Nanjing, JS, China
Paperback. Condition: New. Ships out within 2 business days with fast shipping; a free tracking number will be provided after shipment.

Contents:
Publisher's Note
Translator's Preface
Praise for the Book
Preface
Acknowledgements
Chapter 1 Why Parallel Computing?
1.1 Why We Need Ever-Increasing Performance
1.2 Why We're Building Parallel Systems
1.3 Why We Need to Write Parallel Programs
1.4 How Do We Write Parallel Programs?
1.5 What We'll Be Doing
1.6 Concurrent, Parallel, Distributed
1.7 The Rest of the Book
1.8 A Word of Warning
1.9 Typographical Conventions
1.10 Summary
1.11 Exercises
Chapter 2 Parallel Hardware and Parallel Software
2.1 Background
2.1.1 The von Neumann architecture
2.1.2 Processes, multitasking, and threads
2.2 Modifications to the von Neumann Model
2.2.1 The basics of caching
2.2.2 Cache mappings
2.2.3 Caches and programs: an example
2.2.4 Virtual memory
2.2.5 Instruction-level parallelism
2.2.6 Hardware multithreading
2.3 Parallel Hardware
2.3.1 SIMD systems
2.3.2 MIMD systems
2.3.3 Interconnection networks
2.3.4 Cache coherence
2.3.5 Shared memory versus distributed memory
2.4 Parallel Software
2.4.1 Caveats
2.4.2 Coordinating processes/threads
2.4.3 Shared memory
2.4.4 Distributed memory
2.4.5 Programming hybrid systems
2.5 Input and Output
2.6 Performance
2.6.1 Speedup and efficiency
2.6.2 Amdahl's law
2.6.3 Scalability
2.6.4 Taking timings
2.7 Parallel Program Design
2.8 Writing and Running Parallel Programs
2.9 Assumptions
2.10 Summary
2.10.1 Serial systems
2.10.2 Parallel hardware
2.10.3 Parallel software
2.10.4 Input and output
2.10.5 Performance
2.10.6 Parallel program design
2.10.7 Assumptions
2.11 Exercises
Chapter 3 Distributed-Memory Programming with MPI
3.1 Getting Started
3.1.1 Compilation and execution
3.1.2 MPI programs
3.1.3 MPI_Init and MPI_Finalize
3.1.4 Communicators, MPI_Comm_size, and MPI_Comm_rank
3.1.5 SPMD programs
3.1.6 Communication
3.1.7 MPI_Send
3.1.8 MPI_Recv
3.1.9 Message matching
3.1.10 The status_p argument
3.1.11 Semantics of MPI_Send and MPI_Recv
3.1.12 Some potential pitfalls
3.2 The Trapezoidal Rule in MPI
3.2.1 The trapezoidal rule
3.2.2 Parallelizing the trapezoidal rule
3.3 Dealing with I/O
3.3.1 Output
3.3.2 Input
3.4 Collective Communication
3.4.1 Tree-structured communication
3.4.2 MPI_Reduce
3.4.3 Collective versus point-to-point communication
3.4.4 MPI_Allreduce
3.4.5 Broadcast
3.4.6 Data distributions
3.4.7 Scatter
3.4.8 Gather
3.4.9 Allgather
3.5 MPI Derived Datatypes
3.6 Performance Evaluation of MPI Programs
3.6.1 Taking timings
3.6.2 Results
3.6.3 Speedup and efficiency
3.6.4 Scalability
3.7 A Parallel Sorting Algorithm
3.7.1 Simple serial sorting algorithms
3.7.2 Parallel odd-even transposition sort
3.7.3 Safety in MPI programs
3.7.4 Final details of parallel odd-even transposition sort
3.8 Summary
3.9 Exercises
3.10 Programming Assignments
Chapter 4 Shared-Memory Programming with Pthreads
4.1 Processes, Threads, and Pthreads
4.2 Hello, World
4.2.3 Starting the threads
4.2.4 Running the threads
4.2.5 Stopping the threads
4.2.6 Error checking
4.2.7 Other approaches to thread startup
4.3 Matrix-Vector Multiplication
Seller Inventory # FV068482
Quantity: 1 available