
Neural Networks with Model Compression (Computational Intelligence Methods and Applications) - Softcover

 
9789819950706: Neural Networks with Model Compression (Computational Intelligence Methods and Applications)

Synopsis

Deep learning has achieved impressive results in image classification, computer vision and natural language processing. To achieve better performance, deeper and wider networks have been designed, which increase the demand for computational resources. The number of floating-point operations (FLOPs) has increased dramatically with larger networks, and this has become an obstacle for convolutional neural networks (CNNs) being developed for mobile and embedded devices. In this context, our book will focus on CNN compression and acceleration, which are important for the research community. We will describe numerous methods, including parameter quantization, network pruning, low-rank decomposition and knowledge distillation. More recently, to reduce the burden of handcrafted architecture design, neural architecture search (NAS) has been used to automatically build neural networks by searching over a vast architecture space. Our book will also introduce NAS due to its superiority and state-of-the-art performance in various applications, such as image classification and object detection. We also describe extensive applications of compressed deep models on image classification, speech recognition, object detection and tracking. These topics can help researchers better understand the usefulness and the potential of network compression on practical applications. Moreover, interested readers should have basic knowledge about machine learning and deep learning to better understand the methods described in this book.
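Two of the compression methods the synopsis names, parameter quantization and network pruning, can be illustrated with a toy sketch. This is a generic illustration for orientation only, not code from the book: `quantize_8bit` and `magnitude_prune` are hypothetical helper names, and real systems quantize per-channel tensors and retrain after pruning.

```python
# Toy illustrations of two compression methods named in the synopsis.
# Not code from the book; simplified to 1-D weight lists for clarity.

def quantize_8bit(weights):
    """Uniform 8-bit quantization: map floats onto signed integer levels in [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale  # reconstruct each weight as q[i] * scale

def magnitude_prune(weights, sparsity=0.5):
    """Magnitude pruning: zero out the smallest-magnitude fraction of the weights."""
    k = int(len(weights) * sparsity)
    thresh = sorted(abs(w) for w in weights)[k]
    return [0.0 if abs(w) < thresh else w for w in weights]

weights = [0.9, -0.4, 0.05, -1.2, 0.3, -0.02, 0.7, 0.1]
q, scale = quantize_8bit(weights)
approx = [v * scale for v in q]            # dequantized approximation
pruned = magnitude_prune(weights, sparsity=0.5)
```

Quantization stores each weight in one byte instead of four, at the cost of a rounding error of at most half a quantization step; pruning yields a sparse tensor whose zero entries can be skipped at inference time.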

"synopsis" may belong to another edition of this title.

About the Author

Baochang Zhang is a full Professor with the Institute of Artificial Intelligence, Beihang University, Beijing, China. He was selected for the Program for New Century Excellent Talents in University by the Ministry of Education of China, serves as an Academic Advisor of the Deep Learning Lab of Baidu Inc., and is a distinguished researcher of the Beihang Hangzhou Institute in Zhejiang Province. His research interests include explainable deep learning, computer vision and pattern recognition. His HGPP and LDP methods were state-of-the-art feature descriptors, with 1234 and 768 Google Scholar citations, respectively; both are "Test-of-Time" works. His 1-bit methods achieved the best performance on ImageNet. His group also won the ECCV 2020 tiny object detection, COCO object detection, and ICPR 2020 pollen recognition challenges.

 

Tiancheng Wang is pursuing his Ph.D. degree under the supervision of Baochang Zhang. His research topics include model compression and trustworthy deep learning, and he has published several high-quality papers on deep model compression. He was selected as a visiting student of Zhongguancun Laboratory, Beijing, China.

 

Sheng Xu is pursuing his Ph.D. degree under the supervision of Baochang Zhang. His research focuses on low-bit model compression, and he is one of the most active researchers in the field of binary neural networks. He has published more than 10 top-tier papers in computer vision, two of which were selected as CVPR oral papers.

 

Dr. David Doermann is a Professor of Empire Innovation at the University at Buffalo (UB) and the Director of the University at Buffalo Artificial Intelligence Institute. Prior to coming to UB, he was a program manager at the Defense Advanced Research Projects Agency (DARPA), where he developed, selected and oversaw approximately $150 million in research and transition funding in the areas of computer vision, human language technologies and voice analytics. He coordinated performers on all of these projects, orchestrating consensus, evaluating cross-team management, and overseeing fluid program objectives.


From the Back Cover

(Identical to the synopsis above.)

"About this title" may belong to another edition of this title.

Buy New: £ 20.03 shipping from Germany to U.S.A.

Other Popular Editions of the Same Title

9789819950676: Neural Networks with Model Compression (Computational Intelligence Methods and Applications)

Featured Edition

ISBN 10: 9819950678 · ISBN 13: 9789819950676
Publisher: Springer, 2024
Hardcover

Search results for Neural Networks with Model Compression (Computational...


Baochang Zhang
ISBN 10: 9819950708 ISBN 13: 9789819950706
New paperback (Taschenbuch)
Print on Demand

Seller: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany

Seller rating: 5 out of 5 stars

Paperback. Condition: New. This item is printed on demand, which takes 3-4 days longer. Synopsis as above. 260 pp. English. Seller Inventory # 9789819950706

Buy New: £ 153.58
Shipping: £ 20.03 from Germany to U.S.A.
Quantity: 2 available


Zhang, Baochang; Wang, Tiancheng; Xu, Sheng; Doermann, David
Published by Springer, 2025
ISBN 10: 9819950708 ISBN 13: 9789819950706
New Softcover

Seller: Books Puddle, New York, NY, U.S.A.

Seller rating: 4 out of 5 stars

Condition: New. Seller Inventory # 26404065180

Buy New: £ 191.57
Shipping: £ 3 within U.S.A.
Quantity: 4 available


Baochang Zhang
ISBN 10: 9819950708 ISBN 13: 9789819950706
New paperback (Taschenbuch)

Seller: buchversandmimpf2000, Emtmannsberg, BAYE, Germany

Seller rating: 5 out of 5 stars

Paperback. Condition: New. Synopsis as above. Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg. 272 pp. English. Seller Inventory # 9789819950706

Buy New: £ 153.58
Shipping: £ 52.26 from Germany to U.S.A.
Quantity: 2 available


Zhang, Baochang; Wang, Tiancheng; Xu, Sheng; Doermann, David
Published by Springer, 2025
ISBN 10: 9819950708 ISBN 13: 9789819950706
New Softcover
Print on Demand

Seller: Majestic Books, Hounslow, United Kingdom

Seller rating: 5 out of 5 stars

Condition: New. Print on Demand. Seller Inventory # 409089091

Buy New: £ 202.57
Shipping: £ 6.50 from the United Kingdom to the U.S.A.
Quantity: 4 available


Baochang Zhang
ISBN 10: 9819950708 ISBN 13: 9789819950706
New paperback (Taschenbuch)

Seller: AHA-BUCH GmbH, Einbeck, Germany

Seller rating: 5 out of 5 stars

Paperback. Condition: New. Printed after ordering (print on demand). Synopsis as above. Seller Inventory # 9789819950706

Buy New: £ 157.69
Shipping: £ 54.08 from Germany to U.S.A.
Quantity: 1 available


Zhang, Baochang; Wang, Tiancheng; Xu, Sheng; Doermann, David
Published by Springer, 2025
ISBN 10: 9819950708 ISBN 13: 9789819950706
New Softcover
Print on Demand

Seller: Biblios, Frankfurt am main, HESSE, Germany

Seller rating: 5 out of 5 stars

Condition: New. Print on demand. Seller Inventory # 18404065174

Buy New: £ 210.98
Shipping: £ 8.67 from Germany to U.S.A.
Quantity: 4 available


Zhang, Baochang
Published by Springer, 2025
ISBN 10: 9819950708 ISBN 13: 9789819950706
New paperback

Seller: Mispah books, Redhill, SURRE, United Kingdom

Seller rating: 4 out of 5 stars

Paperback. Condition: New. Seller Inventory # ERICA82998199507086

Buy New: £ 306
Shipping: £ 25 from the United Kingdom to the U.S.A.
Quantity: 1 available