What You Will Learn in This Book
Master the Mathematical Foundations: Go beyond theory and implement, in Python and NumPy, the core operations of linear algebra, calculus, and probability that form the bedrock of all modern neural networks (a brief NumPy sketch follows this list).
Build a Neural Network From Scratch: Gain an intuitive understanding of how models learn by constructing a simple neural network from first principles, giving you a solid grasp of activation functions, loss functions, and backpropagation (a tiny from-scratch network is sketched after this list).
Engineer a Complete Data Pipeline: Learn the critical and often overlooked steps of sourcing, cleaning, and pre-processing the massive text datasets that fuel LLMs, while navigating the ethical considerations of bias and fairness (a toy cleaning-and-deduplication pass is sketched below).
Implement a Subword Tokenizer: Solve the "vocabulary problem" by building a Byte-Pair Encoding (BPE) tokenizer from scratch, learning precisely how raw text is converted into a format that models can understand (the core merge loop is sketched after this list).
Construct a Transformer Block, Piece by Piece: Deconstruct the "black box" of the Transformer by implementing its core components in code. You will build the scaled dot-product attention mechanism (sketched after this list), expand it to multi-head attention, and assemble a complete, functional Transformer block.
Differentiate and Understand Key Architectures: Clearly grasp the differences and use cases for the foundational LLM designs, including encoder-only (like BERT), decoder-only (like GPT), and encoder-decoder models (like T5).
Write a Full Pre-training Loop: Move from theory to practice by writing the complete code to pre-train a small-scale GPT-style model from scratch, including setting up the language modeling objective and monitoring loss curves (a skeleton of such a loop appears after this list).
Understand the Economics and Scale of Training: Learn the "scaling laws" that govern the relationship between model size, dataset size, and performance, and understand the hardware and distributed computing strategies (e.g., model parallelism, ZeRO) required for training at scale (a scaling-law calculation is sketched after this list).
Adapt Pre-trained Models with Fine-Tuning: Learn to take a powerful, general-purpose LLM and adapt it for specific, real-world tasks using techniques like instruction tuning and standard fine-tuning.
Grasp Advanced Alignment and Evaluation Techniques: Gain a conceptual understanding of how Reinforcement Learning from Human Feedback (RLHF) aligns models with human intent, and learn how to properly evaluate model quality using benchmarks like MMLU and SuperGLUE.
Explore State-of-the-Art and Future Architectures: Survey the cutting edge of LLM research, including methods for model efficiency (quantization, Mixture of Experts), the shift to multimodality (incorporating images and audio), and the rise of agentic AI systems.
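To make the mathematical-foundations item concrete, here is a minimal NumPy sketch touching each of the three pillars (linear algebra, calculus, probability). It is an illustrative exercise of the kind described, not code from the book.

```python
import numpy as np

# Linear algebra: an affine map y = Wx + b, the basic building block of a layer
W = np.array([[0.2, -0.5],
              [1.0,  0.3]])
x = np.array([1.0, 2.0])
b = np.array([0.1, 0.1])
y = W @ x + b

# Calculus: a numerical derivative of f(v) = v^2 at v = 3 (analytically 6)
f = lambda v: v ** 2
grad = (f(3.0 + 1e-5) - f(3.0 - 1e-5)) / 2e-5

# Probability: softmax turns raw scores into a distribution that sums to 1
scores = np.array([2.0, 1.0, 0.1])
probs = np.exp(scores - scores.max())
probs /= probs.sum()

print(y, round(grad, 4), probs, probs.sum())
```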
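A from-scratch network of the kind the second item describes might look like the following: a two-layer NumPy model trained on XOR with a hand-written forward pass, backward pass, and gradient-descent update. The layer sizes, learning rate, and mean-squared-error loss are choices made for this sketch, not the book's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, a classic problem a purely linear model cannot solve
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer with sigmoid activations
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    loss = np.mean((p - y) ** 2)          # mean squared error

    # Backward pass: the chain rule, written out by hand
    dp = 2 * (p - y) / len(X)
    dz2 = dp * p * (1 - p)
    dW2, db2 = h.T @ dz2, dz2.sum(0)
    dh = dz2 @ W2.T
    dz1 = dh * h * (1 - h)
    dW1, db1 = X.T @ dz1, dz1.sum(0)

    # Gradient-descent update
    lr = 1.0
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(p.round(2))  # predictions should approach [[0], [1], [1], [0]]
```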
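The data-pipeline item is mostly about engineering judgment at scale, but a toy version of two of its steps, whitespace normalization and exact deduplication, looks like this. Real pipelines add language identification, quality filtering, and fuzzy near-deduplication; the thresholds and examples here are illustrative only.

```python
import re, hashlib

def clean(doc):
    """Tiny cleaning pass: collapse whitespace and drop very short documents."""
    doc = re.sub(r"\s+", " ", doc).strip()
    return doc if len(doc.split()) >= 5 else None

def dedupe(docs):
    """Exact deduplication by content hash; real pipelines also add fuzzy dedup."""
    seen, kept = set(), []
    for d in docs:
        h = hashlib.sha1(d.encode("utf-8")).hexdigest()
        if h not in seen:
            seen.add(h)
            kept.append(d)
    return kept

raw = [
    "Hello   world, this is a    sample document.",
    "Hello world, this is a sample document.",   # exact duplicate after cleaning
    "too short",                                 # removed by the length filter
]
cleaned = [c for c in (clean(d) for d in raw) if c]
print(dedupe(cleaned))   # one surviving document
```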
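For the tokenizer item, the heart of BPE training is a loop that repeatedly merges the most frequent adjacent pair of symbols. Here is a toy version over a three-word corpus; the corpus and merge count are invented for the sketch.

```python
from collections import Counter

def most_frequent_pair(words):
    """Count adjacent symbol pairs across a {word-as-tuple: frequency} vocabulary."""
    pairs = Counter()
    for symbols, freq in words.items():
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs.most_common(1)[0][0]

def merge(words, pair):
    """Replace every occurrence of `pair` with a single merged symbol."""
    merged = {}
    for symbols, freq in words.items():
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
                out.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged[tuple(out)] = freq
    return merged

# Tiny corpus: words split into characters, with an end-of-word marker
words = {tuple("low") + ("</w>",): 5,
         tuple("lower") + ("</w>",): 2,
         tuple("lowest") + ("</w>",): 3}

for _ in range(4):
    pair = most_frequent_pair(words)
    words = merge(words, pair)
    print("merged", pair)
```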
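The attention mechanism named in the Transformer item has a compact definition, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V, which fits in a few lines of NumPy. The causal mask shown is the decoder-style variant; the shapes and random values are illustrative.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V, mask=None):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)   # (..., seq_q, seq_k)
    if mask is not None:
        scores = np.where(mask, scores, -1e9)        # block disallowed positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the key axis
    return weights @ V, weights

# Toy example: 4 tokens with 8-dimensional queries, keys, and values
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))

# Causal mask: each position may attend only to itself and earlier positions
causal = np.tril(np.ones((4, 4), dtype=bool))
out, attn = scaled_dot_product_attention(Q, K, V, mask=causal)
print(out.shape, attn[0])   # the first row attends only to the first token
```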
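The pre-training item boils down to the loop below: sample a batch, shift it by one token to form next-token targets, compute cross-entropy, and step the optimizer while watching the loss fall. The book does not name a framework; this sketch assumes PyTorch, a deliberately tiny two-layer Transformer, and a toy character-level corpus.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy character-level corpus; a real run would stream a large tokenized dataset
text = "hello world. hello transformers. " * 200
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in text])
BLOCK = 32

class TinyGPT(nn.Module):
    """A very small decoder-style model: token and position embeddings,
    two Transformer layers with a causal mask, and a vocabulary head."""
    def __init__(self, vocab, dim=64):
        super().__init__()
        self.tok = nn.Embedding(vocab, dim)
        self.pos = nn.Embedding(BLOCK, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                           dim_feedforward=256, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, vocab)

    def forward(self, idx):
        T = idx.size(1)
        h = self.tok(idx) + self.pos(torch.arange(T, device=idx.device))
        causal = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        return self.head(self.blocks(h, mask=causal))

def get_batch(batch=16):
    ix = torch.randint(len(data) - BLOCK - 1, (batch,))
    x = torch.stack([data[i:i + BLOCK] for i in ix])
    y = torch.stack([data[i + 1:i + BLOCK + 1] for i in ix])  # targets shifted by one
    return x, y

model = TinyGPT(len(chars))
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)

for step in range(301):
    x, y = get_batch()
    logits = model(x)                                   # (batch, block, vocab)
    # The language-modeling objective: cross-entropy against the next token
    loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)), y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 100 == 0:
        print(step, round(loss.item(), 3))              # the curve should trend downward
```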
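The scaling-laws item can be made concrete with the parametric loss form popularized by the Chinchilla paper, L(N, D) ≈ E + A/N^alpha + B/D^beta, where N is parameter count and D is training tokens. The constants below are the commonly quoted published fits from Hoffmann et al. (2022), used here only to illustrate the reasoning; they are not values from this book, and the compute budget is made up.

```python
# Parametric scaling law: predicted loss as a function of model size N and tokens D
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28   # published Chinchilla fits

def predicted_loss(N, D):
    return E + A / N ** alpha + B / D ** beta

# Training compute is commonly approximated as C ~ 6 * N * D FLOPs.
# For a fixed budget, sweep model sizes and see which allocation the law prefers.
C = 1e21                                                 # FLOPs budget (illustrative)
candidates = [1e8, 3e8, 1e9, 3e9, 1e10]
best_loss, best_N = min((predicted_loss(N, C / (6 * N)), N) for N in candidates)
print(f"budget {C:.0e} FLOPs -> ~{best_N:.0e} params, "
      f"~{C / (6 * best_N):.0e} tokens, predicted loss {best_loss:.2f}")
```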
"synopsis" may belong to another edition of this title.
FREE shipping within United Kingdom
Destination, rates & speedsSeller: GreatBookPricesUK, Woodford Green, United Kingdom
Condition: New. Seller Inventory # 50480016-n
Quantity: Over 20 available
Seller: Rarewaves.com UK, London, United Kingdom
Paperback. Condition: New. Seller Inventory # LU-9798288712999
Quantity: Over 20 available
Seller: GreatBookPricesUK, Woodford Green, United Kingdom
Condition: As New. Unread book in perfect condition. Seller Inventory # 50480016
Quantity: Over 20 available
Seller: CitiRetail, Stevenage, United Kingdom
Paperback. Condition: new. Paperback. What You Will Learn in This BookMaster the Mathematical Foundations: Go beyond theory to implement the core mathematical operations of linear algebra, calculus, and probability that form the bedrock of all modern neural networks, using Python and NumPy.Build a Neural Network From Scratch: Gain an intuitive understanding of how models learn by constructing a simple neural network from first principles, giving you a solid grasp of concepts like activation functions, loss, and backpropagation.Engineer a Complete Data Pipeline: Learn the critical and often overlooked steps of sourcing, cleaning, and pre-processing the massive text datasets that fuel LLMs, while navigating the ethical considerations of bias and fairness.Implement a Subword Tokenizer: Solve the "vocabulary problem" by building a Byte-Pair Encoding (BPE) tokenizer from scratch, learning precisely how raw text is converted into a format that models can understand.Construct a Transformer Block, Piece by Piece: Deconstruct the "black box" of the Transformer by implementing its core components in code. You will build the scaled dot-product attention mechanism, expand it to multi-head attention, and assemble a complete, functional Transformer block.Differentiate and Understand Key Architectures: Clearly grasp the differences and use cases for the foundational LLM designs, including encoder-only (like BERT), decoder-only (like GPT), and encoder-decoder models (like T5).Write a Full Pre-training Loop: Move from theory to practice by writing the complete code to pre-train a small-scale GPT-style model from scratch, including setting up the language modeling objective and monitoring loss curves.Understand the Economics and Scale of Training: Learn the "scaling laws" that govern the relationship between model size, dataset size, and performance, and understand the hardware and distributed computing strategies (e.g., model parallelism, ZeRO) required for training at scale.Adapt Pre-trained Models with Fine-Tuning: Learn to take a powerful, general-purpose LLM and adapt it for specific, real-world tasks using techniques like instruction tuning and standard fine-tuning.Grasp Advanced Alignment and Evaluation Techniques: Gain a conceptual understanding of how Reinforcement Learning from Human Feedback (RLHF) aligns models with human intent, and learn how to properly evaluate model quality using benchmarks like MMLU and SuperGLUE.Explore State-of-the-Art and Future Architectures: Survey the cutting edge of LLM research, including methods for model efficiency (quantization, Mixture of Experts), the shift to multimodality (incorporating images and audio), and the rise of agentic AI systems. Shipping may be from our UK warehouse or from our Australian or US warehouses, depending on stock availability. Seller Inventory # 9798288712999
Quantity: 1 available
Seller: California Books, Miami, FL, U.S.A.
Condition: New. Print on Demand. Seller Inventory # I-9798288712999
Quantity: Over 20 available
Seller: GreatBookPrices, Columbia, MD, U.S.A.
Condition: New. Seller Inventory # 50480016-n
Quantity: Over 20 available
Seller: GreatBookPrices, Columbia, MD, U.S.A.
Condition: As New. Unread book in perfect condition. Seller Inventory # 50480016
Quantity: Over 20 available
Seller: Best Price, Torrance, CA, U.S.A.
Condition: New. SUPER FAST SHIPPING. Seller Inventory # 9798288712999
Quantity: 1 available
Seller: Rarewaves.com USA, London, LONDO, United Kingdom
Paperback. Condition: New. Seller Inventory # LU-9798288712999
Quantity: Over 20 available
Seller: Grand Eagle Retail, Mason, OH, U.S.A.
Paperback. Condition: new. Paperback. What You Will Learn in This BookMaster the Mathematical Foundations: Go beyond theory to implement the core mathematical operations of linear algebra, calculus, and probability that form the bedrock of all modern neural networks, using Python and NumPy.Build a Neural Network From Scratch: Gain an intuitive understanding of how models learn by constructing a simple neural network from first principles, giving you a solid grasp of concepts like activation functions, loss, and backpropagation.Engineer a Complete Data Pipeline: Learn the critical and often overlooked steps of sourcing, cleaning, and pre-processing the massive text datasets that fuel LLMs, while navigating the ethical considerations of bias and fairness.Implement a Subword Tokenizer: Solve the "vocabulary problem" by building a Byte-Pair Encoding (BPE) tokenizer from scratch, learning precisely how raw text is converted into a format that models can understand.Construct a Transformer Block, Piece by Piece: Deconstruct the "black box" of the Transformer by implementing its core components in code. You will build the scaled dot-product attention mechanism, expand it to multi-head attention, and assemble a complete, functional Transformer block.Differentiate and Understand Key Architectures: Clearly grasp the differences and use cases for the foundational LLM designs, including encoder-only (like BERT), decoder-only (like GPT), and encoder-decoder models (like T5).Write a Full Pre-training Loop: Move from theory to practice by writing the complete code to pre-train a small-scale GPT-style model from scratch, including setting up the language modeling objective and monitoring loss curves.Understand the Economics and Scale of Training: Learn the "scaling laws" that govern the relationship between model size, dataset size, and performance, and understand the hardware and distributed computing strategies (e.g., model parallelism, ZeRO) required for training at scale.Adapt Pre-trained Models with Fine-Tuning: Learn to take a powerful, general-purpose LLM and adapt it for specific, real-world tasks using techniques like instruction tuning and standard fine-tuning.Grasp Advanced Alignment and Evaluation Techniques: Gain a conceptual understanding of how Reinforcement Learning from Human Feedback (RLHF) aligns models with human intent, and learn how to properly evaluate model quality using benchmarks like MMLU and SuperGLUE.Explore State-of-the-Art and Future Architectures: Survey the cutting edge of LLM research, including methods for model efficiency (quantization, Mixture of Experts), the shift to multimodality (incorporating images and audio), and the rise of agentic AI systems. Shipping may be from multiple locations in the US or from the UK, depending on stock availability. Seller Inventory # 9798288712999
Quantity: 1 available