SECURING AI: A Practical Guide to Prompt Injection, LLM Guardrails and AI Firewalls - Softcover

Chaudhari, Mr. Atul

 
9798254456971: SECURING AI: A Practical Guide to Prompt Injection, LLM Guardrails and AI Firewalls

Synopsis

Securing AI is a practical, end-to-end handbook for anyone responsible for building or deploying AI systems safely.

It opens by explaining why AI systems are fundamentally different to secure than traditional software — because LLMs cannot distinguish instructions from data at the architectural level, every defensive measure is probabilistic rather than absolute, making layered defence the only viable strategy.

The book then moves through four parts. Part I maps the full threat landscape: prompt injection, jailbreaking, training data poisoning, model extraction, and supply chain attacks. Part II builds the defensive stack layer by layer — secure system prompts, input/output guardrails, AI firewalls, RAG security, and red teaming. Part III tackles governance: securing autonomous AI agents, privacy and regulatory compliance (EU AI Act, GDPR, NIST AI RMF), and sector-specific requirements for finance, healthcare, and legal. Part IV looks ahead at deepfakes, quantum computing threats, and the emerging career field of AI security.

The appendices provide immediately usable references: the full OWASP Top 10 for LLMs, a PII detection implementation guide, a 58-term glossary, a five-level maturity model, a curated tools directory, and four real-world incident case studies — Samsung's confidential data leak, Air Canada's chatbot liability ruling, the Microsoft Bing Chat manipulation, and a cloud tenant isolation failure.

The core argument throughout is simple: AI security cannot be an afterthought, defence in depth is non-negotiable, and human oversight remains irreplaceable — no matter how sophisticated the automated controls become.

"synopsis" may belong to another edition of this title.