Supercomputing for AI
Context and Reader’s Guide
Book overview and Table of Contents (PDF)
Preface
How to Use This Book
The Four Axes of the Book
Foundational Performance Principles
Technologies and Navigation Aids
Editorial Notes
Acknowledgments
Part I — The Infrastructure Layer
1. Supercomputing Basics
2. Supercomputing Building Blocks
3. Supercomputing Software Environment and Tools
Part II — The Parallel Execution Layer
4. Launching and Structuring Parallel Programs
5. GPU Programming and CUDA
6. Distributed GPU Programming
Part III — The Intelligence Abstraction Layer
7. Neural Networks: Concepts and First Steps
8. Training Neural Networks: Basics, CNNs, and Deployment
9. Getting Started with PyTorch
Part IV — The Scalability Layer
10. Introduction to Parallel Training of Neural Networks
11. Practical Guide to Efficient Training with PyTorch
12. Parallelizing Model Training with Distributed Data Parallel
Part V — The Language Abstraction Layer
13. Introduction to Large Language Models
14. End-to-End Large Language Models Workflow
15. Exploring Optimization and Scaling of LLMs
Beyond the Layers
16. Cross-Cutting Patterns in Supercomputing for AI
17. Looking Forward: Supercomputing and AI Futures
Epilogue
The Path Behind This Book
About the Author
About WATCH THIS SPACE
Appendices
Essential Linux Commands
SSH Public Key Authentication
C Language Basics
Taking Timing
Python and Its Libraries
Jupyter Notebook Basics
Colophon