R**K
Put a (Bara)CUDA in your programming.
"The CUDA Handbook" is the largest(480p) and latest( June 2013 ) of NVIDIA's series of GPU programming books. It is also the most comprehensive and useful GPU programming reference for programmers to date. It's a tough world out there for programmers who are trying to keep up with changes in technology and this reference makes the future a much more comfortable place to live. Learn about GPGPU programming and get ahead of the crowd.For those programmers who haven't had the time to perceive the changes, GPU programming is a current change in programming design that is sweeping the world of network VOIP management, parallel analysis and simulation, and even supercomputing in a single box. I have personally run a Starfield Simulation on a portable with an i7 processor that increased in speed 112 times by using the internal NVIDIA GeForce 570M. The Starfield frame time reduced from about 2 seconds to about .015 sec. Imagine what I could do with a GeForce 690! Charts indicate that it might exceed 700 times the computing speed!!This book not only tells me how to arrange the software to work with the NVIDIA SDK, but it also shows me the important differences in the architecture of many of the NVIDIA cards to obtain optimum performance.The world of computing is still filled with 32 bit machines( or OS sysstems ) using most of their memory to get their assigned tasks completed. Many of these machines do not have even four core CPUs, forget having over 4GB of memory. They fill computers in production devices, desktops in database support companies, and the racks of IT departments everywhere. The need for faster and more computing does not slow down or stop for these hardware limits. Ant the cost to replace them outright is prohibitive. Now, a demand to manage 5000 computer domains arrives or a messaging demand for 1500 VOIP channels to be mixed in a hundred groups is brought on board or a control simulation to manage six robotic arms in an assembly line needs to be run. 
Without clustering a dozen to a hundred other computers to manage the computing load, the only practical solution is to employ one or two GPUs. Projects that ignore this message are destined to fail, and with that come damaged careers and lost jobs. The way to avoid the trap of limited legacy hardware is to use GPUs to take up the load and stop overloading the limited memory and CPU cores with the increased workload. Each GPU can add 2300 streaming processors to perform the work, and each GPU card can add 4GB of high-speed memory to the limited program memory on the motherboard, which may only be 2GB.

The book introduces the GPU architecture, GPU device memory usage and loading, and kernel code design. Once you have mastered the terminology and run some of the examples, you will be able to start developing code for specific solutions. The first chapters introduce you to NVIDIA GPU devices. The meat of the book starts in Chapter 5 with proper memory handling procedures. Chapter 7 expands the material on blocks, threads, warps, and lanes; it will straighten out the terminology and get you headed into constructive code for the design ahead.

If your task goes beyond the capabilities of a single GPU, Chapter 9 introduces multiple-GPU programming. The later client motherboards provide up to four PCIe sockets, with the potential of holding four GPUs. That kind of supercomputing ability, at about $500 a GPU, can meet even a gamer's budget. Be aware, though, that added complexity requires added design refinement.
Routines need to be optimized: Chapter 11 will help you reduce memory usage, and Chapter 12 will help you increase the efficiency of warp usage. Three more chapters cover reductions and routines used in specialized applications that may become of interest to you, and they are also helpful in further mastering the concepts needed for GPU computing.

Personally, I have a financial program that exceeded my i7 CPU's capability for prediction using neural networks, because it took more than all night to determine rankings for 400,000 stocks. And I thought the one-hour download time off the internet was onerous. Now I have an affordable solution that won't require me to build a shed in the backyard to hold all the computers that would normally be needed to add this feature to my design. All I have to pay for is a bigger power supply and a single GPU card. Happy computing!
T**S
Excellent but NOT for beginners!
As one slowly learns CUDA programming, numerous questions arise concerning the internal workings of the GPU. The beginning programmer does many things on faith: the documentation says to do it this way, so you do it that way, and it works. Why was that way necessary? Not clear.

The documentation supplied by NVIDIA is very good, and several excellent beginners' books are available. But these fail to answer the many subtle issues that arise. That's where this book comes in. Over and over as I read it, I said, "Ohhh, that's why I have to do it that way." This book was written by a real insider, someone who knows CUDA as only an insider can. So this book is MANDATORY for anyone who wants to become an expert in CUDA programming.

However, be warned that this book is NOT for beginners! It presupposes extensive experience in CUDA programming. If this is the first CUDA book you pick up, you'll be hopelessly lost. Tackle this book only after you have a lot of CUDA under your belt.
P**A
Fantastic Book
This book is a must-have if you want to dive into the GPU programming world. It is written in user-friendly language; it is not a "CUDA manual", because even though it describes certain functions and technical aspects of CUDA, the book explains its main features by addressing (simplified) real-life problems in a very pedagogical way. The book also includes a not-so-extensive review of dynamic parallelism (which is why I bought the book in the first place), but it should be more than sufficient for most CUDA "newbies" like me.

I can't say much more about this book except this: if you really want to learn CUDA, buy it. You won't be disappointed.
C**K
It'll be a classic
I know good books about C++, template metaprogramming, and C#. They have become classics for people interested in CS. For CUDA we have only a few books, and basically none of them answers the question "why". But Nicholas does! I really love it.

The only thing that is not so good, from my point of view, is the last part about common algorithms. I think people who read this book already know that material. But anyway, that's only my feeling.
M**I
... I have seen other books on CUDA that I liked more. So
It is a very complete book, but let's say I have seen other books on CUDA that I liked more. In comparison to those I would give it 2 stars. But as I said, it has practically everything you need to know about CUDA and NVIDIA GPUs. Despite the two stars, I would give it a try. You may like it.
D**E
Great book, but dated
If you're a professional writing CUDA code, you need this book. It's the best source I've seen for getting the most performance out of an NVIDIA GPU (and let's face it, the only reason you're writing CUDA code in the first place is for performance). That said, the reason I gave it 3 stars is that it is terribly dated. Large parts of the book are dedicated to architectures that aren't even supported by NVIDIA anymore. And being published in 2013, the new Volta architecture (which made some significant changes to CUDA) isn't there at all. If you need to know the real nuts and bolts of CUDA now, go ahead and buy the book, but I'm waiting with bated breath for the 2018 edition in December.
J**Y
Unable to read the listings
The code listings are very difficult to read: very faint and too small. All e-books that show listings should allow the listings to be expanded and then permit scrolling through them. This is less than ideal; better to buy the print version, I guess. For this book I was unable to even expand the listings.
B**N
Great book - this does more than just clinically explain ...
Great book - this does more than just clinically explain the language. You can tell the author has spent a lot of time using CUDA and shared his experience in what works well, what could be better, how to get performance, and many other pearls of wisdom.
R**E
Excellent book
The book covers in great depth the questions one asks while learning to program in CUDA, particularly what happens in the hardware when you do certain things in the software. However, I think you need to already have basic notions of CUDA to fully appreciate the book, i.e. to have read something like "CUDA by Example" by Jason Sanders.
L**I
Excellent book!
I came to this from the sad experience of the volume entitled "CUDA by Example" from the same publisher, which was a series of already-worked exercises in C. In this book, fortunately, we also have a more theoretical section that explains the computational advantages of having a system with thousands of independent processors and how to exploit their characteristics. Working program examples follow, with evaluations of efficiency and optimization, and a treatment of the best-known parallel computing algorithms. It is certainly a more useful text, both theoretically and as a source of information, than the previous one; in my opinion, however, it is still missing a hundred or so pages devoted to generalizing the problems presented.
V**N
This book is gr8 if you have access to CUDA ...
This book is great if you have access to CUDA-compatible compute devices; I mean a Tesla K20/40/80 or a GeForce-type graphics card from NVIDIA, with the nvcc compiler. The book is very comprehensive, but remember it's a handbook, not a textbook. So if you take a lecture or two for starters, this book can do the rest.
J**R
Five Stars
The book's quality is quite good, and it arrived in perfect condition, even earlier than I expected.
J**A
Good tips for performance optimization
This book is aimed at advanced CUDA programmers, with its emphasis on the performance optimization of kernels. It describes the technical background and presents a great many approaches to optimization. In places the author goes all the way down to the SASS machine code. The author proves very competent and eager to experiment when optimizing kernels. Many readers will profit from this book.

One drawback is that the author also conceived the book as a "comprehensive reference". As a result it contains some very large tables (pp. 93ff, p. 259). The CUDA documentation, on the other hand, is available as HTML and PDF, so a developer can find the information he is looking for much faster on the web than by searching in a book; see e.g. docs.nvidia.com. Another drawback is that APIs and command-line options can change. At the time of the book, CUDA 5.0 was apparently current; meanwhile there is CUDA 5.5, and version 6.0 is on its way (it has appeared as a release candidate).

It also bothered me that the error check with the CUDART_CHECK macro was always printed in the example code. With cudaGetLastError() one could perform the check after the call and put the macro at the end of the line, so that you don't constantly have to read past it.

The book could easily be an estimated 50 pages thinner if the author had used the space better. The section on Amazon Web Services is, in my opinion, not relevant to a book about CUDA either.

Bottom line: anyone who wants to optimize CUDA kernels and is looking for ideas will find them in this book. It does reflect the state of early 2013, but the fundamentals are explained well, and you can get the latest information on CUDA 6.0 and Maxwell from the web.