My post on the Mantle API drew a few comments about the whole AMD vs Nvidia debate…so here you go…
In the computing community there are everlasting turf wars about which brand is better, whether it’s Mac OS vs Ubuntu, Firefox vs Chrome or AMD vs Nvidia. In most cases it’s purely down to preference, but in some it isn’t – and in the last of my examples, AMD vs Nvidia, it genuinely isn’t.
AMD and Nvidia are the two titans of the GPU world, and each comes with a large and relentless fan base. Go up to any PC gamer and simply say, “AMD or Nvidia,” and I’m sure they’ll bore you to death with some story about which they prefer and why. The usual spiel is, “Nvidia, because it’s more powerful,” and this is partly true; but every task has a tool that is most suited to it. For example, you wouldn’t mount a picture on your wall using a sledgehammer; you could, it just wouldn’t be the best choice.
As a general rule of thumb, AMD tend to be less expensive than Nvidia, though this does not in any way mean that AMD don’t make high-powered cards; a quick look at the Radeon HD 7990 will prove that to you pretty damn quickly. Ultimately, if someone asks which card you’d recommend and you don’t reply with, “What do you want to use it for?”, then you’re wrong.
So why exactly are Nvidia cards more expensive? Nvidia’s CUDA (Compute Unified Device Architecture) cores are pretty much the main thing that pushes the price up.
Released in 2006, CUDA is Nvidia’s parallel computing platform and programming model; it gives developers access to the parallel computational elements in supporting GPUs. The card becomes accessible for computation in the same way a CPU is, but instead of focusing on running one thread as fast as possible, it uses parallel throughput to run many concurrent threads slowly. “The CUDA platform is accessible to software developers through CUDA-accelerated libraries, compiler directives and extensions to already well-known languages such as C, C++ and Fortran. Third-party wrappers are also available for Python, Perl, Fortran, Java, Ruby, Lua, Haskell, MATLAB, IDL, and there is native support in Mathematica.” A full list can be found here.
CUDA allows faster downloads and read-backs to and from the GPU, plus full support for integer and bitwise operations, including integer texture look-ups. There are caveats, though: copying between host and device memory can incur a performance hit due to system bus bandwidth and latency, and valid C/C++ code may occasionally fail to compile due to optimisation techniques.
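To give a feel for that “many slow threads” model, here’s a minimal sketch of a hypothetical CUDA C kernel (an element-wise vector add I’ve made up for illustration, not from any particular library). It’s a device-code fragment, so it needs Nvidia’s compiler and a CUDA-capable GPU to actually run; the point is just that every thread works out its own index and handles one element:

```cuda
// Hypothetical CUDA C sketch: element-wise vector addition.
// Each GPU thread computes its own global index and processes one element.
__global__ void vec_add(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's element
    if (i < n)                                      // guard against overrun
        c[i] = a[i] + b[i];
}

// Host side (sketch): copy the inputs to device memory, launch enough
// 256-thread blocks to cover all n elements, then copy the result back:
// vec_add<<<(n + 255) / 256, 256>>>(dev_a, dev_b, dev_c, n);
```

No loop over the array anywhere – the loop is replaced by thousands of concurrent threads, which is exactly the throughput trade-off described above.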
CUDA’s main uses include:
- Accelerated 3D rendering
- Accelerated interconversion of video file formats
- Distributed calculations
- Physical simulations, in particular in fluid dynamics
So if these are what you’re looking to get out of your GPU, then I – as an AMD fan – will say that you should definitely get Nvidia!
However, this is not the end of the road quite yet, as AMD also supports something similar that is being adopted very quickly by major companies such as Adobe; this technology is OpenCL. For anyone who knows me well, you’ll know how much it pains me to say this, but Apple are the original authors of the framework – though Khronos Group did the development, and that makes it fine…or bearable. Personal turf war aside, I love OpenCL and how well companies are adopting it. Originally released in 2008 (version 2.0 was released as a preview in July 2013 and will feature an Android installable client driver extension), it is certainly younger than its Nvidia counterpart, though CUDA has also just released version 5.5, in July too.
OpenCL can be used in a similar way to CUDA, turning the GPU into a general-purpose (non-graphics) processor, and it has been officially adopted by Apple, Intel, Qualcomm, AMD (obviously), Nvidia, Samsung, Vivante and ARM (kudos to Nvidia for not being snobby about it). With regards to speed and simplicity of programming, let me say this loud and clear…THERE IS NO DIFFERENCE! But what pushes AMD in front of Nvidia is that OpenCL is more portable. So long as you stick to the OpenCL libraries and avoid vendor-specific ones, you will have strong, portable software. Also, OpenCL is supported by many more vendors than CUDA, which has Nvidia and Nvidia alone, and that generally means more support and a better chance of success.
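To show just how close the two really are, here’s the same made-up vector add written as an OpenCL C kernel (again a sketch, not from any real library – it needs an OpenCL runtime and device to execute). Swap CUDA’s `__global__` for `__kernel`, mark the buffers `__global`, get the index from `get_global_id()`, and the same kernel source runs on AMD, Nvidia, Intel or ARM hardware:

```c
// Hypothetical OpenCL C sketch: the same element-wise vector add.
// get_global_id(0) plays the role of CUDA's block/thread index maths.
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float *c,
                      const int n)
{
    int i = get_global_id(0);   // this work-item's element
    if (i < n)                  // guard against overrun
        c[i] = a[i] + b[i];
}
```

The host side (creating a context, compiling the kernel source at run time, enqueueing it) is more verbose than CUDA’s, but it is identical on every vendor’s hardware – that run-time compilation step is precisely what buys the portability.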
Personally, I think we will see a large increase in the popularity of AMD GPUs with their recent announcement of Mantle and their excellent OpenCL support. But only time will tell.
Ultimately (or tl;dr): if you want to do video editing or 3D modelling, Nvidia is probably the way to go – for now. And gaming – I honestly, really mean this – is amazing on AMD’s spectrum of cards.
Opinions? Leave a comment below…