Is GPU computing dead already?


[2013/03/26] [dbz] [Is GPU computing dead already?]

After working on GPU raytracing for several years, I am starting to get the impression that GPU computing has passed its peak popularity and is on the decline. I have noticed the following things that make me think GPU computing is not healthy.
- OpenCL development has completely stalled. Nvidia does not even support it anymore, and in its current state (no official C++ in kernels) it is pretty useless for larger projects. OpenCL just never matured.
- OpenCL drivers are generally buggy (AMD) or nonexistent for more recent hardware (Nvidia).
- Hardly any serious applications apart from, for example, LuxRays use OpenCL. It seems fair to say OpenCL is pretty much dead at the moment.
- Nvidia has crippled more recent hardware, so GPU performance increases have stalled between generations (in an attempt to get people to buy expensive Tesla/Quadro cards?).
- The GPU computing developer forums at nvidia.com were down for months last summer because of 'security issues', and my account has disappeared. Not something I would expect from a company the size of NVIDIA that is seriously supporting GPU computing. NVIDIA gives the impression they just don't care.
- There are no replies from NVIDIA staff anymore on the GPU computing forums. Again, they give the impression they just don't care about GPU computing.
- Like OpenCL, CUDA has not matured. Ancient OpenGL-style techniques, like putting geometry in textures instead of global memory, are still required for optimal performance, and not being able to copy objects of classes with virtual functions to the device is a huge limitation for me (although there are workarounds; see the sketch at the end of this post).
- Updating the driver may cause a CUDA program compiled against an older driver to stop running. This is a complete pain for non-technical end users.
- Compared to the CPU, GPUs remain a pain to program, debug and optimize. I think this is something that was majorly underestimated when the GPU hype started.
- Single-GPU performance in raytracing is not really that great compared to a highly optimized CPU raytracer like Embree. The initial GPU computing hype was started because NVIDIA compared optimized GPU algorithms to single-threaded, unoptimized CPU algorithms and claimed the GPU is '10x to 100x faster'.
Overall, I pretty much get the impression that GPU computing has not lived up to its expectations (except for simple things like convolutions). Does someone recognize some of these issues? Or is it just me ...
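For the virtual-function point above, here is a minimal sketch of the usual workaround (class names are invented for illustration; this is the generic device-side-construction trick, not any particular renderer's code). An object built on the host carries a host vtable pointer, so copying it to the GPU breaks virtual dispatch; constructing the object on the device instead lets the compiler set up a device-side vtable.
[CODE]
// Hypothetical CUDA sketch: construct polymorphic objects on the device.
// Device-side 'new' and virtual functions need compute capability 2.0+,
// e.g. compile with: nvcc -arch=sm_20 example.cu
#include <cuda_runtime.h>

struct Shape {
    __device__ virtual float area() const = 0;
    __device__ virtual ~Shape() {}
};

struct Sphere : public Shape {
    float radius;
    __device__ Sphere(float r) : radius(r) {}
    __device__ float area() const { return 4.0f * 3.14159265f * radius * radius; }
};

// Build the object on the device so its vtable points at device code.
__global__ void createSphere(Shape** out, float radius) {
    if (threadIdx.x == 0 && blockIdx.x == 0)
        *out = new Sphere(radius);   // device-side new, device-side vtable
}

__global__ void useShape(Shape** shape, float* result) {
    *result = (*shape)->area();      // virtual dispatch now works on the GPU
}

int main() {
    Shape** d_shape;  float* d_area;  float h_area = 0.0f;
    cudaMalloc(&d_shape, sizeof(Shape*));
    cudaMalloc(&d_area, sizeof(float));
    createSphere<<<1, 1>>>(d_shape, 1.0f);
    useShape<<<1, 1>>>(d_shape, d_area);
    cudaMemcpy(&h_area, d_area, sizeof(float), cudaMemcpyDeviceToHost);
    // h_area now holds ~12.566; the object lives in the device heap.
    return 0;
}
[/CODE]
A variant of the same idea is to cudaMalloc raw storage and run a small "constructor kernel" that uses placement new on it; either way, the point is that the vtable is written by device code.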
[2013/03/26] [graphicsMan] [Is GPU computing dead already?]

Most of these are valid concerns.  I think NVIDIA is happy not to spend time optimizing its OpenCL drivers, since most people using NV cards tend to want the powerful features of CUDA that simply don't exist in OpenCL.
GPUs are very well suited to some compute tasks and not as well to others.  "Hard" ray tracing seems to get limited benefit, which makes sense because it is a problem that is hard to keep coherent -- and coherence is the limiting factor for GPU compute throughput, due to both divergence and bandwidth limitations.  I think they're making slow improvements over time to help us out.  Consider your example of having to use texture memory: there is a new "constant" memory concept in Kepler that can help to make this easier to program.
10x to 100x is simply not reality for ray tracing (unless you're comparing to ARM chips).  Hopefully over the next few years we could actually see 10x over a beefy multi-core x86_64 chip.  But at the moment, you're right, not gonna see it.
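For reference, the Kepler feature mentioned above is presumably the read-only data cache on sm_35 parts, which gives texture-like cached reads from ordinary global memory without binding a texture. A minimal sketch, with made-up kernel and parameter names:
[CODE]
// Hypothetical sm_35 (Kepler GK110) sketch: marking a pointer
// 'const ... __restrict__' -- or using __ldg() explicitly -- routes loads
// through the read-only data cache, i.e. the same path texture fetches use,
// without having to stuff the geometry into a texture first.
__global__ void scaleVertices(const float4* __restrict__ verts,  // read-only, cached
                              float4* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float4 v = __ldg(&verts[i]);   // explicit read-only load (sm_35+)
    out[i] = make_float4(2.0f * v.x, 2.0f * v.y, 2.0f * v.z, v.w);
}
[/CODE]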
[2013/03/26] [stefan] [Is GPU computing dead already?]

I don't think it's dead, I think it's just the hype that's fading. When CUDA and OpenCL came out, there was this general perception of "the CPU is dead, long live the GPU!". Everything would be 50x faster and cost half as much - at least according to press releases, fanboys and the press. Reality, as we know, is not like that: certain algorithms, if implemented properly, will scale beautifully on GPUs; others won't at all. The best practices still need to be figured out. The performance benefits are tempting, but they're not free. And in my experience, it's still safer to stay with GLSL where possible than to use OpenCL.
My perception may be skewed - I attended GPUTech last week. I think Nvidia really, really wants you to use GPU computing.
[2013/03/27] [Dade] [Is GPU computing dead already?]

It is (sadly) more the concept of big discrete GPUs that looks dead than OpenCL (or CUDA) itself. The focus is now all on the mobile market and, if you follow the ARM ecosystem a bit, there is HUGE interest in OpenCL there. Pretty much all vendors support OpenCL 1.1.
It is a market where AMD and NVIDIA are not relevant: maybe you are just looking in the wrong direction.
The ARM Mali T-604 GPU has a lot of interesting features: it is part of an APU, the GPU uses a cache, thread divergence has no cost (!), etc. It may work a lot better than big discrete GPUs in terms of its ability to reach peak performance with ray tracing. I received one a couple of days ago and I hope to do some tests. Today, mobile GPUs still lack absolute performance, but that situation is likely to change, as always happens in IT.
[2013/03/27] [spectral] [Is GPU computing dead already?]

I completely agree with all of you,
don't forget that:
1) NVidia is a commercial company and they have to make choices when developing products. They still support and improve their OpenCL drivers, but slowly.
2) I have a CPU & GPU renderer ([LINK http://www.spectralpixel/]) and, to be honest, it really is faster on the GPU than on the CPU.
3) Developing on the CPU is easy: you have a great development environment like VS2010, a debugger, a profiler, full C++, etc... we don't have such tools for OpenCL yet.
4) C++ compilers are 10-20 years old now and still improving... you can't compare them to GPU compilers... not yet.
5) Current algorithms do not always fit well on the GPU... so we need time to adapt them.
I will just say that we need some time to re-implement and re-invent all the features of the CPU renderers... it is not an easy task [SMILEY ;-)]
Don't forget that the CPU renderer teams have been working for several years now... and their work is still evolving [SMILEY ;-)]
[2013/03/29] [raider] [Is GPU computing dead already?]

I think NVidia just overestimated the GPGPU market: they invested a lot, and the outcome is quite poor. GPGPU is still a niche market. I would say it is a subset of HPC, which is a niche by itself. Yes, relative profits are high, but the revenue is still not sufficient, so they will need to cut investments. The same issue caused Intel to discontinue Larrabee. But Intel has CPUs, so they can afford to throw out GPGPU. NVidia has nothing except its GPUs. Strategically it was the right decision (for NVidia) to go and try to create a new market - it is vital for them, otherwise Intel/ARM/AMD will eat them - but they are failing to convert it from a niche into a commodity. And I'm afraid GPGPU has no future, unfortunately (a little bit more about that below).
If you want to invest your time and/or money in software (e.g. a renderer) which is highly dependent on uncommon/proprietary hardware (like GPUs), think twice. Strategically it is much smarter to invest in CPU-based solutions: your customers do not need to buy expensive hardware specially to run your software, so with the same budget they will be able to spend more on YOUR software, which is better for you. Customers are afraid of investing in proprietary or uncommon hardware that ties them to a single vendor. Common hardware can be used for multiple distinct tasks (especially with the growing popularity of virtualization), so the hardware needed for your software will likely share its cost with other functions; most probably your customers already have enough CPU power to render, and you just need to utilize it properly - multithreading and distributed rendering on heterogeneous hardware are what you should invest your efforts in. I would say that by writing GPU renderers you mainly support NVidia; don't do it, unless NVidia pays you for it ;)
[2013/03/30] [hobold] [Is GPU computing dead already?]

The whole computing landscape is shifting, because the era of exponentially increasing single thread CPU performance is over.
IMHO this means two things:
1. Mainstream computing performance will actually shrink, as the mainstream shifts from big bulky personal computers to tiny little gadgets.
2. Parallelism will increase for everyone who does performance sensitive computation. This will be painful, because parallelism just ain't easy. But it's the only direction left that is known to sort of work.
We're still at the beginning of both these trends. Mobile gadgets go parallel because it helps them save energy. High end hardware goes parallel because that's the only way performance can grow further. We are forced to go down that path, but it is really unexplored territory. Our understanding of parallel algorithms is limited (on top of fundamental limits of parallelism itself!). Our tools are still immature. Our hardware is still rough around the edges.
Or, to return to the topic: GPU computing is not so much dying as it is living through its birth pangs. I don't know if Nvidia will live, if CUDA will remain a supported platform. I don't know if OpenCL will eventually become friendly to developers. I don't know if Intel's Xeon Phi (which is the son of Larrabee) will ever be succeeded by a mass market product.
But I do know that "throughput processors" will continue to evolve. I can imagine how the cost of SIMD thread divergence could be drastically reduced, for example (so the claims about ARM's Mali GPU don't sound so incredible to me). And I guess this is not the last improvement that people will come up with.
Many of us here are early adopters of "general purpose parallelism". We don't realize it, because we think parallelism is an old hat. But parallelism used to be a specialty for a small number of supercomputer applications. Nowadays there is quite a bit of pressure to parallelize many more applications.
But parallelism is still hard. Frustration with the GPGPU paradigm is just one of the many ways in which we early adopters have to suffer for our privilege. Our privilege of getting a small integer factor higher performance per dollar, per square millimeter of silicon, per watt. Exponential performance growth is over.
[2013/03/30] [graphicsMan] [Is GPU computing dead already?]

Very reasonable points.  I think that what will continue to happen is that (a) CPUs will get more "wide" features to allow more instruction-level/data-level parallelism, and (b) GPUs will get better at processing incoherent tasks, and also continue to get better at running different code concurrently (note that this trend has already started, as different SMs on NV hardware can already be running different code; unsure about ATI GPUs).
In many ways the Xeon Phi should be easier to use than an NV GPU; however, IMO, Intel does not have its act together in terms of enabling usable programming models that maximize use of the chip nearly as well as NV's CUDA does.  They have primitives for easy offload, which is great, but with those alone you won't get great use out of the chip.  As time goes on, NV and ATI hardware will get better at handling incoherent tasks and different code, while Intel will inevitably provide better software support.
The point has already been made, but it is worth reiterating: the time is near when most improvements will come from writing software differently rather than more FLOPs.  The hardware may provide better support for writing that software (Unified Memory Architecture, etc...), but raw compute will not continue to increase like it has in the past.
[2013/03/30] [Dade] [Is GPU computing dead already?]

>> graphicsMan wrote:Very reasonable points.  I think that what will continue to happen is that (a) CPUs will get more "wide" features to allow more instruction-level/data-level parallelism, and (b) GPUs will get better at processing incoherent tasks, and also continue to get better at running different code concurrently (note that this trend has already started as different SMs on NV hardware can already be running different code.  Unsure about ATI GPUs.)
Yup, it is now common for new GPUs to have hardware support for multiple queues, mostly because it is required to run OS/driver kernels and application kernels at the same time. Otherwise the GUI freezes each time you run a kernel requiring more than a few ms (i.e. it can be really annoying).
 >> graphicsMan wrote:The point has already been made, but is worth reiterating: the time is near when most improvements will come from writing software differently rather than more FLOPs.
Just to show a proof of this concept, this is a comparison between a classic C++ multi-threaded path tracer and an OpenCL path tracer, _both_ running on the same CPU: [LINK http://www.youtube.com/watch?v=jk-N4f9ze4k&feature=youtu.be&t=31s]
Interestingly, the OpenCL version is faster because the kernel is built and compiled on the fly, with many scene parameters expanded as constants, many unused code paths removed, etc.
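To illustrate why building the kernel on the fly helps: with scene parameters expanded as compile-time constants, the compiler can fold them, delete untaken code paths and unroll loops, instead of reading flags from memory and branching per ray. Below is a rough CUDA analogue of that effect using template parameters in place of OpenCL's online compiler; all names are invented for the sketch, and it is not the actual path tracer's code.
[CODE]
// Rough illustration (not the real renderer): when scene settings are
// compile-time constants, the compiler folds them, removes the untaken
// branch entirely and can unroll the bounce loop. OpenCL's runtime compiler
// achieves the same by rebuilding the kernel with the scene's settings
// baked in as #defines.
template <bool HAS_TEXTURES, int MAX_BOUNCES>
__global__ void shade(const float3* __restrict__ hitNormals,
                      float3* radiance, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float3 L = make_float3(0.0f, 0.0f, 0.0f);
    #pragma unroll
    for (int bounce = 0; bounce < MAX_BOUNCES; ++bounce) {
        float f = 0.5f + 0.5f * hitNormals[i].z;   // stand-in for real shading
        if (HAS_TEXTURES)                          // dead code when false
            f *= 0.8f;                             // stand-in for a texture lookup
        L.x += f;  L.y += f;  L.z += f;
    }
    radiance[i] = L;
}

// Host side: launch the specialization matching the loaded scene, e.g.
//   shade<false, 4><<<blocks, threads>>>(d_normals, d_radiance, n);
[/CODE]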
[2013/04/03] [shiqiu1105] [Is GPU computing dead already?]

>> dbz wrote: [...] not being able to copy objects of classes with virtual functions to the device is a huge limitation for me. (Although there are workarounds)
Hi, can you please explain more about the workarounds for not being able to pass classes with virtual functions to the kernel? I have been having the same problem recently.
