Nvidia limiting gtx680 compute speed?

(L) [2012/03/22] [beason] [Nvidia limiting gtx680 compute speed?] Wayback!

According to this article:
[LINK http://www.tomshardware.com/reviews/geforce-gtx-680-review-benchmark,3161-15.html]
Nvidia is artificially limiting the compute speed of their consumer graphics cards, including the new GTX680. If so that is extremely disappointing.   [SMILEY :x]
Time for AMD I guess?
EDIT: or maybe just forget about the GPU altogether
(L) [2012/03/22] [apaffy] [Nvidia limiting gtx680 compute speed?] Wayback!

Wow, unexpected benchmark result.  Makes a GTX 580 look a lot more tempting if the price can come down a bit.
(L) [2012/03/23] [stefan] [Nvidia limiting gtx680 compute speed?] Wayback!

"Never attribute to malice that which is adequately explained by stupidity" - it might just be a driver SNAFU. When they rush a gamer card to the market, their priority is to excel at gaming benchmarks and the OpenCL compiler will take the back seat. I'd be interested in seeing CUDA or OptiX benchmarks for that card, I would assume NVIDIA puts more emphasis on those than OpenCL.
(L) [2012/03/23] [Dade] [Nvidia limiting gtx680 compute speed?] Wayback!

>> beason wrote:Nvidia is artificially limiting the compute speed of their consumer graphics cards, including the new GTX680. If so that is extremely disappointing.    

Anandtech (and other websites) reports the same kind of results: [LINK http://www.anandtech.com/show/5699/nvidia-geforce-gtx-680-review/17]
NVIDIA may have designed the 680 just to be able to run the DirectX driver as fast as possible and nothing else. For instance, I have read some reports about the size of the 680 register file that could explain why it is extremely slow at complex GPU computing tasks. It sounds a bit strange considering it is an NVIDIA GPU, but there should be another GPU coming in the next months (i.e. a 685 or whatever it is going to be called) and it should be the one for GPU computing (i.e. used in Tesla cards, etc.). Its die size is supposed to be nearly 2x that of the 680.
P.S. it is really strange, it looks like we are going back to the days of fixed function hardware  [SMILEY :|]
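(Side note: the per-SM resources being speculated about here - register file, SM count, memory interface - can be read straight from the CUDA runtime, so the two cards can be compared side by side. A minimal sketch, assuming only the standard cudaDeviceProp fields; the bandwidth line is the usual back-of-envelope estimate, not an NVIDIA-published figure.)
[CODE]
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp p;
        cudaGetDeviceProperties(&p, i);
        printf("%s (sm_%d%d)\n", p.name, p.major, p.minor);
        printf("  SMs:                   %d\n",    p.multiProcessorCount);
        printf("  32-bit regs per block: %d\n",    p.regsPerBlock);
        printf("  shared mem per block:  %zu KB\n", p.sharedMemPerBlock / 1024);
        printf("  core clock:            %.0f MHz\n", p.clockRate / 1000.0);
        printf("  memory clock:          %.0f MHz\n", p.memoryClockRate / 1000.0);
        printf("  memory bus width:      %d bit\n",   p.memoryBusWidth);
        // Back-of-envelope peak bandwidth: DDR, so 2 transfers per memory clock.
        double gbs = 2.0 * (p.memoryClockRate * 1000.0) * (p.memoryBusWidth / 8.0) / 1e9;
        printf("  est. peak bandwidth:   %.0f GB/s\n", gbs);
    }
    return 0;
}
[/CODE]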
(L) [2012/03/23] [stefan] [Nvidia limiting gtx680 compute speed?] Wayback!

>> Dade wrote:P.S. it is really strange, it looks like we are going back to the days of fixed function hardware  
I blame popular hardware reviews: their verdicts drive card sales, and they all run the same predictable benchmarks. As a gfx card vendor that wants to make money, you obviously focus on scoring as high as possible in those benchmarks, at the cost of ignoring the rest.
I'd welcome hardware reviews that not only run the standard tests but also put hardware through a bunch of surprise cases, the ones the vendor did not optimize for. There is already a history of drivers [LINK http://www.bing.com/search?q=driver+benchmark+cheat&form=APMCS1 "optimizing"] for certain benchmarks.
(L) [2012/03/23] [toxie] [Nvidia limiting gtx680 compute speed?] Wayback!

>> beason wrote:Nvidia is artificially limiting the compute speed of their consumer graphics cards, including the new GTX680. If so that is extremely disappointing.    
This is DEFINITELY not true. There is no artificial slowdown in the driver/software or hardware crippling involved; it's just a change of priorities that was chosen for this line of GPUs (i.e. mainly reducing power consumption drastically while keeping gaming performance at least on par). For the last generation (Fermi) it was rather the opposite (i.e. a huge jump in performance for compute, but a moderate jump for games, except for the DX11 features of course).
(L) [2012/03/23] [Merax] [Nvidia limiting gtx680 compute speed?] Wayback!

I don't think it's a conspiracy, just a change of design goals that resulted in less bandwidth per compute unit.   
This article has more details:  [LINK http://www.realworldtech.com/page.cfm?ArticleID=RWT032212172023]
And you can see from the above Anandtech link that it got faster in tests where computation dominates and slower in tests where memory access dominates.
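(A toy illustration of that point, not from the article: two trivial CUDA kernels, one that is almost pure arithmetic and one that is almost pure memory traffic. On a card that gained ALUs but not bandwidth, the first should speed up relative to the previous generation while the second stays put. Kernel names and problem sizes are made up for the sketch, and a real benchmark would warm up and repeat the launches.)
[CODE]
#include <cstdio>
#include <cuda_runtime.h>

// Almost pure arithmetic: a long dependent FMA chain per thread with a single
// store at the end, so runtime tracks ALU throughput rather than bandwidth.
__global__ void computeBound(float* out, int iters) {
    float x = threadIdx.x * 0.001f + 1.0f;
    for (int i = 0; i < iters; ++i)
        x = x * 1.000001f + 0.000001f;
    out[blockIdx.x * blockDim.x + threadIdx.x] = x;
}

// Almost pure memory traffic: one load and one store per element, almost no
// math, so runtime tracks memory bandwidth.
__global__ void memoryBound(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * 2.0f;
}

static float elapsedMs(cudaEvent_t t0, cudaEvent_t t1) {
    float ms = 0.0f;
    cudaEventSynchronize(t1);
    cudaEventElapsedTime(&ms, t0, t1);
    return ms;
}

int main() {
    const int n = 1 << 24;                       // 16M floats, ~64 MB per buffer
    float *in = 0, *out = 0;
    cudaMalloc(&in, n * sizeof(float));
    cudaMalloc(&out, n * sizeof(float));
    cudaMemset(in, 0, n * sizeof(float));

    cudaEvent_t t0, t1;
    cudaEventCreate(&t0);
    cudaEventCreate(&t1);

    dim3 block(512), grid(n / 512);              // 32768 blocks, fits Fermi's grid limit

    cudaEventRecord(t0);
    computeBound<<<grid, block>>>(out, 2048);
    cudaEventRecord(t1);
    printf("compute-bound kernel: %6.2f ms\n", elapsedMs(t0, t1));

    cudaEventRecord(t0);
    memoryBound<<<grid, block>>>(in, out, n);
    cudaEventRecord(t1);
    printf("memory-bound kernel:  %6.2f ms\n", elapsedMs(t0, t1));

    cudaFree(in);
    cudaFree(out);
    return 0;
}
[/CODE]
The interesting number when running this on a 580 vs a 680 would be how the two ratios move, not the absolute times.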
(L) [2012/03/26] [dr_eck] [Nvidia limiting gtx680 compute speed?] Wayback!

I'm confused.  According to slide 14 of this presentation [LINK http://www.nvidia.com/docs/IO/113297/ISC-Briefing-Sumit-June11-Final.pdf], Kepler is supposed to have almost 3X the DP FLOPS/W of Fermi.  Yes, it burns fewer Watts, but I was expecting at least 2X the performance.  Is it just the OpenCL compiler?  Doesn't Nvidia have a history of poor support for OpenCL?  Please tell me that all is well with CUDA.
(L) [2012/03/27] [Dade] [Nvidia limiting gtx680 compute speed?] Wayback!

>> dr_eck wrote:I'm confused.  According to slide 14 of this presentation [LINK http://www.nvidia.com/docs/IO/113297/ISC-Briefing-Sumit-June11-Final.pdf], Kepler is supposed to have almost 3X the DP FLOPS/W of Fermi.  Yes, it burns fewer Watts, but I was expecting at least 2X the performance.  Is it just the OpenCL compiler?  Doesn't Nvidia have a history of poor support for OpenCL?  Please tell me that all is well with CUDA.
The slides are probably about the "Big" Kepler, the one with a >500 mm² die size (nearly 2x the size of the 680), the one dedicated to computing tasks. It is supposed to be released in Q4.
NVIDIA seems to be going down the path of having one GPU tailored for gaming and one for GPU computing. I'm a bit worried because this could have a bad side effect on prices: GPU computing has always been cheap because it is a side product of the gaming market  [SMILEY :|]
P.S. Nvidia OpenCL support was quite good until the release of OpenCL 1.1 support with the CUDA 4.x back end. I think they changed their compiler back end with CUDA 4.0 and something went really wrong for the OpenCL support (i.e. performance was nearly cut in half). It is a long-standing problem that, for some unknown reason, has not yet been fixed.
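(To put dr_eck's FLOPS question into rough numbers: peak throughput can be estimated from the device properties. A small sketch only; the cores-per-SM table and the consumer-card DP ratios in the comments are assumptions on my part, since the runtime API does not report them, so treat the output as back-of-envelope.)
[CODE]
#include <cstdio>
#include <cuda_runtime.h>

// CUDA cores per SM by compute capability. Assumed values, not queried:
// 2.0 (GF100/GF110) = 32, 2.1 (GF104/GF114) = 48, 3.0 (GK104) = 192.
static int coresPerSM(int major, int minor) {
    if (major == 2) return (minor == 0) ? 32 : 48;
    if (major == 3) return 192;
    return 0;  // unknown architecture, give up
}

int main() {
    cudaDeviceProp p;
    cudaGetDeviceProperties(&p, 0);
    int cores = coresPerSM(p.major, p.minor) * p.multiProcessorCount;
    // Peak single precision: cores * 2 flops per FMA * shader clock.
    double spGflops = cores * 2.0 * (p.clockRate * 1e3) / 1e9;
    printf("%s: %d cores, ~%.0f SP GFLOPS peak\n", p.name, cores, spGflops);
    // Double precision on the consumer cards is a fixed fraction of that
    // (roughly 1/8 on GeForce Fermi, 1/24 on GK104), so the big DP FLOPS/W
    // claims really only apply to the compute-oriented chip.
    return 0;
}
[/CODE]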
(L) [2012/03/27] [spectral] [Nvidia limiting gtx680 compute speed?] Wayback!

About OpenCL performance:
Have you filed a bug on the partners.nvidia.com web site? I'm sure that if you submit an issue and an example (SLG...) they will give you some feedback and improve it!
