
Kepler or greater PowerMizer issues



Posted

Anybody else having issues with PowerMizer restricting the core clock to 405 MHz regardless of the preferred mode or performance level? On my 650M SLI setup the cores are stuck at 405 MHz, while the memory clocks step through 405-800-2000 MHz as expected. This is on openSUSE 12.3, kernel 3.7.10, using the nVidia driver from the repo or any driver I've installed manually (up to and including beta 325.08). I've found one reference to this problem on nVidia's site.
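For anyone who wants to check what the driver itself reports from a terminal, these queries are a quick sanity check (attribute and field names are from the nvidia-settings and nvidia-smi documentation; availability can vary by driver version):

    # Current core/memory clocks and performance level, as nvidia-settings sees them
    nvidia-settings -q GPUCurrentClockFreqs -q GPUCurrentPerfLevel

    # The full table of performance levels the driver knows about
    nvidia-settings -q GPUPerfModes

    # Cross-check against nvidia-smi's clock report
    nvidia-smi -q -d CLOCK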

Granted, throwing SLI at the Steam games with settings maxed out gives almost playable framerates, but it is rather frustrating to have all that untapped potential sitting there.
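For what it's worth, the workaround that usually gets passed around for PowerMizer problems is pinning the performance level via RegistryDwords in the Device section of xorg.conf. These options are unofficial and undocumented, and the values below are forum lore rather than anything from nVidia, so treat this as something to experiment with, not a confirmed fix:

    Section "Device"
        Identifier "Device0"
        Driver     "nvidia"
        # Unofficial knobs; commonly cited meanings:
        #   PerfLevelSrc=0x2222      - fixed performance level on both AC and battery
        #   PowerMizerDefaultAC=0x1  - prefer maximum performance on AC
        Option "RegistryDwords" "PowerMizerEnable=0x1; PerfLevelSrc=0x2222; PowerMizerDefaultAC=0x1"
    EndSection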

Posted

I think you're onto something. I have checked several of our computers (which run Linux Fedora 19) and all report a 399 MHz memory clock in Nvidia X Server Settings. Most of them use an Nvidia GTX 660, but even my own computer with a GTX 670 says 400 MHz. For the GTX 660 and GTX 670 it should be 1500 MHz. We even have an old Socket 775 computer with an Nvidia GT 9500 which also says 399 MHz...

So maybe the software doesn't detect the frequency correctly, or there really is a problem. In any case, all our computers run the same kernel, 3.10.3-300.fc19.x86_64 (actually, I'm not sure the old Socket 775 machine with the GT 9500 is running x86_64).

Going by feel: one of our computers dual-boots Fedora 19 and MS Windows 7, and the graphics seem about the same speed on both... but with a completely different graphical user interface, that impression is about as scientific as holding a wet finger in the wind...
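A slightly less wet-finger way to compare what the machines report, assuming a driver recent enough for nvidia-smi's CSV queries, would be something like:

    # One line per GPU: name plus current graphics and memory clocks
    nvidia-smi --query-gpu=name,clocks.gr,clocks.mem --format=csv

    # Terse nvidia-settings output for comparison (prints core,memory in MHz)
    nvidia-settings -q GPUCurrentClockFreqs -t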

Posted


I will try running the Unigine benchmark on it under Win8 and see if there's a big difference. Bear in mind that nVidia doesn't support SLI on mobile GPUs under Linux, so I will have to disable it on Win8 and bench a single card.


Posted

OK, an update.

First, nVidia's drivers seem only a little better than the Rage128 ones I remember. Cases in point: no SLI in Linux, an apparently crappy Win8 OpenGL implementation, etc.

So, here's a screenshot from my openSUSE 12.3 install using a single GT 650M (OpenGL, obviously):

[screenshot: benchmark result, Linux single card, OpenGL]

After a reboot into Win8, using the DX11 renderer (because I forgot to change that first) and with SLI disabled:

[screenshot: benchmark result, Win8 single card, DX11]

Performance was 130% of the Linux result, i.e. a 30% increase.

I was interested in the SLI performance:

[screenshot: benchmark result, Win8 SLI, DX11]

That's 150% of the Win8 single-card result and 195% of the Linux single-card result.
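(To be clear about the convention: I'm quoting scores as a percentage of the baseline, not as an increase over it, and the ratios multiply through consistently. A trivial check:)

    # 150% of the DX11 single-card score, which was itself 130% of the Linux score
    echo '1.50 * 1.30' | bc    # prints 1.95, matching the 195% figure above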

Realising I had not done a like-for-like comparison, I changed the settings in the Win8 control panel to OpenGL and reran the test. With a single GT 650M:

[screenshot: benchmark result, Win8 single card, OpenGL]

That's 122% of the Linux result, but a disappointing 94% of the DX11 result. That gap is outside what I would consider normal benchmark deviation, so somewhere between this benchmark, nVidia's drivers, and Windows' OpenGL optimisations there is clearly an issue.

And now the Win8 OpenGL SLI results:

[screenshot: benchmark result, Win8 SLI, OpenGL]

This is what threw me for the biggest loop. How is there such a regression? Is it SLI's fault or the game engine's? Perhaps OpenGL has major issues with SLI? Whatever the cause, the fact that enabling SLI makes the result plummet like that is not good.

As an aside, while in OpenGL SLI mode the primary card stayed at 400/400 MHz clocks.
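If anyone wants to reproduce this, watching the reported clocks while the benchmark runs is easy enough (same caveat as above about attribute availability):

    # Poll the current clocks and performance level once a second during a run
    watch -n 1 'nvidia-settings -q GPUCurrentClockFreqs -q GPUCurrentPerfLevel -t'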
