Showing Posts For MartinSPT.6974:

Performance inconsistencies...

in Account & Technical Support

Posted by: MartinSPT.6974


Discounting that there was some fluke here... There are countless other threads on this board with other users having the same problems. There is even a thread where Anet itself is asking for DxDiags and configs. Countless top-of-the-line systems are in that thread; systems that should have zero issues at nearly all times.

Almost all the FPS issues I’ve read about are a user issue. They all appear to be GPU memory bandwidth limited (“my GPU isn’t working flat out” is what gives it away).

I haven’t seen countless top-of-the-line systems in those threads. Mostly I’ve seen people with mobile GPUs or cheap desktop GPUs with a narrow memory bus width (e.g. 64-bit or 128-bit), resulting in low bandwidth on the order of 80-100 GB/s, wondering why they don’t get the performance of cards with 384-bit buses and around 192 GB/s.
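(The arithmetic behind that claim is just bus width times effective data rate. A rough C++ sketch, with illustrative clocks rather than numbers measured from any specific card:)

#include <cstdio>

// Peak memory bandwidth in GB/s: bytes per transfer (bus width / 8)
// multiplied by the effective data rate in GT/s.
double peak_bandwidth_gbs(int bus_bits, double data_rate_gtps)
{
    return (bus_bits / 8.0) * data_rate_gtps;
}

int main()
{
    // Illustrative numbers only: a 128-bit card at 5 GT/s vs a 384-bit card at 4 GT/s.
    printf("128-bit @ 5 GT/s: %.0f GB/s\n", peak_bandwidth_gbs(128, 5.0)); // ~80 GB/s
    printf("384-bit @ 4 GT/s: %.0f GB/s\n", peak_bandwidth_gbs(384, 4.0)); // ~192 GB/s
    return 0;
}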

I have no doubt there are some quirks; however, in the vast majority of cases people have their aspirations ahead of their machine’s capabilities.

Wow, trying to be technical, and you don’t have a clue what you are talking about.
GPU memory bus width? Really? 64-bit, 128-bit? Really again?

I haven’t seen even a ‘credible’ sub-128-bit VRAM bus in about 10 years, and you are talking about people playing this game with a video card with a 64-bit memory bus?

GDDR5/GDDR4 cards are typically 256-bit, and the ‘old’ stuff is GDDR3/GDDR2, where 128-bit buses have been ‘common’ since 2003.

Are you maybe talking about the GeForce 7xxx series, where some of the low-end cards used 64-bit internal blocks but still had a 128-bit memory bus? That is about as old as I can think of that would even remotely have anything to do with what you are talking about.
(And I don’t think many are trying to play the game with a PS3 class GPU.)

The grownups are talking, go play outside, or troll another forum, please.

Performance inconsistencies...

in Account & Technical Support

Posted by: MartinSPT.6974


With your lack of knowledge of how games use multi-threading, I’m willing to bet you and your “tech teams” are the goofs and the source of your problems.

Games do not utilise multi-threading as intended – they cheat. They load the main module on one CPU, the sound module on another, and so on. Restricting the availability of CPUs forces them onto the ones remaining and consequently increases their utilisation. This is completely different to, say, a video rendering process that will fully load all cores.
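(A minimal Win32 sketch of the pattern being described, with the “main module on one CPU, sound module on another” idea made literal; it is an illustration, not GW2’s or any particular engine’s actual code:)

#include <windows.h>

// Stand-in for a sound "module" running on its own core.
DWORD WINAPI AudioLoop(LPVOID)
{
    for (int i = 0; i < 1000; ++i) { /* mix/stream audio */ Sleep(1); }
    return 0;
}

int main()
{
    // Pin the main/game-logic thread to core 0.
    SetThreadAffinityMask(GetCurrentThread(), (DWORD_PTR)1 << 0);

    // Spawn the audio thread and pin it to core 1.
    HANDLE audio = CreateThread(nullptr, 0, AudioLoop, nullptr, 0, nullptr);
    SetThreadAffinityMask(audio, (DWORD_PTR)1 << 1);

    for (int i = 0; i < 1000; ++i) { /* game logic + render submission */ Sleep(1); }

    WaitForSingleObject(audio, INFINITE);
    CloseHandle(audio);
    return 0;
}

Restricting the process to fewer cores just forces those pinned threads to share whatever cores remain, which is why per-core utilisation goes up.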

Wouldn’t surprise me if you were Fraps’ing your tests with it set to record at 29fps or something similarly stupid, given your multi-threading claims.

GPU threading LOL.

It’s a brave man that throws out the goof challenge on the interweb – there are always people smarter than you (and me) out there.

Really? Wow…

GPU Threading? Is this a new concept to you? Have you never worked with user mode shaders?

Go ask a developer about shaders, or threads per pixel, and see if they think you are funny when you tell them GPU threading is a LOL.

In addition to the internal shader threading on the GPU, there is higher-level scheduling that occurs in the game and, more recently, at the OS level.

Have you never read ANYTHING about the WDDM technology in Windows Vista/7/8? Windows has GPU-level threading/scheduling that was previously left to applications to handle by yielding, the way OpenGL does on Linux. Now threading and scheduling can be handled granularly by the OS or the application, but Windows gets the final say in ‘threading’ and ‘scheduling’ of GPU operations, with an increasing level of preemption since WDDM 1.0 and Vista.

(This is one reason Windows continues to beat Linux and OSX in graphics and gaming: the OS is handling the GPU, so a game or GPGPU operation is not going to cause the UI or another GPU-dependent application to stutter or fail.)

Go look up WDDM, or start reading here, little one.
http://msdn.microsoft.com/en-us/library/windows/hardware/gg487344.aspx

PS
Hit up the SWTOR forums and note the name that posted about the GPU VRAM issue and how to trick the client into listening to the OS when it sets up VRAM.

The HeroEngine client (used by SWTOR) checks for Vista/7/8 and then ignores the OS-reported VRAM and the VRAM supplied by the OS through GPU virtualization. This is why turning on ‘XP Compatibility Mode’ increases the performance of SWTOR: the game stops ‘disregarding’ the OS-reported VRAM level and goes ahead and uses all of the VRAM (virtual and physical) supplied to it.
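(The “OS-reported VRAM” in question is the figure DXGI hands back for the adapter. If you want to see what Windows reports on your own box, a minimal sketch, assuming the Windows SDK and MSVC; this is just the query, not HeroEngine’s or GW2’s actual code:)

#include <dxgi.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")

int main()
{
    IDXGIFactory* factory = nullptr;
    if (FAILED(CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory)))
        return 1;

    IDXGIAdapter* adapter = nullptr;
    if (SUCCEEDED(factory->EnumAdapters(0, &adapter)))
    {
        DXGI_ADAPTER_DESC desc = {};
        adapter->GetDesc(&desc);
        // Dedicated VRAM on the card, plus the system memory WDDM can virtualise into.
        printf("Dedicated VRAM:       %llu MB\n",
               (unsigned long long)(desc.DedicatedVideoMemory >> 20));
        printf("Shared system memory: %llu MB\n",
               (unsigned long long)(desc.SharedSystemMemory >> 20));
        adapter->Release();
    }

    factory->Release();
    return 0;
}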

I don’t care WHO you think I am, but there is a threading issue with GW2, and if you don’t give a flip, go away and troll the WoW forums.

Ok?

(edited by MartinSPT.6974)

Performance inconsistencies...

in Account & Technical Support

Posted by: MartinSPT.6974


For the last few days, my tech team has been trying various configurations to see if they could isolate the bottleneck or track down the wide range of performance they were seeing on our test systems, and it appears users in the forums have been doing the same.

Here are some things noted by my techs that don’t make sense, unless there are some major problems with both CPU and GPU threading.

•System 1
Intel Core i7-3770K (More than enough CPU)
ATI 6970 1GB
16GB RAM
SSD
Win7 x64
Avg 26fps @ 1680×1050

•System 2
Intel Pentium 4 3.4GHz (Not even close to enough CPU)
NVidia 7950 512MB
4GB RAM
SSD
Win7 x64
Avg 29fps @ 1920×1200

Now why does this not make any sense? It is impressive that a computer from 2006 can hit nearly 30fps, but virtually the fastest computer you can build today does not seem to be using its resources wisely.

Before some goof jumps in with “your system is messed up”: we have a test center, and we configured 8 different systems and tested several MB/RAM/CPU/GPU combinations and various OS/driver configurations.

Here are things we tried, that don’t make sense…

On System 1, we put in an NVidia GTS 250 768MB, latest drivers, on a clean OS install. We were trying to pin down NVidia/ATI behaviour over several generations. Overall, both ATI and NVidia sucked the same.

However, one interesting thing about the game: hitting AutoDetect in-game sets everything to Medium and native resolution with the 3x slower NVidia GTS 250, yet with the ATI 6970, 6850, and 5850 video cards AutoDetect puts everything at Low and even enables subsampling.

(**When affinity was limited to 2 CPU cores, the game would then AutoDetect the 3x faster GPU at Medium settings. Changing affinity back to 4 cores, AutoDetect would set everything to Low again on the fast card, while it stayed at Medium on the slower GPU.)

That should not be happening.

On the i7, i5, and AMD CPUs that have 4 cores, the game would have a small number of threads, yet consistently peg each core at about 1/4, leaving roughly three-quarters of the available CPU unused.

Which does not make sense, because if the game is starving for CPU, as it appears to be, it is not maximizing the use of any core.

Expanding on this theory: when we shoved affinity to 1 CPU core, it would hit 100% on that core, and when we shoved affinity to 2 CPU cores, both cores would hit near 90-100%; however, when we enabled 3 CPU cores, each core’s usage dropped to around 50%, and when we enabled 4 CPU cores, usage dropped to 25% per core, and the game would still be starving for CPU.
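(For anyone who wants to repeat the 1/2/3/4-core runs, a bare-bones sketch of one way to change a running process’s affinity from outside the game; Task Manager’s “Set Affinity” does the same thing. Pass the game’s PID and a hex mask, where 0x1 = 1 core, 0x3 = 2, 0x7 = 3, 0xF = 4:)

#include <windows.h>
#include <cstdio>
#include <cstdlib>

int main(int argc, char** argv)
{
    if (argc < 3)
    {
        printf("usage: setaff <pid> <mask>   e.g. setaff 4242 0x3 (cores 0 and 1)\n");
        return 1;
    }

    DWORD     pid  = (DWORD)strtoul(argv[1], nullptr, 0);
    DWORD_PTR mask = (DWORD_PTR)strtoul(argv[2], nullptr, 0); // 0x1, 0x3, 0x7, 0xF ...

    // Open the target process with the right to change its information.
    HANDLE proc = OpenProcess(PROCESS_SET_INFORMATION | PROCESS_QUERY_INFORMATION,
                              FALSE, pid);
    if (!proc)
    {
        printf("OpenProcess failed: %lu\n", GetLastError());
        return 1;
    }

    if (!SetProcessAffinityMask(proc, mask))
        printf("SetProcessAffinityMask failed: %lu\n", GetLastError());

    CloseHandle(proc);
    return 0;
}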

There is no reason that expanding affinity out to all cores should ‘decrease’ performance and CPU load. (Yes, there are some CPUs that throttle with heat, but that was not what we were seeing.)

If the system is starving for CPU, and threading out to 4 cores as the GW2 client was doing, it should be consuming as much of each core as possible, shoving them all to 100% if needed.

After 2 cores, enabling 3 and 4 cores no longer increased FPS in the game, even though the game was using a portion of all 4 cores, albeit only a percentage of each core.

I am wondering whether the new scheduling code that Tom’s Hardware reported was added is where this problem lies.

It is almost as if it is threading too much and not ‘managing’ the threads well, which again would be wiser to hand off to the NT kernel than to manage inside the game.


There is no reason the Pentium 4 system should run okay while a fast i7 with a much faster GPU and more resources runs so badly.

There is also no reason that the game’s performance with only 2 cores assigned to it should be ‘equal’ in FPS to using 3 or 4 cores on the same system.


I see a lot of people being dismissive of players who are not happy with performance, telling them to get a newer CPU or newer GPU, etc… However, there is an issue with the game’s performance, and it does not match what the hardware specifications should deliver.

So maybe be a bit kinder to your fellow players, as non-informative and dismissive statements are not helping find the solution, and they also discourage the developers from taking the performance issues seriously.

So take this information with a grain of salt or use it to figure out what is up with the performance of the game.

Good luck, and I hope a developer is also listening/reading.