Showing Posts For Dricoust.4592:
Yes, but QPC relies on QPF precision, and with the platformclock flag set, QPF is derived only from the HPET timer. On my system, with HPET on + platformclock on, QPF = 14.31818 MHz, which, by disabling the TSC, removes the need for synchronization. If I leave HPET on but disable platformclock, QPF = ~3.8 MHz.
Again, QPC does nothing other than return the current TSC of the current processor, as can be discovered with a debugger. This is why you are warned about setting the platformclock flag, as it's only intended for debugging purposes; why this hasn't been mentioned here I'm not sure.
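If you want to sanity-check this on your own machine, QueryPerformanceFrequency reports the rate QPC is currently being driven at. A minimal sketch; the check against ~14.31818 MHz is just the figure quoted above, not anything official:

```cpp
// Minimal sketch: read QPF and see which source Windows appears to be using.
// A value near 14.31818 MHz suggests the HPET (useplatformclock set);
// a few MHz typically means a divided/processor-derived clock.
#include <windows.h>
#include <cstdio>

int main()
{
    LARGE_INTEGER freq;
    QueryPerformanceFrequency(&freq);        // counts per second
    const double mhz = freq.QuadPart / 1e6;
    printf("QPF = %.5f MHz\n", mhz);

    if (mhz > 14.0 && mhz < 15.0)
        printf("Looks like the HPET (~14.31818 MHz).\n");
    else
        printf("Some other source (TSC-derived or divided clock).\n");
    return 0;
}
```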
Why would GW be page faulting when it uses so little RAM? Just curious. I have 16 GB of RAM on my machine, and Task Manager claims only 2 to 3 GB is in use. I know Windows is a little wacky about how the page file works, but it's odd that it's page faulting that often with sufficient physical RAM.
The first restriction is the process working set size, which establishes an upper bound on the total amount of committed (physically mapped) memory. The working set size is not immutable; however, by default it's only a few MB (the actual value is a function of the particular OS and the available memory). Secondly, the memory manager aggressively trims the current working set in an opportunistic fashion, in an attempt to keep as much physical memory free as possible at all times in case of a peak in the allocation request stream.
Also, the working set size of GW2 is responsible for those out-of-memory errors you may occasionally receive; I reported it a while back, yet it has yet to be fixed.
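For anyone curious what the defaults actually look like, here's a minimal sketch that just queries the current limits of the calling process via GetProcessWorkingSetSize (the values will vary with the OS and available memory, as noted above):

```cpp
// Minimal sketch: query the current process's working-set limits to see
// the default minimum/maximum Windows has assigned it.
#include <windows.h>
#include <cstdio>

int main()
{
    SIZE_T minWs = 0, maxWs = 0;
    if (GetProcessWorkingSetSize(GetCurrentProcess(), &minWs, &maxWs))
        printf("working set: min = %zu KB, max = %zu KB\n",
               minWs / 1024, maxWs / 1024);
    return 0;
}
```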
Unless you explicitly enable platformclock=true in the BCD, Windows will use a combination of (RD)TSC + HPET.
Windows only disables per-core TSCs when platformclock is set in the BCD, reverting to the LAPIC (on Windows 7; Win 8 is a different story, it seems…) when HPET is off, or using the HPET when it is enabled in the ACPI.
Anyhow, TSCs are a real implementation mess; I'm glad there is a way to be rid of them.
At present, QPC still relies entirely on the current processor's TSC; the HAL callback is merely a stub. To overcome the problems associated with this implementation, you would usually adjust for the time drift by synchronizing with the system timer, which is indeed typically derived from the most accurate available source (e.g. a PIT); however, even this is further [intentionally] diminished in resolution to prevent excessive interrupt servicing. The GW2 developers do not account for this.
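For what it's worth, the resynchronization I'm describing looks roughly like this. A minimal sketch of the general idea only; the names and the resync cadence are my own choices, not anything taken from the game:

```cpp
// Minimal sketch: keep a fast RDTSC-based clock, but periodically re-anchor
// it against the (coarse but stable) system timer so per-core TSC drift
// cannot accumulate between resync points.
#include <windows.h>
#include <intrin.h>
#include <cstdint>

struct DriftCorrectedClock
{
    uint64_t baseTsc  = __rdtsc();          // TSC at the last resync
    uint64_t baseMs   = GetTickCount64();   // system timer at the last resync
    double   tscPerMs = 0.0;                // estimated TSC ticks per millisecond

    // Call roughly once a second (e.g. from the frame loop).
    void Resync()
    {
        const uint64_t tsc = __rdtsc();
        const uint64_t ms  = GetTickCount64();
        if (ms > baseMs)
            tscPerMs = double(tsc - baseTsc) / double(ms - baseMs);
        baseTsc = tsc;
        baseMs  = ms;
    }

    // Milliseconds since boot, interpolated with RDTSC from the last resync.
    double NowMs() const
    {
        if (tscPerMs <= 0.0)
            return double(GetTickCount64());
        return double(baseMs) + double(__rdtsc() - baseTsc) / tscPerMs;
    }
};
```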
For one, Guild Wars doesn't use the mm-timer (it uses RDTSC), and furthermore the potential gain in precision is redundant given the preemptive environment that is Windows/Mac.
Based on my own analysis, the performance issues with the game are simply due to poor design; no amount of hardware you throw at it, or tweaking that you do, is going to compensate for this.
For example, even someone with very little technical knowledge would understand that disk access is an expensive operation. Guild Wars incurs, based on several sample machines, roughly 60 page faults/s, and each page fault results in a disk access, which typically has a latency of around 2 ms. This alone is having a severe impact on the game's performance [as is easily demonstrated using a profiler].
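You can get a rough faults-per-second figure without a full profiler by sampling the process page-fault counter. A minimal sketch; it samples the calling process for simplicity (pointing it at another process means opening a handle with the appropriate query rights), and PageFaultCount includes soft faults as well, so treat it as an upper bound on the disk-backed ones:

```cpp
// Minimal sketch: sample the page-fault counter once a second to get a
// faults/second figure comparable to the ~60/s quoted above.
#include <windows.h>
#include <psapi.h>
#include <cstdio>
#pragma comment(lib, "psapi.lib")

int main()
{
    PROCESS_MEMORY_COUNTERS pmc = { sizeof(pmc) };
    GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc));
    DWORD last = pmc.PageFaultCount;

    for (int i = 0; i < 10; ++i)            // sample for ten seconds
    {
        Sleep(1000);
        GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc));
        printf("page faults/s: %lu\n", pmc.PageFaultCount - last);
        last = pmc.PageFaultCount;
    }
    return 0;
}
```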
Did you try turning it off and on again?
I’ve been reporting this error and its solution for a while now, and it’s getting on my nerves given the number of subsequent updates that have yet to address it.
The error (out of memory), which will affect any Windows target, is caused by the artificial commit limit imposed by the process’s working-set size. And again, the solution is simply to adjust the working set size.
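A minimal sketch of what I mean by adjusting it; the 256 MB / 1 GB limits are placeholders I picked for illustration, not values from the game:

```cpp
// Minimal sketch: raise the process's working-set limits so the memory
// manager is less eager to trim it. Raising the minimum may require
// SeIncreaseWorkingSetPrivilege to be enabled in the process token.
#include <windows.h>
#include <cstdio>

int main()
{
    const SIZE_T minWs = 256u * 1024 * 1024;    // 256 MB minimum (placeholder)
    const SIZE_T maxWs = 1024u * 1024 * 1024;   // 1 GB maximum (placeholder)

    if (!SetProcessWorkingSetSize(GetCurrentProcess(), minWs, maxWs))
        printf("SetProcessWorkingSetSize failed: %lu\n", GetLastError());
    else
        printf("working set limits raised\n");
    return 0;
}
```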
Also, start opportunistically locking your memory pools; the excessive number of page faults is having a severe impact on performance (as is easily demonstrated using a profiler).
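And a minimal sketch of what I mean by locking a pool, using VirtualAlloc + VirtualLock; the 64 MB pool size is again just an illustration, and the lock only succeeds if the minimum working set is large enough to hold the locked region (hence the call from the previous sketch):

```cpp
// Minimal sketch: allocate a pool with VirtualAlloc and pin it with
// VirtualLock so its pages cannot be trimmed and re-faulted in from disk.
#include <windows.h>
#include <cstdio>

int main()
{
    const SIZE_T poolSize = 64u * 1024 * 1024;   // 64 MB pool (illustrative)

    // Give the lock room inside the working set (see the earlier sketch).
    SetProcessWorkingSetSize(GetCurrentProcess(),
                             poolSize + 16u * 1024 * 1024,
                             poolSize + 64u * 1024 * 1024);

    void* pool = VirtualAlloc(nullptr, poolSize,
                              MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (!pool || !VirtualLock(pool, poolSize))
    {
        printf("failed to lock pool: %lu\n", GetLastError());
        return 1;
    }
    printf("%zu MB pool locked into the working set\n", poolSize >> 20);

    // ... sub-allocate long-lived objects out of `pool` here ...

    VirtualUnlock(pool, poolSize);
    VirtualFree(pool, 0, MEM_RELEASE);
    return 0;
}
```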