Are there any performance tweaks?
Because with DX9, memory management of the RAM on the video card is entirely under the programmer's control, rather than handled by DirectX as it can be in DX10/11. Likely to be friendly to as many players as possible, the game seems to shoot for around 1 GB of graphics memory max, probably because 1 GB cards were common in the era the engine was being designed, while 2, 3 and 4 GB cards weren't.
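To illustrate what "under the programmer's control" means, here is a minimal sketch, assuming a hypothetical application-side budget; the 1 GB cap, the byte counter and the CreateBudgetedTexture helper are my own inventions for the example, not anything from the actual client. Under D3D9 the application itself decides whether a texture gets created or whether something has to be evicted first.

```cpp
// Minimal sketch (not ArenaNet's code): under D3D9 the application decides
// how much texture memory it will use. Assumed ~1 GB cap as described above.
#include <d3d9.h>

static const UINT kTextureBudgetBytes = 1024u * 1024u * 1024u; // assumed budget
static UINT g_textureBytesInUse = 0;                           // tracked by the app itself

// Hypothetical helper: create a texture only if it fits the self-imposed budget,
// otherwise the caller has to evict something (or fall back to a low-res version).
HRESULT CreateBudgetedTexture(IDirect3DDevice9* dev, UINT w, UINT h, UINT sizeBytes,
                              IDirect3DTexture9** out)
{
    if (g_textureBytesInUse + sizeBytes > kTextureBudgetBytes)
        return E_OUTOFMEMORY; // an app-level decision, not the driver's

    // D3DPOOL_MANAGED lets the runtime page the texture in and out of VRAM,
    // but the overall "how much do we keep resident" policy is still the app's.
    HRESULT hr = dev->CreateTexture(w, h, 1, 0, D3DFMT_DXT1,
                                    D3DPOOL_MANAGED, out, nullptr);
    if (SUCCEEDED(hr))
        g_textureBytesInUse += sizeBytes;
    return hr;
}
```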
It also seems that the game engine met their expectations right up to the point they started adding lots of players to the mix: 5-man parties worked well, 50-man zergs not so much.
Also, the 32-bit client can force a dreadful amount of disk loading in crowds as you move among other players; it's very obvious when you have Character Model Limit and Quality not set to their highest. My drive runs at 10-20 MB/s or more constantly during boss fights as I move around the crowd of players. Setting quality to low, which leaves them as generic figures, eliminates nearly all of that. A 64-bit client should be able to hold more character models in memory for longer, rather than discarding them to release memory as you hit the limits of a 32-bit client.
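To make the 32-bit vs 64-bit point concrete, here's a purely illustrative model-cache sketch; every name and size in it is an assumption, not the real client's code. The only point is that a smaller address-space budget means models get evicted and re-read from disk more often as you move through a crowd.

```cpp
// Purely illustrative: a 32-bit process has roughly a 2-4 GB address space, so
// its model cache budget has to stay small and models get thrown away (then
// re-read from the .dat file) as soon as the limit is hit. All names/sizes made up.
#include <cstdint>
#include <list>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

struct Model { std::vector<uint8_t> data; };

class ModelCache {
public:
    explicit ModelCache(size_t budgetBytes) : budget_(budgetBytes) {}

    // Returns a cached model, or loads it (stubbed here) and evicts least-recently
    // used entries until the cache fits its budget again.
    const Model& Get(const std::string& id) {
        auto it = index_.find(id);
        if (it != index_.end()) {                 // cache hit: no disk access
            lru_.splice(lru_.begin(), lru_, it->second);
            return it->second->second;
        }
        Model m = LoadFromDisk(id);               // cache miss: hits the drive
        used_ += m.data.size();
        lru_.emplace_front(id, std::move(m));
        index_[id] = lru_.begin();
        while (used_ > budget_ && lru_.size() > 1) {
            used_ -= lru_.back().second.data.size();
            index_.erase(lru_.back().first);
            lru_.pop_back();                      // a tight 32-bit budget does this constantly
        }
        return lru_.front().second;
    }

private:
    Model LoadFromDisk(const std::string&) { return Model{std::vector<uint8_t>(8u << 20)}; }

    size_t budget_;
    size_t used_ = 0;
    std::list<std::pair<std::string, Model>> lru_;
    std::unordered_map<std::string, std::list<std::pair<std::string, Model>>::iterator> index_;
};

// Illustrative usage: a 32-bit build might only afford a few hundred MB for models,
// while a 64-bit build could keep several GB resident, e.g.
//   ModelCache cache(sizeof(void*) == 4 ? size_t(512u << 20) : size_t(4ull << 30));
```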
Hey, nobody is going to believe me here. Everyone thinks that software automagically adapts to whatever hardware is in the system. More cores are automatically better, more VRAM is automatically better, a faster GPU is automatically better. But without understanding how code works from the client to the OS to APIs, I'm talking to a wall.
Hey, nobody is going to believe me here. Everyone thinks that software automagically adapts to whatever hardware is in the system. More cores are automatically better, more VRAM is automatically better, a faster GPU is automatically better. But without understanding how code works from the client to the OS to APIs, I'm talking to a wall.
Hey, I believe you. It's a far better explanation than I hoped for.
I just feel like I had better performance in the past than I do now. It's odd for a 3-year-old game with subpar visuals.
As is often stated in these parts, GW2 loves single-threaded CPU performance above all.
Overclocking is the best thing you can do for FPS, if your CPU and motherboard support it.
I'm already overclocked to 4.6 GHz; I can't go any higher on this cheap cooler.
I'm already overclocked to 4.6 GHz; I can't go any higher on this cheap cooler.
What CPU?
i5 3570k
i5 3570k
OK, then you're already getting some of the best single-threaded performance at 4.6 GHz. Adding an SSD will help with load times and with getting textures into the GPU (they are all pulled from the gw2.dat on the fly), and depending on your GPU, that could also use an upgrade (a GTX 660, GTX 750 Ti, HD 7790, R7 260X or better is all that is required to get 60 FPS at 1080p in this game).
I'm already running on an SSD, and my GPU is a GTX 770.
As I said from the start, there is no reason this game should be running at 20 FPS in LA.
Do you have a large number of processes running in the background? My Win7 sits around 45 processes with MSI Afterburner + RTSS + wireless peripheral applications running.
You could be eating up your cycles.
Do you have a large number of processes running in the background? My Win7 sits around 45 processes with MSI Afterburner + RTSS + wireless peripheral applications running.
You could be eating up your cycles.
Nah, I only run software for my mouse, MSI Afterburner, and Skype. I clear out boot items every few months.