(edited by Sidizen.9048)
How does WvW FPS Work??
After the linking, it does get crazy when there are huge zergs on the map. At one point during a 3-way fight my FPS went from 102 to 15 within a second. Felt like hell.
Yeah, I noticed that too, but it doesn’t seem like graphics settings affect FPS in WvW. Playing on high settings and playing on low settings gave me the same FPS.
30 FPS + 200 ping + 3 second skill lag = best console wvw experience.
Yes it’s common, and no, there isn’t much you can do besides upgrading your CPU, which is usually the bottleneck.
Ok got it. 30 FPS is just fine by me I just think it’s weird that graphics don’t affect my FPS.
Sounds like you have frame limiter on?
Try setting it to 60/Unlimited.
I have it at unlimited.
Well-known fact: unless you are running on a toaster, the only thing that will affect FPS in WvW in any way is the character model limit (i.e. how many fully outfitted characters you see compared to defaults). Even so, the game can easily slow to a crawl. Just today I had an example when we were taking SW camp on Alpine with maybe 8v8 players. We got them downed, then FPS took a noticeable hit all of a sudden. The reason? A 30-man zerg had just entered the archway. I wasn’t even looking in their direction.
It’s also the reason why we essentially have wallhacking in WvW, lol. You can feel when a zerg is about to rush you from behind a door/wall/cover.
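A rough sketch of why that could happen (stub C++ with made-up types, my guess at the mechanism, not ANet’s actual code): the client still applies state updates for every player on the map cell, and culling only skips the draw, so the CPU pays for the zerg before you can see it.

#include <cstdio>
#include <vector>

// Hypothetical entity loop: updates run for everyone nearby; the
// view-frustum check only skips the rendering half of the cost.
struct Entity { float x, y; bool visibleToCamera; };

int main() {
    std::vector<Entity> nearby(30, Entity{0.f, 0.f, false}); // zerg behind the wall
    for (Entity& e : nearby) {
        e.x += 1.f;                                // network update applied: CPU pays here
        if (e.visibleToCamera) { /* draw(e); */ }  // GPU would only pay here
    }
    std::printf("updated %zu entities, drew 0\n", nearby.size());
}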
Are you playing in fullscreen or windowed borderless mode? If I run windowed borderless in WvW my graphics are 30ish with occasional screen tears. If I run it in fullscreen I get 60 easy.
Jester – Hand of Blood [HoB]
Piken Square
You will see a significant fps improvement the lower your ‘character model limit’ option is. It’s at the bottom in the graphics tab of the options menu.
Maguuma – Predatory Instinct [HUNT]
Necromancer
Character model limit and model detail impact FPS in WvW a lot. For me it’s a 25 FPS difference when I lower them.
I hear a lot about this issue from other people but almost never experience it, and I run with the highest settings on everything. I do believe it can, at times, have a lot to do with the type of system the game is running on, whether or not the memory is shared between video and CPU, and so on. I believe the internet speed you run on makes a difference as well. I actually run GW2 on an ASUS gaming laptop and don’t have any issues when all three blobs are on the same screen fighting. My FPS rate is typically between 55-80 except when blobbing… then it can drop down to around 30 at worst. (Note: this is not a brag, it’s an observation based on what I’m running, and it makes a difference imo.)
I have a similar problem as OP.
I think the loss of FPS in these situations is related to the processor (CPU) and not the graphics card. Large scale fights put a load on your processor to parse all incoming information before it can be rendered. This is pretty much confirmed by the FPS increase I get when I overclock my processor in WvW. The increase in FPS is proportional to the increase in my clock speed.
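A quick way to check this yourself (a minimal standalone C++ sketch, nothing from the game): time the CPU half of a frame on its own. If it dominates total frame time, FPS scales almost linearly with clock speed, which matches the overclocking result above.

#include <chrono>
#include <cstdio>

int main() {
    using clock = std::chrono::steady_clock;
    for (int frame = 0; frame < 5; ++frame) {
        auto start = clock::now();
        volatile double x = 0;                     // stand-in for per-frame CPU work
        for (int i = 0; i < 10000000; ++i) x = x + i * 0.5;
        auto cpuDone = clock::now();
        // renderAndPresent();                     // hypothetical GPU submit; cheap when CPU-bound
        double ms = std::chrono::duration<double, std::milli>(cpuDone - start).count();
        std::printf("frame %d: CPU work %.1f ms\n", frame, ms);
    }
}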
GW2 is very heavily CPU-bound.
TorquedSoul.8097, I’ve noticed similar situations; even when changing machines, my laptop is what runs GW2 better in high-spam gameplay.
I have run GW2 on some crazy powerful machines including 980ti GPUs, RAID 0 SSDs, servers with six processors and a terabyte of RAM, and my own rigs which are very respectable, and just for giggles I loaded the entire game onto a RAM drive.
Best I can tell is that the game is bound by the CPU and hamstrung by its networking code. You can basically load the game on the most bad kitten rig and it will still perform graphically like a dog when a lot is going on.
I suspect their network and rendering code isn’t fully threaded and at least partially runs on the main thread. The result is that the graphics processing ends up waiting on network packets to be processed.
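If that suspicion is right, the loop would look something like this (stub code illustrating the pattern, not GW2’s source): a packet burst from an arriving zerg gets fully drained before the frame is allowed to draw.

#include <cstdio>
#include <optional>
#include <queue>

struct Packet { int entityId; float x, y; };
std::queue<Packet> incoming;                       // filled by the socket layer

std::optional<Packet> poll() {
    if (incoming.empty()) return std::nullopt;
    Packet p = incoming.front();
    incoming.pop();
    return p;
}

void applyToGameState(const Packet&) { /* move the entity, apply skills, etc. */ }

int main() {
    for (int i = 0; i < 200; ++i) incoming.push({i, 0.f, 0.f}); // a zerg walks in
    for (int frame = 0; frame < 3; ++frame) {
        while (auto p = poll())                    // everything drained on one thread...
            applyToGameState(*p);
        std::puts("frame drawn");                  // ...so the draw waits on the drain
    }
}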
“Youre lips are movin and youre complaining about something thats wingeing.”
It’s important to note that this isn’t specifically a GW2 thing. The lag experienced in the crowded zones in any MMO is caused by the same thing.
There may be a better way to pipeline the new network data into rendering frames faster, but I don’t think anyone has solved that problem yet.
Lag is sort of a different animal. Lag is when a player action is delayed due to server contention, a slow network, etc. GW2 is somewhat odd in that network performance directly affects graphics performance. Many games (especially FPS shooters) render in one thread and process network code in another. The game still performs and looks great, but skills may not fire responsively. I suspect in GW2 this issue comes from its proximity-based design, where attacks in fields have to be resolved server-side.
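The decoupled design those shooters use looks roughly like this (my sketch of the general pattern, not any particular engine): the network thread publishes snapshots, and the render thread keeps drawing the latest one it has, so slow packets delay skill feedback but never stall frames.

#include <atomic>
#include <chrono>
#include <cstdio>
#include <mutex>
#include <thread>

struct Snapshot { long tick = 0; };                // latest known world state
Snapshot shared;
std::mutex m;
std::atomic<bool> running{true};

void networkThread() {
    for (long tick = 1; running; ++tick) {
        std::this_thread::sleep_for(std::chrono::milliseconds(50)); // a slow server
        std::lock_guard<std::mutex> lock(m);
        shared.tick = tick;                        // publish the newest state
    }
}

int main() {
    std::thread net(networkThread);
    for (int frame = 0; frame < 20; ++frame) {     // renders at its own pace
        Snapshot local;
        { std::lock_guard<std::mutex> lock(m); local = shared; }
        std::printf("frame %d drew tick %ld\n", frame, local.tick);
        std::this_thread::sleep_for(std::chrono::milliseconds(16)); // ~60 FPS
    }
    running = false;
    net.join();
}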
The solution most games use is that they don’t have a massive amount of AoE, and positional fields are basically non-existent. This reduces the graphics processing required and the server’s computational load, and greatly reduces network traffic. This is why some old school MMOs can in many ways outperform GW2. That said, what ANet has accomplished in this design is truly impressive, even though the design is not conducive to performance in large scale play.
I suspect if there is a GW3 it will have far less AoE, more reliance on single target skills and few if any spammable fields.
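Back-of-envelope on why positional AoE is so expensive server-side (all numbers assumed, purely illustrative):

#include <cstdio>

int main() {
    const long players = 150;                      // three 50-man blobs, assumed
    const long fieldsPerPlayer = 1;                // active ground effects each, assumed
    const long pulsesPerSec = 2;                   // hit checks per field per second, assumed
    long checks = players * fieldsPerPlayer * pulsesPerSec * players;
    std::printf("~%ld positional range checks per second server-side\n", checks);
}

Every one of those checks can also produce network traffic (damage, conditions, boons), so the cost compounds.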
“Youre lips are movin and youre complaining about something thats wingeing.”
https://forum-en.gw2archive.eu/forum/game/wuv/FPS-CPU-GPU/first#post5984493
Can someone try 5GHz in zerg vs zerg fights? (I can’t push past 4.7GHz, air cooled only.)
Or, if you have the resources, test the FPS at zergs on a dual or quad socketed server board, not OCed of course.
Mine is an i7 at 4.5GHz with a 980 Ti and I still get 35 FPS at zergs (queued zerg).
The only improvement I’ve made was to turn off all names (enemies, players, usable objects, etc.); then my FPS at zergs got a boost. The downside is you don’t know who the enemy is, lol.
Turn off names and let the graphics distinguish an ally from an enemy (I won’t spoon-feed the lazy designers). Even highlight enemies on the minimap instead of the screen; there’s no need to identify an “incoming” from red names that appear on screen, it just has to be on the minimap.
Gate of Madness
(edited by Norbe.7630)
My machine must be super bad xxxx then, since I don’t have the problems described here at any time, even when 3 blobs are in the same spot fighting. As I said above, I’m seriously not boasting anything… just stating that in my own personal experience it does make a difference when you have ALL things running well: good CPU, separate high-performing graphics card with its own memory, very high speed internet, and so on. I run the highest settings on everything in GW2… and I’m on a laptop… Maybe I’m just lucky?
GW2 in big fights (as in 3-way SM map-queued fights) runs poorly, with a lot of skipped frames and poor FPS, no matter the environment I have tested.
It is a game that doesn’t seem to run any better on a 980ti than a 770, at least at 1080p resolution. It also doesn’t seem to be affected by cores, as our Xeon E5 rigs and our overclocked i7 don’t significantly improve performance over a stock i5.
Drive performance also doesn’t seem to matter past a stock SSD. A RAID0 SSD setup didn’t run it any faster, nor did a pure RAM drive. Throwing a ridiculous amount of RAM at it had the predictable effect of doing nothing.
I haven’t dug into the 64 bit client, but the fact that the system performs better when running it in big fights seems to imply some local memory pressure that Win32 limits. That the game seems to run the same on 2 cores as it does on 4 or 8 implies it is driven mostly by a main thread.
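That observation lines up with Amdahl’s law. A toy calculation (the 10% parallel fraction is my assumption, not a measurement of GW2):

#include <cstdio>

// Amdahl's law: speedup = 1 / ((1 - p) + p / n) for parallel fraction p on n cores.
double speedup(double parallelFraction, int cores) {
    return 1.0 / ((1.0 - parallelFraction) + parallelFraction / cores);
}

int main() {
    const double p = 0.10;                         // assume only 10% of frame work is parallel
    for (int cores : {2, 4, 8})
        std::printf("%d cores -> %.2fx speedup\n", cores, speedup(p, cores));
}

With a serial fraction that large, 2, 4, and 8 cores all land within a few percent of each other, which is exactly the “same FPS on any core count” behavior.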
I won’t go into what I have or haven’t done looking under the hood, but I am a programmer with years of professional assembler development/debugging on mainframes and PC environments (yep, I am old-er).
“Youre lips are movin and youre complaining about something thats wingeing.”
Straegen.2938, you should definitely try the 64-bit client; I noticed a very good boost. But I’m still having a lot of skill lag, and FPS drops when a blob passes nearby, though I see it works as an alert tool, lol.
Straegen: I’m running an i7 at 2.4 GHz, and I am running the 64-bit GW2. Yes, I have FPS drops when big numbers are on screen, but never the lag people talk about. To top it all off, I run in windowed mode, never full screen… lol, is it because ASUS just makes a better gaming laptop than most desktops? Maybe?
I run the 64 bit client… what I have not done is any analysis of what it is doing on the assembler level. I suspect it is doing the exact same thing as the 32 bit but with a larger memory pool and more registers. Floating point calculations are also faster in 64 bit which may account for some performance.
FPS-wise I am seeing relatively the same performance. The 64 bit client mostly resolved a crashing issue for me, which was likely a register or memory pool issue. I think a lot of people think 30 FPS is “fine”, while others who run games at 4K with high detail settings at 60+ FPS can really appreciate just how bad 30 FPS looks and plays.
“Youre lips are movin and youre complaining about something thats wingeing.”
The graphics settings aren’t affecting your WvW FPS because the graphics systems of the client are mostly c. 2010 code for DirectX9, using visual effects methods, textures, models, etc., that fit within a compute budget appropriate for that year.
Those client systems have been improved since launch, but the foundation was built before launch. Your GPU and CPU are simply over-spec for the client’s target.
WvW causes downward spikes in FPS because the GPU stops producing frames while it waits for the CPU or Server to tell it what to draw. That is only a local problem for your PC if the CPU is bottlenecking, and that’s actually difficult to analyze sometimes.
If your PC parts are all built on technology younger than c. 2014, we can assume that the “problem” is in how the WvW server/client relationship code is structured, and that the FPS loss is its most common form: WvW on GW2.exe cannot keep up with calculating a battle between all of the clients it allows to connect to the same instanced map.
The FPS loss in that scenario is irrelevant to any graphical effects calculations. That’s why your settings don’t change the outcome. The FPS loss is in the client-to-server-to-client communication loop, which is necessarily forcing all clients to occasionally (and frequently) pause their image (frame) draw routines while waiting for more data.
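A toy model of that pause (not the real client, just the shape of the problem): if the draw routine cannot present until the next server update arrives, frame time is floored by the communication loop no matter how fast the GPU is.

#include <chrono>
#include <cstdio>
#include <thread>

struct ServerState { long tick; };

ServerState waitForServerUpdate(long tick) {
    // stand-in for a congested client-to-server-to-client round trip
    std::this_thread::sleep_for(std::chrono::milliseconds(60));
    return {tick};
}

int main() {
    using clock = std::chrono::steady_clock;
    for (long tick = 0; tick < 3; ++tick) {
        auto start = clock::now();
        ServerState s = waitForServerUpdate(tick); // frame draw paused here
        // drawFrame(s);                           // the GPU part is the cheap part
        double ms = std::chrono::duration<double, std::milli>(clock::now() - start).count();
        std::printf("tick %ld: frame took %.0f ms (~%.0f FPS ceiling)\n",
                    s.tick, ms, 1000.0 / ms);
    }
}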
RvR isn’t “endgame”, it’s the only game. Cu in CU.
(edited by Virtute.8251)
Clock speed matters more than CPU core count, with IPC being most important (on Intel, unless you have fewer than 4 threads you won’t see much difference).
That is why you didn’t see an improvement from a Xeon.
A 2.6-3.0GHz Intel Core i5 vs a 4.6GHz+ Intel i5/i7 is a big difference.
OP:
Set character model limit to medium
Turn off post-processing and reflections
Use the 64-bit client
In your backline: Elementalist+Mesmer+Necromancer
(edited by Infusion.7149)