How the 'culling' system can slow your game.
I’m not very in the know on tech stuff, but what does an SSD have to do with loading character models in an event?
Slightly off topic for this forum, but this is also why I think some WvWers see more 'lag' with the new culling system: the first thing it does when you see opponents is unload half your ally warband and load the textures for the opponents.
Then when you're fighting, it is constantly in a load/unload cycle as it dynamically adjusts who to show.
If it were not so aggressive, people with under-top-spec CPUs might get much better frame rates.
I’m not very in the know on tech stuff, but what does an SSD have to do with loading character models in an event?
If you are loading from an SSD, its I/O is much greater, so it can rush the textures into RAM much faster than a conventional spinning hard drive.
That is to say, because an SSD can be 10 to 100 times faster at random reads than a spinning hard drive, instead of the hard drive bottlenecking and people loading slowly, you get people loading and unloading multiple times a second, generating lots of CPU work.
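If anyone wants to see the gap on their own drives, here is a rough, self-contained test I knocked up (purely hypothetical, nothing to do with the GW2 client): it times a couple of thousand random 4 KB reads from whatever large file you point it at. Run it once against a file on the spinning drive and once against a file on the SSD and compare the per-read times. A second run against the same file will mostly hit the OS cache, so use a fresh large file or reboot between runs.

// random_read_test.cpp -- hypothetical benchmark, not GW2 code.
// Build: g++ -O2 -std=c++17 random_read_test.cpp -o random_read_test
// Usage: ./random_read_test <path to a large file>
#include <chrono>
#include <cstdint>
#include <fstream>
#include <iostream>
#include <random>
#include <vector>

int main(int argc, char** argv) {
    if (argc < 2) {
        std::cerr << "usage: " << argv[0] << " <large file>\n";
        return 1;
    }
    std::ifstream file(argv[1], std::ios::binary);
    if (!file) {
        std::cerr << "could not open " << argv[1] << "\n";
        return 1;
    }
    file.seekg(0, std::ios::end);
    const std::uint64_t fileSize = static_cast<std::uint64_t>(file.tellg());
    constexpr std::uint64_t kBlock = 4096;  // read 4 KB per seek
    constexpr int kReads = 2000;            // number of random reads to time

    std::mt19937_64 rng(12345);
    std::uniform_int_distribution<std::uint64_t> offset(0, fileSize > kBlock ? fileSize - kBlock : 0);
    std::vector<char> buf(kBlock);

    const auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < kReads; ++i) {
        file.seekg(static_cast<std::streamoff>(offset(rng)));  // jump to a random offset
        file.read(buf.data(), kBlock);                         // pull 4 KB from that spot
    }
    const auto end = std::chrono::steady_clock::now();
    const double ms = std::chrono::duration<double, std::milli>(end - start).count();
    std::cout << kReads << " random 4 KB reads took " << ms << " ms ("
              << ms / kReads << " ms per read)\n";
    return 0;
}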
Either way, we’re getting a change in culling next Tuesday so hopefully we’ll see improvements.
[Currently Inactive, Playing BF4]
Magic find works. http://sinasdf.imgur.com/
Just to be clear, my slight fix would be to make the game hold on to the players it displays for a short time, say 2 seconds, before they are 'culled' out for another player. This would mean that as you moved through a group they would display a little later; however, if you are in a constant 'culling' back-and-forth situation, you would see a much more stable display of players.
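Roughly what I mean, sketched in made-up C++ (invented names, nothing like the real client, just to illustrate the 2-second grace period):

// culling_grace.cpp -- illustrative only; every name here is made up.
#include <chrono>
#include <cstddef>
#include <string>
#include <unordered_map>

using Clock = std::chrono::steady_clock;

struct DisplayedCharacter {
    Clock::time_point lastReported;  // last time the server told us about this character
};

class CullingWithGrace {
public:
    // Called whenever the server reports a character near us.
    void onServerReport(const std::string& id) {
        displayed_[id].lastReported = Clock::now();
    }

    // Called every frame: only unload characters the server has not mentioned
    // for at least `grace`, instead of dropping them the instant they vanish.
    void update(std::chrono::seconds grace = std::chrono::seconds(2)) {
        const auto now = Clock::now();
        for (auto it = displayed_.begin(); it != displayed_.end();) {
            if (now - it->second.lastReported > grace) {
                it = displayed_.erase(it);  // only now drop the model and its textures
            } else {
                ++it;                       // still inside the grace window: keep it loaded
            }
        }
    }

    std::size_t count() const { return displayed_.size(); }

private:
    std::unordered_map<std::string, DisplayedCharacter> displayed_;
};

int main() {
    CullingWithGrace culler;
    culler.onServerReport("player_A");  // hypothetical id
    culler.update();                    // player_A stays displayed: within the grace window
    return 0;
}

The point is simply that a character only gets unloaded once the server has stopped reporting them for the whole grace window, so running back and forth across the same crowd would not churn textures every second.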
Either way, we’re getting a change in culling next Tuesday so hopefully we’ll see improvements.
There will be no change to the ‘culling’ at the dragon fights next Tuesday. It is only for WvW.
Either way, we’re getting a change in culling next Tuesday so hopefully we’ll see improvements.
The change next Tuesday is for WvW only; they're still working on the PvE side of things, but that will still be some time this year.
Here is a short YouTube video of me testing culling in PvE: http://youtu.be/BaDnlUFuQ04
It's part of the reason why I suggested an option for quaggans, here.
Honestly I think it's an issue with how the character models are stored (or not stored, as is the case) in RAM. Once my system has the model rendered it shouldn't be dumping it from vRAM and re-rendering unless I'm maxing out the RAM, which I'm not. There's something screwy with the way their code is working at the moment.
I would be interested if someone with a spinning HD could try what I did in that video and see if they also get huge CPU spikes or if the slow loading smooths it out a bit.
Honestly I think it's an issue with how the character models are stored (or not stored, as is the case) in RAM. Once my system has the model rendered it shouldn't be dumping it from vRAM and re-rendering unless I'm maxing out the RAM, which I'm not. There's something screwy with the way their code is working at the moment.
Well, first… the way this stuff is handled is server-side. The server determines who you should or should not have rendered. Secondly, if it didn't dump the information from your RAM when you don't need it, that would be called a memory leak. The game and your computer can't magically know when you run out of RAM and know what to dump at that point. It doesn't work that way.
Well, first… the way this stuff is handled is server-side. The server determines who you should or should not have rendered. Secondly, if it didn't dump the information from your RAM when you don't need it, that would be called a memory leak. The game and your computer can't magically know when you run out of RAM and know what to dump at that point. It doesn't work that way.
The culling discussed here is handled server side, but you are incorrectly using the term memory leak. (A memory leak is when a program allocates memory, no longer needs it, but does not properly return that memory to the system/memory manager. The system thinks it is in use, though it is not.)
There are many ways that programs can load and unload information from memory and many strategies for doing so. It is unlikely that players who you see one second but not the next are being dropped from RAM. Those textures are very likely still in RAM, or at least the majority of them.
If GW2 needs to drop textures from RAM within seconds of that character dropping from view, they have a very-poorly written graphics engine. I doubt that’s the case.
So I believe rizzo is correct here.
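To illustrate the terminology difference in plain C++ (a toy example, nothing to do with the actual client): a leak is memory you have lost track of; a cache is memory you deliberately keep around and can still free on demand.

// leak_vs_cache.cpp -- toy illustration of the terminology, not GW2 code.
#include <memory>
#include <string>
#include <unordered_map>
#include <vector>

std::unordered_map<std::string, std::shared_ptr<std::vector<char>>> g_textureCache;

void leakExample() {
    // Memory leak: allocated, then the only pointer to it is discarded, so it can never be freed.
    char* pixels = new char[1024 * 1024];
    (void)pixels;  // we "forget" to delete[] it -- that is the leak
}

void cacheExample(const std::string& name) {
    // Cache: allocated, kept on purpose, still owned and freeable whenever we choose.
    g_textureCache[name] = std::make_shared<std::vector<char>>(1024 * 1024);
    // Keeping this around after the character walks out of view is NOT a leak;
    // the engine can drop it whenever the memory is better used for something else.
}

int main() {
    leakExample();
    cacheExample("norn_heavy_armor_T3");  // made-up texture name
    g_textureCache.clear();               // cached memory is released cleanly
    return 0;
}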
www.getunicorned.com / northernshiverpeaks.org
Honestly I think it's an issue with how the character models are stored (or not stored, as is the case) in RAM. Once my system has the model rendered it shouldn't be dumping it from vRAM and re-rendering unless I'm maxing out the RAM, which I'm not. There's something screwy with the way their code is working at the moment.
Well, first… the way this stuff is handled is server-side. The server determines who you should or should not have rendered. Secondly, if it didn't dump the information from your RAM when you don't need it, that would be called a memory leak. The game and your computer can't magically know when you run out of RAM and know what to dump at that point. It doesn't work that way.
After 15 years in the business I’m perfectly aware of what a memory leak is, and what I’m talking about is not a memory leak.
Your OS absolutely does know when you're getting low and running out of RAM; it's called dynamic memory allocation and it has been a function of OSes for quite a long time. Windows 7 is actually really good at it, which is why a machine that is capable of running 7 runs it better than either Vista or XP.
There are many ways that programs can load and unload information from memory and many strategies for doing so. It is unlikely that players who you see one second but not the next are being dropped from RAM. Those textures are very likely still in RAM, or at least the majority of them.
Huh never thought of it from that angle. The server should be sending the command to display the model, but it’s not. Either way, something is screwy because I can definitely render more models than 15-20 at a time.
(edited by rizzo.1079)
You know, I played EQ, and whenever I entered the Guild Lobby with 250+ people sitting in the center of that tiny zone, it would lag extremely badly for 3-6 seconds, then the lag would go away and you would be able to see everyone. I never saw culling in EQ; GW2 is my first experience with it.
I’m not very in the know on tech stuff, but what does an SSD have to do with loading character models in an event?
There is some anomaly in the way GW2 is written that causes it to do a large number of (presumably) random read I/Os to disk when zoning in to a highly populated area. While the EULA clearly limits instrumenting the client to chase this down, I observe that replacing a traditional hard disk with a decent SSD radically improves zone-in performance (on the three people's computers I've done this on), as well as performance at certain other poorly characterized times.
I speculate it’s fetching graphics elements (textures) randomly from the disk.
TL;DR: Massive CPU use as the game loads different textures in and out of RAM when you move in a high-'culling' area.
I have an SSD and a 2600k at 4.5GHz. I was playing with core affinity while waiting for a dragon fight so I could see where the CPU load spikes were coming from. When I set the game to use just two cores and ran back and forth through the 50 or so people waiting for the fight, I could see the cores max out as the characters were loaded and unloaded by the culling system.
As I have an SSD this happens very fast, so people can appear and vanish very quickly. There seems to be no pause or throttle on this, so if you run in circles you generate a lot of loading and unloading in a short space of time.
I actually think the system needs some sort of persistence option added, so that it would choose 20 people in player range and not unload any of them unless the amount changes by a large number (rough sketch below).
With the client always trying to show the closest 20, it is caught in a cycle of loading different textures.
Either let it load all the textures (I guess this would not work for 32-bit users) or make it pick some of the people and stick with them, rather than the totally dynamic system we have now where, if you have a fast enough PC, you can 'cull' 10-20 people in and out each second, with all the loading that produces on the CPU.
When I have all cores on I do not see any slowdown on my PC, but when you run around a large group you can see the huge load this generates versus standing still.
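Here is the kind of 'sticky' selection I'm picturing, in made-up C++ (purely a sketch of the idea, not how the actual engine works): keep the current set of displayed players and only rebuild it when the nearby population has shifted by some threshold.

// sticky_culling.cpp -- hypothetical sketch of a "sticky" closest-N selection, not the real engine.
#include <algorithm>
#include <cstddef>
#include <set>
#include <vector>

struct Nearby {
    int id;
    float distance;
};

class StickyCuller {
public:
    StickyCuller(std::size_t limit, std::size_t rebuildThreshold)
        : limit_(limit), threshold_(rebuildThreshold) {}

    // `nearby` is everyone the server reports around us this tick.
    const std::set<int>& select(std::vector<Nearby> nearby) {
        // Only rebuild the displayed set when the nearby population has changed by
        // `threshold_` or more since the last rebuild; otherwise keep showing the same
        // people so their textures stay loaded.
        const std::size_t delta = nearby.size() > lastCount_ ? nearby.size() - lastCount_
                                                             : lastCount_ - nearby.size();
        if (shown_.empty() || delta >= threshold_) {
            std::sort(nearby.begin(), nearby.end(),
                      [](const Nearby& a, const Nearby& b) { return a.distance < b.distance; });
            shown_.clear();
            for (std::size_t i = 0; i < nearby.size() && i < limit_; ++i) {
                shown_.insert(nearby[i].id);
            }
            lastCount_ = nearby.size();
        }
        return shown_;
    }

private:
    std::size_t limit_;
    std::size_t threshold_;
    std::size_t lastCount_ = 0;
    std::set<int> shown_;
};

int main() {
    StickyCuller culler(20, 10);  // show up to 20 people, only reshuffle if the crowd shifts by 10+
    culler.select({{1, 5.0f}, {2, 7.0f}, {3, 2.0f}});             // first tick: build the set
    culler.select({{1, 6.0f}, {2, 4.0f}, {3, 3.0f}, {4, 9.0f}});  // small change: same set kept
    return 0;
}

With something like this, small movements inside a crowd would not reshuffle who is shown, so the same 20 sets of textures would stay loaded instead of constantly swapping.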
Has anyone tried this with a graphics card with enormous (say >2GB) memory? I speculate this is thrashing of the texture cache rather than just loading textures.
I do not observe this on computers with old graphics cards for which the game caps settings and renders with much lower quality.
Concur with your observation on SSD.
There are many ways that programs can load and unload information from memory and many strategies for doing so. It is unlikely that players who you see one second but not the next are being dropped from RAM. Those textures are very likely still in RAM, or at least the majority of them.
Huh never thought of it from that angle. The server should be sending the command to display the model, but it’s not. Either way, something is screwy because I can definitely render more models than 15-20 at a time.
That’s (probably) what’s happening. There are maybe 100 or 150 characters’ textures in RAM, but due to server-side culling, your client only knows the position of 50 of them. (Or however many.) The other 100 characters’ worth of textures are in memory, but the characters are not being drawn.
Now… it’s possible that the client is handling these textures in a VERY inefficient manner and dropping textures once the server stops reporting their position. This seems to be what the original post is suggesting. That would be a TERRIBLE design, but perhaps could happen if there is a large enough number of characters all in the same area.
There’s no way that we, as players, can easily test or verify this. CPU spikes don’t really tell us whether this is happening. CPU spikes plus HDD reads might tell us this, but there’s still a level of speculation.
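For a concrete picture of the 'textures stay resident, only the draw is culled' idea, here is a hypothetical sketch (invented names, not the actual engine): a culling report just toggles visibility, and eviction only happens if the store goes over a memory budget.

// resident_textures.cpp -- hypothetical sketch; not the actual GW2 renderer.
#include <cstddef>
#include <string>
#include <unordered_map>
#include <vector>

struct CharacterEntry {
    std::vector<char> textureData;  // stays allocated while the entry is resident
    bool visible = false;           // toggled by server culling reports
};

class CharacterStore {
public:
    // Server says "this character is near you": mark visible, load from disk only if needed.
    void show(const std::string& id) {
        CharacterEntry& entry = store_[id];
        if (entry.textureData.empty()) {
            entry.textureData.resize(4 * 1024 * 1024);  // pretend disk load, paid once
        }
        entry.visible = true;
    }

    // Server stops reporting them: hide the model, but DO NOT free the textures.
    void hide(const std::string& id) {
        auto it = store_.find(id);
        if (it != store_.end()) {
            it->second.visible = false;
        }
    }

    // Only under real memory pressure do we evict hidden entries.
    void evictHiddenIfOverBudget(std::size_t budgetBytes) {
        std::size_t used = 0;
        for (const auto& kv : store_) {
            used += kv.second.textureData.size();
        }
        for (auto it = store_.begin(); used > budgetBytes && it != store_.end();) {
            if (!it->second.visible) {
                used -= it->second.textureData.size();
                it = store_.erase(it);
            } else {
                ++it;
            }
        }
    }

private:
    std::unordered_map<std::string, CharacterEntry> store_;
};

int main() {
    CharacterStore store;
    store.show("zerg_member_42");  // hypothetical id: loads once
    store.hide("zerg_member_42");  // culled out: still in RAM
    store.show("zerg_member_42");  // reappears instantly, no disk hit
    return 0;
}

If the client works anything like this, then the CPU spikes people see would be coming from somewhere other than reloading textures; if it instead frees the entry the moment the server stops reporting the character, you get exactly the churn described in the original post.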
www.getunicorned.com / northernshiverpeaks.org
(edited by timmyf.1490)
You know, I played EQ, and whenever I entered the Guild Lobby with 250+ people sitting in the center of that tiny zone, it would lag extremely badly for 3-6 seconds, then the lag would go away and you would be able to see everyone. I never saw culling in EQ; GW2 is my first experience with it.
That's possibly a different problem, because EQ had to push information on all 250 players across the network, and then the engine had to load all 250 into its data structures. I studied DAoC in the same timeframe and long suspected that its design choice to use TCP for your own updates and UDP to tell you about the nearby characters would inevitably lead to TCP window collapse when you walked into a large crowd, putting one or more TCP timeouts on the critical path of you seeing the crowd you just ran into. On top of that, the purchased client engine DAoC used took so long to instantiate a mob/player/etc. that if hundreds of them were suddenly queued up, the keep-alive messages were delayed so long in the queues that you went LD. Silly design choice.
Huh never thought of it from that angle. The server should be sending the command to display the model, but it’s not. Either way, something is screwy because I can definitely render more models than 15-20 at a time.
That’s (probably) what’s happening. There are maybe 100 or 150 characters’ textures in RAM, but due to server-side culling, your client only knows the position of 50 of them. (Or however many.) The other 100 characters’ worth of textures are in memory, but the characters are not being drawn.
Now… it’s possible that the client is handling these textures in a VERY inefficient manner and dropping textures once the server stops reporting their position. This seems to be what the original post is suggesting. That would be a TERRIBLE design, but perhaps could happen if there is a large enough number of characters all in the same area.
There’s no way that we, as players, can easily test or verify this. CPU spikes don’t really tell us whether this is happening. CPU spikes plus HDD reads might tell us this, but there’s still a level of speculation.
I would think the server sends a command to instantiate a model, which causes a data structure to be built in the client. The client at that time pulls the details of the particular mob/player/object/etc from the disk, which includes its graphics. If the client is counting on the OS to cache the disk, and it isn’t, we would get this result.
I have spent $800 on solid state disks to keep 4 family members playing this game since beta started, much more than we have paid for our copies of the game and enough gems to max bank vaults and many characters’ backpack slots.
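Going back to the instantiate-from-disk point above, here is a crude sketch in made-up C++ (loadFromDisk and the file name are stand-ins, not anything from the real client): keep your own map from asset name to bytes, so the second time the same model is instantiated there is no disk read at all, regardless of what the OS disk cache is doing.

// asset_cache.cpp -- hypothetical illustration of an in-process asset cache; not the real client.
#include <fstream>
#include <iterator>
#include <memory>
#include <string>
#include <unordered_map>
#include <vector>

using Blob = std::vector<char>;

// Stand-in for however the client actually reads model/texture data off the disk.
std::shared_ptr<Blob> loadFromDisk(const std::string& path) {
    std::ifstream in(path, std::ios::binary);
    return std::make_shared<Blob>(std::istreambuf_iterator<char>(in),
                                  std::istreambuf_iterator<char>());
}

class AssetCache {
public:
    // Returns the cached copy if we have already read this asset; otherwise reads it once
    // and remembers it, so instantiating the same model twice costs a single disk read.
    std::shared_ptr<Blob> get(const std::string& path) {
        auto it = cache_.find(path);
        if (it != cache_.end()) {
            return it->second;  // memory hit: no seek, no read
        }
        auto blob = loadFromDisk(path);  // miss: one trip to the drive
        cache_[path] = blob;
        return blob;
    }

private:
    std::unordered_map<std::string, std::shared_ptr<Blob>> cache_;
};

int main() {
    AssetCache cache;
    cache.get("charr_medium_armor.bin");  // made-up file name: first use hits the disk
    cache.get("charr_medium_armor.bin");  // same model again: served from RAM
    return 0;
}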
Now… it’s possible that the client is handling these textures in a VERY inefficient manner and dropping textures once the server stops reporting their position. This seems to be what the original post is suggesting. That would be a TERRIBLE design, but perhaps could happen if there is a large enough number of characters all in the same area.
The playing around I've done with it has been with people waiting for meta events to start. It seems to render/display about 15-20 models that are right around me, plus any guild/party members that are on screen no matter their distance from me. If I move to a seemingly empty spot, the models that were previously rendered disappear and new models render after a few seconds. The same thing happens when I return to my original position, and the render takes just as long, which is why I think the client could be flushing the previous renders entirely. If it was storing them in vRAM, I would think they would appear as soon as I got close enough for the server to tell my machine to display them, rather than popping in all at once. I could very well be wrong though; I'm not an expert on video game code and vRAM, just a professional generalist.
This is on a decent video card with 1.5GB vRAM and a hybrid SATA drive (my main SSD is almost full at the moment).
I would think the server sends a command to instantiate a model, which causes a data structure to be built in the client. The client at that time pulls the details of the particular mob/player/object/etc from the disk, which includes its graphics. If the client is counting on the OS to cache the disk, and it isn’t, we would get this result.
Yeah. Exactly. I would think that Anet would have optimized the caching process to prevent this sort of thing from happening, but it makes me wonder… I mean, they didn't seem to think that culling would be as huge a problem as it has been, so maybe they didn't see the inefficiencies with texture management either?
But again, all speculation. If only Anet opened up a developer kit so we could help them test! :-D
www.getunicorned.com / northernshiverpeaks.org
Please watch http://youtu.be/BaDnlUFuQ04 and tell us whether the loading times when you move are the same as what I am getting in this video.
Now… it’s possible that the client is handling these textures in a VERY inefficient manner and dropping textures once the server stops reporting their position. This seems to be what the original post is suggesting. That would be a TERRIBLE design, but perhaps could happen if there is a large enough number of characters all in the same area.
The playing around I've done with it has been with people waiting for meta events to start. It seems to render/display about 15-20 models that are right around me, plus any guild/party members that are on screen no matter their distance from me. If I move to a seemingly empty spot, the models that were previously rendered disappear and new models render after a few seconds. The same thing happens when I return to my original position, and the render takes just as long, which is why I think the client could be flushing the previous renders entirely. If it was storing them in vRAM, I would think they would appear as soon as I got close enough for the server to tell my machine to display them, rather than popping in all at once. I could very well be wrong though; I'm not an expert on video game code and vRAM, just a professional generalist.
This is on a decent video card with 1.5GB vRAM and a hybrid SATA drive (my main SSD is almost full at the moment).
The video is accurate for me in PvE.
In WvW it's a stranger mix. I will get the same kind of instant appearance of people near me that you showed in the video, while if I look into the distance I'll see a player running around without a designation of enemy or friend.
My main drive is an SSD, the secondary is a hybrid drive, with 8GB RAM and dual ATI cards with 2GB vRAM.
From what I understand of the WvW patch that is coming, the devs are making the culling controllable via the client. Basically, if your machine can handle it, you can designate the amount of culling yourself.
Devs: Trait Challenge Issued
Yep that looks like what I’m getting. Same experience in WvW also.
I hope the OP is right, if the CPU load further increases with the next patch WvW will finally become completely unplayable for a lot of people.
They said they’re gonna remove culling.
Yes, I get the same as the video shows, but I use a standard SATA WD 1TB HDD; basically, people materialize and disappear more slowly.
I have a card with 2GB of vRAM and culling is still a problem in large fights. I also have 12GB of RAM and an SSD. Why can't we just take all the character textures and load them into RAM?
In case anyone has found a magical fix for the random frame drops in LA, I have this card:
http://www.newegg.com/Product/Product.aspx?Item=N82E16814130837
(edited by Meeooww.3742)
I have a card with 2GB of vRAM and culling is still a problem in large fights. I also have 12GB of RAM and an SSD. Why can't we just take all the character textures and load them into RAM?
In case anyone has found a magical fix for the random frame drops in LA, I have this card:
http://www.newegg.com/Product/Product.aspx?Item=N82E16814130837
The GW2 client is CPU-hungry, not GPU-hungry. But congrats, you will have access to supersampling.