Is GW2 Performance Limited at Source?
It’s limited by the game, it has a horrible performance regardless of how beastly you make your machine.
@Sicarius – At this point, I’m just trying to figure out specifically what the problem is. Is it the engine? I wouldn’t think so (to some extent) because I get decent performance when others are not around. My hypothesis is that it may be a limitation on the client servers. And, if that is the case, couldn’t Anet beef their servers up a bit (or loosen a “per player bandwidth cap”).
It’s limited by the game, it has a horrible performance regardless of how beastly you make your machine.
In what way does it have terrible performance? The gfx are beautiful; I personally get 40-60 fps on full settings on a 3-year-old rig. The only perf issue is WvW, and even then it's still smooth. Try playing ESO and you will see what an average engine's performance is like.
Re the problem: imagine the calculations involved when even 50 people group together spamming AoE that hits each other and interacts through fields, then add player movement. That's a kitten ton of processing that has to happen on the server to prevent exploitation.
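To put rough numbers on that, here is a back-of-the-envelope sketch in Python. All of the figures (AoE counts, tick rate) are invented for illustration, not ArenaNet's actual values; the point is only the quadratic growth:

```python
# Back-of-the-envelope sketch: if each of N players drops a couple of AoEs
# and every AoE pulse must be checked against every other player each server
# tick, the work grows roughly with N squared. All figures are invented.

def interactions_per_second(players: int, aoes_per_player: int = 2,
                            ticks_per_second: int = 25) -> int:
    """Pairwise AoE-vs-player checks the server resolves each second."""
    aoes = players * aoes_per_player
    checks_per_tick = aoes * (players - 1)  # each AoE vs every other player
    return checks_per_tick * ticks_per_second

for n in (10, 50, 200):
    print(f"{n:>3} players -> {interactions_per_second(n):>9,} checks/s")
# 10 players ->     4,500 checks/s
# 50 players ->   122,500 checks/s
# 200 players -> 1,990,000 checks/s
```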
“Trying to please everyone would not only be challenging
but would also result in a product that might not satisfy anyone”- Roman Pichler, Strategize
(edited by vesica tempestas.1563)
@vesica – That's really my point. I get 35-60 fps too, old or new build. If my CPUs were at 100%, or my GPUs were at 100%, I could understand why I do not stay at 60 fps. But no matter where I am, and no matter how empty or crowded the area is, my CPUs/GPUs are never breaking a sweat, yet my FPS drops off.
You edited as I replied. Yes, that’s my point again, are we limited at the client server end?
You edited as I replied. Yes, that’s my point again, are we limited at the client server end?
I believe so, and ultimately the limit is hardware at this time.
“Trying to please everyone would not only be challenging
but would also result in a product that might not satisfy anyone”- Roman Pichler, Strategize
@Devs/Anet/Gaile, do you have any input?
You edited as I replied. Yes, that’s my point again, are we limited at the client server end?
Somewhat. In circumstances where you aren't bottlenecked by network communication (large-scale zerg content), I believe the primary bottleneck is the CPU, not the GPU. IIRC, GW2's core threads can't be spread across multiple cores, so having a quad- to hexa-core processor doesn't net any boost in performance unless the single-core performance of the given processor is higher. Coincidentally, that's also why AMD CPUs can suck balls for GW2.
There is barely any multi-core support, and the game is CPU-heavy, so most of your CPU is hardly going to be used; the speed and six cores do not matter so much. A Pentium D would probably run the game fine.
You could try unlimited FPS depending on your monitor.
@Aidan and John – My cores never went above 40% load. Not one of them, not even the most loaded core. So, even if it is single threaded, I was still limited somewhere else.
There seems to be an odd bug with newer Nvidia drivers, GW2, and VSync. Have you tried limiting your max FPS to 60 and disabling VSync? It worked wonders on both our gaming machines here; we don't see the big dip in FPS at all now.
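For illustration, the frame-capping idea reduces to a loop like this minimal Python sketch; `render_frame` is a hypothetical stand-in for the game's render call, and the 60 fps target comes from the post above:

```python
import time

TARGET_FPS = 60
FRAME_BUDGET = 1.0 / TARGET_FPS  # ~16.7 ms per frame

def run_capped(render_frame):
    """Render, then sleep off whatever is left of the frame budget.

    Unlike VSync, this never blocks inside the display driver, which is why
    an external or in-game FPS cap can sidestep a driver VSync bug.
    """
    while True:
        start = time.perf_counter()
        render_frame()
        elapsed = time.perf_counter() - start
        if elapsed < FRAME_BUDGET:
            time.sleep(FRAME_BUDGET - elapsed)
```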
Hi all,
I am a long-time GW player and have been playing GW2 since roll-out. Over the years I have upgraded my PC components, and I had yet to see a significant jump in FPS performance. Two months ago I built a new machine with an i7-5930 (6 cores). I watercooled the machine and it's overclocked to 4.2 GHz (Prime95 stable). I also have two GTX 980 Tis (also watercooled and overclocked). I have 32 GB of DDR4 memory. My internet is Comcast Blast, and I have 90 Mbps speed.
All that said, I still have not seen a significant change in FPS performance. When I run monitoring programs in the background, I can see my cores never go over 40-45% load, and my two video cards are the same, never even half loaded. I’m running 3 screens (5760×1080), and if I’m in an area with no other players, I get 60 fps (I have Vsync on). If I move to a populated area, I can drop down to 35 fps. This is no different than my prior build, with one screen, or with 3 screens.
I’m wondering at this point if performance is somehow limited by the client server. Does anyone have any insight/input on this?
(Edited to correct my screen resolution)
I have a 4K monitor, which is like having four 1080p monitors, and a GTX 970, and I get roughly the same performance you do.
The GW2 engine is a modified GW1 engine; imagine how old GW1 is.
The game is not multi-core optimized, so it will never use more than two full cores' worth of performance, because games of that era limited themselves to two threads; as for why, google it.
The funny thing is that it is a DX9 game, and I can run a DX11 game with awesome graphics and many NPCs in the background at a much better FPS. If you set GW2 to higher graphics settings, your FPS will suffer, and not because your graphics card cannot handle those graphics; it is simply because of how much CPU work is needed just to feed them.
Henge of Denravi Server
www.gw2time.com
(edited by SkyShroud.2865)
You edited as I replied. Yes, that’s my point again, are we limited at the client server end?
I believe so, and ultimately the limit is hardware at this time.
Not true at all. The problem is solely the way the engine runs, and the CPU-bound nature of MMO games. There is room for optimization, but not to any extreme degree.
Depending on how the code is written, the speed of the machine can be completely irrelevant. I can write a simple recursive program that performs a very basic task and takes hours to complete, versus microseconds in other implementations. Some algorithms can't be rewritten accordingly without major complexity increases (harder to debug and much costlier to implement correctly).
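The classic illustration of that point is naive recursion versus memoization. A small runnable Python example, with Fibonacci standing in for "a very basic task":

```python
import time
from functools import lru_cache

def fib_naive(n: int) -> int:
    # Exponential time: recomputes the same subproblems over and over.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n: int) -> int:
    # Same task, linear time: each subproblem is computed exactly once.
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

for fn in (fib_naive, fib_memo):
    start = time.perf_counter()
    fn(32)
    print(fn.__name__, f"{time.perf_counter() - start:.4f}s")
# The naive version is already orders of magnitude slower at n=32; push n
# higher and no amount of CPU speed rescues it.
```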
As for the OP: more cores will not achieve anything in terms of performance. GW2 could run at almost identical speeds on an OCed Pentium 4/Core 2 machine, given that the rest of the hardware was configured properly. DX9 doesn't support multi-threaded rendering, so five of those cores are not being taken advantage of, and even then, multi-threaded development can cause slowdowns depending on how the code is written; it is a relatively new and very complex task.
https://forum-en.gw2archive.eu/forum/professions/thief/ES-Suggestion-The-Deadeye-FORMAL/
@Aidan and John – My cores never went above 40% load. Not one of them, not even the most loaded core. So, even if it is single threaded, I was still limited somewhere else.
IIRC, with multi-core CPUs, you'll never see a single core cap out at 100% usage unless all the cores do.
edit: the OS scheduler shunts processes/threads between cores to balance the load on any one core
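A quick way to see that smearing effect yourself is to sample per-core load while the game runs. A small sketch, assuming the third-party psutil package is installed:

```python
import time

import psutil  # third-party: pip install psutil

# Sample per-core load for a few seconds while the game runs. A single
# saturated game thread that migrates between cores tends to show up as
# moderate load on several cores rather than 100% on one core, which is
# consistent with "no core above 40%" even when the game is CPU-bound.
for _ in range(5):
    per_core = psutil.cpu_percent(interval=1.0, percpu=True)
    print(" ".join(f"{p:5.1f}" for p in per_core),
          f"| busiest core: {max(per_core):.1f}%")
```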
Thank you all for your responses so far; there is a lot of great information here. I would still like to hear some feedback from the developers if they get a chance in the next couple of days. Thanks again.
The game engine isn't designed with more than two primary threads and a handful of minor (in terms of CPU usage) ones. Since no single thread can use more than one core, the only way more than three or four real cores can help is when you are running other CPU-intensive activities at the same time as the game.
As for multiple video cards: the reason that works in some games is that the workload on the video card is so high that the CPU can start sending another frame's work to the second card without needing to wait for the first card to finish its frame. This game's engine isn't waiting on the video card that often; rather, the reverse is true: the video card is waiting on the CPU to send it work to do. I'm guessing that if the OP pulls one of his 980 Tis there would be virtually no change in frame rates, because the video card's processing speed isn't the issue.
But this isn't news. We've been discussing this since launch, admittedly more and more as hardware improves, but it's a known issue. Tiny God gaming systems are severe overkill for this game. The game engine isn't designed to scale much beyond its minimum two-core requirement.
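That scaling limit is just Amdahl's law. A small sketch; the parallel fraction P is an illustrative guess, not a measured value for GW2's engine:

```python
# Amdahl's law: if only a fraction P of the frame work can run in parallel
# (here, on the second thread), total speedup is capped no matter how many
# cores you add. P = 0.5 is an illustrative guess, not a measured value.

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

P = 0.5
for cores in (1, 2, 4, 6):
    print(f"{cores} cores -> {amdahl_speedup(P, cores):.2f}x")
# 1 cores -> 1.00x
# 2 cores -> 1.33x
# 4 cores -> 1.60x
# 6 cores -> 1.71x  (and never more than 2.00x, even with infinite cores)
```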
RIP City of Heroes
Though my 3930K with two GTX 780s (32 GB DDR3) is a bit outdated nowadays, I am looking at the same performance.
This is mostly because the CPU is the main bottleneck.
Hence most people use i5s clocked into the red zone, as i5s are easier to OC compared to i7s. One GTX 670 would be plenty to run GW2; to be honest, people with earlier series already get the same performance. I use my hex-core to simultaneously watch streams/videos/movies and read pages on the web while playing (yes, Dulfy and others), while running programs for normal work off to the side in a corner of the 32 GB. The PC is a multitasking beast, and just focusing on GW2, with its 2-3 thread usage and the few KB it needs each second, wastes about 60-85% of your PC's capabilities.
Been There, Done That & Will do it again…except maybe world completion.
Before somebody starts the old DX discussion, let me say this: at the moment I play
a game that runs on Unreal Engine with DX11, and I have now had it happen three times
that my FPS went below 1 in some boss fights.
Best MMOs are the ones that never make it. Therefore Stargate Online wins.
You edited as I replied. Yes, that’s my point again, are we limited at the client server end?
I believe so, and ultimately the limit is hardware at this time.
Not true at all. The problem is solely the way the engine runs, and the CPU-bound nature of MMO games. There is room for optimization, but not to any extreme degree.
Depending on how the code is written, the speed of the machine can be completely irrelevant. I can write a simple recursive program that performs a very basic task and takes hours to complete, versus microseconds in other implementations. Some algorithms can't be rewritten accordingly without major complexity increases (harder to debug and much costlier to implement correctly).
As for the OP: more cores will not achieve anything in terms of performance. GW2 could run at almost identical speeds on an OCed Pentium 4/Core 2 machine, given that the rest of the hardware was configured properly. DX9 doesn't support multi-threaded rendering, so five of those cores are not being taken advantage of, and even then, multi-threaded development can cause slowdowns depending on how the code is written; it is a relatively new and very complex task.
Performance issues are usually related to network code, IO, messaging, and queries. I don't really think the CPU and GFX card are the answer, because the real bottlenecks are in cities and in WvW, and we can see that people can get 60+ fps with no issues elsewhere. When I refer to hardware, I'm talking about these factors and the server-side processing where lots of players are interacting.
“Trying to please everyone would not only be challenging
but would also result in a product that might not satisfy anyone”- Roman Pichler, Strategize
(edited by vesica tempestas.1563)
From my experience, I tend to think that vesica is correct and we are limited somehow by how quickly the client server can send data. Again, none of my cores are even close to being loaded, and I’m almost certain that if you ran this game on a fast single core processor, you would see the same.
Fast single core matters.
I have an AMD and an Intel machine. The Intel machine works better than the AMD. AMD is known to have slower single-core performance.
Network messages shouldn't contain graphical code; they simply contain action data, which your computer processes to display the appropriate things on your screen using the graphics data installed on your hard drive. If the network is slow, you merely get a slow response to your actions; people might be teleporting, walking into walls, etc. It shouldn't affect your FPS.
Again, GW2 is made using a modified GW1 engine; it is not multi-threaded like the newest games. At most, it is only partially parallelized.
Henge of Denravi Server
www.gw2time.com
(edited by SkyShroud.2865)
No, the game is limited by the game engine rendering on your computer. That's why a guy with an overclocked Intel can get way better performance than a guy with an AMD FX.
If it were a network issue, everyone would see the same performance regardless of specs.
But I can confirm that a faster core frequency, faster RAM speed with lower latencies, and more PCIe bandwidth improve performance.
The only thing related to network issues is lag in situations with a massive amount of data going on, like three blobs fighting in EB or the laser event in the Desert Borderlands.
i7 5775c @ 4.1GHz – 12GB RAM @ 2400MHz – RX 480 @ 1390/2140MHz
No what? I was referring to network issues in WvW and cities; you just repeated what I said. On your local PC, if you're not CPU or GPU capped or running out of memory, then that's not the problem (which is what the OP was referring to).
“Trying to please everyone would not only be challenging
but would also result in a product that might not satisfy anyone”- Roman Pichler, Strategize
@Aidan Savage.2078
‘’Coincidentally, that’s also why AMD cpus can suck balls for GW2.’’
Hmm, an AMD FX 6350 can go over 150 fps, and that's on a GTX 950. It seems that my processor does not share your opinion.
(edited by Vienna.1579)
Well this video:
https://www.youtube.com/watch?v=0c-0pmSxiEE
It shows that 980 Ti SLI is capable of running GW2 at 4K resolution at 60 fps. It's probably a problem with temperature, the PSU, or the GPU. The problem may also be the triple monitors, as optimization for running a game across multiple monitors is hard to come by and a very niche scenario.
No what? I was referring to network issues in WvW and cities; you just repeated what I said. On your local PC, if you're not CPU or GPU capped or running out of memory, then that's not the problem (which is what the OP was referring to).
I said network issues relate only to experienced lag and ping, not framerate. Framerate is determined by your computer's specs.
PS: It is impossible not to be CPU-capped somewhere in this game. No such CPU exists.
i7 5775c @ 4.1GHz – 12GB RAM @ 2400MHz – RX 480 @ 1390/2140MHz
(edited by Ansau.7326)
An overclocked Intel will not see a significant gain in performance. On the other hand, an overclocked AMD will see a significant performance gain. Like I have said, Intel is better than AMD in terms of single-core performance; however, that doesn't mean its literal "speed" (clock rate) is higher.
Intel has a higher IPC than AMD, so if the current Intel chip is capable of processing all of GW2's data, any further increase in speed yields little gain. AMD, on the other hand, gets a significant gain from overclocking because data processing completes faster, which reduces the waiting time for the other data. To put it in layman's terms: Intel has more service counters per center than AMD, so Intel has more than enough counters to handle the X customers GW2 delivers in one day, but AMD has to increase its speed to handle the same number of customers with its limited counters.
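The counter analogy boils down to throughput being roughly IPC times clock. A toy calculation with invented IPC figures (not measurements of any real chip) showing why the same overclock can help the low-IPC chip more:

```python
# Effective single-thread throughput is roughly IPC x clock. The IPC values
# below are invented for illustration, not measurements of any real chip.

def throughput(ipc: float, ghz: float) -> float:
    return ipc * ghz  # billions of instructions per second, one thread

demand      = 5.0                           # hypothetical per-frame CPU demand
intel_stock = throughput(ipc=2.0, ghz=3.5)  # 7.0 - already above demand
amd_stock   = throughput(ipc=1.2, ghz=4.0)  # 4.8 - just below demand
amd_oc      = throughput(ipc=1.2, ghz=4.8)  # 5.8 - overclock crosses it

for name, t in [("intel stock", intel_stock), ("amd stock", amd_stock),
                ("amd +20% OC", amd_oc)]:
    print(f"{name}: {t:.1f} ({'enough' if t >= demand else 'short'})")
# The AMD overclock crosses the threshold, so it shows a big FPS gain; the
# Intel chip was never the limiter, so its overclock shows little.
```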
Henge of Denravi Server
www.gw2time.com
@Aidan Savage.2078
‘’Coincidentally, that’s also why AMD cpus can suck balls for GW2.’’
Hmm, an AMD FX 6350 can go over 150 fps, and that's on a GTX 950. It seems that my processor does not share your opinion.
That's a bit of a "l2read" moment right there. You do realize I said "CAN suck balls," right?
An overclocked Intel will not see a significant gain in performance.
Intel has a higher IPC than AMD, so if the current Intel chip is capable of processing all of GW2's data, any further increase in speed yields little gain.
These statements are false.
i7 5775c @ 4.1GHz – 12GB RAM @ 2400MHz – RX 480 @ 1390/2140MHz
(edited by Ansau.7326)
An overclocked Intel will not see a significant gain in performance.
Intel has a higher IPC than AMD, so if the current Intel chip is capable of processing all of GW2's data, any further increase in speed yields little gain.
These statements are false.
I have an Intel and an AMD machine. Of course, just saying that without any established reviewer's benchmark is pointless, so here is the link:
http://www.tomshardware.com/reviews/guild-wars-2-performance-benchmark,3268-7.html
A thread in the tech forums, from a simple one-minute google:
http://www.pcguide.com/vb/showthread.php?90486-Intel-vs-AMD-(CPU)-in-2015
Henge of Denravi Server
www.gw2time.com
(edited by SkyShroud.2865)
I have an Intel and an AMD machine. Of course, just saying that without any established reviewer's benchmark is pointless, so here is the link:
http://www.tomshardware.com/reviews/guild-wars-2-performance-benchmark,3268-7.html
A thread in the tech forums, from a simple one-minute google:
http://www.pcguide.com/vb/showthread.php?90486-Intel-vs-AMD-(CPU)-in-2015
A test done in 2012, when the game was not even released worldwide, doesn't say anything, especially since back then the game had yet to see quite an amount of optimization improvements. Even more so when the test was done in a PvE map, the Norn starting map. This test is invalid.
Then you show a general basic guide telling us even less. As if I could take it seriously when it tells us an i3 is just a tad better than an FX 4000, when in reality an i3 can smash an FX 8000 in several games.
Another example is PCIe bandwidth, where several reviews show it doesn't matter much, and yet in GW2 it is crucial.
Nothing found on the internet can be extrapolated to GW2 unless it is tested specifically in the game with proper methods.
i7 5775c @ 4.1GHz – 12GB RAM @ 2400MHz – RX 480 @ 1390/2140MHz
(edited by Ansau.7326)
Well this video:
https://www.youtube.com/watch?v=0c-0pmSxiEE
It shows that 980 Ti SLI is capable of running GW2 at 4K resolution at 60 fps. It's probably a problem with temperature, the PSU, or the GPU. The problem may also be the triple monitors, as optimization for running a game across multiple monitors is hard to come by and a very niche scenario.
I can get 60+ fps too. The video shows the player playing with nobody else around. When I play with nobody else around, I get 60+ fps. It drops off the table, though, in populated areas, like the Mists lobby. Also, I have the same drop-off whether using one or three monitors. Again, that's what makes me think it is a client-server limitation.
Intel does have a higher IPC per core than AMD, nearly two to one actually. The clock speed difference closes the gap slightly for AMD. Intel can simply execute the main processing loop more frequently than AMD, barring stalls on hardware like asset loading from drives or a slow GPU.
And SkyShroud's analogy is a variation of the bank queue analogy, used here to describe the difference in pipelines between the two CPU architectures. I usually use that one for thread-to-core assignment by an OS.
Basically, Intel's cores are over-designed, which allows more opportunities to process multiple instructions at the same time; so much so that that's where Hyper-Threading comes in, to improve the number of opportunities to do that. The Bulldozer integer core was designed with significantly fewer resources for multiple-instruction opportunities, but the trade-off was that they could fit more of these cores in the same size sliver of silicon. Great if your system has loads of threads to run at the same time, not so good if you only have a few.
RIP City of Heroes
(edited by Behellagh.1468)
I have an Intel and an AMD machine. Of course, just saying that without any established reviewer's benchmark is pointless, so here is the link:
http://www.tomshardware.com/reviews/guild-wars-2-performance-benchmark,3268-7.html
A thread in the tech forums, from a simple one-minute google:
http://www.pcguide.com/vb/showthread.php?90486-Intel-vs-AMD-(CPU)-in-2015
A test done in 2012, when the game was not even released worldwide, doesn't say anything, especially since back then the game had yet to see quite an amount of optimization improvements. Even more so when the test was done in a PvE map, the Norn starting map. This test is invalid.
Then you show a general basic guide telling us even less. As if I could take it seriously when it tells us an i3 is just a tad better than an FX 4000, when in reality an i3 can smash an FX 8000 in several games.
Another example is PCIe bandwidth, where several reviews show it doesn't matter much, and yet in GW2 it is crucial.
Nothing found on the internet can be extrapolated to GW2 unless it is tested specifically in the game with proper methods.
There was a guy in another thread who used the same argument as yours, citing a dated article without any substantial technical arguments. Allow me to tell you why the article continues to be relevant despite being dated. Even if Anet did any form of optimization to the engine, that optimization does not exceed the two-core limitation. As long as that optimization does not exceed the two-core limitation, performance will continue to be tied to single-core performance, and as long as it is tied to single-core performance, IPC will continue to be relevant. Likewise, since 2012 there haven't been any major changes to CPU architecture, so all in all, the article continues to be relevant.
You further claim that PCIe (you didn't state which version) is crucial to GW2; if you are comparing PCIe 1.0 to 4.0, I really don't know what to say.
Regardless, even if you lack the technical understanding, the forum link I provided already explains what IPC is and how it matters to gaming performance, but I guess you didn't bother reading it while continuing to insist that overclocking an Intel processor will result in a greater gain than AMD's.
Don’t embarrass yourself further.
Henge of Denravi Server
www.gw2time.com
Children pretending to know about tech is so hilarious to see XD
But I don't think tech has anything to do with this. The GW2 engine is just ancient crap salvaged from the good old GW1 era; it's bound to go wrong someday. Anet is just making the best of it; adding duct tape to things usually does the job :')
You can’t separate software from hardware when you are talking performance. They go hand in hand. But the only thing a player has control over is their hardware so why not choose the hardware that runs the client the best?
RIP City of Heroes
If I might be honest here: you should be appreciative of what you've got. I play on a laptop with an HD 4000M / i7 and 8 GB; I'd give anything for +1 frame above 10.
There was a guy in another thread who used the same argument as yours, citing a dated article without any substantial technical arguments. Allow me to tell you why the article continues to be relevant despite being dated. Even if Anet did any form of optimization to the engine, that optimization does not exceed the two-core limitation. As long as that optimization does not exceed the two-core limitation, performance will continue to be tied to single-core performance, and as long as it is tied to single-core performance, IPC will continue to be relevant. Likewise, since 2012 there haven't been any major changes to CPU architecture, so all in all, the article continues to be relevant.
You further claim that PCIe (you didn't state which version) is crucial to GW2; if you are comparing PCIe 1.0 to 4.0, I really don't know what to say.
Regardless, even if you lack the technical understanding, the forum link I provided already explains what IPC is and how it matters to gaming performance, but I guess you didn't bother reading it while continuing to insist that overclocking an Intel processor will result in a greater gain than AMD's.
Don’t embarrass yourself further.
And here is where you fail miserably.
The two-core limitation is not true. The game can stress all cores of a 4690K OCed to 4.5 GHz to 70-90%.
And yes, since 2012 Anet has done some optimization, as Johan said here (https://www.reddit.com/r/Guildwars2/comments/3ajnso/bad_optimalization_in_gw2/csdnn3n). They try to take things off the main thread, and in three years they've done some work, which makes that test irrelevant, as back then a lot more was going on the main thread than now.
About PCIe speed: no, you don't have to go down to PCIe 1.0. At 3.0 there's still an improvement from x8 to x16, and moving to 3.0 x4 there's a huge drop in performance.
I did some tests last year:
- x16 had 110 fps underwater, 70s in an empty map, 50s in a heavier graphics situation, and 23-24 in a heavy CPU situation.
- x8 had 100 fps underwater, 60s in an empty map, 46 in a heavier graphics situation, and 22-23 fps in a heavy CPU situation.
- x4 had 77 fps underwater, 45s in an empty map, 36 in a heavier graphics situation, and 20-21 in a heavy CPU situation.
Last week I experienced the effect of PCIe speed again. I took the GPU off to clean it and didn't seat it right; I opened GW2 and was getting 30-35 fps in LA. I looked at the PCIe link and it was running at 3.0 x4. After reseating the GPU I get 50-55 fps at x16.
The funny thing is all your arguments are based on tests showing nothing relevant. Really, a general guide talking about how Intel is better at IPC than AMD? Isn't that obvious? Why does saying Intel has better IPC than AMD make overclocking on Intel irrelevant?
I told you that GW2 performance test was not valid and I gave you two solid arguments (lack of optimization at the time, and a test done in a PvE map where the CPU is not stressed). Then you tell me about no architecture improvements, when from Sandy Bridge to Skylake there's about a 25% improvement in IPC, plus faster RAM speeds and lower overall latencies.
Still, you haven't said anything worthwhile to prove your points or to invalidate mine.
Here you have more proof of why you're wrong.
I did some tests last year. I'll show you how core speed affects the framerate in GW2 while in the Citadel. On the left are all the tests of a 4690K at 4.4 GHz, and on the right at 3.6 GHz. A 22% increase in core frequency gives me 12-18% more framerate. And this is without an optimized system, as the RAM was at 1600, PCIe at x8, and the GPU at stock.
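For what it's worth, the efficiency implied by those numbers can be computed directly. A small sketch; the FPS pairs are invented to match the reported 12-18% gains:

```python
# Scaling-efficiency check on the numbers above: a 3.6 -> 4.4 GHz overclock
# is +22%; how much of it shows up as FPS? The FPS pairs below are invented
# to match the reported 12-18% gains.

def scaling_efficiency(fps_base, fps_oc, ghz_base, ghz_oc):
    fps_gain = fps_oc / fps_base - 1.0
    clock_gain = ghz_oc / ghz_base - 1.0
    return fps_gain / clock_gain

for base, oc in ((50.0, 56.0), (50.0, 59.0)):  # +12% and +18% FPS
    eff = scaling_efficiency(base, oc, 3.6, 4.4)
    print(f"{base:.0f} -> {oc:.0f} fps: {eff:.0%} of the clock gain realized")
# 50 -> 56 fps: 54% of the clock gain realized
# 50 -> 59 fps: 81% of the clock gain realized  -> largely CPU-bound
```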
i7 5775c @ 4.1GHz – 12GB RAM @ 2400MHz – RX 480 @ 1390/2140MHz
Well this video:
https://www.youtube.com/watch?v=0c-0pmSxiEE
It shows that 980 Ti SLI is capable of running GW2 at 4K resolution at 60 fps. It's probably a problem with temperature, the PSU, or the GPU. The problem may also be the triple monitors, as optimization for running a game across multiple monitors is hard to come by and a very niche scenario.
I can get 60+ fps too. The video shows the player playing with nobody else around. When I play with nobody else around, I get 60+ fps. It drops off the table, though, in populated areas, like the Mists lobby. Also, I have the same drop-off whether using one or three monitors. Again, that's what makes me think it is a client-server limitation.
I have the same experience. I have a range of PCs with a wide variety of GPUs/CPUs/APUs.
It is clear to me that in terms of GW2 you can have a fairly low-spec CPU and retain playability; having a very high-spec CPU will not protect you from SUDDEN drops in fps under certain conditions in GW2 – PvE boss fights being the most obvious.
Given that the server has limited processing power per time cycle, and that the load depends on the number of peeps requiring processing, the most obvious way of dealing with saturated loading is to reduce the rate of information transmitted per peep. This would have no effect on the quality of each frame displayed by the client, but it would reduce the frame rate the client processes. Very few people can notice the difference between 30 fps and 60 fps unless there is a little number on the screen telling them otherwise – however, everyone will notice differences in single-frame quality (although some aspects are certainly not worth it imo, given the increase in wattage required!).
The alternative to this approach is to DS peeps (the old-style method). GW2 – aside from the known RAM problem (3 GB anyone?) – very rarely DSs peeps. Additionally, I have never seen any evidence of rubber-banding in GW2. For an MMO game, GW2 is extremely stable.
You tend not to see the server doing this, as often there are other bottlenecks on the client that kick in in highly populated areas – although in general I would say these are minor compared to the effect of varying the population. Clearly, the lower your ‘normal’ fps is, the more likely you are to run into client bottlenecks under such circumstances.
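A sketch of that load-shedding idea, purely illustrative: the budget and rates below are invented, and this is in no way ArenaNet's actual server code.

```python
# Illustrative sketch: when a map instance saturates, lower each client's
# update rate instead of disconnecting anyone.

def updates_per_client(players: int, budget: int = 50_000,
                       max_rate: int = 25) -> int:
    """Per-client update rate, throttled once total fan-out exceeds budget.

    Every player's state must be fanned out to every other player, so the
    total update count grows roughly with players squared.
    """
    fanout = players * max(players - 1, 1)
    if fanout * max_rate <= budget:
        return max_rate
    return max(1, budget // fanout)

for n in (10, 50, 150):
    print(f"{n:>3} players -> {updates_per_client(n)} updates/s per client")
# 10 players -> 25 updates/s per client  (full rate)
# 50 players -> 20 updates/s per client  (mild throttling)
# 150 players -> 2 updates/s per client  (heavy throttling: feels like low fps)
```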
Regarding the Intel/AMD core arguments, I would in general say that GW2 has a fairly low single/dual-core requirement – in terms of recent low-wattage processors – and that very recent GPU cards don't add all that much increased prettiness (for the wattage, noise, and cost). As such, the AMD APUs are very impressive when playing GW2, and hard to beat on a low budget for casual gaming, imo.
For example, an A8 7600 will on its own give you >30 fps ‘normal’ playability in GW2 for £45 to £70 – and you won't lose much – about 10% – if you toggle it down to 45-watt operation.
(edited by lilypop.7819)
There was a guy in another thread who used the same argument as yours, citing a dated article without any substantial technical arguments. Allow me to tell you why the article continues to be relevant despite being dated. Even if Anet did any form of optimization to the engine, that optimization does not exceed the two-core limitation. As long as that optimization does not exceed the two-core limitation, performance will continue to be tied to single-core performance, and as long as it is tied to single-core performance, IPC will continue to be relevant. Likewise, since 2012 there haven't been any major changes to CPU architecture, so all in all, the article continues to be relevant.
You further claim that PCIe (you didn't state which version) is crucial to GW2; if you are comparing PCIe 1.0 to 4.0, I really don't know what to say.
Regardless, even if you lack the technical understanding, the forum link I provided already explains what IPC is and how it matters to gaming performance, but I guess you didn't bother reading it while continuing to insist that overclocking an Intel processor will result in a greater gain than AMD's.
Don’t embarrass yourself further.
And here is where you fail miserably.
The two-core limitation is not true. The game can stress all cores of a 4690K OCed to 4.5 GHz to 70-90%.
And yes, since 2012 Anet has done some optimization, as Johan said here (https://www.reddit.com/r/Guildwars2/comments/3ajnso/bad_optimalization_in_gw2/csdnn3n). They try to take things off the main thread, and in three years they've done some work, which makes that test irrelevant, as back then a lot more was going on the main thread than now.
About PCIe speed: no, you don't have to go down to PCIe 1.0. At 3.0 there's still an improvement from x8 to x16, and moving to 3.0 x4 there's a huge drop in performance.
I did some tests last year:
- x16 had 110 fps underwater, 70s in an empty map, 50s in a heavier graphics situation, and 23-24 in a heavy CPU situation.
- x8 had 100 fps underwater, 60s in an empty map, 46 in a heavier graphics situation, and 22-23 fps in a heavy CPU situation.
- x4 had 77 fps underwater, 45s in an empty map, 36 in a heavier graphics situation, and 20-21 in a heavy CPU situation.
Last week I experienced the effect of PCIe speed again. I took the GPU off to clean it and didn't seat it right; I opened GW2 and was getting 30-35 fps in LA. I looked at the PCIe link and it was running at 3.0 x4. After reseating the GPU I get 50-55 fps at x16.
The funny thing is all your arguments are based on tests showing nothing relevant. Really, a general guide talking about how Intel is better at IPC than AMD? Isn't that obvious? Why does saying Intel has better IPC than AMD make overclocking on Intel irrelevant?
I told you that GW2 performance test was not valid and I gave you two solid arguments (lack of optimization at the time, and a test done in a PvE map where the CPU is not stressed). Then you tell me about no architecture improvements, when from Sandy Bridge to Skylake there's about a 25% improvement in IPC, plus faster RAM speeds and lower overall latencies.
Still, you haven't said anything worthwhile to prove your points or to invalidate mine.
Here you have more proof of why you're wrong.
I did some tests last year. I'll show you how core speed affects the framerate in GW2 while in the Citadel. On the left are all the tests of a 4690K at 4.4 GHz, and on the right at 3.6 GHz. A 22% increase in core frequency gives me 12-18% more framerate. And this is without an optimized system, as the RAM was at 1600, PCIe at x8, and the GPU at stock.
Unfortunately, if you have any programming knowledge, you will know that that level of optimisation isn't much. What they did was merely move non-renderer stuff, like network and sound etc., off the renderer thread onto the other thread. They have two threads; that is what the other thread usually does in an old-school engine. These two threads are also why it uses two cores. Of course, there are other smaller threads, but they are usually destroyed once they finish their workload. At most, the optimisation gives a small increase in FPS, but that small increase is hard to notice; after all, GW2 has added eye candy to the game over the years, and that cancels out the gain. Anet never optimised the engine to the point that it uses data parallelisation to scale, so it never goes beyond the performance of two cores. To give you an example of a modern game that scales, I recently played Homeworld: DoK, and it scales pretty well: its CPU usage increases as more units appear on the field and likewise decreases when units are removed.
Furthermore, threads do not have to stay on one core; they can flow between different cores. Since threads are destroyed and recreated as needed during runtime, there is no rule saying a thread has to stay on one single core. This creates the illusion that the game might be using more than two cores' worth of performance if you merely look at one point in time instead of the average over a long period.
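One way to test that smearing claim is to restrict the client to two cores and watch whether they saturate. A sketch assuming psutil is installed and that the process name is Gw2-64.exe (an assumption; adjust for your setup):

```python
import psutil  # third-party: pip install psutil

# Restrict the game process to two cores. If the game really only does about
# two threads' worth of work, those two cores should now sit near 100% while
# FPS barely changes, instead of ~40% load smeared across six cores.
# "Gw2-64.exe" is an assumed process name, and changing affinity may require
# admin rights.

def pin_to_cores(process_name: str = "Gw2-64.exe", cores=(0, 1)) -> None:
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == process_name:
            proc.cpu_affinity(list(cores))
            print(f"pinned pid {proc.pid} to cores {cores}")

pin_to_cores()
```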
Also, you insisted that Intel benefits more from overclocking than AMD does. The dated article already showed how much gain an AMD can obtain by overclocking, way more than Intel gains. Furthermore, I explained, by citing IPC, why overclocking Intel will not result in a gain similar to AMD's.
Again, I have to say I have two machines: an i7-4770, yes, the same generation as yours, but mine is an i7. The AMD is a Phenom X4. Overclocking the AMD gives a much more significant gain. Though my AMD is older, you can easily find a thread from an AMD user on these forums and ask them how much gain they get from overclocking. It will not be just 12-18% like yours; it is more like 50% and up. To make the comparison fair, you would have to find an AMD user with the same stock and overclocked frequencies as yours.
And again, I have already made my point but you refuse to accept it; heck, you also ignored Behellagh's more precise explanations.
Also, I can see that your arguments are based on actual experience with your own machine, which is an Intel. Do you even have an AMD, and if you do, did you ever overclock it and run GW2 on it?
Edit:
Intel also uses SMT while AMD uses CMT. These two different approaches also affect how threads are handled and thus how effectively the cores are used. AMD has dropped CMT and is moving back to SMT with their next processor, Zen, which will be a major architecture change.
Edit2:
It bothers me that you cite the IPC improvement over several generations, not just one, all while trying to confuse the argument about overclocking Intel vs AMD. That is pretty dishonest.
Henge of Denravi Server
www.gw2time.com
(edited by SkyShroud.2865)
Edit: removing valid info, this is a slap fight between 2 mediocre PC users lol
(edited by Loxias.2375)
While it is true GW2 has two main threads, it is also true that it has tons of others that still require quite an amount of core usage, as you can see in my photos. Also, even if threads are placed on different cores, when a 4690K OCed to 4.5 GHz is being used that heavily for long periods of time (like here: https://youtu.be/3XSvcnLcmwI?t=216), it means more than just two threads are doing work, and even a CPU with such high IPC struggles.
So yes, overclocking an Intel CPU is also profitable. Intel CPUs are still not good enough to run this game perfectly at stock speeds.
Then, you should avoid quoting things I've never said:
"You insisted that Intel benefits more from overclocking than AMD does." I've never said this. Reread my posts twice and you will never see a sentence or an idea pointing in that direction. What I've said is that overclocking on Intel sees benefits, just like on AMD, in response to one of your false statements: "An overclocked Intel will not see a significant gain in performance."
From a 22% overclock on Intel I'm seeing 12-18% more performance. Of course people on AMD can see a bigger gain, but so would I if I tested 3 GHz vs 4.5 GHz. The thing is, with Intel I'm realizing 55-80% of the theoretical improvement of the overclock.
This shows how personally you took it, failing at such a basic point: never think in absolute results, but relative to something else or to the maximum potential.
"It bothers me that you cite the IPC improvement over several generations, not just one." Don't cook the sentences to your flavour. You said there have been no architecture improvements since 2012, yet they exist: a Haswell processor plays GW2 better than a Sandy Bridge, and a Skylake even better.
But this is not about whether a newer Intel CPU plays it better than an older one. The thing is, the methodology used in that test doesn't reflect reality, as the conditions were not ideal, neither in the place tested (not CPU-stressful) nor in the timing (remember it was done before the worldwide launch on 28th August 2012, and the game now performs much better on the same hardware than it did then).
You can see it in these two videos. The first is from the second beta, 2-4 months before the test you posted, and the second is from mid-2014.
https://www.youtube.com/watch?v=2UHpPVhc-NE
https://www.youtube.com/watch?v=hfL0myRvDGM
In the second I get better performance in a blob vs blob than in the first with some dudes hitting a door.
As you can see, you failed to understand my objection. It is not about Intel vs AMD IPC, it is not about thread management, it is not about architecture over generations, it is not about SMT vs CMT (which cannot be compared directly, as one is a technology and the other is an architecture design; a CMT CPU could have SMT if it were implemented).
It is about you making false statements and me correcting you, like:
- "An overclocked Intel will not see a significant gain in performance." Wrong, and I've proved it; again, think in relative terms: 12-18% out of a 22% OC.
- "If the current Intel is capable of processing all of GW2's data, any further increase in speed yields little gain." The statement itself is not wrong, but the assumption that Intel is capable of processing all of GW2's data is.
- "The game is not multi-core optimized, so it will never use more than two full cores." The game is one thing and the main threads are another; the game can use more than two threads.
- "Likewise, since 2012 there haven't been any major changes to CPU architecture, so all in all, the article continues to be relevant." Like I said before, the issue is not the correlation of performance but the inadequate methods of the test.
You should have stayed with my first quote to you, and yet you brought AMD vs Intel IPC, thread management, and other things into the discussion that, while true, are not related to the point:
An overclocked Intel will not see a significant gain in performance.
Intel has a higher IPC than AMD, so if the current Intel chip is capable of processing all of GW2's data, any further increase in speed yields little gain.
These statements are false.
PS: You should never assume what other people know. It makes you look ridiculous and presumptuous.
i7 5775c @ 4.1GHz – 12GB RAM @ 2400MHz – RX 480 @ 1390/2140MHz
(edited by Ansau.7326)
You know, everything you said is based on your own words, not on any third-party data.
Strictly speaking, my i7 disagrees; the gain is not significant.
Edit: Btw, in the article I provided they did retest on the released version. You really do not read, eh?
Henge of Denravi Server
www.gw2time.com
(edited by SkyShroud.2865)
Just because it is mine doesn't make it less valuable than others'. At least I bring my own experience and personal testing, not a text I've read from some random guys.
The fact that you don't find it worthwhile doesn't mean it isn't. In fact, it's the most detailed comparison existing on the internet. But you insist on taking it personally… So good luck; the evidence is there.
About Tom's Hardware's test, it's still not relevant nowadays. It was not proper testing: done in a PvE map, and too close to release; Anet didn't do all the optimization in the first month and then forget about it…
Just the first fact is enough to make the test invalid no matter how the game changes.
i7 5775c @ 4.1GHz – 12GB RAM @ 2400MHz – RX 480 @ 1390/2140MHz
I was actually going to upgrade my HTPC today, but as a result of this thread I tried to run GW2 on it.
Spec is an A6-6400 with 8 GB of 1600 MHz RAM (was 4 GB) and a 64 GB SSD. Perfectly fine for HD streaming and audio – which is all it was used for. The A6 has two CPU cores and six GPU cores, and I think it runs from a base of 3.1 to a turbo of 3.9 GHz. It's not a processor I would select for 3D gaming.
Loading GW2 from a USB 3.0 memory stick on a USB 3.0 port, resolution 1920×1080 on low settings:
PvP lobby/VB/WvW: 20 to 30 fps. Only LA caused fps to drop to 15-20. Fps were determined from the GW2 options page.
I played a single round of PvP and spent about 90 minutes playing in VB – including two rounds of the bug hero thing where you fly around popping bug bubbles, and a half-completed small zerg event chase. The game was playable and the graphics were fine – there were no stutters at all. Even I was a bit surprised!!!
Average CPU load was 80%, average CPU temp was 64, max 68; physical memory usage was 41%.
What was even more amazing was that the HTPC remained quiet. It turned out this was due to a fan fault – the thing never shifted from its minimum 25% loading. This means AMD must have been throttling the CPU to keep temps below 70. The case was completely open through all the game play, as I was intending to switch the CPU out after giving GW2 a try. The HTPC has only one fan; admittedly it's way over-specced at 150 watts for the 65-watt APU.
OK, we're not talking amazing graphics quality here, although I was very surprised at that as well, but the game is very playable, imo.
My point is that for a very low-end 3D gaming rig, I saw nothing like the variation in fps I would have expected – the fps was fairly consistently in the 20 to 30 range through quite a wide variety of activities. My conclusion is that large drops in fps are mainly server-side in origin, caused by bottlenecks due to high-population processing.
BTW: I noticed from running a DirectX 9.0c benchmark – 3DMark06 – that adding the extra 4 GB of RAM caused a 50% jump in the score – HQ graphics but default res of 1280×960. There was no similar improvement with a DirectX 11 benchmark – Unigine Valley – HQ graphics but much higher res.
Just because it is mine doesn't make it less valuable than others'. At least I bring my own experience and personal testing, not a text I've read from some random guys.
The fact that you don't find it worthwhile doesn't mean it isn't. In fact, it's the most detailed comparison existing on the internet. But you insist on taking it personally… So good luck; the evidence is there.
About Tom's Hardware's test, it's still not relevant nowadays. It was not proper testing: done in a PvE map, and too close to release; Anet didn't do all the optimization in the first month and then forget about it…
Just the first fact is enough to make the test invalid no matter how the game changes.
You seem to miss the point completely. I am saying that your personal test can be biased, since you do not have two machines to compare. How detailed can it be when you did not state the min and max FPS at the different clock rates? How detailed can it be when you didn't state the average FPS? How detailed can it be when you are merely testing one type of machine? Your test is far more biased and far less credible than the dated article they provided. Again, like I said, they did a retest on the released version, but you keep saying it is too close to the release date; they did comment that the released version is a few FPS better, and so it is for AMD, but to a far lesser extent. They did say they would revisit it if there were a major optimization for FX processors, and since I have an AMD machine, I know there hasn't been much optimisation on the AMD side.
You are simply stating that Intel overclocking can be significant, but I am saying it isn't significant. You may think a couple of FPS makes a huge difference, but what difference does it really make when the game is already running at a comfortable FPS level?
Henge of Denravi Server
www.gw2time.com
(edited by SkyShroud.2865)
To be continued . . .
I think if the server-side calculations or server-client communication were the issue, the frames should stay stable, but skills would be delayed, you'd get teleported, stuff like this. Your client is most likely able to calculate your movement while the server calculates the stuff around it. That is why you still get your frames and see people running in place before DCs.
I tend to think this is a bandwidth problem on the CPU-RAM connection. This is very hard to measure, but from my experience benchmarking my own code (one of my main jobs is to optimize code for parallelization on big clusters but also on single multi-core CPUs, so this single-thread problem doesn't happen a lot to me), this could well be the real bottleneck. That's why I wondered why DDR4 did not bring much improvement, but then again DDR4 is not yet at the point it will reach in the future, and in many circumstances it is still not faster than DDR3.
If I am correct, CPUs with a very big cache could bring some improvement.
I personally run the game on a 3570K at 4.1 GHz. I might be able to test it on a cluster with a Xeon E5-1620 v3 and a K80 GPU, but last time I tried, some packages from Wine and ImageMagick collided… That system has a slightly bigger cache, so if we see an improvement there, it might be the bottleneck.
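A crude way to probe that cache/bandwidth hypothesis outside the game is to stream over arrays of different sizes and compare throughput. A sketch assuming NumPy is available; the sizes are illustrative, and real cache sizes vary by CPU:

```python
import time

import numpy as np  # third-party: pip install numpy

# Stream over arrays of different sizes and compare read throughput.
# Sizes are illustrative: roughly L2-sized, L3-sized, and RAM-sized.
for mib in (0.25, 4, 256):
    data = np.ones(int(mib * 2**20 / 8))  # float64 elements totalling ~mib MiB
    start = time.perf_counter()
    for _ in range(10):
        data.sum()  # sequential read over the whole array
    secs = time.perf_counter() - start
    print(f"{mib:>6} MiB: {data.nbytes * 10 / secs / 1e9:.1f} GB/s")
# If throughput collapses once the array no longer fits in cache, the
# workload is bandwidth-bound there, consistent with bigger caches helping.
```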