Low FPS Problems
“8-core processor” alone doesn’t tell us enough, as there are many of those. However, I do believe it’s an AMD (probably FX-8000 series, like the FX-8350), and if that’s the case, then your culprit is right there. AMD has very weak single-core performance compared to Intel’s CPUs, and single-core performance is what this game demands more than anything. You only need four really strong cores for this game; any extra isn’t much of a help. That being said, where do you experience 15-20 FPS? In Lion’s Arch during busy hours, at world boss events, or in big WvW zerg fights, that’s completely normal, I’m afraid. If it happens even while soloing PvE, then you might have some deeper issues. First try turning down your Shadows, Postprocessing, Reflections, Character Limit/Quality and FXAA; those are the most CPU-hungry graphics options.
That’s not totally true.
AMD has good single-thread performance, but Intel has better…
If an AMD dips to 15-20 FPS, then an Intel will dip to 20-25 FPS…
I have both setups, AMD and Intel, and actually the FX is better price/performance, but Intel is faster.
To get better performance you can try Win 8.1.
Well, you’re right, I guess; the concept of “good” (or decent, adequate, let’s not get into linguistics) is subjective. Of course AMD processors do calculations faster than we can on paper, but by that measure Pentium 4s were really fast too – does that really mean anything?
It’s pretty much like the GPU market: AMD is more bang for the buck, while Intel for CPUs and nVidia for GPUs also have the most expensive and most powerful options.
And we can’t really say what AMD and what Intel bring, since we don’t know the models. A 4670K has cores twice as powerful as an FX-8100’s; granted, this is a best-vs-worst comparison.
One Intel core will never beat an AMD module…
Why?
Pentium vs. FX-4300:
2 cores vs. 2 modules.
1 module is much more powerful than you think.
The problem is optimization! Some games run really great on FX, some don’t.
In some cases the FX-4300 is faster than the i3-4330, but in some cases it’s the same as a Pentium…
If you buy an FX CPU you need to run Win 8.1….
I know that; of course there are situations where both of them shine. I’m just talking from a GW2 point of view, since we’re on the GW2 forums.
I also recommend 8.1 (even more so over regular 8) anyway; I don’t know about added benefits for FX chips specifically, but sure, if there are any.
Yep… I will try it later, I need to finish some games first. I will also do a comparison between Win 8.1 and Win 7 on an FX CPU, maybe even on an Intel i7.
No, 2 Intel HT cores (i.e. an i3-4xxx) are faster than 2 AMD modules (i.e. an FX-43xx) at their stock clocks. But throw a 2-module AMD against an i5 and forget about it; Intel wins hands down in GW2. Of course you’ll be spending more for that quad-core, no-HT Intel.
http://www.xbitlabs.com/articles/cpu/display/core-i3-4340-4330-4130_5.html
And it’s not an optimization problem. Both AMD and Intel processors get assigned the same number of threads to run; it’s just that an Intel dual HT core can do the job a bit faster than an AMD dual module. (I found it interesting that in AnandTech’s review of the Steamroller-based APU they no longer refer to the AMD as a quad core but as a 2 module/4 thread setup, which is at least more honest than AMD’s marketing department.)
RIP City of Heroes
You did not understand!
Yep, the FX-4300 is faster than the i3-4330 in some games – THAT IS TRUE – don’t be a fool!
http://gamegpu.ru/rpg/rollevye/divinity-original-sin-test-gpu.html
http://gamegpu.ru/rpg/rollevye/wasteland-2-test-gpu.html
I did a benchmark in Star Swarm:
i3-4330 (60 FPS) vs. FX-4300 (61 FPS) at 3.5GHz.
And here it is slower:
http://gamegpu.ru/rts-/-strategii/total-war-rome-ii-patch-2-test-gpu.html
http://gamegpu.ru/racing-simulators-/-gonki/next-car-game-alpha.html
Happy now?
I’m telling you that the FX-4300 = i3-4330.
I did a multithreading benchmark – gaming + streaming:
i3-4330 vs. FX-4300 at 3.5GHz.
Both were the same, just like in the Star Swarm (Mantle) benchmark.
In GW2 I don’t know – there’s no recent benchmark.
But still, if you are playing on Win 7 then choose the i3-4330 over the FX-4300… but the price is too high, and you can get an FX-6300/8320 for a lower or the same price.
(edited by XFlyingBeeX.2836)
While I love to read a good AMD vs. Intel argument, it’s not helping the OP.
OP: have you tried unparking your cores? That alone should help a lot.
Personally, I like to install Game Booster and use its game-performance power setting to unpark cores while gaming, then use Process Lasso to force GW2 to use that power setting and give it high priority.
Until the OP reveals his CPU we won’t know how to help him.
And Mr FlyingBee, STOP with your banter. You are doing NOTHING but hijacking threads at this point.
Man, I wish there was an ignore/block feature on this forum.
Laptop: M6600 – 2720QM, AMD HD6970M, 32GB 1600CL9 RAM, Arc100 480GB SSD
Tom’s Hardware already did a comparison for GW2 when the game first launched, isolating both clock speeds and core counts.
http://www.tomshardware.com/reviews/guild-wars-2-performance-benchmark,3268-7.html
This was two generations ago, but the relative hardware disparity hasn’t really changed.
Hey man, thanks for jumping back to the main topic at hand, lol. I’m unsure of how to “unpark” my cores. Also, to answer the question about which processor I’m using: AMD FX™-8150 Eight-Core Processor, 3600 MHz, 4 Core(s), 8 Logical Processor(s). Hope this helps to figure out what’s going on. Anything else you guys need, just let me know! Thanks again!
In Windows 7 there are two patches to the thread scheduler that help with FX-class processors. Windows 8 already has them applied.
http://support.microsoft.com/kb/2646060
http://support.microsoft.com/kb/2645594
There are also utilities that can help as well. Those tweak the thresholds used by Windows to power up and down additional cores/modules based on overall CPU usage. They adjust registry entries in the power management settings to ensure that the High Performance plan keeps all cores awake regardless of actual usage.
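For those comfortable with a command prompt, roughly the same tweak those utilities make can be sketched by hand with powercfg (just a rough outline, run from an elevated prompt; CPMINCORES is the “core parking min cores” setting, which Windows hides in the advanced power options by default):

    :: unhide the core-parking setting in the advanced power options
    powercfg -attributes SUB_PROCESSOR CPMINCORES -ATTRIB_HIDE
    :: keep 100% of cores/modules unparked while on AC power
    powercfg -setacvalueindex SCHEME_CURRENT SUB_PROCESSOR CPMINCORES 100
    :: re-apply the active scheme so the change takes effect
    powercfg -setactive SCHEME_CURRENT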
RIP City of Heroes
Unparking your CPU cores doesn’t require editing the registry. Just set your computer to the High Performance power profile. You can also easily tweak the other settings under the advanced Processor Power Management settings: set “Minimum Processor State” to 100%, and be sure Maximum is 100% too. Problem solved. Basically, because of that, I’ve never had any ‘parked core’ issues.
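If you’d rather script that than click through the Control Panel, something along these lines should do the same job (the long GUID is the stock High Performance plan; PROCTHROTTLEMIN/PROCTHROTTLEMAX are the minimum/maximum processor state settings):

    :: switch to the built-in High Performance plan
    powercfg -setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c
    :: pin minimum and maximum processor state to 100% on AC power
    powercfg -setacvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMIN 100
    powercfg -setacvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMAX 100
    powercfg -setactive SCHEME_CURRENT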
This is true; you don’t need extra software to unpark cores. But… some users aren’t comfortable editing power settings, and I have seen versions of Windows where some of the power settings in the Control Panel were locked. Plus, just choosing a high-performance power setting won’t unpark cores. Lastly, both programs listed offer a lot more than the few tweaks you could do yourself.
OP: download a free program called Game Booster.
http://www.razerzone.com/gamebooster
Install it.
Then download a free program called Process Lasso.
http://bitsum.com/processlasso/
Install it. Then run GW2. Minimize GW2 and open Process Lasso, find GW2 in the Process Lasso window, right-click on it, and assign it both default high priority and the gamer power setting Game Booster gave you.
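If you’d rather not install anything for the priority part, plain Windows can do that much on its own. Assuming the client is still named Gw2.exe and sits in the default install folder (adjust the path to match your setup), either of these works from a command prompt:

    :: launch the client at high priority from the start
    start "" /high "C:\Program Files (x86)\Guild Wars 2\Gw2.exe"
    :: or bump an already-running client
    wmic process where name="Gw2.exe" CALL setpriority "high priority"

Note the priority bump and the power-profile/unparking tweaks are separate things; doing one doesn’t cover the other.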
Thank you!
Not (always?) true.
I had my performance profile set to 100/100% min/max, and you could tell the difference in responsiveness, loading times and FPS between a constant 100% and software-managed speed. But after downloading and running this, the difference again is night and day.
I’ve used that one before; it does the job just fine. I’m just not sure about running 100% unparked 100% of the time. Core parking is a power-saving feature; if you’re not gaming (or doing some other processor-intensive work) you shouldn’t need to be 100% unparked.
But to each their own.
The reason I prefer Game Booster is that it does a bunch of different fun things.
Heh? Hijacking threads?
Look, I just get mad when users say that AMD is really bad – because it is not true.
And if you have a Bulldozer and a Gigabyte board you can do this:
1. Unpark the cores.
2. BIOS settings:
1 core per module (1 core per CU),
OC to 4.8-5.2GHz, and GW2 will run really well…
3. The Bulldozer fix for Win 7.
You can also do the 1-core-per-CU setup with other boards, but you have to do it a different way….
You just hijacked this thread to start an AMD vs. Intel war.
Please stop; you’re looking very childish now.
Laptop: M6600 – 2720QM, AMD HD6970M, 32GB 1600CL9 RAM, Arc100 480GB SSD
I’m sorry, but I kinda see FlyingBee’s point here. A processor having an AMD logo on it doesn’t mean it’s gonna suck for GW2 (and I don’t mean the letters themselves would have an effect on performance, I’m not that stupid). For example, the FX-9590 isn’t even that much worse in single-thread performance than a 2500K. Granted, the 9590 is top of the line and the 2500K is getting outdated, but it has been proven many times that even a 4770K doesn’t bring adequate performance in the most demanding situations (this is just info for those that don’t know), so that specific AMD can hold its ground. Heck, my old Q6600 performed pretty nicely, excluding zergs, and it has about half the single-thread performance of the 9590.
sirsquishy is mainly right every time, and when I see his posts I know there are gonna be some cold hard facts, but this is just something to consider. AMD is a valid option, although not quite as adequate for the smoothest gameplay as Intel. And this is coming from an Intel guy.
I did?
Please read again.
All I said in my first post is that AMD is not that bad…. so…
I can’t really comment on the FX-9xxx series CPUs. I really want to, I just don’t have any samples to test against.
BUT, I do have an FX-8350 that was actually “upgraded” (heh) to an i3-2120. That FX-8350 would never get more than 65 FPS under any circumstances. I disabled the 2nd core per module, enabled core unparking, even pinned the gw2.exe process at high priority, and the performance was locked at 65 FPS. The GPU at the time was an R9-270 (which is in my media center now… glory/awesome 4K LCD at 65"!).
But as soon as I swapped in the Intel i3-2120 and motherboard and re-ran my tests with the same ‘fixes’, the system would jump to 90 FPS. Then I swapped in an R7-260x so I could rebuild my media center… that’s where the FX-8350 went, and the performance with the i3 remains at about 90 FPS.
So, that is what I have been going on here:
FX-8350 vs. i3-2120.
Phenom II 965BE (OC’d to 4.1) vs. i5-4670K (not OC’d).
And in EVERY scenario Intel wins, 2:1, and in the lows 3:1 (the low on the FX was 8 FPS; the i3 pins at 18 FPS on fully queued maps).
So, I’m sorry, but you can keep your AMD CPUs for single-threaded applications. Intel wins.
And this is coming from a long-time hard AMD fan. Heck, I still only run AMD GPUs… so what does that tell ya?
Laptop: M6600 – 2720QM, AMD HD6970M, 32GB 1600CL9 RAM, Arc100 480GB SSD
I shouldn’t be talking about the 9xxx series either, as I don’t have firsthand experience with them, so everything I said about it could be complete garbage, but I’m counting on the statistics here.
However, that’s pretty much as I suspected with the FX-8350. My Q6600 would usually stay at ~45 fps in solo PvE/dungeons. For me 45 fps is “enough”, as in enough to give the impression of fluid motion. Let’s not derail this thread into how many fps the human eye can recognize, but I’m pretty confident in saying that the vast majority of gamers would be fine with 65 fps. That was the whole point of my post: some AMD chips can deliver gameplay that is smooth enough (excluding zergs and whatnot). If the player is really tight on money (a student without a job, possibly with a ton of loans, or a kid whose parents can’t afford to give him much, for example) and an AMD system can be found way cheaper, there’s no denying that you can get away with an AMD system. I know this, and you know this too, I’m sure. Of course there’s also no denying that Intel is still most likely going to win, maybe even on the bang-for-the-buck front. I personally am a student, but I have a job, so I could save up to get a 4670K and avoid most of the trouble.
And seeing the Intel rig in your sig, I have no clue what that tells me
(edited by locx.6412)
Currently, my ONLY complaint about AMD is the poor low-end performance we see in ‘zerg’-like activity. Like you said, I feel that ~60 fps is more than acceptable for the highs.
But I find that 20 FPS (even as a low) is about my tolerance point; it still feels fluid and I don’t see that ‘jerking’ sensation. On AMD this is just not possible.
Now, that is not ‘exactly’ AMD’s fault. ANet decided to build GW2 using a single thread to render ALL dynamic event content (player activities), and that is what drives this game. AMD just doesn’t do this well.
If they were to break that render thread into 2 or 3 processes, there would be minimal differences between AMD and Intel for this game.
Laptop: M6600 – 2720QM, AMD HD6970M, 32GB 1600CL9 RAM, Arc100 480GB SSD
Right on. The minimum point (as well as the maximum) is of course shaped by what the individual is used to. When I was using manually optimized in-game settings to my preference and getting 7 fps in zergs, versus turning it to Best Performance and getting 14, the game felt smooth as a baby’s bottom. Now, after the upgrade and getting ~20, going back to 14 would perhaps feel like a nightmare.
Of course it’s not AMD’s or Intel’s fault we can’t get 60 fps in zergs. Single-thread performance is getting near its practical limit, so something should be done differently. Having the render thread split up would be a great start. We just have to wait and see what the future and ANet bring us (I seem to like saying that a lot).
People say AMD is bad not because of its performance, but because of its performance relative to price. With Intel’s new aggressive pricing, it’s hard to recommend an AMD processor at this time.
As sirsquishy said, the 8350 ($189 atm) has slightly less performance than an i3, yet at current prices is $75 more expensive (Haswell i3-4130, 3.4GHz, at $115). To reach Intel’s price levels you have to go with a low-end (or older-gen) 6000 or a mid-range 4000 from AMD, and the performance gap from those chips to the new i3s is even greater than the 8350’s.
Even with the tweaks mentioned above, AMD just can’t hit the price/performance at the moment, which in layman’s terms makes them “bad”.
People may interpret what I’m saying as bashing AMD, but I’m not. I’m just not going to sit on the sidelines when someone says the FX-8350 is only as fast as an i3 without adding “in this game”. The charts I linked to earlier do show the FX-8350 performing fine in other games. It all depends on how many compute-intensive threads the game can use; GW2 simply doesn’t have many, so all those extra cores and modules simply don’t help.
I’ve also always contended that people underestimate the advantage Intel CPUs get from having the PCIe controller for the graphics card so tightly integrated into the rest of the CPU, whereas AMD still uses its HyperTransport bus to talk not only to the PCIe controller in the northbridge but to all the other peripherals in the southbridge.
RIP City of Heroes
sirsquishy.8531
You still don’t understand.
EDIT:
The main problem for AMD CPUs is that Intel has better support in games/Windows.
4 cores are the best right now. Why?
Compare in Crysis 3:
Pentium (2 cores) vs. i5 (4 cores),
FX-4300 (4 cores) vs. FX-8350 (8 cores).
GW2 needs to add DirectX 11.2 support – that would be great for AMD CPUs.
(edited by XFlyingBeeX.2836)
I think you’re the one who doesn’t understand. It’s not that a game is ‘designed’ for Intel over AMD (yes, there are SOME considerations here, but very, very few). It’s that there are still a lot of games/apps that use a single ‘core’ thread to run themselves. GW2 is a PRIME example of this, with its crappy rendering thread. And we ALL know that AMD has a third of Intel’s single-threaded performance.
That is all there is to this: single-threaded performance. Not DX11.2, not ‘Intel has better support in games/windows’, and most certainly not Mantle.
If that single thread was broken into 2-3 components and shared its polling across 2+ physical cores, I PROMISE you that AMD would hold its own against Intel in those scenarios.
You wanna know how I know?
My 10+ years of experience working with VMware ESX/ESXi. You get to see exactly how multi-threaded OSs perform against single-threaded applications, and how those applications behave on the different architectures across CPU generations. Which is why I pulled all my 6128 AMD-based servers (8-core CPUs, 4 sockets = 64 cores) and replaced them with E5-2680s (8 cores + HT, 2 sockets = 16 cores/32 threads); the performance gain was 6:1. Let me repeat that: the performance gain was 6:1 going from the latest 6128 AMD CPUs to E5-2680 Intel CPUs, for SINGLE-THREADED performance.
Now, if 90% of our environment wasn’t single-threaded, I wouldn’t have replaced 25 R815s fully loaded with 6128 AMD CPUs. For multi-threaded performance, the 6128 is ~18% faster than the E5-2680 in my tests, and that’s under ideal multi-threaded scenarios (such as parallel processing, running the ENTIRE application evenly across every core). But unfortunately most of our services are custom in-house (Linux/Unix) and are designed by morons who do not understand the benefit of multi-threaded processes.
But that translates directly to the FX-8350 vs. the Intel i5-4670 (not K). On per-core performance there is no real comparison between AMD and Intel; Intel does things faster. But if we compare them in a multi-threaded fashion, they are within single-digit percentages of each other.
Laptop: M6600 – 2720QM, AMD HD6970M, 32GB 1600CL9 RAM, Arc100 480GB SSD
AMD has good single-thread performance! The problem is their design….
If games use 4 cores, sometimes the FX-4300 is faster, sometimes the i3-4330 is faster.
GW2 runs well on FX CPUs – LA never goes under 45 FPS with the settings that I use
– that is all.
You’re just an Intel fanboy…. I bought an Intel 3770K, OC’d to 4.6GHz… I wasted my money
– the problems are the network and crappy DirectX.
…..
You don’t understand Mantle
– do you know why consoles are faster? ….
Mantle doesn’t mean it will push an 8-core CPU to 100% usage in games; it means single-threaded work will be faster and multi-threaded work will scale better…
Right now, on PC, the software bottlenecks the hardware!
I hope that Nvidia will support Mantle and that MMOs will support Mantle…
I am not a fanboy… I just want the best for customers!
If you want to run GW2 perfectly you need at least an i5, or even an i5 K model.
So actually, because of the software on PC, you have to spend much more money…
(edited by XFlyingBeeX.2836)
Man, you just go all over the place, don’t ya?
First, you cannot compare a game console to a PC. It’s a completely different architecture, and the games are designed to spec. Apples and oranges; while they go well together, you cannot really compare them.
PCs are ALWAYS bottlenecked by their hardware. This has been a fact since ’86 and will be a fact after quantum computing is realized.
If a game uses 4 cores, then the difference between AMD and Intel is a moot point. As I stated, they both excel at multi-threaded applications about equally.
Mantle is a pipe dream. It requires too much of software firms to implement. Unless a software company is already designing with Mantle in mind, you’ll never see it in their titles, no matter what Mantle may or may not bring to the table.
Laptop: M6600 – 2720QM, AMD HD6970M, 32GB 1600CL9 RAM, Arc100 480GB SSD
Completely different architecture? Of course
– they use GCN and an 8-core 1.6GHz Jaguar…
– but with GDDR5 there is no memory bandwidth bottleneck.
PCs always bottlenecked by hardware??
You are totally wrong!
Mantle requires too much of software firms to implement??
DirectX requires much more!
Do you know how much DirectX bottlenecks your i5?
I bet with Mantle an i3 would destroy your i5….
Okay… enough arguing, we will see what happens in the future…
(edited by XFlyingBeeX.2836)
Mantle is not going to accelerate an Intel processor. It’s there to accelerate AMD CPUs and AMD graphics, and Nvidia graphics if they choose to support it. It’s a low-level API that lets a developer get closer to the hardware than DirectX ever will. DirectX isn’t going to change the game. Mantle might change the game, but for Guild Wars 2 it’s not going to happen, simple as that. ANet already has a working platform that they can improve on and make the best it can be. Why scrap it for some stupid new API that so few know how to work with?
And why is he wrong about PCs always being bottlenecked by hardware? No explanation = no proof = sirsquishy is right. One of the points of a debate is to explain why you disagree with the other’s point.
Are you sure?
Mantle is a low-level API – almost the same level of API that consoles use.
Mantle is going to accelerate AMD and Intel – every CPU. What you said is actually wrong
– the problem is DirectX, which bottlenecks every CPU!
I would like to see MMOs with Mantle or any low-level API!
I did the Star Swarm benchmark, RTS-LOW:
DirectX: 21 FPS (FX-4300 @ 4.0GHz) – CPU usage 50%
Mantle: 61 FPS (FX-4300 @ 3.5GHz) – CPU usage 100%
The GPU was the bottleneck from the start when using Mantle.
He is wrong about a PC hardware bottleneck – the software is bottlenecking the hardware!
That’s why with every MAJOR OS upgrade we always see improvements with top-end hardware, huh?!
Please, just stop.
Laptop: M6600 – 2720QM, AMD HD6970M, 32GB 1600CL9 RAM, Arc100 480GB SSD
Don’t be a fool.
Why do consoles use low-end CPUs… WHY?
Mantle is great for low-end/old CPUs such as the Athlon x4 or i5-750…
An Athlon x4 with an R9 280X and Mantle runs BF4 perfectly…
(edited by XFlyingBeeX.2836)
@Avelos – Mantle is an API for AMD’s GCN video cards; it has nothing directly to do with the CPU. It helps AMD CPU performance more than Intel because AMD is slower. By reducing the CPU overhead in the API and driver code, a slower processor can communicate all the info needed to render the frame in a shorter amount of time, but it still needs the card to render that data into a frame. Intel is so much quicker to begin with that its CPU overhead, in terms of elapsed time, was smaller in the first place. So going from 10t CPU/40t GPU per frame to 5t CPU/40t GPU isn’t as much of a boost as going from 30t CPU/40t GPU to 15t CPU/40t GPU (t is some time unit; I didn’t want to use an actual unit here). Run back to back, that’s 50t down to 45t per frame in the first case versus 70t down to 55t in the second.
@XFlyingBeeX – Star Swarm isn’t a real game yet, but a tech demo, just like Unigine is used just to show off tessellation. All Star Swarm shows is the possibilities Mantle can provide. That said, various sites’ benchmarks of BF4 using Mantle have come up way short compared to the press release from AMD. I chalk that up to marketing-department hype and the selective filtering of marketing people’s hearing.
Consoles are an entirely different can of worms. At the OS level they know exactly what hardware is in the system, which means you do not have to abstract from API to driver to hardware; that can provide an enormous performance advantage when talking to the hardware. Mantle drivers still have to adapt to a range of similar hardware, so they’re not quite as good as a console at eliminating CPU overhead. The CPU utilization is higher because the demo can loop through the renderer faster without all the translation between abstraction layers; under Dx11 that translation was taking so long that the rest of the Star Swarm demo was waiting on the renderer.
Right now this game uses Dx9. Dx9 is not thread-safe, so you can’t have multiple threads calling the Dx9 API and expect it to work properly. That is the current limiting factor in this game: the renderer must be, at the point of calling Dx9 routines, a single thread. And it’s because of this that FX-class CPUs aren’t the best at this game, as the performance of a single one of their cores in a multi-threaded program just isn’t as fast as Intel’s.
And don’t bring streaming into this. That’s like comparing 0-60 times between cars, but only while they are towing an RV. The rest of us are talking about performance just playing the game, not the game plus some other CPU-intensive task.
RIP City of Heroes
(edited by Behellagh.1468)
Why do the PS4 and Xbox One both use a ‘low-end’ CPU, a 1.8GHz 8-core Jaguar? Well, that’s probably because AMD paid to be there, just like whoever paid to be in the PS3, or just like IBM paid to be in the GameCube, Wii, and Wii U. And so now developers get to code for it, which means many good things for AMD FX on desktops.
Like any company designing a new tech device, both Sony and Microsoft put out a bid for what they wanted. Whoever won had to meet performance, price, power and cooling requirements, and had to be in production in quantity x months ahead of the console’s launch. Sony dug a hole with the PS3 CPU, thinking it could have been used elsewhere, and it wasn’t; they even built a costly chip fab to make it. Microsoft understood that embracing an x86-based CPU could help with getting a jump on game development, as well as porting to or from the PC, plus they had used AMD graphics in the 360.
Both happened to like AMD’s combo of 8 lightweight x86 cores (a 1.6GHz Kabini core is about as fast as an original Core 2 core downclocked to 1.6GHz; it’s not using the FX architecture) coupled with AMD’s current graphics architecture, over anything Intel or nVidia had available at the time of the consoles’ design. One chip is cheaper than two.
RIP City of Heroes
(edited by Behellagh.1468)
It uses Jaguar cores.
I did a benchmark of Jaguar cores (laptop) vs. Piledriver in Cinebench R11.5, and Jaguar was about 16% faster.
The question is why the consoles use 8 cores instead of 4 faster ones. There is only one answer: multi-threading is better! Better performance per watt, and a low-level API scales 8 cores very well…
The Star Swarm benchmark – using Mantle (a low-level API) – shows that an FX-4300 at 4.5GHz will never be as fast as an FX-6300 at 3.5GHz or even lower.
GW2 needs to use Mantle or DirectX 11.2.
Tree, Kabini is a particular chip’s name; Jaguar is the architecture name.
The A6-5200 is a quad-core Jaguar (Kabini) at 2.0GHz.
However, this review doesn’t show its Cinebench performance coming even close to either an older 3.0GHz K10 quad or a 4.1GHz Steamroller APU quad. The PS4 and XBone use 8 Jaguar cores at roughly 1.6GHz.
http://adrenaline.uol.com.br/biblioteca/analise/784/amd-a6-5200-kabini.html?pg=03
I don’t disagree with the point that multiple cores are better. But it’s only better if the app takes advantage of them, or if multiple apps are running. The Xbox 360 had 3 cores, and the PS3 had one main core and eight simpler cores; multicore isn’t a new thing for either of those console manufacturers.
But to embrace Dx11 or Mantle, this game’s renderer would need to be reorganized to divide and balance the load across multiple threads to get the most advantage out of those APIs. Just using them doesn’t automatically make the game faster; that’s the point we’re making.
RIP City of Heroes
Getting this too on my son’s PC. It used to be my PC, and no hardware has changed.
The last 2-3 days it was fine, but last night and today it’s like his character is in slow motion. Attacking, running and interactions with NPCs are all slow, yet we are getting 30-60 fps according to the graphics settings window.
It’s unreal, and of course we thought it was on our side first, but I’ve checked all I know (been in IT for 20+ years) and there’s nothing I can put a finger on OTHER than GW2. All other games, even at Ultra settings, are crisp and fast, yet GW2 in the last 48 hours has gone down the toilet.
Again, I’ve run GW2 on this same PC since launch without hardware issues. No new drivers were installed between when it was running great and the last 48 hours.
Intel Core 2 Duo E8500 (overclocked to 4.25GHz on custom water cooling)
4GB RAM and a 7200 RPM Western Digital HDD
Windows 7 SP1
Hoping to hear what comes out of this, because being the resident IT guy here at the homestead, it’s driving me nuts hearing him complain about this issue. To me, all signs now point to something on the game side (coding, net code, latency).
Thanks!
~Icky
Jaguar x4 2.0GHz = 1.99 cb
FX-4300 2.0GHz = 1.7 cb
That is my benchmark.
You didn’t say same clock speed. Of course the FX is worse there; the IPC (instructions per clock) is terrible on the Bulldozer/Piledriver/Steamroller architecture. Why else does a 3.2GHz Phenom II 955 beat a 3.8GHz FX-4300 by a couple percent in Cinebench?
http://www.anandtech.com/bench/product/88?vs=700
This is a flashback to Intel, where you needed a 30% boost in clock speed on the Pentium 4 to get the same result as a Pentium III.
If you’re going to make bold statements, at least define the parameters of the comparison first.
Tom’s needs to do another one of those architecture reviews – one core, no turbo, same clock speed. It really shows off the IPC.
http://www.tomshardware.com/reviews/processor-architecture-benchmark,2974-15.html
RIP City of Heroes