AMD Optimization

AMD Optimization

in Account & Technical Support

Posted by: sobe.4157

sobe.4157

I see you are a big Intel fan. I don't care what CPU I use… I just want it cheap and strong.
It is not the point which CPU is better.

It just makes me sad that this game uses only about 45-55% of my CPU.
Anyway, like I said, if I just lower the detail my FPS is 45+; even in a really huge fight it does not go under 40 FPS.

But I know there won't be any optimization for AMD CPUs… I realized that long ago…

The only thing I can do is switch to Win8… that improves CPU performance.

Hey, don't take what I said the wrong way. AMD has a lot to offer in the majority of their processors; as I've said, they are powerful. But running around telling everyone that a budget CPU tops an i5 like the 3570 is misleading.
http://www.anandtech.com/bench/product/699?vs=701

You can play around with that if you want. I do prefer Intel at this current time, yes; Intel's clock-for-clock beats AMD's offerings (I was an AMD adopter back in the Duron/AthlonXP days, when AMD was laughing at the Pentium 4's NetBurst architecture). If AMD makes a comeback like they did back in their Athlon glory days, I'll switch to them in a heartbeat for my main rig and my test bench! But I actually hate seeing so many people on this forum complaining because they have an AMD setup; quite frankly it's unfair…. The fact of the matter is not everyone wants to spend the extra money associated with an Intel setup, and that's what makes it disconcerting that people would basically BE FORCED into the Intel camp by the poorly implemented threading, which leans so heavily on one thread. The engine would need more parallel integer-based work so AMD could step it up a little, but as I don't have much experience with game engines… I don't even know if that's something you can just "add".

I’ll say it again, if anyone is ballsy enough to attempt to get an interview with a lead tech, see where that road leads.

3770k 4.9ghz | Koolance 380i | NexXxoS XT45 | XSPC D5 Photon | ASUS MVFormula |
Mushkin Black 16gb 1600 | 500GB Samsung 840 Evo |2×2TB CavBlack| GALAX 980 SoC |
NZXT Switch 810 | Corsair HX850 | WooAudio WA7 Fireflies | Beyerdynamic T90

AMD Optimization

in Account & Technical Support

Posted by: XFlyingBeeX.2836

XFlyingBeeX.2836

Okay.
I play at 1680×1050 — 1050p.
My GPU is a 5870 at stock.

Why did I do that bench?
Because I could switch my CPU to an i3 or even an i5 (+50€, almost $70). So I had to…

A Pentium (2C/2T) is as fast as an i3 in games that use 2 cores; okay, the i3 might be like 2-5% faster.
The i3 is 2C/4T… the same difference as between an i7 (4C/8T) and an i5 (4C/4T)…
If the game uses only 2 cores then the Pentium is the best solution… But still, the i3 is more powerful.

The same difference exists with the FX 6350… Using 2 cores the i3 wins… with 4 cores the FX is faster… with 6 cores it is much faster…

But there is always optimization… Intel with 4 cores will be at 90% usage while AMD will be at 60% usage…

Could you just check your CPU usage in Lion's Arch?

Sobe,
if I could get an i5 for the same price as the FX 6300 I would go with the i5.

For streaming I would recommend the FX 8320 even over the i7 4770K, because it is much cheaper and you might get better performance… That is the only time I saw my FX 6300 at 100% usage…

(edited by XFlyingBeeX.2836)

AMD Optimization

in Account & Technical Support

Posted by: Fermi.2409

Fermi.2409

Because i could switch CPU with i3 or even with i5 (+50€ – almost 70$). So i had to do…

Where are you buying an i3 for it to be $70 more than a 6300? You're looking at i5s there, which demolish 6300s across the board.

Could you just check your CPU usage in Lion's Arch?

http://i.imgur.com/X7lYsBU.jpg

Pretty much between 70 and 80%.

HAF 912 | i7-3770k @ 4.5 GHz | MSI GTX 1070 GAMING 8GB | Gigabyte Z77X-D3H
EGVA SuperNOVA B2 750W | 16 GB DDR3 1600 | Acer XG270HU | Win 10×64
MX Brown Quickfire XT | Commander Shaussman [AGNY]- Fort Aspenwood

AMD Optimization

in Account & Technical Support

Posted by: Behellagh.1468

Behellagh.1468

Gee, let's see. First, none of the CPUs between the two reviews are the same. The second link features the FX-6350, FX-4350, Athlon X4 750 (FM2 without a GPU), Phenom II X4 965 and the Athlon II 640. The first link tests none of those.

Second, the game settings are different between the reviews. The first link used Ultra details but no AA; the second link used High details with no AA and Ultra details with 2xAA.

Lastly, the first link used a GTX 680 and the second link an HD 7970.

Apples to oranges.

We are heroes. This is what we do!

RIP City of Heroes

(edited by Behellagh.1468)

AMD Optimization

in Account & Technical Support

Posted by: sobe.4157

sobe.4157

I just want you to be aware that the two links you posted don't mean much: you were in such a rush to compare FPS numbers that you disregarded 3 things: #1 the dates of the two reviews, #2 the different hardware used, and #3 the game settings. The dates are important because of driver maturity for the newer titles, from both camps….

That said, just drop it lol. You should work on legitimate sources and actually reading what you find. The tech community is seemingly aware that Tom's is biased, but over the past few years they have gotten better.

3770k 4.9ghz | Koolance 380i | NexXxoS XT45 | XSPC D5 Photon | ASUS MVFormula |
Mushkin Black 16gb 1600 | 500GB Samsung 840 Evo |2×2TB CavBlack| GALAX 980 SoC |
NZXT Switch 810 | Corsair HX850 | WooAudio WA7 Fireflies | Beyerdynamic T90

AMD Optimization

in Account & Technical Support

Posted by: Behellagh.1468

Behellagh.1468

Again, different setups in every review. Different graphic cards, different settings, different resolutions, etc. You can only judge relative strength within a review.

Techspot – FX-6300 beats i3-3220 by only 5.5% while the i5-3470 beats the FX-6300 by 12%. All CPUs are at stock speed. HD 7970 at 1920×1200 at “very high quality”.

PCGamesHardware – FX-6300 beats the i3-3220 by 4.7% but the i5-3470 beats the FX-6300 by 41.6%. All CPUs are at stock speed. HD 7970 at 1280×720, no AA/AF at “max details”.

As for the two Russian reviews: neither had the i3-3220 or the i5-3470, both used a GTX 690 (a dual-GPU card), and both tested at 1920×1080.

What I draw as a conclusion from the Techspot and the German review is that FarCry 3 becomes more GPU dependent at higher resolutions and AA settings and that’s about it.

But you aren’t going to listen are you?

We are heroes. This is what we do!

RIP City of Heroes

AMD Optimization

in Account & Technical Support

Posted by: SolarNova.1052

SolarNova.1052

Just want to cut in here and point out that, if you're going to look at reviews of how CPUs affect gaming performance, you need to look at reviews that set the game settings so that the limitation is the CPU. So ignore any review of a game like Far Cry with settings at 1080p or higher on very high/max settings. Especially ignore them if they only use a single GPU.

Those that use low settings are the ones you want to look at.

3930k 4.6ghz | NH-D14 Cooler | P9x79 Pro MB | 16gb 1866mhz G.Skill | 128gb SSD + 2×500gb HDD
EVGA GTX 780 Classified w/ EK block | XSPC D5 Photon 270 Res/Pump | NexXxos Monsta 240 Rad
CM Storm Stryker case | Seasonic 1000W PSU | Asux Xonar D2X & Logitech Z5500 Sound system |

(edited by SolarNova.1052)

AMD Optimization

in Account & Technical Support

Posted by: TinkTinkPOOF.9201

TinkTinkPOOF.9201

Just want to cut in here and point out that, if you're going to look at reviews of how CPUs affect gaming performance, you need to look at reviews that set the game settings so that the limitation is the CPU. So ignore any review of a game like Far Cry with settings at 1080p or higher on very high/max settings. Especially ignore them if they only use a single GPU.

Those that use low settings are the ones you want to look at.

Correct. When you drop the resolution etc. in a game, it removes the GPU as the limit on how many frames can be rendered and puts the load on the CPU for world updates etc. HardForum does this all the time when testing CPUs.

Also, on the whole "AMD optimization" thing… it has nothing to do with AMD; it has to do with how parallel a process is and how many threads the CPU can run. In multi-threaded renders, AMD and Intel in the same price bracket are very close, because renders are able to make use of all the threads; AMD are still awesome for the price. It is just that AMD went the high-thread-count route before everything else caught up. For a long time most games used only up to two threads, and the same goes for software; even today the amount of software that can make real use of more than 2 threads is pretty small. When we start seeing games coded to make use of 8+ threads we will see AMD fare much better, but we are not at that point, and that is why Intel is king in the gaming market right now.
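
A toy calculation makes the thread-count point concrete (illustrative arithmetic only, not benchmark data): if the engine only keeps about three threads busy, overall CPU usage necessarily drops as the core count rises, which lines up with the ~45-55% usage reported earlier in the thread for six- and eight-core FX chips.

```python
def overall_usage(busy_threads, core_count):
    """Whole-CPU utilisation (%) when only `busy_threads` threads have work,
    each pegging one core at 100%."""
    return 100.0 * min(busy_threads, core_count) / core_count

# A game that keeps ~3 threads busy, on chips with different core counts:
# 4 cores -> 75%, 6 cores -> 50%, 8 cores -> 37.5% overall usage
for cores in (4, 6, 8):
    print(f"{cores} cores -> {overall_usage(3, cores):.1f}% overall usage")
```

Same game, same work; the bigger chip just reports a lower percentage because more of it sits idle.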

6700k@5GHz | 32GB RAM | 1TB 850 SSD | GTX980Ti | 27" 144Hz Gsync

AMD Optimization

in Account & Technical Support

Posted by: Stormcrow.7513

Stormcrow.7513

Over a month ago there was a dev post regarding performance that indicated they are rolling out performance patches within a 4-6 week timeframe.
We hopefully will be seeing one shortly, perhaps even with the next patch.
I doubt there will be a miracle patch, but any amount of incremental performance boost would be appreciated.

Can you provide the link?

https://forum-en.gw2archive.eu/forum/game/gw2/Why-is-Performance-Never-Rarely-Addressed/first#post2716646

i7 3770k oc 4.5 H100i(push/pull) 8gb Corsair Dominator Asus P877V-LK
intel 335 180gb/intel 320 160gb WD 3TB Gigabyte GTX G1 970 XFX XXX750W HAF 932

AMD Optimization

in Account & Technical Support

Posted by: Insanityflea.4957

Insanityflea.4957

GW2 loves Intel chips. I had an FX-8350 but now I have an i7 4770K, and GW2 runs far better on the Intel. (Same GPU.)

AMD Optimization

in Account & Technical Support

Posted by: Behellagh.1468

Behellagh.1468

It's not love or hate; it's a matter of splitting similar amounts of work into multiple threads. If you have that, then six or eight cores help. Right now they've only divided the workload enough to need around three cores at most.

In that scenario the fewer faster Intel cores will win out over the more numerous but slower AMD cores in the FX.
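
A minimal sketch of that scenario. The per-core speed numbers (1.5 vs 1.0) are assumptions picked purely for illustration, not measurements; the point is only that throughput is per-core speed times the number of cores the game can actually keep busy.

```python
def game_throughput(per_core_speed, cores, busy_threads):
    """Relative throughput when the engine only keeps `busy_threads` busy."""
    return per_core_speed * min(cores, busy_threads)

# Hypothetical chips: 4 fast cores vs 8 slower cores, game using ~3 threads.
fast_quad = game_throughput(1.5, 4, busy_threads=3)   # -> 4.5
slow_octo = game_throughput(1.0, 8, busy_threads=3)   # -> 3.0

# With a fully threaded workload (8+ threads) the picture flips:
fast_quad_full = game_throughput(1.5, 4, busy_threads=8)  # -> 6.0
slow_octo_full = game_throughput(1.0, 8, busy_threads=8)  # -> 8.0
print(fast_quad, slow_octo, fast_quad_full, slow_octo_full)
```

With three busy threads the fast quad wins; only a workload that actually spans eight threads lets the extra cores pay off.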

We are heroes. This is what we do!

RIP City of Heroes

AMD Optimization

in Account & Technical Support

Posted by: Stormcrow.7513

Stormcrow.7513

BillFreist
Gameplay Programmer
Hello again!
I disappeared into my Bat-Cave again and finally emerged, and with some good news to share. I’ll start off with what exactly that is.
Since my last updates, we’ve been hard at work prepping some serious server-side optimizations to relieve the bottle-necks during heavy combat. We’ve made some large steps and are almost ready to start rolling out these changes. The first batch of changes have been in testing and we hope to have them start trickling in as soon as the release on Nov. 12th. We’ll know for sure once we’re closer to that date, and so will you.
I can’t stress enough how dangerous it is to optimize a live game. I know how upsetting it can be to be in the thick of a choppy battle, but things are going to be getting better, and soon. We’ve done some temporary things on the back-end to try to ease the influx of players in Wvw, but those changes only go so far.
On to answering some of the common questions.
A lot of you have noticed what seems like an increase in skill-lag in WvW since the beginning of Season 1. Really, this is just a large influx of players playing WvW, mostly on servers that usually don't have queues to get into the map. A lot of the higher tiered servers are pretty used to large battles running into this issue, but obviously that isn't a valid excuse for it happening. We've only increased the focus on relieving this since the season start, and rest assured, it's a top priority.
Some other common suggestions/questions are about a method other games used, known as "time-dilation" or "time-scaling". Well, for starters, this method is extremely risky. We've discussed this, among quite a few other alternatives, and it boiled down to causing more problems than it would solve. I know you might ask, "but the current experience is bad enough, how could it be worse?". Well, to be completely honest, it would just open another can of worms that would end up breaking the game and causing things much worse than a couple seconds of input delay. We opted instead to focus on fixing the issue by simply making the game run better, instead of sweeping it under the rug by watering down the experience. Internally we have the ability to slow down the time-scale of the game, and it just feels terrible. Not to mention it breaks key mechanics of the game, such as the physics simulation.
Some other assumptions I'm seeing are about our server hardware and inefficient communication between it and the game's database. I'll go ahead and put these assumptions to rest. Our server hardware was actually purchased new right around the launch of the game, so rest assured it's not outdated. And you can sleep at night knowing that our combat doesn't connect out to the database for information. All of the information it needs is already loaded into memory. The game database is simply for storing persistent character and account information, which is for the most part only accessed when loading in/out of a map or for periodic saves, which are handled asynchronously. I suggest checking out the links below for more information on our servers.
As far as skills just not executing (I've noticed some people claiming utilities are more susceptible to this), it's mostly just a race condition in processing on the server. You'll notice that your auto-attack skill seems to process more reliably than other skills. This is mostly due to the fact that we process things like auto-attack timers before player input. Obviously that sounds like a bug (and honestly, now that I think about it, I want to look into doing something about it), but the reality is that under normal circumstances the player input would process before the auto-attack timer triggers. Something you can try to verify this is disabling your auto-attack and seeing if your other skills become more responsive.
For you tech savvy folks, here are some good links to help understand our server infrastructure and some other pretty neat things: Link – Another Link!
Well, that’s it for today. I’ll be sure to update you all on our progress and when you can start looking for major improvements. Like I mentioned above, we’re pretty optimistic that this could be as early as our next release. Have a good weekend!
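
The auto-attack race the dev describes can be sketched in a few lines. Everything here (the class and field names, one action per tick) is hypothetical and made up for illustration; it only models the stated processing order: timer-driven skills first, queued player input second.

```python
class ServerPlayer:
    """Toy model of one player's skill processing on a single server tick."""

    def __init__(self):
        self.auto_timer_fired = False  # auto-attack timer elapsed this tick
        self.queued_skill = None       # player input waiting to be processed

    def tick(self):
        fired = []
        busy = False
        # 1) Timers are processed before player input (per the dev post).
        if self.auto_timer_fired:
            fired.append("auto-attack")
            busy = True
        # 2) Player input loses the race if something already fired.
        if self.queued_skill is not None and not busy:
            fired.append(self.queued_skill)
        self.auto_timer_fired = False
        self.queued_skill = None
        return fired

p = ServerPlayer()
p.queued_skill = "utility skill"
p.auto_timer_fired = True      # both arrive in the same tick...
print(p.tick())                # ...and the auto-attack wins: ['auto-attack']

p.queued_skill = "utility skill"
print(p.tick())                # auto-attack disabled: ['utility skill']
```

This also matches the suggested experiment: with auto-attack turned off, the queued skill no longer has anything to lose the race to.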

i7 3770k oc 4.5 H100i(push/pull) 8gb Corsair Dominator Asus P877V-LK
intel 335 180gb/intel 320 160gb WD 3TB Gigabyte GTX G1 970 XFX XXX750W HAF 932

AMD Optimization

in Account & Technical Support

Posted by: loseridoit.2756

loseridoit.2756

Hmmm, the skill lag is exactly what I thought it would be: a specialized race condition. Skill lag problems didn't become worse.

Stormcrow, can you post the link where you found it?

AMD Optimization

in Account & Technical Support

Posted by: SolarNova.1052

SolarNova.1052

Unless I missed it, that whole post is about server-side issues and lag. Nothing about client-side performance and FPS.

:(

3930k 4.6ghz | NH-D14 Cooler | P9x79 Pro MB | 16gb 1866mhz G.Skill | 128gb SSD + 2×500gb HDD
EVGA GTX 780 Classified w/ EK block | XSPC D5 Photon 270 Res/Pump | NexXxos Monsta 240 Rad
CM Storm Stryker case | Seasonic 1000W PSU | Asux Xonar D2X & Logitech Z5500 Sound system |

AMD Optimization

in Account & Technical Support

Posted by: Kavring.4763

Kavring.4763

After reading some threads about performance in GW2 I still can’t figure out why some people get FPS of 100+ and others are stuck with ~30.

First of all I would like your comments on my current PC and whether my FPS is in the normal range of what I can expect:
CPU: AMD FX-6100
MoBo: ASUS M5A78L-M
GPU: Frozr GTX-760
RAM: 16GB 1066mhz
Resolution: 1920×1080
… and a max of 30 FPS in GW2 with settings all on low!

I bought the GTX-760 recently because of the low FPS, and tbh I can't see any difference from the GT-650 I owned before.
I have now tried to OC the FX-6100 from 3.3 GHz to 4 GHz, which should give me a boost of 20%… in theory. But still max 30 FPS and 25 FPS average.

I am kind of clueless, since my GPU should be good mid-range, as should my CPU. I don't expect to play at ultra settings, but at least medium or lower settings with acceptable FPS. I appreciate any hints as to what the bottleneck might be.

Thanks in advance

AMD Optimization

in Account & Technical Support

Posted by: SolarNova.1052

SolarNova.1052

If your max is 30 in a PvE environment then it may be a software issue that's capping your FPS. Do you EVER get above 30 FPS? Is it capped at exactly 30 FPS? If so, then it's highly likely that somewhere on your rig you have a 30 FPS cap in place for GW2.

If not, then if you're getting 30 FPS even in PvE environments in GW2, I still think it's likely a software issue, or possibly a hardware one if you have your GPU in the wrong PCI-E slot, for example.

Low FPS in LA and WvW with an AMD CPU isn't abnormal, but if it happens in an empty PvE environment as well, I would say that is abnormal.

3930k 4.6ghz | NH-D14 Cooler | P9x79 Pro MB | 16gb 1866mhz G.Skill | 128gb SSD + 2×500gb HDD
EVGA GTX 780 Classified w/ EK block | XSPC D5 Photon 270 Res/Pump | NexXxos Monsta 240 Rad
CM Storm Stryker case | Seasonic 1000W PSU | Asux Xonar D2X & Logitech Z5500 Sound system |

AMD Optimization

in Account & Technical Support

Posted by: Kavring.4763

Kavring.4763

There was no limitation at 30 FPS. It just happened to never go above that value in LA.

I OC'ed my system, which seems to run amazingly stable so far. I kept the core speed at 4 GHz and raised the memory speed from 1066 to 1600(!). I also turned off Cool'n'Quiet.
20% more CPU speed + 50% more RAM speed.
That in total gave me an additional ~8-9 FPS.

I tested in LA with following result:
FPS 15-25
CPU % 80-90 (most utilized core)
GPU % ~25%

Queensdale:
FPS 50-60
CPU % 70-80 (most utilized core)
GPU % ~25%

So is this result what I may expect from my rig anyway?
Because I have seen some posts mentioning FPS values of 100+ in PvE and 30 in WvW (with better HW of course; and I would not ask if they had CPUs with double the GHz of mine, but apparently they don't, so it's not the pure power of the CPU that matters).

When I am in the middle of a zerg battle in WvW the FPS drops to about 0.3-1 FPS.
But there was a time when my system gave me ~10 FPS in big WvW battles. So something changed, and I am pretty sure I didn't change any HW settings on my rig.
The only changes I can remember were an Nvidia driver update and GW2 updates.
One thing or the other really destroyed my game experience.

(edited by Kavring.4763)

AMD Optimization

in Account & Technical Support

Posted by: XFlyingBeeX.2836

XFlyingBeeX.2836

This may improve FX performance:
Go to the BIOS,
set CPU cores to manual,
and disable cores 2, 4, 6, 8…

Then try it again…

Edit: I forgot, you will also be able to get a higher OC… 5 GHz+

(edited by XFlyingBeeX.2836)

AMD Optimization

in Account & Technical Support

Posted by: Aza.2105

Aza.2105

What Anet needs to do is rework shadows and reflections so they are GPU-dependent. Right now, both shadows and reflections seem to be drawn by the CPU. This causes a massive FPS hit.

I have two gaming machines: one is an i7-920 with a GeForce 470, the other is an AMD FX 8350 with a Radeon R9 280X. The Intel machine outperforms the 8350 by 10 FPS at equal settings. When shadows and reflections are disabled the AMD can pump out the same FPS.

Fixing shadows and reflections would be a boost for Intel and AMD users alike.

Amd Ryzen 1800x – Amd Fury X -64GB of ram
Windows 10

AMD Optimization

in Account & Technical Support

Posted by: XFlyingBeeX.2836

XFlyingBeeX.2836

The game is not optimized for AMD cores… And it never will be…

In Cinebench R15 my 3C/3T setup scores 305 points, the same as an i3 3*** with 2C/4T.
The game uses 85-99% of my CPU when I am in LA. I should be getting the same performance as an i3, or even better….

One core per module (4 modules, 4C/4T) may be faster than two cores per module on 2 modules (2C/4T)… You will also be able to OC higher because the cores will run cooler.

(edited by XFlyingBeeX.2836)

AMD Optimization

in Account & Technical Support

Posted by: sobe.4157

sobe.4157

One core per module (4 modules, 4C/4T) may be faster than two cores per module on 2 modules (2C/4T)… You will also be able to OC higher because the cores will run cooler.

I see you are trying to build on what I said, but modules are AMD's approach, not Intel's, which is quite different…. And cooling has little to do with overall OC'ing…

3770k 4.9ghz | Koolance 380i | NexXxoS XT45 | XSPC D5 Photon | ASUS MVFormula |
Mushkin Black 16gb 1600 | 500GB Samsung 840 Evo |2×2TB CavBlack| GALAX 980 SoC |
NZXT Switch 810 | Corsair HX850 | WooAudio WA7 Fireflies | Beyerdynamic T90

AMD Optimization

in Account & Technical Support

Posted by: XFlyingBeeX.2836

XFlyingBeeX.2836

With a Mugen 4, Macho HR-02, or True Spirit 120/140 you should be able to push over 5.1-5.2 GHz if you use only 1 core per module…

Of course, with at least 1.55-1.58V.
I didn't try it; it may require less Vcore for a higher OC.

Anyone with 3C/3T: BF3 MP64 with a min FPS of 50 – very impressive

(edited by XFlyingBeeX.2836)

AMD Optimization

in Account & Technical Support

Posted by: Kavring.4763

Kavring.4763

Any ideas why some systems outperform others in GW2 by a factor of 3 or more?
Is it really just the AMD vs Intel architecture?

At least I could get much better performance in WvW with C'n'Q disabled.
I have also run 3DMark with and without C'n'Q and it makes a huge difference.
Is there a way to enable/disable C'n'Q in Windows without restarting my PC?
I got a tool from Asus but it says my CPU does not support C'n'Q!
AMD seems to focus on GPUs only, since all download links for CPU drivers point to CCC. As far as I know there is a performance tab in CCC, but tbh I don't want that app installed on my PC.

AMD Optimization

in Account & Technical Support

Posted by: Aza.2105

Aza.2105

Here is a good quote I found describing the architectural differences between Intel cpus and Amd FX 8 core cpus. I feel the individual did a good job trying to make it very simple and visual.

A software setup with data fed in a mostly serial manner favors intel, because intel’s instruction execution protocol for their CPUs are 90% serial data…which means intel chips break down a serial stream of data faster (single threaded performance). AMD’s instruction execution protocol for their CPUs are setup to run parallel streams of data (heavily threaded performance), which most software out right now is not designed to feed data to the CPU in this manner. So, data being fed serially to a CPU designed to run parallel streams of executions is inefficient, and favors one designed for that type of data streaming.

For example…

Picture you’re at Wal-Mart (or where ever), and there are 8 checkout lanes open…the first lane has a line a mile long, and they will only allow 4 of the other 7 lanes to have a line 1 person long. It doesn’t make any sense right? For starters, they’re not even using all of the lanes available, and the ones they are, aren’t being utilized efficiently.

That’s what’s happening inside an AMD architecture FX8350 with current software…

With Intel chips right now…it’s more like the line at best buy…where you have 1 line a mile long, but the front person has 4 different cashiers to go to when they arrive at the front of the line.

So, having 1 line a mile long doesn’t slow them down, they’re designed that way…

However, once information is fed in a parallel manner to the CPU…AMD will have all 8 lanes at Wal-Mart open for business and the lines will be distributed equally with people (instructions for the CPU), but Intel will still have the Best buy type line with 4 people running a cash register…except that now there will be 4 or even 8 lines forming into that one line, which makes things slow down because they are not designed to execute like that.

I hope the analogy makes this very complicated architecture discussion make sense.

Amd Ryzen 1800x – Amd Fury X -64GB of ram
Windows 10

(edited by Aza.2105)

AMD Optimization

in Account & Technical Support

Posted by: sobe.4157

sobe.4157

Props to you, that is a rather well thought out analogy.

3770k 4.9ghz | Koolance 380i | NexXxoS XT45 | XSPC D5 Photon | ASUS MVFormula |
Mushkin Black 16gb 1600 | 500GB Samsung 840 Evo |2×2TB CavBlack| GALAX 980 SoC |
NZXT Switch 810 | Corsair HX850 | WooAudio WA7 Fireflies | Beyerdynamic T90

AMD Optimization

in Account & Technical Support

Posted by: XFlyingBeeX.2836

XFlyingBeeX.2836

Look, here is the thing:
AMD's 3 cores are better than Intel's 2….
Put it like this: an FX at 3.8 GHz with 3 cores = an i3-3240 at 3.4 GHz + hyperthreading. Same score in Cinebench.

That is all there is to AMD vs Intel.

Set 1 core per module… Bulldozer is faster with that… Vishera is not….

AMD Optimization

in Account & Technical Support

Posted by: XFlyingBeeX.2836

XFlyingBeeX.2836

Sobe.4157,
1 core per module is faster – total 3 cores…
Cinebench R15:
3C/3T = 310cb (FX @ 4.5 GHz)
2C/4T = 360cb (FX @ 4.5 GHz)

Core per core:
2C/4T = 90cb per core
3C/3T = 103cb per core
And because GW2 will only use about 3 cores at 85-95%, 3 cores will be faster by 15%.

And because of less heat on the CPU you will easily keep it under 60C at 1.55V ($30 air cooler).

DID U GET IT?
——————————————————
http://www.extremetech.com/computing/100583-analyzing-bulldozers-scaling-single-thread-performance
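
The per-core arithmetic above checks out; here it is in a few lines, using the Cinebench R15 scores quoted in the post (and taking the 360cb figure as a 4-thread score):

```python
# Cinebench R15 scores quoted above for an FX at 4.5 GHz.
score_3c3t, threads_3c3t = 310, 3   # 1 core per module, 3 modules
score_2c4t, threads_2c4t = 360, 4   # 2 cores per module, 2 modules

per_core_3c3t = score_3c3t / threads_3c3t   # ~103.3 cb per core
per_core_2c4t = score_2c4t / threads_2c4t   # 90.0 cb per core

# Per-core advantage of the 1-core-per-module setup:
advantage = per_core_3c3t / per_core_2c4t - 1.0   # ~0.148, i.e. ~15%
print(f"{per_core_3c3t:.1f} vs {per_core_2c4t:.1f} cb/core "
      f"({advantage:.0%} faster per core)")
```

So the quoted ~15% per-core gain follows directly from the two scores; whether GW2's three busy threads actually land one per module is up to the OS scheduler.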

(edited by XFlyingBeeX.2836)

AMD Optimization

in Account & Technical Support

Posted by: XFlyingBeeX.2836

XFlyingBeeX.2836

http://www.extremetech.com/wp-content/uploads/2011/10/CB-Scaling1.png

Look at the picture:
the highest score is the FX 8150 with 1 core per module, so 4C/4M.
The second score is the FX 8150 with all 8 cores up but only 4 threads (set in Cinebench).
The third score is the FX 8150 with 2 cores per module on 2 modules = 2C/4T.

I am sorry for that reaction. The FX 6100 (higher clock) is not the same as the FX 6300.
So do you understand now?

(edited by XFlyingBeeX.2836)

AMD Optimization

in Account & Technical Support

Posted by: sobe.4157

sobe.4157

I advise you to delete your posts so as not to add more useless spam to this thread than has already been added. I understand what I THINK you are trying to say about Turbo Core ratios, but that is not what you actually said… All we've gathered from your posts is that a 2-core Intel = the same score as a 3-core AMD.

3770k 4.9ghz | Koolance 380i | NexXxoS XT45 | XSPC D5 Photon | ASUS MVFormula |
Mushkin Black 16gb 1600 | 500GB Samsung 840 Evo |2×2TB CavBlack| GALAX 980 SoC |
NZXT Switch 810 | Corsair HX850 | WooAudio WA7 Fireflies | Beyerdynamic T90

AMD Optimization

in Account & Technical Support

Posted by: XFlyingBeeX.2836

XFlyingBeeX.2836

Yep.
The FX 4300 beats the i3….
The FX 6300 with all 6 cores under 100% load should do better than an i5 3***/2*** under 100% load…
There is no game that uses all 6 cores at 100% – maybe BF4 after Mantle support.
http://www.bf4blog.com/battlefield-4-retail-gpu-cpu-benchmarks/

That is unimportant… every game has its own engine and a lot of them run better on Intel…

AMD Optimization

in Account & Technical Support

Posted by: Behellagh.1468

Behellagh.1468

1st – Module is AMD’s term for their dual integer/single FP core unit found in their Bulldozer CPU Architecture.

2nd – It was AMD’s goal to make a module performance equal to an Intel i3/i7 hyperthreaded core when both are running two threads.

3rd – When an AMD module and Intel i3/i7 hyperthreaded core are running only one thread, Intel smokes AMD.

Why? It's simple, really. Hyperthreading on an Intel i3/i7 core is just a way to squeeze a bit more efficiency out of the underlying hardware. When you run one thread (ignoring any kind of turbo mode that ups the clock frequency) it can do 100 instructions in time X. However, when running two threads, due to improved efficiency it can do 120 instructions in time X, but on average only 60 instructions per thread.

AMD decided to approach the problem by using two integer cores to accomplish the same task. That is, when two threads are running it can do 60 instructions in time X per thread. So what happens when it has only one thread? It can do 60 instructions in time X per thread (actually it's more like 75, because the lone thread isn't sharing certain common resources in the module, but you get the point). EDIT: That's what that extremetech article is showing: not the turbo, but the improved performance when one thread per module doesn't have to share common resources within the module. That's one of Windows 7's FX patches: assigning one thread to each module first and only then starting to double them up, the same way i3/i7 cores get assigned one thread per core before being doubled up via hyperthreading.

AMD's module uses two slow cores in an attempt to get performance similar to Intel's hyperthreading when running two threads, but when running a single thread it uses one slow core, whereas Intel can devote nearly the full performance of its core to that single thread.

The Bulldozer architecture started its life when the i7-920 first came out. That was the performance target to beat. The problem is Intel didn't stand still, and with the -2xxx, -3xxx and now -4xxx they have stayed ahead of AMD in overall performance. The FX-8350 can keep up with an i7-2600K in some fully threaded benchmarks, but as soon as you aren't using 8 threads it falls behind. Ivy Bridge and Haswell improved over Sandy Bridge at the same clock speed, so AMD is even further behind now (by only 18 months or so). Bulldozer needs a feature-size reduction: it's still built on 32nm while Intel is at 22nm (which translates to faster clocks at lower power).
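
The instruction-throughput arithmetic in that post can be tabulated directly. The numbers are the post's own illustrative figures (in "instructions per time X"), not real measurements:

```python
def intel_per_thread(threads):
    """Hyperthreaded Intel core: 100 alone; 120 total shared by two threads."""
    return 100.0 if threads == 1 else 120.0 / threads

def amd_per_thread(threads):
    """Bulldozer module: two slow integer cores; a lone thread gains a bit
    (~75) from not sharing the module's common resources."""
    return 75.0 if threads == 1 else 60.0

for t in (1, 2):
    print(f"{t} thread(s): Intel {intel_per_thread(t):.0f}/thread, "
          f"AMD module {amd_per_thread(t):.0f}/thread")
# 1 thread: Intel 100 vs AMD 75 (Intel "smokes" AMD)
# 2 threads: Intel 60 vs AMD 60 (rough parity, as designed)
```

Which is exactly the post's conclusion: parity at two threads per core/module, a clear Intel lead at one, and a game that only spawns a few threads lives in the one-thread column.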

We are heroes. This is what we do!

RIP City of Heroes

(edited by Behellagh.1468)

AMD Optimization

in Account & Technical Support

Posted by: Aza.2105

Aza.2105

/long text.

In short, AMD FX CPUs have a totally different architecture and are being forced to try to run like an equivalent Intel CPU would. When programmers utilize all 8 cores on an FX CPU the difference is massive.

An example of that is Planetside 2. On live I average around 40 FPS. On the PTS I average around 90+ FPS. The PTS has many of the AMD optimizations present on it for testing. I don't believe SOE took the time to make their engine multi-threaded because they felt bad for AMD users; they did it simply because the PS4 is an AMD 8-core, and the PC version benefits directly from that.

I expect this to be a repeatable pattern in the future: more developers will begin to code for AMD's 8-core architecture, if only because consoles are a primary focus of their development. It's a big win for AMD, I'd say. I wouldn't be surprised if some time in the future Intel users start complaining about poor performance while AMD users do not.

Amd Ryzen 1800x – Amd Fury X -64GB of ram
Windows 10

AMD Optimization

in Account & Technical Support

Posted by: SolarNova.1052

SolarNova.1052

Indeed, future game development linked to consoles will help a lot with multi-threaded CPUs like the AMD FX8 series.

But I also personally like this because the minority of people who use Intel 6-core/12-thread i7s will also see a very nice boost.

3930k 4.6ghz | NH-D14 Cooler | P9x79 Pro MB | 16gb 1866mhz G.Skill | 128gb SSD + 2×500gb HDD
EVGA GTX 780 Classified w/ EK block | XSPC D5 Photon 270 Res/Pump | NexXxos Monsta 240 Rad
CM Storm Stryker case | Seasonic 1000W PSU | Asux Xonar D2X & Logitech Z5500 Sound system |

AMD Optimization

in Account & Technical Support

Posted by: XFlyingBeeX.2836

XFlyingBeeX.2836

Indeed, future game development linked to consoles will help a lot with multi-threaded CPUs like the AMD FX8 series.

But I also personally like this because the minority of people who use Intel 6-core/12-thread i7s will also see a very nice boost.

Your CPU will get a boost in some games that support AMD Mantle… And if GW2 supported 8 threads/cores…

AMD Optimization

in Account & Technical Support

Posted by: Corvi.3278

Corvi.3278

Here is a good quote I found describing the architectural differences between Intel cpus and Amd FX 8 core cpus. I feel the individual did a good job trying to make it very simple and visual.

A software setup with data fed in a mostly serial manner favors intel, because intel’s instruction execution protocol for their CPUs are 90% serial data…which means intel chips break down a serial stream of data faster (single threaded performance). AMD’s instruction execution protocol for their CPUs are setup to run parallel streams of data (heavily threaded performance), which most software out right now is not designed to feed data to the CPU in this manner. So, data being fed serially to a CPU designed to run parallel streams of executions is inefficient, and favors one designed for that type of data streaming.

For example…

Picture you’re at Wal-Mart (or where ever), and there are 8 checkout lanes open…the first lane has a line a mile long, and they will only allow 4 of the other 7 lanes to have a line 1 person long. It doesn’t make any sense right? For starters, they’re not even using all of the lanes available, and the ones they are, aren’t being utilized efficiently.

That’s what’s happening inside an AMD architecture FX8350 with current software…

With Intel chips right now…it’s more like the line at best buy…where you have 1 line a mile long, but the front person has 4 different cashiers to go to when they arrive at the front of the line.

So, having 1 line a mile long doesn’t slow them down, they’re designed that way…

However, once information is fed in a parallel manner to the CPU…AMD will have all 8 lanes at Wal-Mart open for business and the lines will be distributed equally with people (instructions for the CPU), but Intel will still have the Best buy type line with 4 people running a cash register…except that now there will be 4 or even 8 lines forming into that one line, which makes things slow down because they are not designed to execute like that.

I hope the analogy makes this very complicated architecture discussion make sense.

This is a very good way of explaining it, dude. Thank you for sharing this.

AMD Optimization

in Account & Technical Support

Posted by: Behellagh.1468

Behellagh.1468

Actually, that description is wrong in many ways.

We are heroes. This is what we do!

RIP City of Heroes

AMD Optimization

in Account & Technical Support

Posted by: Artos.4563

Artos.4563

I upgraded my computer with:

Intel 4930K (running at 4.5GHz, 6 core)
32GB PC-2400 DDR3
>>DUAL<< GTX 770’s (4GB each)

and now it runs all right, except it still crashes under zerg with animation set to high.

Hi,
I am having the same issue. I think the lag is due to 5 things: CPU, GPU, RAM, SSD and upload/download rate.

I had to set my settings to 'Best performance' and set everything else to the lowest. I play on an XPS 13. Not ideal, but I have no more lag in WvW zerging. I got fed up with the numerous updates and the amount of space the game was taking on my SSD. It also drives my laptop to extreme temperatures. I decided to quit the game just because of that.

It is sad because it is a great game. I might get back to it in 30 years, when I am retired and can afford a top-range gaming PC.

So long,

Artos