Short answer: your guild's decision is your guild's responsibility; sort it out with your guild.
Go to a lower-tier server or ask them to pay the cost.
aaaand another bump!
Any news?
We need that optimisation for 60 fps in all situations.
Use more GPU and less CPU. Thanks in advance.
Yes, it would be nice, but until a new engine is available it will not happen.
And you want more CPU threads for the game executable and the engine, so the GPU could be used to the max…. SO: using more CPU and GPU would be wiser…
And until a miracle happens and a new engine is made available and implemented for all areas of the game, nothing will change.
I would like to see improvement, BUT I'd prefer new Living Story seasons, expansions, the blocked legendaries, 4 (maybe 5) more dragons, the WvW implementations, and more FIRST.
Content > optimisation:
no content = no players;
no optimisation = the present situation, with players who want content, to be content…
Yes, but content that lacks optimization is not as enjoyable when it stutters or crashes. Just look at HoT at release: it was very good content, but it crashed 10 times per hour until they released the 64-bit client.
And if you look at the devs' answers in the past, they did not see a need for a 64-bit client.
It is not gospel, as the other devs said.
Do you know that with DirectX 12, when you make a pipeline state object for graphics, you can send 10 or more draws to the GPU as a single draw call?
Do you know that with the high-level APIs (DX 9/10/11), the driver handles hazards and validation, causing a lot of overhead (a CPU-bound issue)?
Instead of more threads, I would say better validation, and fewer hazards that need to be validated, will send the job to the GPU faster, instead of it sitting idle on the CPU waiting to be validated and resolved by the driver and the API.
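The two ideas above can be sketched in plain Python. This is an illustrative toy model, not the real D3D12 API, and every name in it is made up: a pipeline state object (PSO) is validated once at creation so individual draws pay no per-draw driver check, and draws sharing one PSO can be grouped so many objects reach the GPU as one (instanced) submission.

```python
# Toy model only (not real D3D12): contrast per-draw driver
# validation (DX9/11 style) with a PSO validated once at creation,
# plus grouping draws that share a PSO into a single submission.

class PipelineState:
    """Immutable state bundle; the expensive checks happen once here."""
    def __init__(self, shader, blend, depth):
        self.desc = (shader, blend, depth)
        self.validated = True  # stand-in for one-time driver validation

def validations_dx9_style(draws):
    """High-level API model: the driver re-validates state per draw."""
    return len(draws)  # one validation pass per draw call

def validations_dx12_style(pso, draws):
    """PSO model: state was validated at creation; draws just bind it."""
    assert pso.validated
    return 0

def batch_by_pso(draws):
    """Group draws by the PSO they use: each group can go to the GPU
    as one instanced call instead of many individual calls."""
    groups = {}
    for mesh, pso in draws:
        groups.setdefault(id(pso), []).append(mesh)
    return list(groups.values())

pso = PipelineState("lit.hlsl", "opaque", "less_equal")
draws = [(f"mesh{i}", pso) for i in range(10)]
print(validations_dx9_style(draws))        # 10 per-draw validations
print(validations_dx12_style(pso, draws))  # 0
print(len(batch_by_pso(draws)))            # 1 submission for 10 draws
```

The point is only where the validation cost is paid: per draw on the CPU under the old model, or once at PSO creation under the new one.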
@ Stephani Wise: (and others)
DirectX 12 =/= performance for older applications; DX 12 is completely the wrong thing to ask for. You need a new game engine, and it will (most likely) NOT be made.
This game's engine was written with CPU speed in mind. This is also why, at present, this game can run faster on a 4.6-5.0 GHz overclocked Pentium G3258 with 2 cores than on an i7 3930K hexcore at stock (3.2 GHz). The engine doesn't need the threads, just pure core speed. And as it was developed for DirectX 9, all DX 10(+) video cards tend to be overpowered for it.
Lastly, upgrading the engine would/could force players onto a new computer if their present setup isn't up to spec.
Guild Wars 2 consists of many parts, but in general this game has 3 main parts:
1) The program itself, gw2.exe: the use of the in-game objects, the interaction, movies, database, security, and 2D screens (like the character select and character build screens), I/O for map placement, and game controls like volume and so on.
You interact with this part; it also needs to process all network data and provide the engine with the data to build the screen.
2) The 3D engine (which creates your landscape from meshes or other graphic shapes and places 2D and 3D elements, objects, and characters): an old, extremely customized version of the Unreal Engine, originally DX 7.0/8.0 but modified for DX 9.0c. It uses a Havok add-on to create physics effects (waves, wind, knockbacks, the waving of armor and gear, and so on).
You look at this part; it builds your screen.
3) I/O with the databases and synchronisation of the game (network services).
The servers regulate your placement and interaction with others in the game, which requires network communications. You only use the small bits of data you need to interact with others, but this info is used by the game executable.
This is necessary to make your RPG into an MMO-RPG. Then there is the OS with DirectX, which has nothing to do with the game itself:
1) The OS makes the computer run and provides you with an interface to use it.
2) DirectX will allow programmers to write code which is universal and makes sure different components will function with the game… This is mostly hardware support.
The problem with the GW2 engine is this:
The game has problems constructing enough character models and effects in crowded areas, mostly because the engine was originally meant for up to 10-16 people. GW2 is now cramming 100 people into a map, and in WvW even 200; this overstresses the game and its communication with the servers, which in turn bottlenecks the game executable, which bottlenecks the engine. The GPU and the CPU have no real problems and still have power to spare, as seen by the fact that the game uses only 2 cores (likely 1 for the executable and the handling of I/O and 1 for the graphics) plus a few added support services (network services).
The problem is that the old graphics engine of the game CANNOT HANDLE "MULTI"-THREADED operation and/or communications, as that was never an issue when the engine was developed. (And we are talking quite a few years ago here.)
Timeline:
1998 (Unreal v1.0 optimized for 3dfx., various versions of Direct3d 6.0+)
2000 (directx 8.0)
2001 (32-bit single-core CPUs @ ~1-2 GHz, 256 MB-1 GB RAM, just so you know)
2002 (directx 9.0)
2002 (Unreal v 2.0 @ dx9)
2003 (64 bit CPU’s) (AMD)
2004 (directx 9.0c)
2004 (unreal v3.0 @ dx9.0c)
2005 (consumer dual-core CPUs)
2005 (release of GW1) I have no clue which engine they used, but my best guess would be a modified derivative of Unreal v2.0, though there was talk they changed a lot to fit a newer engine, which could have been a modded Unreal 3.0.
2006 (quad cores)
2007 (release of EotN)
2007 (windows vista, directx 10)
2008 (windows vista SP1, directx 10.1)
2009 (windows 7, directx 11)
2012 (windows 8, directx 11.1)
2012 (release of GW2)
2012 (Unreal v4.0, DX 11 & 12) <- you need an engine available to create your game; since GW2 already had a development track, Unreal 4.0 came too late. Actually GW2 was pretty late itself… and thus the previously used GW1 engine was stripped and upgraded.
2013 (windows 8.1, directx 11.2)
2015 (announced: directx 11.3)
2015 (windows 10, directx 12)
2015 (release of GW2:HoT) The problem is an engine upgrade would have been nice, but extremely costly… we should feel lucky: the sales of HoT were lower than expected, and it could have been a deathblow for the game….
2015 (Unreal 4.0 engine made available for free)…. so no initial buy, but use is still costly: a royalty on revenue. I would not want to pay that just to be able to use DX 12. Would you?
As multithreading isn't an option, and upgrading to allow for newer DirectX and multithreading is not possible, the game is only able to deliver 1 thread.
The question whether an upgrade from DirectX 9 to DirectX 12 is worthwhile is easily handled:
1) It's useless, and a waste of time and development at this time, to allow the use of DX 12.
2) The first change which should be made is a rebuild of a newer UNREAL engine into a new Guild Wars 2 engine, to allow for multithreaded CPU rendering and multithreaded GPU calls. This requires a complete rebuild of the heart of the Guild Wars 2 program, which might be worthwhile in the long run, but would be extremely costly: the changes are horrendously time-consuming, and the old engine would need to be reverse-engineered and the new engine rebuilt to allow for multithreaded use and DX 12.
One remark: a new engine like Unreal 4.0 is free to license (since 2015), however a percentage of all REVENUE needs to be paid when using it! For a company completely reliant on gem-store purchases this could be a bit much…. And even after modding it, it isn't instantly an MMO like GW2: it is an FPS engine.
Investments to make a new game engine will run into multiple, likely even dozens of, millions of dollars, even if the engine itself has no licensing cost. For this game that is a bit over the top… And however much you may want this change, it is most likely not happening. The devs have already stated it isn't that easy, and you should consider the fact that DX 12 is NO PART OF ANY GAME. It allows for easier use of available hardware, but in the case of GW2, all the hardware the game needs is already accessed through DirectX 9. The old engine cannot make use of more.
DirectX is used by programmers to let a program work on differently built computers, but having DirectX 12 on a PC doesn't allow a DirectX 9 program to work miracles. In effect, the new features DirectX 12 provides aren't compatible and/or necessary, and therefore aren't used; only the legacy DirectX 9 components will be used.
SO:
We have the game in 32- and 64-bit, which accesses a game engine to create graphics.
We have a really old game engine, updated, but still LEGACY, running on directx 9.0c.
We will not have an updated game engine anytime soon (well, likely, as the devs said so), and thus we will be locked into directx 9.
We might have access to directx 12 compatibility on computers 2-4 years old, allowing for partial directx 12 support, but not by default.
We might have access to full directx 12 on computers 1-2 years old, but not by default.
We have access to directx 12 software when running Windows 10, by default.
DirectX 12 only helps with the ways to access hardware; if the program doesn't use the features, any investment into DX12 compatibility is useless…
Even if we have DX12 in all three previous cases, the game is still DX9 due to its game engine, and it will stay directx 9.0… for now. The End. At least for now.
Would utilizing a VM to create 2 virtual cores (from the 6 physical / 12 logical ones in my PC) work to improve performance, and have the virtual machine handle the data streams?
"You say that this game was made with speed in mind, in the GHz of the processor. I do not agree with that, since multithreading and multiple (dual) cores were out when the game was released." Yes, for GW1 dual cores were just the new thing; many players still played on single-core, single-thread machines. The engine developed for GW1 was reused in GW2. The game itself now uses 1 thread, and the rendering queue uses 1 thread, for a total of 2 cores, with some services delegated to other cores when available. This also keeps the game compatible with i3 (laptop) processors, which are still dual-core in nature.
"The CPU heat problem was already known before that." Well yes, the limits of core speed were known at the time the devs made GW2; unfortunately the ancient engine is mostly limited to 1 core.
"I can understand that they said the new dual cores had communication issues and did not solve the speed issue, and that they continued to build it single-core." Yes, as at the time of GW1's release quad cores were only about to be introduced and machines were at 2 GHz; they were not available yet, and most people still ran 32-bit Windows on single-core machines.
"I don't think that they failed to plan ahead, to ask 'what if we need a multicore engine in the future', and to make the engine upgradeable, since the industry was moving in that direction when they were building GW2. And the CPU's communication with the GPU is the API." ArenaNet owns its own game engine, which was used for GW1 and GW2; buying a new one or making a new one would be a gigantic investment.
"That is what DX 12 will fix: the devs will write smarter code that will not have to stop at the API to be validated, since the devs will have validated it before it reaches the API. That is the improvement that DX 12 brings." DX 12 will not fix anything; you'll need a new engine first. On its own, DX12 will not provide better communication between 1 (!) CPU core, stuck there due to the old engine, and all the GPU cores.
"Less slowdown and fewer CPU-bound issues due to less work on the API. Right now the API needs to validate the code and direct it, the same as a cop (the API) directing traffic, where some cars stop to ask him if they are going to the right place, and he has to validate all the info and send them in the right direction. By removing that job from the API there is less slowdown, and having multiple CPU cores talk to a multi-core GPU also helps the flow of information." The CPU-bound problem will still exist, as the engine delivering the data still uses a MAXIMUM of 1 core; you would still use only 1, as the engine CANNOT make use of more. (My two old 780s have 2 x 2304 cores, and the game is using them, at 30-50%, but still.) 1 core to 4608 cores, either way, doesn't matter in the end, as long as there isn't a new engine.
"Of course the complexity for the devs is higher, and the responsibility will also be higher, since the API will not validate the devs' code; the devs will validate their own code beforehand, meaning more testing."
The devs will not need to "make DX12"; they will need to build a new engine first which can INTERACT WITH DX 12….. No new engine: no need for DX 12.
If they were to rework the engine, I'm pretty sure they'd add:
- DX 9/10/11/12 compatibility.
- Options to use 2, 4, 6, 8, 10, 12(+) cores.
- Maybe RAM caching for PCs with more than 8, 16, or 32 GB of RAM, or more…
- Improved SLI/CrossFire support.
- Options to customize the interface for 3840x2160/2400, 5760x1080/1200, and 11520x2160/2400 screens and VR sets.
Well that’s about it….
pax
Just a question: if you have data "cars" that all run at the same speed and never stop, is there a traffic jam?
Whether there is 1 lane or 4 lanes, if they do not stop, is there a traffic jam?
That is DX 12: data not stopping at the API, because the devs have validated where it is going; there is no more need to stop at the API to be validated. Yes, all cores communicate under DX 12, but the main factor, and what solves the CPU-bound issue, is that there is no traffic jam, since the work the API was doing before is now done by the devs before release.
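The traffic analogy can be put into numbers with a toy cost model. This is illustrative only; the unit costs are invented, and nothing here is the real API.

```python
# Toy model of the analogy above: each "car" is a GPU command, the
# "cop" is the driver/API. Under the old model every command stops
# for a CPU-side validation check; under the pre-validated (DX12)
# model, commands only pay the bare submission cost.

def cpu_time_to_submit(n_commands, validate_cost):
    """Total CPU time units to push n commands to the GPU, where each
    command costs 1 unit to submit plus `validate_cost` to check."""
    return n_commands * (1 + validate_cost)

print(cpu_time_to_submit(100, 3))  # 400: stop-and-check at the API
print(cpu_time_to_submit(100, 0))  # 100: pre-validated, no stopping
```

Same number of cars either way; the jam disappears only because nobody stops.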
Now, can the game use only 1 core and still use DX 12? That is totally another story.
One thing is sure: DX 12 is what solves the CPU-bound issue. If they ever decide to upgrade, DX 12 is the best pick.
I am talking about the overhead issue: pipeline state objects in graphics that reduce the overhead.
Direct3D 12 cuts out the metaphorical middle man. Game code can directly generate an arbitrary number of command lists, in parallel. It also controls when these are submitted to the GPU, which can happen with far less overhead than in previous concepts as they are already much closer to a format the hardware can consume directly. This does come with additional responsibilities for the game code, such as ensuring the availability of all resources used in a given command list during the entirety of its execution, but should in turn allow for much better parallel load balancing.
In DX12, state information is instead gathered in pipeline state objects. These are immutable once constructed, and thus allow the driver to check them and build their hardware representation only once when they are created. Using them is then ideally a simple matter of just copying the relevant description directly to the right hardware memory location before starting to draw.
Direct3D 12 will allow—and force—game developers to manually manage all resources used by their games more directly than before. While higher level APIs offer convenient views to resources such as textures, and may hide others like the GPU-native representation of command lists or storage for some types of state from developers completely, in D3D12 all of these can and must be managed by the developers.
This not only means direct control over which resources reside in which memory at all times, but also makes game programmers responsible for ensuring that all data is where the GPU needs it to be when it is accessed. The GPU and CPU have always acted independently from each other in an asynchronous manner, but the potential problems (e.g. so-called pipeline hazards) arising from this asynchronicity were handled by the driver in higher-level APIs.
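The "arbitrary number of command lists, in parallel" point from the quoted article can be sketched in plain Python. The names are invented and this mimics only the model, not the real D3D12 API:

```python
# Sketch of the D3D12 submission model described above: worker
# threads record command lists for slices of the scene in parallel,
# and the application (not the driver) chooses the submission order.
from concurrent.futures import ThreadPoolExecutor

def record_command_list(scene_slice):
    """Each worker independently encodes its slice into a list of
    GPU-ready commands (already close to the hardware format)."""
    return [("draw", obj) for obj in scene_slice]

def build_frame(scene, workers=4):
    slices = [scene[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        lists = list(pool.map(record_command_list, slices))
    # Submission: the app concatenates the lists in the order it wants.
    return [cmd for cl in lists for cmd in cl]

frame = build_frame([f"obj{i}" for i in range(8)])
print(len(frame))  # 8 draws, recorded across 4 threads
```

The extra responsibility the article mentions shows up here too: nothing checks that each slice's resources stay valid; in real D3D12 that is now the application's job.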
http://www.pcgamer.com/what-directx-12-means-for-gamers-and-developers/2/
Since you seem to like information and technical detail, you can also go read this:
https://software.intel.com/sites/default/files/managed/4a/38/Efficient-Rendering-with-DirectX-12-on-Intel-Graphics.pdf
(edited by stephanie wise.7841)
A better engine would simply mean that ANet is less restricted in the type of content and bling-bling they can introduce to the game. Furthermore, they could advertise GW2 as having improved graphics, since DX11 or 12 does have better graphical capability than DX9.
Currently, the HoT maps have way more bling-bling and stuff going on than the Tyria maps. Likewise, the desert borderland map (even excluding the blobs) has been reported by players as having performance issues.
ANet has to think about the performance issue if they want to continue adding more bling-bling to the game.
It doesn't have to be DX12; DX11 itself is a significant leap forward from DX9.
They do have people working on the engine, but performance enhancement isn't the priority. The engine is being improved to handle more game features, most likely for the next expansion.
Well, the thing is DX 11 does not solve the CPU-bound issue; DX 12 does. DX 12 is the big leap that was needed. If they really are going to upgrade, they should do it to DX 12, since DX 12 is what is going to be in use. DX 9 was in use in 2003; the game Age of Mythology was using it. Yes, people always want more content and more stuff. HoT needed 64-bit to run properly, and before that they saw no need for 64-bit, saying it would not improve the game. It seems that it does after all. Now it is the same scenario with DX 12. They plan more Living World and another expansion, so more bling-bling, like you said.
That is why I think optimizing the game for DX 12 would help it a lot: they could put more bling-bling into it, and it would run better (no fps drops). And since DX 12 is less fault-tolerant, the devs would need to do more testing to solve errors before releasing new content. That also means we should not have crashes, since the game would be tested to be bug-free before release, so users get the best experience. The game would probably stay in use longer, meaning we would enjoy playing it longer too. And since it removes some limitations for the devs, they can bring more bling-bling, and people like new bling-bling, more content, and wonderful graphic art. Is it worth it for ANet to invest in the success that GW2 is? I think so.
But people are reading articles on DX12 without understanding what it actually does and how it goes about impacting performance. They are also ignoring or writing off the game-engine dev who stated that the thing primarily limiting performance isn't where the game calls DirectX.
Don't forget the other devs said that what the first dev said is not gospel. I will also say that GW2 gaming reviews state that the game suffers from CPU-bound issues. You can also see it in game, in big zerg fights and big meta events where the fps drops. I will also say that DX 12 is very good for games that suffer from CPU-bound issues.
Dx12 is not some silver bullet that would automagically fix the game’s performance issues or let you get 120 fps if you drop a GTX 1080 into your system.
DX 12 is a very big improvement. Nope, it is not magical: people had to work to make DX 12 happen, and devs who code games to use DX 12 will have more work to do. DX 12 gives more control to the devs so they can write smarter code; it also means more responsibility and testing.
This game engine did not start as a First Person Shooter game engine where frames per second is all that matters. Those engines are very dependent on GPU performance and you can significantly improve performance with a faster GPU or by adopting a multithreading friendly, CPU optimized graphics API which will help on systems with lower performing CPUs. That’s why GPU cards use them for benchmarking and why those game engines, and 3D APIs, use GPU cards to benchmark against one another.
Actually, DX 12 helps more with games that are very complex, with many characters, games that suffer from CPU-bound issues, MMOs like GW2. As for a faster CPU or faster graphics card, that does not solve the issue. The issue is the API doing the communication between the CPU and GPU; that is why the CPU gets a big load while the GPU is waiting for work. A faster CPU or GPU will give minimal gains in that scenario, just like when there is a traffic jam on a bridge caused by a street-corner cop directing traffic: raising the speed limit on the bridge, or after it, will not solve the issue.
Whoa, 29 out of several million keys bought for the game. Great statistical significance there.
well most people do not know what is the difference between the dx version and what it means for the game in the first place. so to have a big number of people to vote on this with out knowing what it is about is kind like of hard. you would have to first explain that the games run on dx 9 what is the drawcall limit and limitation of dx 9. then do the same with dx 12 and what it would help the game with like the cpu bound issue. when there is lots of animation in big zerg fight or big meta event in the game. when the fps drops pretty low. some see dx 12 as something new that they might not be able to follow with their current hardware. but the free upgrade to windows 10 ends in 2 month from now and the only thing is the graphic card. you need one that is not more old then 6 years ago NVidia and asus have the list of compatible graphic card I have all ready posted this before.. and even if you have one that is to old it does not mean that you cannot upgrade it price of old graphic card from 6 year ago have drop a lot. will also say that pc are not eternal and eventually you will need to change your pc so your old xp or vista system will need a replacement in the near future. the only people that cannot upgrade to windows 10 right now seams to be people that have Samsung device Samsung are late on compatible driver. was on the news this week. for those old system they could still run the old version of the software for the time being. they all ready have a 32 bit and 64 bit client and are able to support both. and lets face it 32 bit will eventually go away. if you look in the past you add 16 bits system after 32 bit system and after 64 bit system. do you still see many 16 bits system on the market? the next one to come out will probably be 128 bit since it usually double. 64 bit is not so recent it was out in 2005 on windows xp pro system. 64 bit is all ready 11 years old. same for os the free upgrade to windows 10 is to have only 1 os to support in the future. 
also pc are slowing down in hardware improvement making component smaller and smaller to have more speed and power as reach the limit. most improvement that will come now is mostly software related(bug fix, smarter software, better way to do the same thing that we are doing, coding that require not as many line of code to do the same thing ). good example of that is dx 12, asms.js,html5. what it means for people with pc you can expect the hardware to not change much in the coming year and the software will evolve faster. so someone that will need a new system can expect to keep it for a longer time.
Also, for some other people technical stuff is like Chinese to them (an expression meaning a language they do not speak or understand), so you will probably never get them to try to understand this and vote on it.
(edited by stephanie wise.7841)
The posters who are all "We want DX12... blah blah" have no idea what they are talking about and are misdirecting from the root issue with the game.
We should be demanding that the game's control engine be moved from a single thread to 2+ threads.
That would solve 99% of this without even touching DX12.
Not really, since the overhead is caused by the API, and only DX12 solves that. Also, only DX12 uses multiple cores; DX11 still drives the GPU from one core and does not solve the overhead issue. Of course, using more than one core would help the game, but to do that it seems they need to add a couple of components to the engine and write the content code smarter so it uses multiple cores. So: more work, more testing.
How many games currently use, or have used, multiple cores? The number of cores used doesn't depend on which version of DX is used.
DirectX 11: your CPU communicates with the GPU one core to one core at a time. It is still a big boost over DirectX 9, where only one dedicated thread was allowed to talk to the GPU, but it's still only scratching the surface.
DirectX 12: every core can talk to the GPU at the same time.
http://www.littletinyfrogs.com/article/460524/DirectX_11_vs_DirectX_12_oversimplified
You also keep ignoring something that gets highlighted to you time after time;
Which brings us to GW2. GW2 does a lot of processing, and much of it is done on the main thread. That is also where its bottleneck tends to be: The main thread. There are conscious efforts in moving things off the main thread and onto other threads (every now and then a patch goes out that does just this), but due to how multi-threading works it’s a non-trivial thing that take a lot of effort to do. In a perfect world, we could say “Hey main thread, give the other threads some stuff to do if you’re too busy”, but sadly this is not that world.
As for DX9 and 32bit: Moving off of DX9 wouldn’t buy us a whole lot performance wise, as all interaction with DirectX is happening on the render thread, which is generally not the bottleneck. Moving from 32-bit to 64-bit also does not really buy us a lot performance-wise. There are some optimizations the compiler is able to do with 64-bit that it can’t do otherwise, but the actual FPS gain is minimal at best.
https://www.reddit.com/r/Guildwars2/comments/3ajnso/bad_optimalization_in_gw2/csdnn3n
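The dev quote above describes moving work off the main thread onto other threads. A minimal sketch of what that looks like, and why it only works for independent tasks, is below. This is an illustrative toy in Python, not ArenaNet engine code; `simulate_entity` is an invented stand-in for per-entity game logic.

```python
# Toy sketch of "moving things off the main thread" (illustrative only).
from concurrent.futures import ThreadPoolExecutor

def simulate_entity(entity_id):
    # Pretend per-entity update that depends on no other entity.
    return entity_id * 2

def main_thread_only(entities):
    # Everything computed serially on the "main thread".
    return [simulate_entity(e) for e in entities]

def with_workers(entities, workers=4):
    # The same updates handed to a worker pool; the main thread only
    # collects results. This is only safe because the tasks share no
    # mutable state -- untangling such sharing in an existing engine is
    # what makes retrofitting multithreading "non-trivial to very hard".
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(simulate_entity, entities))

entities = list(range(8))
assert main_thread_only(entities) == with_workers(entities)
```

Either path produces identical results; the pooled version just lets a multi-core CPU overlap the work, which is the whole point of offloading the main thread.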
Look, this is not new: in DX9 there is only one core. We are talking about optimizing the game for DX12's multiple cores.
In DirectX 9, the CPU (being single-core in those days) would talk to the GPU through the "main" thread.
DirectX 10 improved things a bit by allowing multiple cores to send jobs to the GPU. This was nice, but the pipeline to the GPU was still serialized; thus you still ended up with one CPU core talking to one GPU core.
Yes, thread 0. Look at the example of the threads in DX11 and DX12; this should explain all you need to know. Follow this link and look in the middle of the page at the picture with the threads: http://www.trustedreviews.com/opinions/directx-12-vs-directx-11-what-s-new
The posters who are all "We want DX12... blah blah" have no idea what they are talking about and are misdirecting from the root issue with the game.
We should be demanding that the game's control engine be moved from a single thread to 2+ threads.
That would solve 99% of this without even touching DX12.
Not really, since the overhead is caused by the API, and only DX12 solves that. Also, only DX12 uses multiple cores; DX11 still uses one core and does not solve the overhead issue. Of course, using more than one core would help the game, but to do that it seems they need to add a couple of components to the engine and write the content code smarter to use multiple cores. So: more work, more testing. To give an example: it is like a bridge (the CPU) with 4 lanes, but your car only knows how to use 1 lane, and on the other side of the bridge there is a cop (the API) directing traffic on the street corner. Cars stop and ask him, "Am I going to GPU City? Do I have the right direction?" The cop has to validate the info and tell them where they are going. Of course, if the cars knew where they were going in the first place and did not have to ask the cop, there would be a lot less overhead; and if all those data cars could use all 4 lanes of the bridge (the CPU), that is what DX12 brings to the table. But it means more work and testing by the devs, and more responsibility, because the cop (the API) will no longer direct lost data cars that do not know where they are going.
FPS issue magically goes away when you reduce the number of player models on screen.
Look, if you do not believe what we are telling you, here is what you can do: go into WvW at rush hour, when there are lots of zergs. Open the options panel and watch the FPS in the lower right corner, then move around the map. Watch the FPS when you are alone, when players or NPCs come near, and when a big zerg of 30+ comes over to fight another zerg of about the same size. Keep your eyes on the FPS and you will see that heavy animation really does drop it; when the animation moves away from you, it goes back up again.
And to answer your answer: no, it is not magic, it is the draw-call limit of DX9. You have to understand that when the CPU is the bottleneck, it cannot send jobs to the GPU fast enough to draw the graphics on screen, and that is when your FPS starts to drop. It is the same as taking your car onto a congested bridge: even if you raise the speed limit on the bridge (the CPU), it is jammed, so you will not go faster. There is a cop (the API) stopping cars on the other side of the bridge to regulate the traffic into GPU Town, which causes overhead, and the overhead causes the traffic jam on the bridge. Even with a sports car you will get stuck: you will reach the jam sooner, and you may clear the bridge a little faster once it opens up, but the jam itself is not solved.
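The draw-call argument in the last few posts can be made concrete with a toy cost model. All numbers below are invented for illustration (real DX9 driver costs vary); the point is the shape of the effect: a fixed per-submission overhead dominates when there are thousands of small draws, and batching many draws per submission (as DX12-style command lists allow) pays it far less often.

```python
# Toy cost model of per-draw-call CPU overhead (hypothetical numbers).
PER_CALL_OVERHEAD_US = 50   # pretend driver validation cost per submission
PER_DRAW_WORK_US = 5        # pretend cost of the draw work itself

def cpu_time_us(num_draws, draws_per_submission=1):
    # Each submission pays the fixed overhead once; the draw work is constant.
    submissions = -(-num_draws // draws_per_submission)  # ceiling division
    return submissions * PER_CALL_OVERHEAD_US + num_draws * PER_DRAW_WORK_US

unbatched = cpu_time_us(1000)                          # 1000 submissions
batched = cpu_time_us(1000, draws_per_submission=10)   # 100 submissions
assert batched < unbatched
print(unbatched, batched)  # 55000 vs 10000 microseconds in this model
```

In this model the same 1000 draws cost 55,000 µs of CPU time unbatched versus 10,000 µs batched, while the GPU-side work is unchanged; that is why the symptom shows up as a CPU bound with the GPU waiting.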
This thread keeps getting more replies, even though the developers have said GW2 won't ever get multithreading, a DX upgrade, or any kind of new engine.
They even crushed our hopes for a GW3 with better graphics and overall optimisation. (Note from the future: it's 2025, we still have DX9c, the game is still CPU-heavy, and it can't use the GPU or CPU to their full capacity.)
If GW3 uses the same crappy engine GW2 uses, I will not buy it. It's not 2004 anymore; it's time to retire old technology.
There won’t be GW3.
They will just keep updating GW2 with new expansions like HoT for example.
There won't be a GW 2.5 or anything… just more updates to GW2. And to update the game engine, they would need to rewrite almost all of the code, so we can just forget about it.
I sometimes dream about GW2 running with Unreal Engine 4. Many orgasms.
From what we know, there are 3 software components that need to be added to the game engine. What they modified in the engine to make GW2 possible is a state secret, so they might already have some of the needed components, but we do not know. And of course, for the content, each thread of code would need to be modified to tell it where it is going, when it will arrive, etc. After that it needs testing to see that everything works properly, fixing mistakes if not, testing again, and updating once it works. Yes, it is more work.
One dev has given word on this, and I have heard another say it is not gospel; for the moment they do not plan on doing it. It could be too soon for them. It is like 64-bit: they did not plan on it or see the advantage of it, but when the HoT expansion came out and the game was crashing a lot, the 64-bit client arrived and solved the crashing issue. So I think it solved the problem, and there were advantages to implementing it after all.
They plan another expansion and more Living World, and they have started merging servers in WvW. With many animations happening at the same time, the game suffers from DX9's limits, causing the FPS drops. So it is only a matter of time before they need to add something they did not plan for or did not see the advantage of.
So maybe they will change their minds; the future will tell.
Not really. I know some prominent Twitch streamers who don't play on 10, maybe due to streaming-software compatibility issues. Even here we have a weekly thread in Technical asking about Win 10 and this game.
Until we have a series of prominent games that show significant improvement with DX12, we aren't going to see a rush to upgrade.
Yeah, that's why I think more devs may get into Vulkan, since it can run on Windows 7 machines.
After just a year and a half, AMD appears to be sunsetting its “revolutionary” original Mantle gaming API as we know it, according to a blog post written by Raja Koduri, VP of Visual and Perceptual computing at AMD.
“…[I]f you are a developer interested in Mantle “1.0” functionality, we suggest that you focus your attention on DirectX 12 or GLnext,” Korduri writes.
Korduri said AMD will no longer release Mantle 1.0 as a public SDK as originally intended, which many will take to mean that that is the end of it as an alternative to DirectX12 and the new OpenGL standards.
You can read it here: http://www.pcworld.com/article/2891672/amds-mantle-10-is-dead-long-live-directx.html
And the problem with using Steam is that it only covers players who use that service and actually took the survey. Not to mention it's based on whatever they consider to be "active".
The survey only asks for permission, but at least that sample starts with people who game on PCs, rather than using every PC on the planet as the starting point. Since nearly all new PCs ship with 10, that skews the "adoption rate".
You say it is simply permission. Yes, the user gives permission, but there is a confidentiality agreement attached to it. It is like ArenaNet with your bug reports: the system info is collected, and they have it, but it is confidential. Did you not see people complain and worry that Windows 10 collects data? It is not only Windows 10; all systems do, and even the web collects info.

What is certain is that everyone on Windows 7 and up will eventually move to Windows 10. Even those who skip the free upgrade and want to stick with their old OS will eventually change PCs and buy a new one; the same goes for people with older OSes. Look at the OS market share and add all the Windows versions together: that will be Windows 10's worldwide market share in the future.

And yes, the Steam survey gives you a good idea of which OS gamers' PCs use. Gamers usually go with performance machines, and the preferred OS right now is Windows 10. Of course it brings DX12, which multithreads, solves the overhead issue, and gives your PC more graphics power under the hood if you have more than one graphics card on board (SLI or CrossFire), but better, since it works across capacities and brands. And all the new PCs with an Intel chip already have some HD graphics power apart from the Nvidia, AMD, or other discrete card that gamers add, since the FPS is higher on those cards for gaming.

Hardware is at a limit right now. The era of making CPUs smaller to get more speed (GHz) is pretty much over with the technology we use today; Intel laid off employees recently because of that. Moore's law is ending.

http://www.cnet.com/news/end-of-moores-law-its-not-just-about-physics/

Unless they find a new process or new materials to keep going smaller and faster, CPU evolution will stay where it is; look at the CPU heat issues above 4.3 GHz. That is why they came out with multicore in the first place. And yes, I can understand that ArenaNet did not follow that: when dual-core CPUs first appeared they had issues; it was new, and the communication between cores and the splitting and reassembling of tasks crippled the benefit of having 2 cores (you got minimal gain from 2 cores at that time). So many at that time stuck with the old way of adding more GHz to solve the problem. But multicore has since evolved; the communication and the process work better than at the start, and you now have quad cores, 4 cores per CPU, so the CPU can do 4 times more work without needing to go higher in GHz. It is just like having 4 employees on the same job getting it done faster than one; another example would be having 4 delivery cars on the road instead of one to make the deliveries go faster. Of course, it is more complex to write software for that hardware: where there used to be 1 place to send the data, there are now 4, so you need to program your data smarter.
Sorry, but using one website (e.g. Steam) to represent all PCs is wrong. All you record is those who took the time to complete the survey and who happen to use Steam. I'm pretty sure it's not representative of anything but Steam users.
Look, it is the Steam survey, and an article from Steam saying that Windows 10 is the most used OS on Steam, with steady growth of at least 1.37% per month and 2.96% last month.
I will also add what it means for devs to switch to DX12; the article is from 451 days ago:
http://www.pcgamesn.com/reality-check-what-developers-really-think-of-directx-12
It means more work, more risk if they make errors, and more responsibility.

Clearly, there are significant incentives from both the hardware and software perspective for a clean-slate, low-level approach to graphics API design, and the current engine ecosystem seems to allow for it. But what exactly does "low-level" entail? DirectX 12 brings a plethora of changes to the API landscape, but the primary efficiency gains which can be expected stem from three central topics:
•Work creation and submission
•Pipeline state management
•Asynchronicity in memory and resource management

All these changes are connected to some extent, and work well together to minimize overhead, but for clarity we'll try to look at each of them in turn, isolated from the rest.

Work Creation and Submission

The ultimate goal of a graphics API is for the program using it to assemble a set of instructions that the GPU can follow, usually called a command list. In higher-level APIs this is a rather abstract process, while it is performed much more directly in DirectX 12. The following figure compares the approaches of a 'classical' high-level API (DirectX 9), the first attempt at making parallelization possible in DirectX 11, and finally the more direct process in DirectX 12.
http://www.pcgamer.com/what-directx-12-means-for-gamers-and-developers/2/
What they are afraid of is the direct process in DirectX 12, because they will have to do a very good job on load balancing and testing, and also because it is new and there are not many experienced DX12 devs yet.
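The "work creation and submission" idea from the quoted article (several threads each recording their own command list, which is then submitted in a fixed order) can be mimicked in plain Python. This is a conceptual model only, not real D3D12 API calls; the command strings and thread counts are invented.

```python
# Toy model of DX12-style parallel command-list recording (not real D3D12).
import threading

def record_commands(thread_id, count):
    # Each thread builds its own private command list --
    # no locking is needed while recording.
    return [f"draw:{thread_id}:{i}" for i in range(count)]

def record_in_parallel(num_threads=4, draws_per_thread=3):
    lists = [None] * num_threads

    def worker(tid):
        lists[tid] = record_commands(tid, draws_per_thread)

    threads = [threading.Thread(target=worker, args=(t,)) for t in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # "Submit" by concatenating the lists in a fixed order, the way
    # finished command lists are queued to the GPU in a chosen order.
    return [cmd for command_list in lists for cmd in command_list]

print(record_in_parallel()[:3])
```

The recording happens in parallel, but because each thread writes only its own slot and submission concatenates by index, the final queue is deterministic; that separation of parallel recording from ordered submission is the gist of the figure the article describes.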
I don't really feel like getting into this again, as there's nothing left of the horse or any of its future generations. All I was pointing out is that you cannot take a survey conducted in a specific community and apply it across the board to everyone else.
Sorry, but as far as sample size, variability, and anything else go, the Steam community is by far the best viable sample for gathering statistics about PC gamers at large. Or do you really think any statistic in existence actually goes and checks everyone individually?
Most data in the world is gathered through samples, and if you can think of a better sample than Steam, I'd be freaking amazed.
What percentage of players use Steam? A better route would be to go with the adoption rate of Windows 10 among all PCs.
The only problem with that is that it would give you OS share for the world at large, not the percentage among gaming PCs. Think about it: many companies outside science or gaming do not need performance PCs. That data already exists as OS market share: https://www.netmarketshare.com/operating-system-market-share.aspx?qprid=10&qpcustomd=0
But for gaming statistics, Steam is, to our knowledge, the only place that collects such data, unless you know of another.
…
You still need to learn a few things, and please do it before you answer with another false fantasy theory.
- Multi-threading an existing single-threaded game engine isn't as easy as building one from the ground up, or as adding just 2 or 3 missing software components.
Words of an anet dev:
“Software engineers are forced to properly design their applications to work well in parallel. Doing this after the fact is usually on the range of non-trivial to very hard.”
"There are conscious efforts in moving things off the main thread and onto other threads, but due to how multi-threading works it's a non-trivial thing that take a lot of effort to do."

- GHz has everything to do with it. The bottleneck comes from the massive amount of calculations that need to be done at runtime.
- Why do you relate the API to the storage bandwidth bottleneck? Don't you realize how bad and noobish you sound? One thing is the draw-call overhead from the API, and something completely different is the bandwidth bottleneck of the main storage.

- The GW2 engine won't be multithreaded, because the real world is not Tyria. Magic doesn't exist in the real world, and a game engine cannot become multi-threaded with an ABRACADABRA. It requires a lot of time (probably months if not over a year), stopping the delivery of content, a big team dedicated to it and, essentially, a lot of money at the start and the willingness to lose a lot of it in the mid-to-long term.

The chances of it happening are very low to non-existent. So instead of so much "if the game engine were multithreaded…", enjoy the game as it is, and if you have performance issues, upgrade the potato specs you have and play with lowered settings.
Here, from the same discussion that you linked before: "CPU and system memory will always be the biggest bottlenecks. It's worth noting here there are certain things that can literally only run on a single CPU due to how some algorithms work (this goes for all games), and given what the ANet dev has stated, most of our issues are in how threading is currently being handled, which probably needs to be refactored (a massive undertaking)."
Do you agree that if there were no overhead, the CPU would not become the bottleneck it is now?
Do you agree that if the game used multithreading, you would have more lanes to send the data through?
Well, that is what DX12 will bring.
Telling people that CPU GHz will solve the issue is false, since it is the overhead that stops the CPU from sending jobs to the GPU quickly.
It is not only me saying it: Microsoft is saying it, Intel is saying it, and so are the devs asked in the link I posted before ("what do developers really think of DX12"). They all say it will solve the overhead and give devs better control and understanding of the code, and of how it works at a level they usually do not see. The negative point for devs: more work and more testing to make sure it works well.
As for your comment: I enjoy and like the game, when it works well. But you have to agree that the single thread and the DX9 limitations, with lots of overhead, bottleneck the CPU.
As for my potato specs: I run Windows 10 Pro 64-bit, an i7 (turbo mode up to 4.2 GHz), 32 GB, SSD (maximize mode).
Even if you could say you can overclock your system to get more GHz, for example to 4.6 GHz, it does not solve the problem, and I will not turn my system into a welding machine for a 0.4 GHz gain. Overclocking causes heat problems, and you burn out the CPU faster above 4.3 GHz. And like I said before, a RAID 0 array ("A RAID 0 array of n drives provides data read and write transfer rates up to n times higher than the individual drive rates, but with no data redundancy. As a result, RAID 0 is primarily used in applications that require high performance and are able to tolerate lower reliability, such as in scientific computing or computer gaming.")
with NCQ (a queue depth of 32 commands per drive, https://en.wikipedia.org/wiki/Native_Command_Queuing ) can run against a 3.6 GHz CPU, and the 3.6 GHz CPU can handle even more data than the storage sends, so adding 0.4 GHz does not help much.
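The RAID 0 claim quoted from Wikipedia is plain multiplication. As a sketch, with hypothetical drive rates and assuming best-case even striping:

```python
# Best-case RAID 0 aggregate transfer rate (hypothetical numbers, not
# measurements): n striped drives give up to n x the single-drive rate,
# at the cost of zero redundancy.
def raid0_peak_rate(num_drives, drive_rate_mb_s):
    # Reads/writes stripe evenly across all drives in the ideal case.
    return num_drives * drive_rate_mb_s

assert raid0_peak_rate(2, 150) == 300   # two 150 MB/s drives: up to 300 MB/s
assert raid0_peak_rate(1, 150) == 150   # one drive: no gain (and RAID 0
                                        # offers no redundancy at any size)
```

This is the "up to n times" best case; real throughput depends on stripe size, access pattern, and controller overhead.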
The CPU will still be waiting for the data and will still be bottlenecked by the overhead. And if I am having these problems with a good system, imagine the people who do not have a good one.
How do I know this? My old PC had that setup.
Adding more GHz does not solve the issue; you get minimal gain.
If you really are a dev and can't understand this:
Don't you realize how bad and noobish you sound?
When I run GW2 on my system, my FPS is around 65 to 104. When a zerg comes into view it drops to 58, and when I get into a zerg fight in WvW it drops to 19, and that is with a good system. It is not a hardware issue, since the hardware is there; the software optimization of the game is not there to use it. The hardware has evolved but the game software has not, apart from the 64-bit client, whose benefit they did not foresee and which only arrived after the launch of HoT. It helped a lot, since the game stopped crashing.
(edited by stephanie wise.7841)
Not quoting, because some of these replies are like a page from a dictionary.
DX12 will do nothing for this game in regards to performance. It will, however, update the graphics API to take advantage of some of DX12's newer features.
The game engine is the problem, specifically the fact that it uses a single control thread to handle all the calls. That is the ONLY bottleneck that ties GW2 to a single core in regards to performance.
Source – https://www.reddit.com/r/Guildwars2/comments/3ajnso/bad_optimalization_in_gw2/csdnn3n
DX12 cannot fix that issue.
Only a Game Engine update can.
AND, for the record here, Anet has already formally said on Reddit that "there are no plans to update the game engine to support DX12 or Mantle".
So all this is moot, unless you want to throw money at NCSoft in the hope that they will force Anet to redo the game engine and fix the bottleneck.
But with CPUs being as fast as they are, and the fact that you can get 40 FPS on a 4790K @ 4.6 GHz running a GTX 950 at 2560×1600 during all but 2 boss events, they simply do not need to do anything about the game engine "right now".
CPU GHz has nothing to do with it. Let me explain something to you: your software is stored on a hard drive or SSD, and the CPU asks for data that comes from that drive, which is always slower than the CPU. You can run RAID 0 (2 hard drives striped in parallel) with NCQ (32 queued commands per drive) against a 3.6 GHz CPU, and the CPU can take that and more, but it can't get more, because that is the maximum data rate the storage will deliver. So having more GHz only means the CPU will do the job faster once the data gets to it.

The real issue is the overhead, since it slows the data going through the CPU: the API needs to slow that traffic down and validate every transaction. That is why there is overhead in the first place. DX12 brings multithreading and removes that overhead, since the game devs will do that work before it reaches the API and will control the overhead themselves to make the game run as well as it can, with no issues.

Right now it is like a bridge called CPU that has more than one lane (multithreading), but the engine only knows how to use 1 lane and sends all its cars across the bridge in that single lane, and right after the bridge the API traffic cop slows the traffic down and validates it so that no accidents happen (overhead). That is why the GPU is left waiting for those delivery cars. If the engine were multithreaded and the game devs did a good job, cars could use all the lanes to cross the bridge, and the API cop would not need to validate and create overhead (or not as often), meaning no traffic stoppage on the CPU bridge.
Never mind, my $3000 PC doesn't support DX12, rofl.
Just make it DX11.
Windows 10 is free, I would upgrade if I were you.
The thing is, DX9 was a success since most developers used it; however, I hear it was a total PITA to code with, and that DX10 and 11 were even less dev-friendly. DX12 is supposed to be easier on the code monkeys, at least from what I've heard.
Also, the advantage of the jump from 9 to 12 is huge. DX10 gets you… what? And 11 gets you tessellation, which is OK, but you know…
DX12 is going to be the new standard, so the sooner they get on board the better…
But yeah, I don't see it happening. It's a huge undertaking that won't really yield new players; maybe a few would check it out just to see how it all works, if they got it done before it becomes standard issue, which won't happen.
We will have to wait for GW3 for that. And to be honest, they should do the WoW thing: just keep making the game you have that people like, rather than trying to reinvent the wheel. So if they see fit not to make a third game and just keep GW2 going forever, then DX12 would make sense.
I was always on Windows 10. DirectX 12 only works on very new cards, whether they’re running Windows 10 or not.
Intel: Intel Haswell (4th gen. Core) and Broadwell (5th gen. Core) processors
AMD: Radeon HD 7000-series graphics cards, Radeon HD 8000-series graphics cards, Radeon R7- and R9-series graphics cards, and the following APUs (which meld CPU and GPU on a single chip): AMD A4/A6/A8/A10-7000 APUs (codenamed “Kaveri”), AMD A6/A8/A10 PRO-7000 APUs (codenamed “Kaveri”), AMD E1/A4/A10 Micro-6000 APUs (codenamed “Mullins”), AMD E1/E2/A4/A6/A8-6000 APUs (codenamed “Beema”)
Nvidia: GeForce 600-, 700-, and 900-series graphics cards, GTX Titan series
Of particular note, Nvidia promised DirectX 12 compatibility for older graphics cards based off its Fermi GPUs—namely, the GeForce 400- and 500-series.
The GeForce 400 series is the 11th generation of Nvidia's GeForce graphics processing units and introduced the Fermi microarchitecture (GF-codenamed chips), named after the Italian physicist Enrico Fermi. The series was originally slated for production in November 2009 but, after a number of delays, launched on March 26, 2010, with availability following in April 2010.
Seven years, I think, is not all that "very new".
This game is very single thread locked in regards to performance. There is 1 master thread that controls everything in the game, and that IS the bottleneck and why DX12 will not do anything for the game in regards to performance.
This is not a graphical API issue, its not a worker thread issue, its the main control thread. And this was explained on Reddit some months back by an Anet Game Engine Dev.
Yes, single-threaded is DX9. DX12 is multi-core: all cores can talk to each other and send jobs to the GPU faster, and the dev gets more and deeper control over the part that used to be handled by the API. DX12 will speed up games that are single-threaded, have lots of detail and complexity, and are CPU-bound or CPU-heavy. But the important factor with DX12 is that it all depends on the game devs: the implementation is in their hands, to code the game, send the work through all the cores to the GPU, handle the memory, and so on.
To help you understand, if you did not read all the articles about CPU-heavy games: the problem is that there is too much computing on the CPU using one thread of one core, with overhead. The CPU fills up too quickly and reaches 100 percent while the GPU waits at about 30 or 40 percent to receive work. The difference with DX12 is that it uses all the cores and threads on the system; the overhead is removed and the API's control is handed to the game devs, who implement those interactions themselves, the same way console devs code for console hardware. Right now PCs have high-end graphics cards and multi-core, multi-threaded CPUs far more powerful than consoles, but because of the coding they perform like consoles, since console code is written better: the devs know exactly what hardware it will run on and do not have to let the API handle that part of the software for them as in DX9, DX10, and DX11. When devs do a good coding job in DX12, you get all those gains.
Even if you come back to the main single control thread over and over, that is normal: the game is a single-threaded DX9 title. In those days it was CPU GHz that gave the system its speed, and the new graphics card's API was the bridge, or the cop, that handled the traffic between CPU and GPU and the memory allocation. Now GHz has reached its limit and PCs use multiple cores and threads to gain speed; it is like splitting a job between cores and threads so it gets done faster, and they can all talk to each other while the dev decides, in code, where the work goes.
I hope this helps, because I have seen that line about the main control thread being the single master thread pass by often enough. It is normal: back then there was only one thread, so you coded all the software around one main thread. DX11 supports multiple threads but in practice still drives the GPU from one; DX12 uses all threads, they can talk to each other to do the job, and it is all in the hands of the game devs, so that single thread can be split across different threads. DX12 can solve the CPU-bound heaviness the game suffers from; that is the performance it brings to the game, if the game devs do a good job.
I hope that answers the part that many do not seem to get.
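One thing worth adding to the back-and-forth above: even if the main thread were split up, the gain has a hard ceiling given by Amdahl's law. If a fraction s of a frame's work must stay serial, the speedup can never exceed 1/s no matter how many cores are used. The formula is general; the serial fractions below are made-up examples, not measurements of GW2.

```python
# Amdahl's law: the speedup ceiling when only part of the work parallelizes.
def amdahl_speedup(serial_fraction, cores):
    # serial_fraction: share of the work that must run on one thread.
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# If half the frame must stay on one thread, 8 cores give less than 2x:
print(round(amdahl_speedup(0.5, 8), 2))   # 1.78 in this model
# If only 10% stays serial, the same 8 cores give about 4.7x:
print(round(amdahl_speedup(0.1, 8), 2))   # 4.71 in this model
```

This is why "just use more cores" and "just move work off the main thread" are two halves of the same problem: the cores only pay off to the extent that the serial fraction is actually reduced.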
One thing is the threads of the API, and another is the ones belonging to the game engine. In GW2 both are single-threaded, but the game engine's are the source of the performance issues. They are independent, separate chunks of software.
While it is true DX12 would bring multithreading, that is ONLY FOR THE API; it wouldn't solve the performance issues, as the game engine thread is still single-threaded and DX12's multithreading has no effect on the engine's behaviour. Hope this helps your brain finally understand why DX12 wouldn't solve the performance issues in GW2.
My brain already understands all that. The issue with the game engine is that it is not multithreaded; if it were, DX12 would solve the CPU-bound heaviness, and that is exactly what we are talking about. The game engine is missing the 2 or 3 software components needed to be multithreaded, as you can see in the page below with an example of a Havok engine doing parallel computing (multithreading). The thing is, people say it can't be done, and yet there are examples of how to do it. It was the same before with the 64-bit client: "it can't be done", and now it is done. Maybe it will get done, you never know. If it can solve the game's issues, let the devs do more with fewer limitations, and keep the game running longer, it could be a good long-term investment. They will make the decision; the future will tell.
This game is very single thread locked in regards to performance. There is 1 master thread that controls everything in the game, and that IS the bottleneck and why DX12 will not do anything for the game in regards to performance.
This is not a graphical API issue, its not a worker thread issue, its the main control thread. And this was explained on Reddit some months back by an Anet Game Engine Dev.
yes single thread is dx 9.
NO, the engine uses a single thread to prepare information to be passed on to the GPU by dx9, where it is calcualted by the GPU’s threadsdx 12 is multi core
NO dx12 is a programextension that can help implement I/O and basic OS capabilities for easier integration of 3rd party progrmas on an OS, as such it handles the ways information can be send to CPU core’s, GPU, audio, HID and more.
It works as a library so the game writers can write a progrma which can be played on any PC with sufficient capability, ( when the hardware compatibility is ok).All core can talk to each other and send the job more fast to the gpu.
- Yes, this is called HyperTransport, an exchange mechanism so one processor can use data from another core or CPU. It's completely DX12-independent and part of the CPU infrastructure (as it is made by the CPU manufacturer and needs to be implemented on motherboards for CPU-to-CPU communication).
- The speed to the GPU will not increase; speed is governed by bandwidth, which is governed by card bit-width, bus link speeds and bus width. The data transfer to the GPU stays the same, as DX12 doesn't change the maximum transfer rates over the bus. It could however improve multithreaded tasks and relieve the CPU bottleneck, but only if the game engine allows for this, which in the case of GW2 it doesn't.
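To make the bandwidth point concrete, here is a rough back-of-the-envelope calculation, assuming PCIe 3.0's commonly cited figures (8 GT/s per lane with 128b/130b encoding); this is a sketch of the arithmetic, not a measurement, and the function name is invented for illustration:

```python
# Approximate one-direction PCIe 3.0 bandwidth. These figures are the
# published link parameters: 8 GT/s per lane, 128b/130b encoding.
# The API in use (DX9 vs DX12) does not change this ceiling.

def pcie3_bandwidth_gb_s(lanes: int) -> float:
    """Approximate usable one-direction bandwidth in GB/s for `lanes` lanes."""
    payload_bits_per_lane = 8e9 * 128 / 130   # encoding overhead removed
    return lanes * payload_bits_per_lane / 8 / 1e9  # bits -> gigabytes

x16 = pcie3_bandwidth_gb_s(16)  # a typical GPU slot, roughly 15.75 GB/s
x4 = pcie3_bandwidth_gb_s(4)    # a quarter of the lanes, a quarter of it
```

Whatever the API, the bus ceiling stays put; DX12 can only change how efficiently the CPU fills that pipe.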
When using DX12 the dev can also have more and deeper control, on the part that was controlled by the API.
The dev makes a program and is already in full control over the program. Is he allowed more resources? Yes. But this is not really relevant on an older software platform or game engine.
DX12 will speed up games that are single-threaded, have lots of detail and complexity, and are CPU-bound or CPU-heavy.
NO, single-threaded engines have no benefit whatsoever from DX12; these engines must be completely rewritten to be able to use multiple cores for the graphics rendering thread.
But the important factor with DX12 is that it all depends on the game devs' work: the implementation is in their hands, to code the game, send the job through all the cores to the GPU, handle the memory, etc.
Yes, and if a program's engine cannot accept multithreading you can implement DX12 without rebuilding the engine from scratch, but it would not improve performance. The engine is the communication between the program and the DirectX components, which handle the access to the hardware responsible for building the frames you see on your PC. If it's single-threaded, it is single-threaded, even though DX12 gives it access to as many threads as the PC has available.
To help you understand, if you did not read all the articles that talk about CPU-heavy games: the problem is that there is too much computing on the CPU, using one thread of one core, with overhead. (= game engine limit, and not easily changed)
The CPU fills up too quickly and gets to 100 percent, while the GPU is waiting at about 30 or 40 percent to receive the job. (Correct, this is the symptom.)
The difference with DX12: it uses all cores and all threads on the system.
Problem is: even if DirectX 12 could be accessed, the game engine is only capable of filling one rendering thread. This would make the program run on DirectX 12 but still only access the one rendering thread, as the program's limitation, being the game (engine), cannot distribute data to more cores.
The overhead is removed; the API's control is given to the game devs for them to implement those interactions, the same as when a game dev codes for console gaming.
Which is very useful when you are writing a new game engine, but with a 15-year-old engine this causes some problems.
Right now PCs have high-end graphics cards and multi-core, multi-thread CPUs a lot more powerful than consoles, but because of the coding it is equivalent to a console, because console coding is done better: they know what hardware it will run on and do not have to let the API run that part of the software for them, like in DX9, DX10 and DX11. When devs do a good job of coding in DX12 you will get all those gains.
Again: the engine, so the original program, isn't capable of distributing the data to more cores.
Even if you come back to the main single control thread over and over, it is normal that the game is single-threaded: it is a DX9 API. It is normal; in those days it was the CPU GHz that gave the speed to the system, and the graphics API was the bridge, or the cop, that handled the traffic between CPU and GPU and the memory allocation. Now GHz has reached its limit and PCs use multiple cores and threads to gain speed. It is like splitting a job between the cores and threads so that it gets done faster; they can all talk together, and the dev decides where to send the job with coding.
Yes, this is the problem now… but 15 years ago, when this engine was coded for DX8 originally, this wasn't a problem.
Hope this helps you, because I have seen that line about the main control thread being a single master thread pass by often enough. It is normal; in those times there was only one thread, so you would code all the software to go by one main thread. DX11 supports multiple threads but still uses only one. DX12 uses all threads; they can talk to each other to do the job, and it is all in the hands of the game dev.
You still expect people to remake the whole engine; writing an average engine takes 2-5 years, for a dedicated company.
So that single thread can be split by the dev to use different threads, and DX12 can solve the CPU-bound heaviness the game suffers from. That is the performance it brings to the game, if the game devs do a good job.
No it cannot, without rewriting the whole game engine.
Hope it answers that part, that many do not seem to get.
It seems there are many seams, and I hope I have answered the questions which arise.
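The CPU-bound symptom described in this exchange (CPU pegged at 100% on one thread, GPU idling at 30-40%) can be put in a toy model. All the numbers below are invented for illustration; this only shows the shape of the argument, not real GW2 or Direct3D figures:

```python
# Toy model of a CPU-bound frame: one thread prepares every draw call,
# including per-call driver validation, while the GPU waits for work.
# All costs are made-up units, chosen only to illustrate the bottleneck.

DRAWS = 1000
PREPARE = 1.0      # CPU cost to build one draw call
VALIDATE = 1.5     # extra driver-side validation overhead per call
GPU_TIME = 600.0   # time the GPU actually needs to render the frame

def frame_time(threads: int, validate: float) -> float:
    cpu = DRAWS * (PREPARE + validate) / threads  # CPU-side submission time
    return max(cpu, GPU_TIME)  # the frame is gated by the slower side

# DX9-like: one submission thread plus heavy driver validation.
dx9_like = frame_time(threads=1, validate=VALIDATE)

# DX12-like: submission spread over 4 threads, validation mostly removed.
# The frame is now limited by the GPU itself, not the CPU.
dx12_like = frame_time(threads=4, validate=0.2)
```

Note the catch both sides of the argument agree on: the `threads=4` case only exists if the engine can actually feed four threads, which is exactly the disputed point about GW2's engine.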
- You have the program, in this case a game (gw2.exe). This controls access to the game engine, security, account access, access to the BLTP and so on, and allows for access to all image, sound and other data libraries used in-game.
- You have the game engine (the heavily modified GW1 engine, with the Havok physics engine). This controls the build-up of the world, the placement of objects and the management of objects in the world; it places the objects and tells the GPU the skins and options and how the game world should be built up. You are interacting in-game with the engine. It also places the information in a stream to the GPU: DirectX 9 does this in order, DX12 does this in parallel while retaining order. The problem is that if the engine is written for DX9 it cannot send this to different threads. The different processes can be different threads, though (read: the program, the engine and the I/O's). The engine is started by the program but is a program in its own right, as are the networking services and other I/O's (including the database(s) with sprites, textures and skins).
- You have the API layer (DirectX, Vulkan, OpenGL). This allows the input and output to all hardware, so implementation of the game can be simplified. It will take over some hardware control from the OS for direct access. It transfers data to the hardware and allows communication with the engine: you type, and DirectX sends this info to the engine. DirectX allows the use of any keyboard or mouse, or GPU, or CPU, audio, I/O and so on.
- You have the OS, which handles I/O and runs the software. It is the platform the program runs on, which handles the engine and allows the API to run, so this works together with the API. This is also why DirectX versions are often coupled with OSes nowadays.
- You have the hardware, which sets the physical limits of the game's capabilities (RAM, GPU type, CPUs, audio, LAN, HIDs), and which is inaccessible without an OS.
The speed to the GPU will not increase; speed is governed by bandwidth, which is governed by card bit-width, bus link speeds and bus width. The data transfer to the GPU stays the same, as DX12 doesn't change the maximum transfer rates over the bus. It could however improve multithreaded tasks and relieve the CPU bottleneck, but only if the game engine allows for this, which in the case of GW2 it doesn't.
It gains speed because of multithreading and the overhead being removed. Also, since the devs will write better code to replace the parts of the API that validate and control the load balancing, that will save CPU power and processing as well, since their code will do that job. In DX9, DX10 and DX11 devs have no access to that; DX12 brings direct control of that level. (The primary feature highlight for the new release of DirectX was the introduction of advanced low-level programming APIs for Direct3D 12 which can reduce driver overhead. Developers are now able to implement their own command lists and buffers to the GPU, allowing for more efficient resource utilisation through parallel computation. Lead developer Max McMullen stated that the main goal of Direct3D 12 is to achieve “console-level efficiency on phone, tablet and PC”.) And what tells you that GW2 does not? I have not read anywhere that the devs have tried to optimize GW2 for DX12, or tried to do something similar. Any source for this?
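The "implement their own command lists" idea quoted above can be sketched in plain Python: several worker threads record their own command lists independently, and the main thread submits them to one queue in a fixed order. This is an analogy for the DX12 submission model, not real Direct3D 12 code; all names are invented:

```python
# Sketch of DX12-style parallel command-list recording.
# Recording is embarrassingly parallel; submission stays ordered.
from concurrent.futures import ThreadPoolExecutor

def record_command_list(chunk_id, draws):
    """Each worker builds its own command list; no shared driver state."""
    return [f"chunk{chunk_id}:draw:{d}" for d in draws]

def render_frame(chunks):
    with ThreadPoolExecutor() as pool:
        # Recording happens in parallel across workers...
        lists = pool.map(record_command_list, range(len(chunks)), chunks)
        # ...but submission to the single GPU queue preserves chunk order.
        queue = []
        for command_list in lists:
            queue.extend(command_list)
        return queue

frame = render_frame([["a", "b"], ["c"], ["d", "e"]])
```

In DX9-style APIs, by contrast, both the recording and the ordering live on the one thread that talks to the driver, which is the whole argument of this thread.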
Even if DirectX 12 could be accessed, the game engine is only capable of filling one rendering thread. This would make the program run on DirectX 12 but still only access the one rendering thread, as the program's limitation, being the game (engine), cannot distribute data to more cores…
What is needed for the engine to be able to use more than one thread or core? Is it a coding-language limitation or a game-engine structure limitation (like it has only one X to use one thread, and you would need to add more X's to the engine to use more threads)? A game engine is a program, and you can add things to a program, I guess. Since you said it is the same as GW1 but modified: if they modified it once to add what they needed, they could probably do it again.
- You have the API layer (DirectX, Vulkan, OpenGL). This allows the input and output to all hardware, so implementation of the game can be simplified. It will take over some hardware control from the OS for direct access. It transfers data to the hardware and allows communication with the engine: you type, and DirectX sends this info to the engine. DirectX allows the use of any keyboard or mouse, or GPU, or CPU, audio, I/O and so on.
Yes, you are right, it is simplified so that the devs do not see what is happening behind the API's door. This is changing in DX12: devs have more control. Yes, often devs do not want to play with hardware; DX handles the hardware part. Depending on what programming language they use, sometimes devs will also have an abstraction layer keeping them in the dark about the processes happening under the layer. Also, some coding languages have limitations.
But the fact was: yes, it is normal that the thread is single, since the game uses DX9, which was in use when CPUs were single-core (at that time you could not tell the data to go to another core; there was only one). And yes, DX12 is multi-core (meaning you can send data, through coding, to multiple cores to do more work at once, meaning a speed gain). You can use more than one core, and it can solve the bottleneck of CPU-heavy software. Saying no, and that the game engine is not able to use more threads in its current state (using only one thread) with DX12, does not mean that DX12 does not support it or does not bring this improvement. It just means that the game engine would need an upgrade to be able to use multiple cores.
We already know that we are talking about a single-core game that needs an upgrade and that could benefit from DX12's improvements. DX12 will speed up games that are single-threaded, have lots of detail and complexity, and are CPU-bound or CPU-heavy.
NO, single-threaded engines have no benefit whatsoever from DX12; these engines must be completely rewritten to be able to use multiple cores for the graphics rendering thread.
So we agree that a single-core game with no optimization will not gain the improvement, but that if it has the optimization it will solve the bottleneck issue.
We are saying the same thing; it is just that we understand it differently. Most of the "no" becomes "yes" if the engine is optimized. We could spend days adding material that says the same thing, and the issue would always be that people do not understand what is said. And some complain that we talk too much about it, yet many repeat the same thing and often do not understand what they are saying or talking about, or they would not copy-paste things written like this ("that IS the bottleneck and why DX12 will not do anything for the game in regards to performance") without more explanation, since read like that it is absolutely false.
The key takeaway from all of this is section 2, “Parallel Execution State”. Designing systems for functional decomposition, coupled with data decomposition will deliver a good amount of parallelization and will also ensure scalability with future processors with an even larger amount of cores. Remember to use the state manager along with the messaging mechanism to keep all data in sync with only minimal synchronization overhead.
The observer design pattern is a function of the messaging mechanism and some time should be spent learning it so that the most efficient design possible can be implemented to address the needs of your engine. After all, it is the mechanism of communication between the different systems to synchronize all shared data.
Tasking plays an important role in proper load balancing. Following the tips in Appendix D will help you create an efficient task manager for your engine.
As you can see, designing a highly parallel engine is manageable by using clearly defined messaging and structure. Properly building parallelism into your game engine will give it significant performance gains on modern and all future processors.
https://software.intel.com/en-us/articles/designing-the-framework-of-a-parallel-game-engine/
Also, an example with the Havok engine.
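The "state manager plus messaging mechanism" design from the Intel article above can be sketched as an observer pattern: systems post change notifications cheaply during the parallel phase of the frame, and a change manager distributes them at a sync point. This is a minimal illustration of the article's idea; all class and topic names are invented:

```python
# Minimal observer/messaging sketch in the spirit of the Intel
# parallel-engine article: buffer changes during parallel execution,
# distribute them at a synchronization point.

class ChangeManager:
    def __init__(self):
        self.observers = {}   # topic -> list of callbacks
        self.queue = []       # buffered change notifications

    def subscribe(self, topic, callback):
        self.observers.setdefault(topic, []).append(callback)

    def post(self, topic, data):
        # Cheap during the parallel phase: just record the change.
        self.queue.append((topic, data))

    def distribute(self):
        # Run at a frame sync point, after parallel execution.
        for topic, data in self.queue:
            for callback in self.observers.get(topic, []):
                callback(data)
        self.queue.clear()

cm = ChangeManager()
seen = []
cm.subscribe("position", seen.append)   # e.g. the renderer listens
cm.post("position", (1, 2, 3))          # e.g. physics moved an object
cm.distribute()                         # renderer learns of it in sync
```

The point of buffering instead of notifying immediately is exactly the article's "minimal synchronization overhead": systems never call into each other mid-frame.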
(edited by stephanie wise.7841)
Sorry, but using one website (e.g. Steam) to represent all PCs is wrong. All you do is record those who took the time to complete the survey and who happen to use Steam. I'm pretty sure it's not representative of anything but Steam users.
Look, it is the Steam survey, and an article from Steam saying that Windows 10 is the most used on Steam, with a steady growth of minimum 1.37% per month, and 2.96% last month.
I will also add what it means for devs to switch to DX12; the article is from 451 days ago:
http://www.pcgamesn.com/reality-check-what-developers-really-think-of-directx-12
It means more work, and more risk if they make errors; it means more responsibility.
Clearly, there are significant incentives from both the hardware and software perspective for a clean slate, low-level approach to graphics API design, and the current engine ecosystem seems to allow for it. But what exactly does “low-level” entail? DirectX 12 brings a plethora of changes to the API landscape, but the primary efficiency gains which can be expected stem from three central topics:
•Work creation and submission
•Pipeline state management
•Asynchronicity in memory and resource management
All these changes are connected to some extent, and work well together to minimize overhead, but for clarity we'll try to look at each of them in turn, isolated from the rest.
Work Creation and Submission
The ultimate goal of a graphics API is for the program using it to assemble a set of instructions that the GPU can follow, usually called a command list. In higher-level APIs, this is a rather abstract process, while it is performed much more directly in DirectX 12. The following figure compares the approaches of a ‘classical’ high-level API (DirectX 9), the first attempt at making parallelization possible in DirectX 11, and finally the more direct process in DirectX 12.
http://www.pcgamer.com/what-directx-12-means-for-gamers-and-developers/2/
What they are afraid of is the direct process in DirectX 12, because they will have to do a very good job on the load balancing and testing; also because it is new, and there are not many experienced DX12 devs yet.
I don’t really feel like getting into this again, as there’s nothing left of the horse or of all its future generations. All I was pointing out was that you cannot take a survey conducted on a specific community and apply it across the board to everyone else.
I answered someone who made a statement about this, and if my memory is correct they had about 98% or 99% participation in the survey. Yes, it is another community of gamers, and from one community to the other there can be some variation. But you also have to understand that this number will grow, and not only in one community, since Windows users will upgrade to get the free upgrade to Windows 10 that ends on the 29th of July, and many are waiting until the last month to upgrade; what holds them back is fear that something goes wrong, since it is new. Windows 7 and up can upgrade, and it seems there are some loopholes for XP and Vista to upgrade as well. Also, PCs are not eternal, and some people will get a new system with Windows 10. It is like some people do not want to see that this is coming. The Windows part of Steam is 95% of their community, and Windows has always been the biggest part of the OS market across the world. We have to stop blinding ourselves to facts. Is this game CPU-heavy? Yes. Is it limited by DX9? Yes. Would it benefit from DX12? Yes. Would it help the game's performance? Yes. Would it help it last longer in the long term? Yes. Would it benefit Anet, GW2 and its community? Yes. I agree with you that we have probably all said what we could on the subject and done our research; now it all depends on Anet and the devs. We cannot do more than what we have done. They may decide when to ride the wave, if they decide to take it. From a business position, a better and longer-lasting game could be a good thing. As for those who do not want to see, who want to stay blind and always try to say the opposite of what the facts state: it is their problem and their choice. Even if they keep denying it, it is written everywhere; maybe one day they will open their eyes. I am also done. I gave sight to those that cannot see, and took away sight from those that claim they can see.
I never knew why my computer fans always blasted while playing GW2… Now I know why. Whenever I play any other games, my CPU does not really go that high, but whenever I play GW2, it uses almost 100% of my CPU prowess.
I thought I f-ed something up when I built my computer…. NOW I FEEL BETTER…
Yeah the game uses an outdated engine from gw1 with strong demands on your processor.
Yep, it needs optimization. The DX9 limit is what causes the FPS drop when there is too much animation; the game is very CPU-heavy and pretty light on the GPU. DX12 would drop the CPU and GPU load by up to 50% through less overhead, since there is no more API traffic cop. From what I have read so far, DX12 is the easiest optimization ever; the only problem is that it gives the devs more work. Like I have read in some forums about what it means for devs to go with DX12: they will have to get their fingers out. In place of the overhead and the API handling the traffic for the game, it will be for the devs to handle that part and decide the workflow, so the devs take the place of the API traffic cop and direct the traffic to make the game work as well as it should. It gives the devs direct access to that part, meaning there is more risk: they will have to be careful not to break through the layer, because there will be no traffic-cop API handling the traffic for them; the devs will be in control of that part. It means more testing on their side to make sure there are no bugs.
That is a good part of why they do not plan to optimize the game: more work. And since it is new and they do not have the experience yet, they are very reticent to take on the responsibility of being the traffic cop.
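The "devs become the traffic cop" point has a concrete meaning in low-level APIs: the driver no longer validates resource hazards, so the application must insert state transitions (barriers) itself or get undefined results. Here is a toy tracker of that duty; all names and states are invented for illustration, and this is not real Direct3D 12 code:

```python
# Toy model of manual hazard tracking in a low-level API:
# the app must transition a resource's state before using it,
# where a high-level driver would have caught the hazard for you.

class Resource:
    def __init__(self, name):
        self.name = name
        self.state = "COMMON"

class HazardError(RuntimeError):
    """Raised when a resource is used in the wrong state."""

def barrier(res, new_state):
    # The application explicitly transitions the resource.
    res.state = new_state

def read_as_texture(res):
    if res.state != "SHADER_READ":
        raise HazardError(f"{res.name} read while in state {res.state}")
    return f"sampled {res.name}"

def hazard_detected(res):
    try:
        read_as_texture(res)
        return False
    except HazardError:
        return True

rt = Resource("shadow_map")
barrier(rt, "RENDER_TARGET")   # GPU writes to it first
# Forgetting barrier(rt, "SHADER_READ") before sampling is now
# the developer's bug, not the driver's problem.
```

This is the trade the thread keeps circling: the validation overhead goes away precisely because responsibility for it moves to the game code.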
We understand that, Steph, but it's not an easy fix no matter what you have read about DX12 or even DX11. There are often design decisions made way back when a project is first started that severely hamper adopting new technology or design principles.
It has been said by the devs looking at and who have worked on the actual game engine code that it is not easy. Yet you do not believe them. They are hog tied by some of those decisions which at the time seemed perfectly fine to them so the core engine could get done quickly. NCSOFT once said that GW2 was coming in 2009-2010. That didn’t work out so well.
While what you suggest would benefit the player from ANet’s perspective it’s a matter of time, cost and if such a change is worth the time and cost. Will it bring in more players, today, if it was Dx12?
And if you check the Steam hardware survey you will see that users who have Win 10 AND a Dx12 compatible card isn’t as large as you think. Only 40% of players are running Win 10 and less than half of those are using a Dx12 compatible card. If less than 1 in 5 PC gamers can take advantage, that too factors into the decision to devote the manpower toward improving the engine.
Look, I know; I am a GW1 veteran. A lot of GW1 players were waiting for GW2 and thought it would be linked to the first game. It was said no, it won't, because GW2 would use a new engine. Also, a lot of GW1 players left because GW2 was taking too long to come out. Now the answer is that it cannot be done because GW2 is the same engine as GW1, with heavy modification.
As for Steam, I must not be reading the same article as you: the majority of their players are using Windows 10.
Windows 10 64 bit: 38.18% (+1.36%)
Windows 7 64 bit: 32.53% (-0.81%)
Windows 8.1 64 bit: 11.89% (-0.97%)
Windows 7: 7.01% (+0.04%)
Windows XP 32 bit: 2.03% (+0.04%)
Windows 8 64 bit: 1.60% (-0.04%)
Windows 10: 1.33% (+0.05%)
Windows 8.1: 0.34% (0.00%)
Windows Vista 32 bit: 0.22% (-0.01%)
Windows 8: 0.15% (+0.01%)
Windows Vista 64 bit: 0.11% (-0.01%)
http://store.steampowered.com/hwsurvey
Windows 10 is now the preferred OS for Steam users
https://www.mweb.co.za/games/view/tabid/4210/Article/25405/Windows-10-is-now-the-preferred-OS-for-Steam-users.aspx
As for DX12-compatible cards: the latest Steam statistics reveal that 56.35% of Steam users have NVIDIA GPUs, while 25.5% use an AMD GPU and 17.77% use Intel GPUs.
Most of those cards are compatible with DX12. For NVIDIA, the GT 430 and up: a very long list of DX12-compatible cards.
AMD:
•AMD Radeon R9 Series graphics
•AMD Radeon R7 Series graphics
•AMD Radeon R5 240 graphics
•AMD Radeon HD 8000 Series graphics for OEM systems (HD 8570 and up)
•AMD Radeon HD 8000M Series graphics for notebooks
•AMD Radeon HD 7000 Series graphics (HD 7730 and up)
•AMD Radeon HD 7000M Series graphics for notebooks (HD 7730M and up)
•AMD A4/A6/A8/A10-7000 Series APUs (codenamed “Kaveri”)
•AMD A6/A8/A10 PRO-7000 Series APUs (codenamed “Kaveri”)
•AMD E1/A4/A10 Micro-6000 Series APUs (codenamed “Mullins”)
•AMD E1/E2/A4/A6/A8-6000 Series APUs (codenamed “Beema”)
As for Intel graphics, it is in the PC's processor, and if you have two cards, DX12 can use both of them as one, whatever the brand or capacity; so your Intel integrated graphics power would be added to your DX12-compatible NVIDIA or AMD card.
And if you tell me that some people could be using only the Intel card on their system: Intel graphics is good for HD movies, but it lacks the FPS for games, and most gamers have an NVIDIA or AMD card. And if you do not have one, you can get one; those old NVIDIA GT 430 or old AMD cards are now cheaper than before, around $46, so it should not be a stopper even for beginner gamers who do not yet know that a dedicated card is needed for gaming FPS.
I have been wondering, did anyone measure the drawcalls of GW2?
Did not have to measure it: the game crashed 10 times in 2 hours on the last update they made, with the return of the Alpine Borderlands. In the error report, standing still at the entrance of EBG, the draw calls were at 1112 with nothing moving, so imagine how high it gets when you have 2 or 3 zergs of 30+ each doing battle. Each new animation is a draw call; add one draw call per character, then all the skills that they will each use, apart from all the animals, NPCs and surrounding environment objects that are not copies, which are more draw calls. At around 1000 draw calls with DX9, the FPS is already starting to drop. That is why people ask if they are going to optimize the game for DX12: it would help the game a lot and would also give the devs less limitation on their art. It could be good for everyone. I think a lot of us bought the game and the expansion, so they cannot say they do not have money to invest in the game. They plan to make more Living World and another expansion; the game being already heavy on the CPU and limited by DX9, it could be a good idea to optimize for DX12 before adding more stuff to the game. Anyway, they said they do not want or plan to do it. What do we know, we are only the players who buy the game.
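Why draw-call counts matter, in miniature: many identical objects drawn one call each can instead be grouped into one instanced call per distinct mesh, which is one of the batching techniques low-level APIs make cheaper. The scene below is an invented counting sketch, not a measurement of GW2:

```python
# Counting sketch: naive one-call-per-object vs. one instanced call
# per distinct mesh. The scene contents are invented for illustration.
from collections import Counter

scene = ["tree"] * 50 + ["player"] * 60 + ["rock"] * 30 + ["tree"] * 20

def naive_draw_calls(objects):
    return len(objects)              # one draw call per object

def instanced_draw_calls(objects):
    return len(Counter(objects))     # one draw call per distinct mesh

calls_before = naive_draw_calls(scene)
calls_after = instanced_draw_calls(scene)
```

Real zergs are worse than this sketch because players are not identical meshes, which is exactly why per-call CPU overhead, not GPU power, becomes the limit.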
In the whole of this thread I don't see a single major benefit DX12 would bring to GW2 being put forward as a reason Anet would want to do this, especially as it's still only available to a minority of Windows gamers, and the number of Win10 installations is only growing at a snail's pace, in spite of Microsoft resorting to spamming underhanded 'important' updates to Windows 7/8 users, attempting to coerce or trick them into 'upgrading' to a tablet OS.
A snail's pace? 300 million PCs in 10 months, and you will see that double or triple in the next 3 months, since the free upgrade ends on the 29th of July. It would solve the heavy CPU load this game suffers from, since DX12 uses all cores and threads in place of one thread of one core, removing the limit of DX9, which dates back to 2003 when PCs had only a single core. If 70% performance is not enough for you to see, I will tell you there is no one more blind than someone who does not want to see.
Dev: It can’t be done
Forum Person: pages and pages of how I feel I know better than you. /end
It is not that it can't be done; it is that they do not want to do it. I understand: it is more fun to make new content than to fix old stuff and stare at code for long periods. Nothing is impossible if there is a will to do something.
How about Anet just sticks to spending money on making the game actually fun to play?
People throw their hands up in the air and wonder why AAA gaming is such kitten these days. Probably because gamers kitten and moan about graphical fidelity like that's all that matters, and when they get just what they asked for, they only realize after the fact that they probably should have asked for a good game first.
What many people don't seem to understand is that this game is already heavy on the CPU, and adding more content and putting more people together just makes it heavier and heavier on the CPU. I understand people want more content and big events, and I am not against that; it is just that optimizing the game to solve the CPU-heavy issue could be better for the game in general, even more so if you want to add more stuff and bigger events.
I would like to explain the idea of your "twice as fast" 64-bit, but with smaller numbers.
You have a 4-bit and an 8-bit machine.
You need to do something with data:
you need to add 1 and 1…
For 4-bit the values would be 0001; for 8-bit, 0000 0001.
4-bit adds 0001 + 0001 = 0010 (1 clock)
8-bit adds 0000 0001 + 0000 0001 = 0000 0010 (1 clock)
Both get the same answer, 10, the binary for 2.
If the 8-bit machine were to enter 0001 0001, this would be 17 to the 8-bit CPU, and your system would fall apart.
With merging and decode:
- shift the value: 0000 0001 -> 0001 0000 (1 clock)
- add the new value 0000 0001: 0001 0000 + 0000 0001 = 0001 0001 (1 clock)
- 0001 0001 + 0001 0001 = 0010 0010, being 34….. (1 clock)
- extract the last 4 bits: (0000) 0010, leaving 0010 0000 (1 clock?)
- shift the value back: 0000 0010 (1 clock?)
It would seem handy to merge and decode the values, but this decoding takes more clocks than simply doing a second calculation, and in the end it would result in a speed loss: you could do the 2 separate calculations in 2 clocks, while the merged version needs at least 3 (provided steps 1 and 2, shift and add, can be done in 1 clock, and steps 4 and 5 in 1 clock) and more likely 5 clocks to merge the values and decode them again….
So: no, a 64-bit instruction is not 2 connected 32-bit instructions….
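The step counting above can be sketched in code. This is just a toy model (Python; the "clock" counts are illustrative, not real CPU timings) of why packing two 4-bit additions into one 8-bit word is not a free 2x speedup:

```python
def add_separate(a, b, c, d):
    """Two independent 4-bit additions: 2 steps, no decoding needed."""
    return (a + b) & 0xF, (c + d) & 0xF

def add_packed(a, b, c, d):
    """Pack both 4-bit operands into one 8-bit word, add once, unpack.

    Costs 5 steps instead of 2, and a carry out of the low nibble
    (e.g. 0xF + 0x1) would silently corrupt the high nibble.
    """
    x = (a << 4) | c          # step 1: merge the first operands
    y = (b << 4) | d          # step 2: merge the second operands
    s = (x + y) & 0xFF        # step 3: one 8-bit addition
    hi = (s >> 4) & 0xF       # step 4: extract the high nibble
    lo = s & 0xF              # step 5: extract the low nibble
    return hi, lo

assert add_separate(1, 1, 2, 3) == (2, 5)
assert add_packed(1, 1, 2, 3) == (2, 5)   # same answers, more steps
```

Both routes give the same results, but the packed route spends its saved addition on shifting and masking, which is the point being made.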
I tried to find the exact page that talked about this, but it was 11 years ago and I could not find it. It was from 2005, around the EM64T Intel 64-bit processors. It said that a 32-bit processor takes a chunk of 32 bits of memory and processes those 32 bits (100%); that a 64-bit processor without a 64-bit OS takes a 64-bit chunk and processes it 32/32 at the processor level (legacy mode), which it claimed is 150% the speed of a 32-bit processor; and that a 64-bit processor with a 64-bit OS takes a 64-bit chunk of memory, processes it as one 64-bit chunk, and is 200% the speed of the 32-bit process. But after 11 years the page could have been moved or removed; I am not able to find it any more.
The only thing I have found is this: "Larger physical address space in legacy mode"
When operating in legacy mode the AMD64 architecture supports Physical Address Extension (PAE) mode, as do most current x86 processors, but AMD64 extends PAE from 36 bits to an architectural limit of 52 bits of physical address. Any implementation therefore allows the same physical address limit as under long mode.
https://en.wikipedia.org/wiki/X86-64
It could be because of processor technology (PAE); I am not a chip maker and can't find the info any more, so we will leave it at that.
Storage solutions can be made incredibly large by using software RAID, but I was trying to tell you about the addressable physical limits of files, as the maximum storage size is not the same as the addressable file size.
OK, so for storage it would use a virtualization-style trick to multiply its basic addressable physical file limit, the same way virtualization lets you multiply the number of virtual PCs on one system. My older PC used RAID 0 with queuing: 2 drives working in parallel, each sending 32 queued requests per disk revolution to the processor. The only problem with RAID 0 is that it has no backup in case of disk failure. My other PC uses a hybrid RAID of solid state drives with a hard drive for caching; it is about equivalent, since an SSD is memory similar to RAM, so some processes are faster on one and some on the other. So I know about RAID solutions. But anyway, GW2 surely does not use over 8 TB of addressable physical file storage?

Yep, 32 GB of RAM is nice. What is less nice is when the fps drops because there is too much animation for DX9. You could have a PC with 256 GB of memory and 6 quad-core processors and the same would happen, since it is a DX9 limit. I understand that they want the game to work on every system, old and new. People with old systems have lower fps than people with new systems, so they lag even more than us, but even with a new system you experience lag. It is not because the technology is not there; it is because the implementation is not done. Before DX9, every gaming company would optimize their game for the newest version. They stopped doing that at DX10 because of the small benefit it offered compared to the cost and time. But now DX12 brings over 70% improvement compared to DX11 and solves the CPU bottleneck, so it would be good to optimize the game. Every player would benefit from it in the long run, and people that do not want to upgrade their systems could still use the DX9 version; that way it would still work on every system. Anyway, we are not the ones who make the decision. We will see what they do.
I agree with you that 64 bit fixes those problems too, but the game with the HoT expansion uses more memory. I have seen someone in game having issues. I asked: is your system and OS 64 bit, and did you try the 64-bit client? The answer was yes, problem still there. I asked how much memory the system had; the answer was 4 GB. I told the person to put more memory in the system, since 64 bit uses more than 4 GB of memory.
So if the 64-bit client with 4 GB of memory does not solve the issue, it means that the game uses more than 4 GB of memory, or needs the extra memory to work around the fragmentation problem you get with 32 bit. In either case you need more than 4 GB of memory.
Well, not quite true, as the system can rely on HDD swap space to make additional virtual RAM. Your performance will drop considerably, as HDD access and data reads and writes are significantly slower, but in theory this should be no problem.
Here’s an analogy. Think of memory as pages in a book and program data words on a page. Going from 32-bit to 64-bit is like having a book printed using a larger font. Same number of words but needs more pages to hold it.
The “get more than 4GB” question comes down to how an OS handles a limited resource like system memory and how it’s shared among all the software that is currently running, assuming we aren’t also talking about shifting from a 32-bit to a 64-bit version of the OS, i.e. that the OS could already handle more than 4 GB of system memory.
An Operating System’s (OS) primary purpose is to let one program think it has access to the entire resources of the PC. But since other programs are also running, the OS supervises, to the best of its ability, the sharing of common resources: which thread gets a chance to run, sharing of I/O (keyboard, graphics, disk) and memory.
Now, what happens when the programs you have running want more memory than you actually have in the system? The OS juggles which data is actually in system memory (those are the sticks of RAM, by the way) and which data is temporarily stored on disk, based on how frequently that data is accessed, and reads it back off the drive if it is needed again. It does this anyway even if you have plenty of memory, but it gets very aggressive if you are close to filling all of the system memory all the time. All this reading and writing of data onto the drive will significantly slow software down, to a virtual standstill in extreme cases. All that said, adding more system memory means this worst case is less likely to happen. And since the 64-bit client will use more memory (the “bigger font” analogy), you would encounter that worst case sooner than in the 32-bit client. It’s just that the 32-bit client has that OOM issue that, yes, crops up more frequently on HoT maps.
OK, just found the quoting system; that will make this easier. Yes, I know about 64 bit. If you have a 64-bit processor and not a 64-bit OS, you run in legacy mode,
taking chunks of 64 bits and processing them 32/32, which (per the page I mentioned) makes your system 150% the speed of a 32-bit system. If you have a 64-bit processor and a 64-bit OS, the process is 64 bit and you run at 200% the speed of a 32-bit system. 64 bit provides more memory and speed: over 4 GB of memory, and over 3 GB for speed, which are the limits of a 32-bit system.
And yes, the system usually goes to RAM (random access memory) for data that is in use.
That data stays in RAM as long as you do not reboot the system.
As for data temporarily on the disk, it is rather permanently on the disk, unless you are talking about swap space, where, when you do not have enough RAM on your system, the PC creates virtual memory on the disk. Totally agree with you: more memory is better, and faster hard drives are also better, since the slowest component in a system is always the hard drive and the system always runs at the speed of its slowest component. That is why fast systems usually use the fastest-RPM hard drives, or RAID 0 (hard drive x2) with queuing (x32), or SSDs (solid state drives), or a hybrid RAID of SSD with a hard drive doing the caching in maximize mode.
I was just saying that someone was having the issue with the 64-bit client, but his system had only 4 GB of memory. So basically he needed more memory; that was his problem. For the game with the 64-bit client you need more than 4 GB of memory, I would say at least 8 GB as a minimum, since the system always has some processes running in the background apart from the game, and since a 64-bit system running a 64-bit program will use more than 4 GB of RAM. 64 bit can use up to 128 GB of RAM or more, depending on your OS and system limits. Sorry again about the quoting system; I had not seen it.
The 64-bit client was released to fix OutOfMemory errors caused by the GW2 32-bit app trying to address more than its maximum 4 GB of addressable RAM.
64-bit memory addresses could reach up to 16 exabytes of data, which would be a bit overkill, so 64-bit machines use a lower number of bits to address memory space: 48 bits, to be exact, in the AMD64 instruction set, which is standard nowadays on both AMD and Intel and allows machines to address 256 TB of memory. Until Windows 8.1 this wasn't even fully supported by Microsoft, and the addresses used limited the user of the OS to 8 TB of addressable space. But tbh, at 32 GB I run a fairly powerful PC; I do not use a swap drive unless I need to video-edit.
So XP 64, Vista 64, Windows 7 64-bit and Windows 8 64-bit were all limited to 8 TB of actual addressable space.
And this was fast becoming a problem due to HDDs growing bigger and bigger, with 6 TB HDDs on the market, which is likely one reason for Microsoft to push Windows 10. Problems would arise when files bigger than 8 TB were stored on a drive. Now consider how often you use files of that size: 8 TB is a number only relevant for 4K video editing. In most other contexts it's beyond huge.
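The address-space figures in this post are straightforward powers of two; a quick sanity check:

```python
# Address-space sizes implied by the bit widths discussed above.
GiB = 2 ** 30
TiB = 2 ** 40
EiB = 2 ** 60

assert 2 ** 32 == 4 * GiB    # 32-bit addresses: 4 GiB
assert 2 ** 48 == 256 * TiB  # AMD64's 48-bit virtual addresses: 256 TiB
assert 2 ** 64 == 16 * EiB   # full 64-bit addresses: 16 EiB
```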
Speedgain?
No there is none…..
For 32-bit instructions on 64 bit: instead of one 64-bit instruction being processed, one 32-bit instruction is entered, and if you look at it deeply you'll gain nothing from 64 bit except some calculation precision and the ability to work with bigger numbers. In effect it could be slower, as the addresses and data packets are longer, causing more data to be moved through all the I/O, pipelines and so on, but on a full 64-bit machine this is mostly unnoticeable. Using 64-bit data on a 32-bit computer is also possible, but it will be slower (about 3 clocks for a 64-bit argument, instead of 1 for a 32-bit one), and the 32-bit computer cannot handle direct 64-bit addressing. In the end the same precision can be acquired, only with a time loss due to the inefficient workaround needed to handle data values that are long for 32 bit.
This would be the same when computing 128-bit values on a 64-bit PC. Speed gain comes from raising transfer rates (speed) or adding more components: parallel computing, multitasking, or pooling resources.
2 CPUs will be faster when the multithreading of a program allows it; 4 better still, 8 or 16 better as well. The problem becomes the size of the needed components.
2 hard disks, when used in tandem, can both add speed (so RAID 0 is a speed advantage when reading and writing, RAID 1 only when reading, as both disks store the same, mirrored, data).
2 network cables, when you can combine the data rate. But between 32 and 64 bit there is no difference in speed.
They do differ in the maximum size of numbers and calculating precision.
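The "about 3 clocks for a 64-bit argument" workaround mentioned above is essentially multi-word arithmetic. Here is a minimal sketch (Python; the function name is mine, and real compilers emit add-with-carry instructions rather than anything like this) of adding two 64-bit values using only 32-bit pieces plus a carry:

```python
MASK32 = 0xFFFFFFFF

def add64_on_32bit(a, b):
    """Add two 64-bit values using only 32-bit-wide operations:
    add the low halves, then propagate the carry into the high halves."""
    lo = (a & MASK32) + (b & MASK32)               # low 32-bit add, may carry
    carry = lo >> 32                               # 0 or 1
    hi = ((a >> 32) + (b >> 32) + carry) & MASK32  # high add plus the carry
    return (hi << 32) | (lo & MASK32)

assert add64_on_32bit(2 ** 32 - 1, 1) == 2 ** 32   # the carry propagates
assert add64_on_32bit(123, 456) == 579
```

Three dependent steps (low add, carry, high add) instead of one wide add: same answer, just more work, which is the point being made about precision without speed.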
For Windows 8, it can address more than that. Each storage pool on the ReFS filesystem is limited to 4 PB (4096 TB), but there are no limits on the total number of storage pools or the number of storage spaces within a pool. Windows 8 comes with Storage Spaces. https://en.wikipedia.org/wiki/Features_new_to_Windows_8#Storage
64 bit at the processor level takes chunks of 64 bits of memory, 32 bit chunks of 32 bits. So processing 64 bits moves double the memory compared to 32 bits; multiply that by the GHz and the cores and threads, and I think there is a speed benefit.
There is so much wrong with what you just said it’s clear now that there is no way to educate you about the factors involved with 64-bit code. Believe what you believe. It’s wrong but whatever.
What is wrong? And who said that I need educating? Maybe you need some educating? Nothing I said is wrong. It is like your comment that DX12 is not backward compatible with older games using older DX versions: the DLLs for DX 9, 10 and 11 are on the PC, of course they are. DLL means dynamic-link library, and yes, they are in the system; it proves that it is backward compatible, just like when you update a driver to the latest version the old versions are still in the system and you can use them or roll back to them.

Why is it that each time someone says something, people say they are wrong, or that they need educating? Did you verify the info before making such a comment? You are not even able to say what I am wrong about, because I am not wrong. It is not a question of belief on my part; I am telling you the truth. My old system had a 64-bit processor and no 64-bit OS was out yet, so it was working in legacy mode. You think I did not research and educate myself about this to know how to set it up 11 years ago? Intel talked about it, and Microsoft too, at that time. Do you know more than Intel and Microsoft, to make such a comment? I started on PCs in 1984; at that time you had to write your own software to make game sounds or images, using BASIC or Lotus. Everything was done by typing. But of course everything I say is wrong to you. Doh.
First, @stephanie wise, learn to use the quote system so it's clear what is a quote from another poster and what's your comment. Walls of text are confusing.
To behellagh.1468, you said: Yes and no about the game using more memory. From a layperson's view it certainly appears to use more memory, but a portion of that is because a type of data called pointers, which are memory addresses, are now double the size, and it's the nature of Object-Oriented Programming that it uses tons of pointers.
What 64-bit did was prevent Out Of Memory errors by not being as concerned with memory fragmentation. I call it “swiss cheesing” (yes, Quantum Leap reference) of the memory pool, where you have chunks of unused memory available but the chunks are too small to satisfy the program’s request for a block of memory, and due to 32 bits the memory pool can’t be extended anymore. It’s a natural result of software that requests and releases memory frequently and/or runs for a long time. That’s all 64-bit bought: a memory pool that is substantially larger, where a failed memory request is virtually impossible. Now, whether ANet upped any internal limits on how much of what data type they can keep track of in the 64 vs the 32 bit version, I don’t know.
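The "swiss cheesing" failure mode is easy to illustrate. A tiny sketch with made-up hole sizes: the pool has more than enough free memory in total, yet the allocation still fails because no single hole is big enough, which is exactly the 32-bit OOM scenario described:

```python
free_holes = [12, 40, 8, 25, 30]  # sizes of the free chunks in the pool
request = 64                      # block the program asks for

total_free = sum(free_holes)      # 115 units free in total...
largest_hole = max(free_holes)    # ...but the biggest contiguous hole is 40

assert total_free >= request      # enough memory overall
assert largest_hole < request     # yet this request cannot be satisfied
```

A 64-bit address space does not remove fragmentation; it just makes the pool so large that a request failing this way becomes vanishingly rare.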
As for those who say I talk rubbish: I don't, you just cannot understand the relevance of what I say. You have much more to learn. As for the UEFI comment to pax the great one: it was about his comment about having upgraded from Vista up to Windows 10, wondering why DX12 is not available on all OSes, and why, when you buy a new PC, they send you the cheap, low-end graphics card model. As for the comment of Elden Arnaas.4870: Windows 10 is a free upgrade until July 29th. If you do not upgrade before that date, then you will have to buy the Windows 10 OS, or a new computer with Windows 10 on it. It is not that they will make you pay for what was free, like you seem to think. I agree with you on the part about old systems needing some investment to be kept current. You complain about articles that provide information; is it because you cannot understand the relevance?

And as for what you said before: I agree it is not as simple as that to make a program multithreaded. There are functions that need to be properly designed and tested by the devs, but that is part of a dev's job; that is why I did not mention it, unless you want me to give you around 100 pages on the subject, since there are many programming languages the software could use. Just to give you an example of what we are talking about: Many programming languages support threading in some capacity. Many implementations of C and C++ support threading, and provide access to the native threading APIs of the operating system. Some higher-level (and usually cross-platform) programming languages, such as Java, Python, and .NET Framework languages, expose threading to developers while abstracting the platform-specific differences in threading implementations in the runtime. Several other programming languages also try to abstract the concept of concurrency and threading from the developer fully (Cilk, OpenMP, Message Passing Interface (MPI)).
Some languages are designed for sequential parallelism instead (especially using GPUs), without requiring concurrency or threads (Ateji PX, CUDA).
A few interpreted programming languages have implementations (e.g., Ruby MRI for Ruby, CPython for Python) which support threading and concurrency but not parallel execution of threads, due to a global interpreter lock (GIL). The GIL is a mutual exclusion lock held by the interpreter that can prevent the interpreter from simultaneously interpreting the applications code on two or more threads at once, which effectively limits the parallelism on multiple core systems. This limits performance mostly for processor-bound threads, which require the processor, and not much for I/O-bound or network-bound ones.
Other implementations of interpreted programming languages, such as Tcl using the Thread extension, avoid the GIL limit by using an Apartment model where data and code must be explicitly “shared” between threads. In Tcl each thread has one or more interpreters.
Event-driven programming hardware description languages such as Verilog have a different threading model that supports extremely large numbers of threads (for modeling hardware).
https://en.wikipedia.org/wiki/Thread_(computing)#Multithreading
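As a concrete instance of the threading support the quoted text describes, here is a minimal Python `threading` example: four threads increment a shared counter under a lock. Note that under CPython's GIL these threads take turns executing bytecode rather than running in parallel, which is exactly the limitation the quote mentions for processor-bound work:

```python
import threading

counter = 0
lock = threading.Lock()

def work(n):
    """Add n to the shared counter, one increment at a time."""
    global counter
    for _ in range(n):
        with lock:            # the lock keeps the result deterministic
            counter += 1

threads = [threading.Thread(target=work, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert counter == 40_000      # all four threads' work is accounted for
```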
So of course I do not go into all the details. The devs should know how to do their job, so I should not have to go into detail like that; I am talking about the basics of it. And even staying very basic, people say "rubbish". I am telling you the truth. All the info I gave you, you can find on the web by yourself. Rather than trying to shoot me down and calling it rubbish, look at the information and how it is relevant. If you are not even willing to try to understand, why are you here? To make your opinion more valid than someone else's? We are discussing a subject here that has many simple things and many complex things. If you cannot understand the simple things, why even try to go higher?
pax the great one.9472: people who are gamers usually upgrade the graphics card from the start when they buy or build their PC, and keep upgrading their system over the years, and over the years the price of graphics cards also drops, so yes, investment can be necessary. After 3 or 4 years a card that was 1200$ is about 50$ to 150$. As for Vista, why they do not support it in the free upgrade: it is because there is a difference in the security. Windows 7 and above use UEFI secure boot, Vista uses a hybrid BIOS/UEFI (transitional), and Windows XP uses BIOS.
UEFI (Unified Extensible Firmware Interface) is a standard firmware interface for PCs, designed to replace BIOS (basic input/output system). This standard was created by over 140 technology companies as part of the UEFI consortium, including Microsoft. It’s designed to improve software interoperability and address limitations of BIOS. Some advantages of UEFI firmware include:
• Better security by helping to protect the pre-startup—or pre-boot—process against bootkit attacks.
•Faster startup times and resuming from hibernation.
•Support for drives larger than 2.2 terabytes (TB).
•Support for modern, 64-bit firmware device drivers that the system can use to address more than 17.2 billion gigabytes (GB) of memory during startup.
•Capability to use BIOS with UEFI hardware.
http://windows.microsoft.com/en-CA/windows-8/what-uefi
If you got a green light to switch to Windows 7, and from there got a green light to move to Windows 10, you should be OK. There are 2 tools, one for the Windows 7 upgrade and one for the Windows 10 upgrade; run the tool first, as it can tell you about possible compatibility issues. But hardware also evolves with time and needs investment, and not everyone has money to spend on PCs; those things cost lots of $, I get it. But divide the cost of your old PC by 8 years: how much does it cost you per year? The number is lower than you think. Also, newer systems usually cost less, so replacing a system that cannot keep up eventually becomes a must.
pax the great one.9472: Will you also give all the ppl running this on “legacy computers” a computer with DirectX 12 GPUs then, or are you and the minority with high-end systems going to keep this game going?
I have a simple answer for that: in 3 months the majority of PCs will be on Windows 10, so having DX12 would be nice for the majority. People who do not have DX12, the minority, can keep using the DX9 version; problem solved. Windows XP is at 10.63%; Vista, at 1.42%, is also reaching end of support, so not many still use it. Windows 7 at 47.82%, Windows 8 at 3.19% and Windows 8.1 at 9.85% will all upgrade to Windows 10, currently at 15.34%. What is left, Linux and Mac, is also a very small number: Mac has about 7% across all their OS versions, and Linux 1.65%. Add all of Windows 7 through 10 together, and what does that give for Windows 10? Around 76% of the market. I think that is the majority.
you can find those number here: http://netmarketshare.com/operating-system-market-share.aspx?qprid=10&qpcustomd=0
As for GW3, it is not in the plans; they still want to keep GW2 and make expansions. But if the devs cannot turn the single-core GW2 software into multicore software, they would not be able to make a multicore GW3 either. And like I said, the difference for that part is some code that needs to be added to the existing software to tell it which core or thread each piece of work goes to; that is the difference between single-threaded and multithreaded software: some lines of code to say where to send the work. Once that is done, part 2 is: developers are now able to implement their own command lists and buffers to the GPU, allowing for more efficient resource utilisation through parallel computation. They test it, and if it runs like it should, they release it.
As for DX14 in 3 or 4 years: it will not happen. In 13 years they added 3 versions: DX 10, 11 and 12, and out of the 3, 12 is the major improvement. That is why game companies stopped chasing each DX version like they used to: the DX10 improvement was minimal, and given the time and cost involved they decided it was not worth it. DX12 brings over 70% improvement compared to DX11. And as for Vulkan/Mantle, forget that idea; they stopped working on it, saying DX12 was much better. Also, DX12 uses 50% less power on the CPU and GPU, and it is backward compatible with the older DX versions, so it can run your old games, but you will not get the improvements of DX12: if you use DX9 you will be stuck with the DX9 limits.
Look, I play GW2 on max settings and get between 65 and 105 fps, but if there is a big event or big zerg the DX9 limit brings me down, sometimes to 19 fps. If the game were optimized for DX12, there would be no more bottleneck and no DX9 limit, so I could have a constant fps; also, DX12 should double the frame rate, so I should get around 130 to 210 fps with no drops. Yes, this game is CPU-heavy because it is single-threaded, and new processors cannot get much more GHz (raw power), because above roughly 4.2 GHz the chip starts to melt, but they can have more cores and more threads. What cores and threads do is split the processing job: if you have a 100% job on one core and split that job across 4 cores and 4 hyper-threads, you divide it by 8, which gives around 12.5% per core/thread. A lot less heavy on the CPU, would you not say?
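The "divide the job by 8" arithmetic above can be sketched as a simple work-partitioning helper (illustrative only; a real engine's scheduler divides work far less evenly than this):

```python
def split_work(total_items, n_threads):
    """Divide total_items as evenly as possible across n_threads."""
    base, extra = divmod(total_items, n_threads)
    return [base + (1 if i < extra else 0) for i in range(n_threads)]

chunks = split_work(100, 8)               # "100% of the job" over 8 threads
assert sum(chunks) == 100                 # nothing is lost
assert max(chunks) - min(chunks) <= 1     # each thread gets roughly 12.5%
```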
Look, from what I have read so far, it seems that GW2 was made single-threaded, built from GW1, which was also single-threaded. Even at that time multithreading existed; I know, I was playing GW1 back then. But saying there is nothing to be done does not help. When they started to talk about making games multithreaded I was there too. To do this, some lines of code need to be added to the existing code, so why not put a dev on it? And after that, why not optimize the game for DX12 and solve the DX9 limitations? Developers are now able to implement their own command lists and buffers to the GPU, allowing for more efficient resource utilisation through parallel computation. Some devs at ANet should be able to do that. Yes, it is work; ANet has devs working for them on this game, so give them the work. ANet pays them anyway, whether they do nothing or something.

As for Cobrakon.3108's comment: the game works with the DX API and there is no alternative. If you are thinking about OpenGL: it is good on the web, but for games it is far from being there. As for the comment about Windows 10, it is the same: within 3 months Windows 10 will become the major OS in use; July 29th is the last date to get the free update, and Windows 10 is already running on 300 million PCs. Vulkan died long ago; get your facts straight.

For Elden Arnaas.4870's comment: yes, DX9 is the bottleneck for this game. The fps drops you get in game during big events and big zergs, where there is too much animation, are caused by the DX9 limit of 1000 animations per frame in an instance. There is a reason DirectX versions evolve, get better and push limits back; the same is true of hardware. The DirectX API is usually the link between the software and the hardware. Look at the PC you have now and the PC you had in 2003: do you see that your new PC has much higher limits than your old one? Faster, more memory, more storage, more CPU power and more CPU cores.
DX9 was in use when CPUs had only 1 core with hyper-threading; they now have 4 cores with hyper-threading. The math is simple: 4 cores and 4 threads instead of 1 core/thread is 8 times that 1 core/thread. To give you a simple example: if you have a company that ships stuff and you have only 1 car on the road, putting 7 more cars on the road lets you ship a lot more stuff in the same time. Well, it is the same in computing, and DX12 is the first version that gives you this, even though multiple cores and threads have been around for a while now.
I do not agree with this. Look at it this way: when HoT came out it was crashing a lot; moving to the 64-bit client solved the issue, because 64 bit can use more memory (32 bit is limited to 3 GB in practice and 4 GB of addressable memory). The fact is that GW2 uses DX9, which is limited in the animations per frame it can run in an instance, meaning that once that limit is reached you start to get fps drops. That limit is 1000 animations: environment (castles, trees, plants, walls, nodes, monsters, NPCs, characters etc…), players, skills and skill effects; any animation that is not a copy is a new animation. The draw-call limit of DX9 is 6,000; those numbers are much higher in DX12, whose draw-call limit is around 600,000. As for what you said: the primary feature highlight for the new release of DirectX was the introduction of advanced low-level programming APIs for Direct3D 12, which can reduce driver overhead. Developers are now able to implement their own command lists and buffers to the GPU, allowing for more efficient resource utilisation through parallel computation.
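The specific limits quoted are the poster's figures, but the batching idea behind DX12-style command lists can be shown abstractly. This toy model (plain Python, not real Direct3D) just demonstrates how grouping identical objects collapses many per-object submissions into a handful of batched ones:

```python
from collections import Counter

# A scene of 1000 objects, most of them copies of the same few meshes.
scene = ["tree"] * 500 + ["rock"] * 300 + ["npc"] * 200

naive_draw_calls = len(scene)      # one submission per object: 1000
batches = Counter(scene)           # group identical meshes together
batched_draw_calls = len(batches)  # one submission per unique mesh: 3

assert naive_draw_calls == 1000
assert batched_draw_calls == 3
```

Real instancing and command-list recording are far more involved, but the win is the same shape: fewer, fatter submissions to the GPU instead of many tiny ones.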
A multi-core processor is a processor that includes multiple processing units (called “cores”) on the same chip. This processor differs from a superscalar processor, which includes multiple execution units and can issue multiple instructions per clock cycle from one instruction stream (thread); in contrast, a multi-core processor can issue multiple instructions per clock cycle from multiple instruction streams. IBM’s Cell microprocessor, designed for use in the Sony PlayStation 3, is a prominent multi-core processor. Each core in a multi-core processor can potentially be superscalar as well—that is, on every clock cycle, each core can issue multiple instructions from one thread.
Simultaneous multithreading (of which Intel’s Hyper-Threading is the best known) was an early form of pseudo-multi-coreism. A processor capable of simultaneous multithreading includes multiple execution units in the same processing unit (that is, it has a superscalar architecture) and can issue multiple instructions per clock cycle from multiple threads. Temporal multithreading, on the other hand, includes a single execution unit in the same processing unit and can issue one instruction at a time from multiple threads.
Microsoft execs expect frame rates to more than double when comparing DX12 to the current DX11 API. But that estimate looks to be conservative if Futuremark’s new API Overhead Feature test is to be believed.
http://www.pcworld.com/article/2900814/tested-directx-12s-potential-performance-leap-is-insane.html
(edited by stephanie wise.7841)
I also agree that DX 12 would help this game (GW2) a lot; anyone saying the opposite does not know what they are talking about. Face it: with DX 12, all cores and threads can be used. New PCs have 4 cores with Hyper-Threading, while the other DX versions were all using 1 thread of 1 core, so it can use up to 8x more of the CPU power. DX 12 can also use all the graphics cards in the PC as one, whatever size or brand, and new PCs often have 2 or more now. So of course it would help the game a lot: it would free the CPU and send the work to the GPU. No more bottleneck means higher and more consistent FPS.
DirectX 12 would help this game, since this game is CPU-bound. Also, since this game uses DirectX 9, it is limited to about 1,000 animations per frame in an instance, after which your FPS drops; DirectX 12 is capable of over 600k animation draw calls, which would help a lot in big events and zergs in WvW. DirectX 12 can also use all the graphics cards in the system as one, whatever the brand or size, and many PCs with an i7 processor already have Intel HD Graphics inside, apart from the other graphics card you add to play games (NVidia, AMD, etc.).

Look at it this way: before DirectX 9, all games would update to the latest DX version. Afterwards, gaming companies decided it did not bring enough improvement, cost too much, and was a complicated process. DirectX 12 is the easiest version ever for a game company to upgrade to, and between DirectX 11 and DirectX 12 it brings over a 70% performance boost, so going from DirectX 9 to DirectX 12 should bring even more. DirectX 9 was in use in 2003; we are now in 2016, and everything has evolved: PC hardware, software, etc. Optimizing the game would be a good idea.

I will also say that Windows XP has less than 1% of users in the world, and many of those may be business PCs, and July 29th 2016 is the last day to upgrade free to Windows 10 for anyone who has Windows 7, 8, or 8.1; after that date the majority of PCs will be Windows 10. From a business point of view, optimizing the game so that it can continue into the future is a good idea. They (GW2) plan to make another expansion pack; if so, they plan to continue the game, and since they make money when they sell expansion packs, they cannot say they have no money to pay devs to optimize the game.
I will also add that right now, even with the best PC, because of the DirectX 9 limitation you will experience FPS drops in big events or big zergs in WvW. It is not because the PC cannot handle it; it is because DX 9 is limited. What is the difference in DX 12?
http://www.pcper.com/reviews/Graphics-Cards/NVIDIA-Talks-DX12-DX11-Efficiency-Improvements
switching between Direct3D 11 and 12 slightly reduces GPU power consumption but dramatically reduces CPU power consumption.
To put these numbers in perspective, a 50% reduction in power consumption is about what we would see from a new silicon process (i.e. moving from 22nm to 14nm), so to achieve such a reduction in consumption with software alone is a very significant result and a feather in Microsoft’s cap for Direct3D 12. If this carries over to when DirectX 12 games and applications launch in Q4 2015, it could help usher in a new era of mobile gaming and high end graphics. It is not often we see such a substantial power and performance improvement from a software update.
http://www.anandtech.com/show/8388/intel-demonstrates-direct3d-12-performance-and-power-improvements
Microsoft execs expect frame rates to more than double when comparing DX12 to the current DX11 API. But that estimate looks to be conservative if Futuremark’s new API Overhead Feature test is to be believed.
remember that DirectX 12 is about making the API more efficient so it can take better advantage of multi-core CPUs. It’s not really about graphics cards. It’s about exploiting more performance from CPUs so they don’t bottleneck the GPU.
http://www.pcworld.com/article/2900814/tested-directx-12s-potential-performance-leap-is-insane.html
The difference from DX 11 to DX 12 is that it can use all cores and threads, not only the one core and one thread that DX 11 was using.
I will always have a problem with someone saying that something new and up to date will not solve an issue; if something is updated, it is usually to solve some issue.