64 bit client

64 bit client

in Guild Wars 2: Heart of Thorns

Posted by: ikereid.4637

It’s about time we got this. Every modern computer could choose how it runs things: low-end machines on 32-bit and DX9, power machines on 64-bit with DX12. That shouldn’t really be a big problem.

It’s a problem for me because my PC can run DX11 at most, and there are almost no DX12 games, so it would be a big problem…

DX11 cards can support a lot of the DX12 API. There are already benchmarks of the CPU-load side of DX12 using DX11-based cards. So no, it’s not really a problem.

That’s interesting, but I’m not touching Win10 in its current state, mostly because of bugs but also because of the weird privacy policy.
I have a friend who uninstalled it after a plethora of bluescreens.

Win10 is fine, it just needs a few adjustments. But if your friend is getting a ton of BSODs, they are doing something wrong.

DX12 Benchmarks

http://www.anandtech.com/show/8962/the-directx-12-performance-preview-amd-nvidia-star-swarm/3

Desktop: 4790k@4.6ghz-1.25v, AMD 295×2, 32GB 1866CL10 RAM, 850Evo 500GB SSD
Laptop: M6600 – 2720QM, AMD HD6970M, 32GB 1600CL9 RAM, Arc100 480GB SSD

64 bit client

in Guild Wars 2: Heart of Thorns

Posted by: Kendra Nightwind.8734

Of all the things promised to be in Guild Wars 2 when it shipped, an x64 client was not one of them. What was promised, but we still don’t have nor will ANet even acknowledge, is DX10 support.
I was there when she (Gaile Gray) was asked about both DX10 and a 64-bit client. She confirmed that Guild Wars 2 would support DX10, but she did not know anything about whether there was going to be a 64-bit client.
It irks me no end that we are stuck with DX9c, and even worse that we are stuck with a 32-bit (x86) client that won’t even distribute itself (somewhat) evenly across all available cores. Shoot, even Guild Wars would take advantage of multi-core systems better than Guild Wars 2 does.

The original post itself was necroed recently by Gaile when she was called out: no promise was made. What ANet did promise early on was an attempt to create a DX10 client. One was made, but it was extremely buggy and unfamiliar to most of the developers working with it, and had several major issues, so they stopped development on it.

They fulfilled their end of the bargain, but I believe it’s something that should really be considered. They could do a lot more with the game by opening these doors up and ultimately just improve performance across the board. The game itself is already very well-optimized for the platform it runs on. Imagine the glory of it on a DX12 64-bit client O.O

Thank you for the update. I knew that the Guild Wars Wiki recorded Gaile’s statement as a “they will try,” but I was not aware that it was recorded anywhere other than in-game chat logs. I was actually in Lion’s Arch when Gaile made the statement; I might still have the screenshots, though I’m not sure where, or even if, I have them.
I was not aware they even attempted a DX10 client, but given how buggy DX10 was at release, I can imagine how buggy that client would have been.
Like you, I would love to see a fully threaded, x64, DX12 client. ANet would have something to proudly shout about rather than be snickered at.

64 bit client

in Guild Wars 2: Heart of Thorns

Posted by: Ricky.4706

What they did is the wisest choice for a game, IMO. I use a 3D program, iClone, that once did 32-bit DX9… then emulated 64-bit DX9… then went full 64-bit DX11.

The result: the program became way more resource intensive, so a lot of hardware upgrades had to happen.

The speed was faster in 32-bit, but the visuals were also less rich. The 64-bit emulation was nice; it allowed us to make bigger scenes while keeping smooth real-time performance. The detail in full 64-bit is intense, a huge jump in quality, but now it needs more of a gaming machine and a higher-benchmark video card with 2–4 GB of RAM to do big scenes smoothly. Laptops struggle more and get hotter faster; to see full quality in real time they have to tone down a lot of features to get better usability.

Now the next question is how it will perform in Windows 10 with DX12.

And if Guild Wars ever went DX12, would they give us a director mode for making machinima like GTA 5? ^.^

Would be nice to beat someone down in WvW and make a movie about it, slow-motion fights and stuff. -shrug-

IBM PC XT 4.77mhz w/turbo oc@ 8mhz 640kb windows 3.1 hayes 56k seagate 20 meg HD mda@720x350 pixels

(edited by Ricky.4706)

64 bit client

in Guild Wars 2: Heart of Thorns

Posted by: MordekaiZeyo.7318

It’s about time we got this. Every modern computer could choose how it runs things: low-end machines on 32-bit and DX9, power machines on 64-bit with DX12. That shouldn’t really be a big problem.

It’s a problem for me because my PC can run DX11 at most, and there are almost no DX12 games, so it would be a big problem…

DX11 cards can support a lot of the DX12 API. There are already benchmarks of the CPU-load side of DX12 using DX11-based cards. So no, it’s not really a problem.

That’s interesting, but I’m not touching Win10 in its current state, mostly because of bugs but also because of the weird privacy policy.
I have a friend who uninstalled it after a plethora of bluescreens.

That’s your friend’s PC problem, not Windows 10.

I would say… DirectX 9 for users with legacy hardware, DirectX 11 for everyone else, and DirectX 12 for Windows 10 users with a supported graphics card…

64 bit client

in Guild Wars 2: Heart of Thorns

Posted by: Gazareth.7230

Unfortunately it seems it’s not as easy as DX12/64-bit to solve GW2’s performance issues.

Here is a post an engine programmer made on the GW2 subreddit:

http://www.reddit.com/r/Guildwars2/comments/3ajnso/bad_optimalization_in_gw2/csdnn3n

The problem is apparently not something that can be solved by APIs; it has to be solved in the lower-level code itself. It was not programmed to work in parallel, so even if you make it multithreaded, those different threads all try to access the same data at once and have to wait their turn.

One thing that stood out to me though, was that the developer said the main stack is the bottleneck. This doesn’t line up with the story about the different threads having to queue for memory access. I am wondering if, despite this developer’s claims, DX12 would relieve the main stack bottleneck by reducing the impact of the rendering stack/draw calls. But I am not a programmer, I am just speculating there.
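To picture the “wait their turn” part in purely generic terms, here is a minimal C++ sketch (nothing to do with GW2’s actual code; all names are made up): when every worker thread needs the same coarse lock around shared state, extra cores mostly just queue behind each other.

    // Minimal contention sketch: one coarse lock guarding everything.
    #include <mutex>
    #include <thread>
    #include <vector>

    struct WorldState {
        std::mutex lock;            // single lock every thread fights over
        std::vector<int> entities;  // stand-in for shared simulation data
    };

    void worker(WorldState& world, int id) {
        for (int i = 0; i < 100000; ++i) {
            std::lock_guard<std::mutex> guard(world.lock); // every thread queues here
            world.entities.push_back(id);                  // "work" done while serialized
        }
    }

    int main() {
        WorldState world;
        std::vector<std::thread> threads;
        for (int id = 0; id < 4; ++id)
            threads.emplace_back(worker, std::ref(world), id);
        for (auto& t : threads) t.join();
        // Despite 4 threads, throughput stays close to single-threaded because
        // the critical section covers nearly all of the work.
    }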

(edited by Gazareth.7230)

64 bit client

in Guild Wars 2: Heart of Thorns

Posted by: Tkosh.1923

I cannot believe the kicking and screaming about 64-bit and DX12. First, 64-bit is OLD now; even XP had a 64-bit version. If you are running a 32-bit OS on a laptop, that is your problem: upgrade!! Second, DX12 would be an option. Do you ever look at the video settings in your games? The majority let you choose which DX version you want to use. I have seen newer ones that only allow DX11, but they are new and never used DX9.
And I still don’t buy the memory speed increase. BS, your old memory could not have been that old if it worked in the same MOBO. You changed something else and don’t realize it!!

64 bit client

in Guild Wars 2: Heart of Thorns

Posted by: skowcia.8257

I’m not sure who still uses a DX9 card these days O.o

All games (except indie) come out DX11-only now, and some are already looking at DX12 support. I don’t see anyone complaining that they won’t be able to run these games. Even Dark Souls 2 moved to DX11.

obey me

64 bit client

in Guild Wars 2: Heart of Thorns

Posted by: Behellagh.1468

You also have to remember that the game uses 3rd-party libraries; it’s possible that some of the ones they use don’t have a 64-bit version available, or would require a higher licensing fee from ArenaNet to use.

We are heroes. This is what we do!

RIP City of Heroes

64 bit client

in Guild Wars 2: Heart of Thorns

Posted by: ikereid.4637

Unfortunately it seems it’s not as easy as DX12/64-bit to solve GW2’s performance issues.

Here is a post an engine programmer made on the GW2 subreddit:

http://www.reddit.com/r/Guildwars2/comments/3ajnso/bad_optimalization_in_gw2/csdnn3n

The problem is apparently not something that can be solved by hardware; it has to be solved in the code itself. It was not programmed to work in parallel, so even if you make it multithreaded, those different threads all try to access the same data at once and have to wait their turn.

One thing that stood out to me though, was that the developer said the main stack is the bottleneck. This doesn’t line up with the story about the different threads having to queue for memory access. I am wondering if, despite this developer’s claims, DX12 would relieve the main stack bottleneck by reducing the impact of the rendering stack/draw calls. But I am not a programmer, I am just speculating there.

This issue that the dev talks about is a perfect example of a TSX use case on a consumer desktop. Very interesting.

But if it is the main stack, they should be able to solve that by rebuilding that stack. If memory handling is an issue, they just need to rip out the memory management system and rebuild it with sharing in place. Sure, no easy undertaking, but it would be far easier than rebuilding for DX12/64-bit, and it could give a pretty large performance boost.

Desktop: 4790k@4.6ghz-1.25v, AMD 295×2, 32GB 1866CL10 RAM, 850Evo 500GB SSD
Laptop: M6600 – 2720QM, AMD HD6970M, 32GB 1600CL9 RAM, Arc100 480GB SSD

64 bit client

in Guild Wars 2: Heart of Thorns

Posted by: Gazareth.7230

Like I said, I’m not a programmer; I have no idea about “rebuilding the main stack” and “ripping out the memory system” and stuff, but from what I’ve heard these problems aren’t simple to solve. GW2 (and MMOs in general) just have so much information to process; there’s no way to avoid memory access queues.

64 bit client

in Guild Wars 2: Heart of Thorns

Posted by: ikereid.4637

Like I said, I’m not a programmer; I have no idea about “rebuilding the main stack” and “ripping out the memory system” and stuff, but from what I’ve heard these problems aren’t simple to solve. GW2 (and MMOs in general) just have so much information to process; there’s no way to avoid memory access queues.

This is why I mentioned TSX. It would address exactly that last point about memory access queues.

Transactional Synchronization Extensions (TSX) is an extension to the x86 instruction set architecture (ISA) that adds hardware transactional memory support, speeding up execution of multi-threaded software through lock elision. According to different benchmarks, TSX can provide around 40% faster application execution in specific workloads, and 4–5 times more database transactions per second (TPS).

Source – https://en.wikipedia.org/wiki/Transactional_Synchronization_Extensions

And with the 6th-gen Intel CPUs we might start to see TSX in the i5s and higher-end i7s. Currently it’s a server feature in the Haswell-EP line.
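For anyone wondering what lock elision actually looks like to a programmer, here is a minimal, hedged C++ sketch using Intel’s RTM intrinsics (generic illustration only, not anything from GW2; it needs an RTM-capable CPU and a compiler flag like -mrtm, and a real implementation would retry aborted transactions a few times before falling back to the lock):

    // Lock-elision sketch with Intel RTM intrinsics; all names are illustrative.
    #include <immintrin.h>
    #include <atomic>
    #include <mutex>
    #include <thread>

    std::mutex fallback_lock;                  // taken only when a transaction aborts
    std::atomic<bool> fallback_held{false};
    long shared_counter = 0;                   // stand-in for shared game data

    void increment_elided() {
        unsigned status = _xbegin();           // try the critical section as a transaction
        if (status == _XBEGIN_STARTED) {
            if (fallback_held.load(std::memory_order_relaxed))
                _xabort(0xff);                 // someone holds the real lock; bail out
            ++shared_counter;                  // speculative update, no lock taken
            _xend();                           // commit: other threads never saw a lock
            return;
        }
        // Transaction aborted (conflict, capacity, etc.): fall back to the mutex.
        std::lock_guard<std::mutex> guard(fallback_lock);
        fallback_held.store(true, std::memory_order_relaxed);
        ++shared_counter;
        fallback_held.store(false, std::memory_order_relaxed);
    }

    int main() {
        std::thread a([] { for (int i = 0; i < 100000; ++i) increment_elided(); });
        std::thread b([] { for (int i = 0; i < 100000; ++i) increment_elided(); });
        a.join(); b.join();
    }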

Desktop: 4790k@4.6ghz-1.25v, AMD 295×2, 32GB 1866CL10 RAM, 850Evo 500GB SSD
Laptop: M6600 – 2720QM, AMD HD6970M, 32GB 1600CL9 RAM, Arc100 480GB SSD

64 bit client

in Guild Wars 2: Heart of Thorns

Posted by: Behellagh.1468

Unfortunately it seems it’s not as easy as DX12/64-bit to solve GW2’s performance issues.

Here is a post an engine programmer made on the GW2 subreddit:

http://www.reddit.com/r/Guildwars2/comments/3ajnso/bad_optimalization_in_gw2/csdnn3n

The problem is apparently not something that can be solved by APIs; it has to be solved in the lower-level code itself. It was not programmed to work in parallel, so even if you make it multithreaded, those different threads all try to access the same data at once and have to wait their turn.

One thing that stood out to me though, was that the developer said the main stack is the bottleneck. This doesn’t line up with the story about the different threads having to queue for memory access. I am wondering if, despite this developer’s claims, DX12 would relieve the main stack bottleneck by reducing the impact of the rendering stack/draw calls. But I am not a programmer, I am just speculating there.

The problems he mentions are the ones I’ve brought up over the last two-plus years as reasons why it’s difficult to scale across cores.

First is making data “thread safe,” which means allowing only one thread at a time to modify or read it. But doing this, short of the TSX hardware solution, can significantly slow things down. And since TSX is not yet universal across the CPUs the game supports, it’s a problem.

As for memory heap fragmentation, that’s an effect which “swiss cheeses” the memory heap into lots of small but unusable fragments of available memory. You may have hundreds of megabytes free, but only a few chunks big enough to be useful. C and C++ don’t use double indirection to allocated memory segments, which is what would allow a memory-compaction thread. Another common solution is using separate heaps for same- or similar-sized data types to reduce the chance of fragmentation. But none of these are standard in the memory allocation portions of those languages’ standard libraries; you either home-brew one or use a third-party package, which would cost ANet money. As he points out, the practical workaround is a much larger memory heap, like what’s available with a 64-bit build. And he’s right that it wouldn’t make the code itself any faster.
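A minimal sketch of that “separate heap per size class” idea, in generic C++ (this is not ANet’s allocator; every name below is made up): each pool hands out fixed-size blocks from its own free list, so churn in those objects never fragments the general heap.

    // Fixed-size pool: allocations of one size class come from their own arena.
    #include <cstddef>
    #include <cstdlib>
    #include <vector>

    class FixedPool {
    public:
        FixedPool(std::size_t block_size, std::size_t block_count)
            : block_size_(block_size),
              storage_(static_cast<char*>(std::malloc(block_size * block_count))) {
            // Thread every block onto the free list up front.
            for (std::size_t i = 0; i < block_count; ++i)
                free_list_.push_back(storage_ + i * block_size_);
        }
        ~FixedPool() { std::free(storage_); }

        void* allocate() {
            if (free_list_.empty()) return nullptr;   // pool exhausted; a real engine would grow it
            void* p = free_list_.back();
            free_list_.pop_back();
            return p;
        }
        void deallocate(void* p) { free_list_.push_back(static_cast<char*>(p)); }

    private:
        std::size_t block_size_;
        char* storage_;
        std::vector<char*> free_list_;
    };

    int main() {
        FixedPool particle_pool(/*block_size=*/64, /*block_count=*/1024); // e.g. particle structs
        void* a = particle_pool.allocate();
        void* b = particle_pool.allocate();
        particle_pool.deallocate(a);   // freed blocks get reused, so no swiss-cheesing of the heap
        particle_pool.deallocate(b);
    }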

We are heroes. This is what we do!

RIP City of Heroes

64 bit client

in Guild Wars 2: Heart of Thorns

Posted by: Bearhugger.4326

TSX is just a band-aid over the real problem. You shouldn’t have to constantly mutex-lock other threads until they’re done, at least not all the time. Normally you mutex-lock for resource access or for calling third-party APIs that are not thread-safe. You don’t constantly block every other thread with mutex locks, because otherwise you may as well go single-threaded, given the operating system’s scheduler latency.

So what are the resources that a game may access? There aren’t a lot in the case of video games, because a lot of the heavy work is done in software.

The most obvious resource is the video card, but all versions of OpenGL and all versions of DirectX earlier than 12 will systematically fail if not called from the thread that created the rendering context, so there is no point mutex-locking just to call the graphics API from another thread. You must call it from the render thread, and that’s how it is.
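The usual workaround, sketched generically below (plain C++ with made-up names, not GW2’s engine): other threads never touch the graphics API at all; they push commands into a queue that only the render thread, the one that created the D3D/GL context, drains.

    // Render-command queue: one consumer thread owns all graphics API calls.
    #include <condition_variable>
    #include <cstdio>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>

    class RenderQueue {
    public:
        void push(std::function<void()> cmd) {
            { std::lock_guard<std::mutex> g(m_); q_.push(std::move(cmd)); }
            cv_.notify_one();
        }
        std::function<void()> pop() {            // blocks until a command is available
            std::unique_lock<std::mutex> g(m_);
            cv_.wait(g, [this] { return !q_.empty(); });
            auto cmd = std::move(q_.front());
            q_.pop();
            return cmd;
        }
    private:
        std::mutex m_;
        std::condition_variable cv_;
        std::queue<std::function<void()>> q_;
    };

    int main() {
        RenderQueue queue;
        // The "render thread": the only place where real API calls would happen.
        std::thread render([&] {
            for (int i = 0; i < 2; ++i) queue.pop()();   // would be DrawPrimitive etc. in D3D9
        });
        // Game/logic threads only enqueue work; they never call the graphics API directly.
        queue.push([] { std::puts("draw call issued from the render thread"); });
        queue.push([] { std::puts("another draw call"); });
        render.join();
    }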

Other devices on the system (sound card, network, keyboard, etc.) are resources as well but they generally don’t matter.

I’m not sure what third-party library they use that wouldn’t be thread-safe. I have absolutely no experience programming with Umbra, so maybe I don’t know what I’m talking about, but since it’s a third-party product designed for high-end games, I figure it’s not too picky when it comes to threading and other performance-related concerns. The only thing I can think of that maybe wouldn’t be so nice about threads is the embedded browser for the gem store. (Because who embeds a browser in their game?)

So I’m not sure why they would need to constantly mutex-lock other threads over and over. They seem to have a software engineering problem. They wrote their engine before you had octo-core CPUs and whatnot, and now their code doesn’t scale with that new parallelism. They have an engine that probably dates from 2004 or 2005 (not sure when Mike O’Brien left Blizzard to found his company) and now they’re just patching it.

64 bit client

in Guild Wars 2: Heart of Thorns

Posted by: Behellagh.1468

Actually, lots of MMOs embed a 3rd-party browser. I still have several that use Awesomium, which this game also used until they switched to Coherent UI. Why reinvent a browser backend?

And I don’t think anyone is talking about mutex-locking hardware (the OS does that for us), but rather objects/data structures. The DX9 library is certainly not thread-safe, and DX11 is mostly thread-safe except in very limited cases. Umbra is thread-safe as it only generates a list of potentially visible objects; there is no modification of that data set.

The issue is that the main thread can’t generate data for the GPU fast enough, and the shaders on the GPU aren’t complex enough to make the GPU the significant limiting factor. It’s the code. All DX11 or 12 would do is allow the GPU to process that data faster.

We are heroes. This is what we do!

RIP City of Heroes

(edited by Behellagh.1468)

64 bit client

in Guild Wars 2: Heart of Thorns

Posted by: Sytherek.7689

May I say that this is an excellent discussion, far better than threads on many tech sites. I’m even learning a bit, and I’ve been coding since the 1970s.

Carry on!

64 bit client

in Guild Wars 2: Heart of Thorns

Posted by: Aidenwolf.5964

A ton of my RAM, most of my CPU, and half of my GPUs go to waste when playing GW2, due solely to the fact that the game is 32-bit and not multithreaded. It’s a shiny coat of paint on an old-school engine.

Buy To Play Guild Wars 2 2012-2015 – RIP
Unlucky since launch, RNG isn’t random
PugLife SoloQ

64 bit client

in Guild Wars 2: Heart of Thorns

Posted by: MordekaiZeyo.7318

Unfortunately it seems it’s not as easy as DX12/64-bit to solve GW2’s performance issues.

Here is a post an engine programmer made on the GW2 subreddit:

http://www.reddit.com/r/Guildwars2/comments/3ajnso/bad_optimalization_in_gw2/csdnn3n

The problem is apparently not something that can be solved by APIs; it has to be solved in the lower-level code itself. It was not programmed to work in parallel, so even if you make it multithreaded, those different threads all try to access the same data at once and have to wait their turn.

One thing that stood out to me though, was that the developer said the main stack is the bottleneck. This doesn’t line up with the story about the different threads having to queue for memory access. I am wondering if, despite this developer’s claims, DX12 would relieve the main stack bottleneck by reducing the impact of the rendering stack/draw calls. But I am not a programmer, I am just speculating there.

A good speculation.

That’s approximately what DirectX 12 does, as you said… the rendering work is spread from a single CPU core across multiple cores to reduce latency and increase the maximum possible number of draw calls per second (depending on which CPU and GPU model are used). The GPU does not have to wait for the CPU.

I have attached a screenshot from 3DMark Advanced Edition on Windows 10.
I noticed a very large FPS improvement at the beginning of a scene during the test:
DirectX 11 was mostly below 80 FPS, while DirectX 12 blew past the 300 FPS barrier.
Sadly there is no DirectX 9 test; I think it would be even lower than DirectX 11.

Guild Wars 2 during world bosses/WvW issues a lot of draw calls per second, which cannot be handled well with DirectX 9.0c due to its single-threaded utilization. DirectX 11 multithreading would help, but the biggest impact would come from DirectX 12.
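Here is an API-free sketch of the pattern being described (plain C++, illustrative names only, not actual Direct3D 12 code): each core records its own list of draw commands in parallel, and everything is submitted in one go at the end, which is roughly what DX12’s multi-threaded command lists enable.

    // Parallel command recording: split draw-call preparation across cores.
    #include <cstddef>
    #include <string>
    #include <thread>
    #include <vector>

    using CommandList = std::vector<std::string>;   // stand-in for a real command list object

    CommandList record_chunk(int first, int last) {
        CommandList cmds;
        for (int obj = first; obj < last; ++obj)
            cmds.push_back("draw object " + std::to_string(obj));  // "draw call" recording
        return cmds;
    }

    int main() {
        const int objects = 10000, threads = 4, chunk = objects / threads;
        std::vector<CommandList> lists(threads);
        std::vector<std::thread> workers;

        // DX9/DX11-era: one thread records everything. DX12-era: split across cores.
        for (int t = 0; t < threads; ++t)
            workers.emplace_back([&, t] { lists[t] = record_chunk(t * chunk, (t + 1) * chunk); });
        for (auto& w : workers) w.join();

        // Single submission point, analogous to handing all command lists to the GPU queue.
        std::size_t total = 0;
        for (const auto& list : lists) total += list.size();
        // 'total' draw calls were prepared using all cores instead of one.
    }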

Attachments:

64 bit client

in Guild Wars 2: Heart of Thorns

Posted by: Aidenwolf.5964

Unfortunately it seems it’s not as easy as DX12/64-bit to solve GW2’s performance issues.

Here is a post an engine programmer made on the GW2 subreddit:

http://www.reddit.com/r/Guildwars2/comments/3ajnso/bad_optimalization_in_gw2/csdnn3n

The problem is apparently not something that can be solved by APIs; it has to be solved in the lower-level code itself. It was not programmed to work in parallel, so even if you make it multithreaded, those different threads all try to access the same data at once and have to wait their turn.

One thing that stood out to me though, was that the developer said the main stack is the bottleneck. This doesn’t line up with the story about the different threads having to queue for memory access. I am wondering if, despite this developer’s claims, DX12 would relieve the main stack bottleneck by reducing the impact of the rendering stack/draw calls. But I am not a programmer, I am just speculating there.

A good speculation.

That’s approximately what DirectX 12 does, as you said… the rendering work is spread from a single CPU core across multiple cores to reduce latency and increase the maximum possible number of draw calls per second (depending on which CPU and GPU model are used). The GPU does not have to wait for the CPU.

I have attached a screenshot from 3DMark Advanced Edition on Windows 10.
I noticed a very large FPS improvement at the beginning of a scene during the test:
DirectX 11 was mostly below 80 FPS, while DirectX 12 blew past the 300 FPS barrier.
Sadly there is no DirectX 9 test; I think it would be even lower than DirectX 11.

Guild Wars 2 during world bosses/WvW issues a lot of draw calls per second, which cannot be handled well with DirectX 9.0c due to its single-threaded utilization. DirectX 11 multithreading would help, but the biggest impact would come from DirectX 12.

My 8-core CPU, CrossFired GPUs, and 8 gigs of high-speed RAM approve. DX12 should be in ANet’s plans if the game wants to move into the future.

Buy To Play Guild Wars 2 2012-2015 – RIP
Unlucky since launch, RNG isn’t random
PugLife SoloQ

64 bit client

in Guild Wars 2: Heart of Thorns

Posted by: Bearhugger.4326

Actually, lots of MMOs embed a 3rd-party browser. I still have several that use Awesomium, which this game also used until they switched to Coherent UI. Why reinvent a browser backend?

I’m not arguing about reinventing the browser, but while we’re there I have to question their decision to use an embedded browser in the first place. They could have made the gem store UI in C++ and simply written a web service to fetch the data and the images. That would have been considerably lighter on resources and performance than parsing full HTML+CSS and executing JavaScript, and the gem store UI would probably be snappier. I mean, if they used the embedded browser to make an in-game /wiki command, then that would be awesome, so OK, might as well use it for other tasks such as the gem store. But if it’s just for the gem store, I dunno.
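For the sake of illustration, a lightweight “fetch the data from a web service” client could be as small as the hedged C++/libcurl sketch below. The endpoint URL and JSON layout are entirely hypothetical; this is not ArenaNet’s API, just the general shape of the idea.

    // Minimal web-service fetch with libcurl (hypothetical endpoint).
    #include <curl/curl.h>
    #include <cstdio>
    #include <string>

    // Append received bytes into a std::string buffer.
    static size_t write_cb(char* data, size_t size, size_t nmemb, void* userdata) {
        static_cast<std::string*>(userdata)->append(data, size * nmemb);
        return size * nmemb;
    }

    int main() {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL* curl = curl_easy_init();
        if (!curl) return 1;
        std::string body;

        // Hypothetical gem-store endpoint returning JSON; a real client would then
        // hand the parsed items to a native C++ UI instead of an embedded browser.
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.invalid/gemstore/items.json");
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);

        if (curl_easy_perform(curl) == CURLE_OK)
            std::printf("fetched %zu bytes of store data\n", body.size());

        curl_easy_cleanup(curl);
        curl_global_cleanup();
    }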

Personally, Guild Wars 2 is the only game I know of that does.

And I don’t think anyone is talking about mutex-locking hardware (the OS does that for us), but rather objects/data structures. The DX9 library is certainly not thread-safe, and DX11 is mostly thread-safe except in very limited cases. Umbra is thread-safe as it only generates a list of potentially visible objects; there is no modification of that data set.

The issue is that the main thread can’t generate data for the GPU fast enough, and the shaders on the GPU aren’t complex enough to make the GPU the significant limiting factor. It’s the code. All DX11 or 12 would do is allow the GPU to process that data faster.

I was not talking about mutex-locking hardware; the operating system will never allow your app to manipulate hardware in the first place. I was talking about being incapable of issuing D3D9 or OpenGL calls from other threads, even if you use critical sections and are careful about race conditions, because the call will fail at the API level, and that’s why there’s not much that multithreading can do for rendering on D3D9. (Edit for correctness’ sake: there is a flag for D3D9 that lets you make calls from other threads, but it’s still not thread-safe.) DirectX 11, I believe, will let you do that on some interfaces but not on others, but in the end all it does is defer the synchronization to Direct3D. Direct3D 12 will be the first truly multi-threaded version, and if the threading issues are as bad as the reddit post makes them out to be, that could actually be a boon for them.
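The flag being referred to is presumably D3DCREATE_MULTITHREADED. A minimal, hedged sketch of what setting it looks like (Windows-only, needs d3d9.h/d3d9.lib and a valid window handle; not GW2’s startup code): the runtime then wraps device calls in its own critical section, so calls from several threads don’t crash, but they are serialized rather than truly concurrent.

    // D3D9 device creation with the multithreaded behavior flag.
    #include <d3d9.h>

    IDirect3DDevice9* create_device(HWND hwnd) {
        IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
        if (!d3d) return nullptr;

        D3DPRESENT_PARAMETERS pp = {};
        pp.Windowed = TRUE;
        pp.SwapEffect = D3DSWAPEFFECT_DISCARD;
        pp.hDeviceWindow = hwnd;

        IDirect3DDevice9* device = nullptr;
        d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd,
                          D3DCREATE_HARDWARE_VERTEXPROCESSING | D3DCREATE_MULTITHREADED,
                          &pp, &device);
        d3d->Release();
        return device;   // calls on this device from several threads are serialized internally
    }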

But they shouldn’t have those threading issues in the first place, is what I’m saying. That’s why I suspect this game’s engine has serious software engineering issues that they have been patching but cannot really fix without a rewrite. It’s obvious that the CPU can’t feed the GPU fast enough (although technically the renderer could be fine and something else in the game loop could be the CPU hog), but why is that the case? Most games are GPU-bound, not CPU-bound.

To be clear, I’m not assuming anything about ANet’s programmers’ skills when I point out issues like that. MMORPGs are probably the hardest type of game for a programmer to build an optimized rendering engine for. In a single-player game, it’s pretty easy to know what resources the game will need for a given level, so you can pre-load a lot of stuff and then load other things asynchronously if you know you’ll need them a bit later. If a level is overloaded, you can tell your level designer to calm down and remove stuff to keep the 60 FPS. For an MMO, on the other hand, it’s harder. You know what resources to load for the level, but then you have an unknown number of highly customizable and highly detailed player characters that are probably all different and most likely won’t batch-render very well, and sometimes you may have 1 to render, and later 200. On top of that, not all characters are equal for the renderer. If your character owns a legendary and some armor pieces with particle effects all over the place, and on top of that is a ranger with a combat pet out, he’s likely to be a lot more work to render than a level 1 warrior. And sure, they have a limit of characters per map (before they spawn a new map), but if they based the maximum number of characters per map instance on extreme situations, like everyone being in the same area in top legendaries, then the limit per instance would probably be very low and the maps would feel empty. So they’re probably making reasonable assumptions, and when extreme situations happen, that’s too bad.

As much as I can be understanding about the difficulties of creating an MMO, though, I’m also harsh because the only platform they support is Windows (Mac isn’t supported natively), so I’d expect the game to run better than it currently does.

But the games I worked on are not without their problems either. Perhaps I should keep my trap shut.

(edited by Bearhugger.4326)