DirectX 11/12 request [merged]
First @stephanie wise, learn to use the quote system so it’s clear what is a quote from another poster and what’s your comment. Walls of text are confusing.
I agree with you that 64-bit fixes those problems as well, but the game with the HoT expansion uses more memory. I have seen someone in game having issues. I asked: is your system and OS 64-bit, and did you try the 64-bit client? The answer was yes, and the problem was still there. I asked how much memory they had on their system; the answer was 4 GB. I told the person to put more memory in the system, since the 64-bit client uses more than 4 GB of memory.
So if the 64-bit client with 4 GB of memory does not solve the issue, it means the game uses more than 4 GB of memory, or needs the extra memory to avoid the address-space fragmentation problem you get with 32-bit. Either way, you need more than 4 GB of memory.
Here’s an analogy. Think of memory as pages in a book and program data as words on a page. Going from 32-bit to 64-bit is like having the book printed in a larger font: the same number of words, but it needs more pages to hold them.
The “get more than 4 GB” advice comes down to how an OS handles a limited resource like system memory and how it’s shared among all the software that is currently running (assuming we aren’t also talking about shifting from a 32-bit to a 64-bit version of the OS, i.e. the OS could already handle more than 4 GB of system memory).
An operating system’s (OS) primary purpose is to let each program think it has access to all the resources the PC has. But since other programs are also running, the OS supervises, to the best of its ability, the sharing of common resources: which thread gets a chance to run, and the sharing of I/O (keyboard, graphics, disk) and memory.
Now, what happens when the programs you have running want more memory than you actually have in the system? The OS juggles which data is actually in system memory (those are the sticks of RAM, by the way) and which data is temporarily stored on disk, based on how frequently that data is accessed, and reads it back off the drive if it is needed again. It does this anyway even if you have plenty of memory, but it gets very aggressive if you are close to filling all of the system memory all the time. All this reading and writing of data to the drive will slow software down significantly, to a virtual standstill in extreme cases. That said, adding more system memory means this worst case is less likely to happen. And since the 64-bit client will use more memory (the “bigger font” analogy), you would hit that worst case sooner than with the 32-bit client. It’s just that the 32-bit client has that OOM issue which, yes, crops up more frequently on HoT maps.
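None of this is specific to GW2, but if you want to see the numbers the paragraph above is describing, a small Win32 sketch (assuming Windows and a C++ compiler, nothing more) can report installed RAM, free RAM and page-file usage via GlobalMemoryStatusEx:

#include <windows.h>
#include <cstdio>

int main() {
    MEMORYSTATUSEX status{};
    status.dwLength = sizeof(status);              // must be set before the call
    if (GlobalMemoryStatusEx(&status)) {
        printf("memory load        : %lu%%\n", status.dwMemoryLoad);
        printf("physical RAM total : %llu MB\n", status.ullTotalPhys / (1024 * 1024));
        printf("physical RAM free  : %llu MB\n", status.ullAvailPhys / (1024 * 1024));
        printf("page file total    : %llu MB\n", status.ullTotalPageFile / (1024 * 1024));
        printf("page file free     : %llu MB\n", status.ullAvailPageFile / (1024 * 1024));
    }
    return 0;
}

If “physical RAM free” sits near zero and the page file is heavily used while the client runs, you are in the worst case described above, and more RAM is the practical fix.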
RIP City of Heroes
(edited by Behellagh.1468)
First @stephanie wise, learn to use the quote system so it’s clear what is a quote from another poster and what’s your comment. Walls of text are confusing. […]
Ok, just found the quoting system, which will make this easier. Yes, I know about 64-bit. If you have a 64-bit processor and not a 64-bit OS, you run in legacy mode, taking a chunk of 64 bits and processing it 32/32, which makes your system 150% as fast as a 32-bit system; if you have a 64-bit processor and a 64-bit OS, the process is 64-bit and you run 200% as fast as a 32-bit system. 64-bit provides more memory and speed: over 4 GB of memory and over 3 GB for speed, which are the limits of a 32-bit system. And yes, systems usually go to RAM (random access memory) for data that is in use; that data stays in RAM as long as you do not reboot the system. As for data temporarily on the disk, it is rather permanently on the disk, unless you are talking about the virtual disk space, where the PC creates extra memory in the virtual disk space when you do not have enough RAM on your system. Totally agree with you that more memory is better; faster hard drives are also better, since the slowest component in a system is always the hard drive and the system always runs at the speed of the slowest component. That is why fast systems usually use the fastest-rpm hard drives, or RAID 0 (hard drive x2) with queuing (x32), or an SSD (solid state drive), or a hybrid RAID of SSD with a hard drive doing the caching in maximize mode.
I was just saying that someone was having the issue with the 64-bit client, but their system had only 4 GB of memory. So basically they need more memory; that was their problem. For the game with the 64-bit client you need more than 4 GB of memory, I would say at least 8 GB as a minimum, since the system always has some processes running in the background apart from the game, and since a 64-bit system running a 64-bit program will use more than 4 GB of RAM. 64-bit can use up to 128 GB of RAM or more, depending on your OS and system limits. Sorry again about the quoting system, I had not seen it.
(edited by stephanie wise.7841)
There is so much wrong with what you just said it’s clear now that there is no way to educate you about the factors involved with 64-bit code. Believe what you believe. It’s wrong but whatever.
RIP City of Heroes
There is so much wrong with what you just said it’s clear now that there is no way to educate you about the factors involved with 64-bit code. Believe what you believe. It’s wrong but whatever.
What is wrong? And who said that I need educating? Maybe you need some educating? Nothing that I said is wrong. It is like your comment that DX12 is not backward compatible with older games using older DX versions: the DLLs for DX9, 10, 11 are on the PC, of course they are. DLL stands for dynamic-link library, and yes, they are in the system. It proves that it is backward compatible, just like when you update a driver to the latest version: the old versions are still in the system and you can use them or roll back to them. Why is it that each time someone says something, people say that they are wrong, or that they need educating? Did you verify the info before making such a comment? You are not even able to say what I am wrong about, because I am not wrong. It is not a question of belief on my part, I am telling you the truth. My old system had a 64-bit processor and no 64-bit OS was out yet, so it was working in legacy mode. You think I did not search and educate myself about this, to know how to set it up, 11 years ago? Intel talked about it, Microsoft also, at that time. Do you know more than Intel and Microsoft, to make such a comment? I started on PCs in 1984; at that time you had to write your own software to make game sounds or images, using BASIC or Lotus. Everything was done by typing. But of course everything I say is wrong to you. Doh.
So if the 64-bit client with 4 GB of memory does not solve the issue, it means the game uses more than 4 GB of memory, or needs the extra memory to avoid the address-space fragmentation problem you get with 32-bit. Either way, you need more than 4 GB of memory.
Well, not true, as the system can rely on HDD swap space to provide additional virtual RAM. Your performance will drop considerably, as HDD access and data reads and writes are significantly slower, but in theory this should be no problem.
Here’s an analogy. Think of memory as pages in a book and program data as words on a page. Going from 32-bit to 64-bit is like having the book printed in a larger font: the same number of words, but it needs more pages to hold them. […]
If you have a 64-bit processor and a 64-bit OS, the process is 64-bit and you run 200% as fast as a 32-bit system. […] For the game with the 64-bit client you need more than 4 GB of memory, I would say at least 8 GB as a minimum. […]
The 64-bit client was made to fix OutOfMemory errors caused by the GW2 32-bit app trying to address more than its maximum 4 GB of addressable RAM. Being compiled for a 64-bit processor, it will not work on a 32-bit machine, or on a 64-bit machine with a 32-bit OS.
Why 64 bits:
64-bit memory addresses could address up to 16 exabytes of data, which, when you think about it, would be a bit overkill, so 64-bit machines use fewer bits to address memory space: 48 bits to be exact, as specified in the AMD64 instruction set, which is standard nowadays on both AMD and Intel and allows machines to address 256 TB of memory. Until Windows 8.1 this wasn’t even supported by Microsoft, and the addresses Microsoft implemented were 43-bit (if I’m correct), limiting the user of the OS to 8 TB of addressable space.
So XP 64, Vista 64, Windows 7 64-bit and Windows 8 64-bit were all limited to 8 TB of actually addressable space. This comes with a problem:
This was fast becoming an issue due to HDDs growing bigger and bigger, with 6 TB HDDs on the market, which is likely one reason for Microsoft to push Windows 10. Problems would arise when files bigger than 8 TB were stored on a drive. Now consider how often you use files of that size, 8 TB… it is a number only relevant for 4K video editing; in most other contexts it’s beyond huge. Imagine formatting a stripe set of 8 6 TB HDDs… 48 TB… well, this might actually work, but the file size would again be limited…
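If you want to see the address-width difference for yourself, here is a small sketch (nothing GW2-specific, just standard C++): build it once as 32-bit and once as 64-bit and compare. The 256 TB figure is simply 2^48 bytes, the virtual address space current x86-64 CPUs expose.

#include <cstdio>
#include <cstdint>

int main() {
    printf("pointer size         : %zu bytes\n", sizeof(void*));      // 4 on a 32-bit build, 8 on a 64-bit build
    printf("size_t width         : %zu bits\n", sizeof(size_t) * 8);  // caps the largest single object
    printf("48-bit address space : %llu TB\n",
           (unsigned long long)((1ull << 48) / (1024ull * 1024 * 1024 * 1024)));
    return 0;
}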
Speedgain?
No there is none…..
For 32-bit instructions on a 64-bit machine: instead of a 64-bit instruction being processed, one 32-bit instruction is entered, and if you look at it closely you’ll gain nothing from 64-bit except some calculation precision and the ability to work with bigger numbers. In effect it could even be slower, as the addresses and data packets are longer, which means more data has to be moved through all the I/O, pipelines and so on. But on a full 64-bit machine this is mostly unnoticeable.
Using 64-bit data on a 32-bit computer is also possible, but it will be slower (about 3 clocks for a 64-bit argument instead of 1 for a 32-bit one), and the 32-bit computer cannot handle direct 64-bit addressing. In the end the same precision can be achieved, only with a time loss due to the inefficient workaround needed for data values that are too long for 32-bit.
This would be the same when computing 128-bit values on a 64-bit PC.
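A rough sketch of that “3 clocks instead of 1” point, assuming a standard C++ toolchain (the exact clock counts of course vary by CPU): the same 64-bit addition is a single instruction on a 64-bit build, while a 32-bit build has to split it into an add plus an add-with-carry on the two halves.

#include <cstdint>
#include <cstdio>

int main() {
    uint64_t a = 0x00000001FFFFFFFFull;   // needs more than 32 bits
    uint64_t b = 1;
    uint64_t c = a + b;                   // one 64-bit add on x86-64;
                                          // add + adc of the low/high halves on 32-bit x86
    printf("%llx\n", (unsigned long long)c);  // prints 200000000
    return 0;
}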
Speed gain comes from raising transfer rates (speed) or from adding more components: parallel computing, multitasking, or pooling resources.
2 CPUs would be faster when the multithreading of a program allows it; 4 better still, and 8 or 16 better as well. The problem becomes the size of the needed components.
2 hard disks, when used in tandem, can both add speed (so RAID 0 is a speed advantage when reading and writing, RAID 1 only when reading, as both disks store the same, mirrored, data).
2 network cables, when you can combine the data rate.
But 32-bit vs 64-bit: no difference in speed.
They do differ in the maximum size of numbers and in calculation precision.
Any actual difference is based on physical numbers, clock speed and cores, and to a lesser extent the CPU’s efficiency in using those clocks (only the last would involve threads).
Been There, Done That & Will do it again…except maybe world completion.
(edited by PaxTheGreatOne.9472)
The 64-bit client was made to fix OutOfMemory errors caused by the GW2 32-bit app trying to address more than its maximum 4 GB of addressable RAM. […] Until Windows 8.1, the addresses Microsoft implemented limited the user of the OS to 8 TB of addressable space. But tbh at 32 GB I run a fairly powerful PC; I do not use a swap drive unless I need to video edit. […] But 32-bit vs 64-bit: no difference in speed. They do differ in the maximum size of numbers and in calculation precision.
For Windows 8 it can address more than that: each storage pool on the ReFS filesystem is limited to 4 PB (4096 TB), but there are no limits on the total number of storage pools or the number of storage spaces within a pool. Windows 8 comes with Storage Spaces. https://en.wikipedia.org/wiki/Features_new_to_Windows_8#Storage
At the processor level, 64-bit takes chunks of 64 bits of memory, and 32-bit takes chunks of 32 bits, so processing 64 bits is double the memory compared to 32 bits. Multiply that by the GHz and the cores and threads, and I think there is a speed benefit.
Storage solutions can be made incredibly large by using software RAID solutions, but I was trying to tell you about the addressable physical limits of files, as the maximum storage size is not the same as the addressable file size.
Been There, Done That & Will do it again…except maybe world completion.
I would like to explain the idea of your “twice as fast” 64-bit as well, but with smaller numbers.
You have a 4-bit and an 8-bit machine.
You need to do something with data:
you need to add 1 and 1…
For 4-bit the value would be 0001, for 8-bit 0000 0001.
4-bit adds 0001 + 0001 = 0010 (1 clock)
8-bit adds 0000 0001 + 0000 0001 = 0000 0010 (1 clock)
Both get the same answer, 10 -> the binary for 2.
If the 8-bit machine (standing in for the 64-bit one) were handed 0001 0001, that would be 17 to the 8-bit CPU and your calculation would fall apart.
With merging and decoding:
- shift the first value: 0000 0001 -> 0001 0000 (1 clock)
- add the new value 0000 0001: 0001 0000 + 0000 0001 = 0001 0001 (1 clock)
- 0001 0001 + 0001 0001 = 0010 0010, being 34… (1 clock)
- extract the last 4 bits -> (0000) 0010, leaving 0010 0000 (1 clock?)
- shift the value back -> 0000 0010 (1 clock?); now you have 2 values, both being 0010…
It would seem handy to merge and decode like this, but the decode takes more clocks than simply making a second calculation, and in the end it would result in a speed loss: you could make 2 separate calculations in 2 clocks, while you would need at least 3 clocks (provided steps 1 and 2, shift and add, could be done in 1 clock, and steps 4 and 5 could be done in 1 clock) and more likely 5 clocks to merge the info and decode it again… 3 clocks would be a 33% speed loss and 5 clocks a 60% speed loss over just making 2 basic calculations…
So: no, not 2 connected 32-bit instructions…
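The nibble walkthrough above can also be written out as code. This is only an illustration (standard C++, 8-bit words with two 4-bit lanes, not anything from the game) of the packing/unpacking overhead described, plus the extra hazard of a carry leaking from one packed lane into the other:

#include <cstdint>
#include <cstdio>

int main() {
    // Pack two independent 4-bit operands into one 8-bit word, as in the example.
    uint8_t packed_a = (1u << 4) | 1u;        // 0001 0001 = 17
    uint8_t packed_b = (1u << 4) | 1u;        // 0001 0001 = 17

    uint8_t sum = packed_a + packed_b;        // 0010 0010 = 34, one add

    uint8_t low  = sum & 0x0Fu;               // extract low lane  -> 0010 = 2
    uint8_t high = (sum >> 4) & 0x0Fu;        // extract high lane -> 0010 = 2
    printf("low = %u, high = %u\n", (unsigned)low, (unsigned)high);

    // The shifts and masks above already cost more operations than two separate adds.
    // Worse, if one lane overflows, its carry corrupts the neighbouring lane:
    uint8_t bad = (uint8_t)(((9u << 4) | 9u) + ((9u << 4) | 9u));   // each lane: 9 + 9 = 18
    printf("low lane reads %u instead of 18\n", (unsigned)(bad & 0x0Fu));
    return 0;
}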
Been There, Done That & Will do it again…except maybe world completion.
(edited by PaxTheGreatOne.9472)
Storage solutions can be made incredibly large by using software RAID solutions, but I was trying to tell you about the addressable physical limits of files, as the maximum storage size is not the same as the addressable file size.
Ok, so for the storage it would use a VR trick to multiply its basic addressable physical file limit, since with VR you can multiply the number of virtual PCs on the same system. My older PC was using RAID 0 with queuing: 2 drives that work in parallel at the same time, each sending 32 queued requests per disk revolution to the processor. The only problem with RAID 0 is that it has no backup in case of disk failure. My other PC uses a hybrid RAID of a solid state drive with a hard drive for caching; it is about equivalent, since a solid state drive is memory chips similar to RAM, so some processes are faster on one and some on the other. So I know about RAID solutions.
But anyway, the game GW2 must not use over 8 TB of addressable physical file memory? Yep, 32 GB of RAM is nice. What is less nice is when the FPS drops because there is too much animation for DX9. You could have a PC with 256 GB of memory and 6 quad-core processors and the same would happen, since it is a DX9 limit. I understand that they want the game to work on every system, old and new. People with old systems have lower FPS than people with new systems, so they lag even more than us, but even with a new system you experience lag. It is not because the technology is not there, it is because the implementation is not done. Before DX9, every gaming company would optimize the game for the newest version. They stopped doing that at DX10 because of the small benefit it offered compared to the cost and time. But now DX12 brings over 70% improvement compared to DX11 and solves the CPU bottleneck, so it would be good to optimize the game. Every player would benefit from it in the long run, and people that do not want to upgrade their systems can still use the DX9 version; that way it still works with all systems. Anyway, we are not the ones that make the decision; we will see what they will do.
I would like to explain the idea of your “twice as fast” 64-bit as well, but with smaller numbers. […] So: no, not 2 connected 32-bit instructions…
I tried to find the exact page that talked about this, but it was 11 years ago and I did not find it. It was in 2005, with the EM64T Intel 64-bit processors. It said that a 32-bit processor takes a chunk of 32 bits of memory and then processes the 32 bits (100%). If you use a 64-bit processor with no 64-bit OS, it takes a chunk of 64 bits and processes it 32/32 at the processor level (legacy mode), and that is 150% as fast as 32-bit. And a 64-bit processor with a 64-bit OS takes a chunk of 64 bits of memory, processes it in one chunk of 64 bits, and is 200% as fast compared to the 32-bit process. But after 11 years, trying to find the page, it could have been moved or removed; I am not able to find it any more.
The only thing I have found is this: “Larger physical address space in legacy mode”
When operating in legacy mode the AMD64 architecture supports Physical Address Extension (PAE) mode, as do most current x86 processors, but AMD64 extends PAE from 36 bits to an architectural limit of 52 bits of physical address. Any implementation therefore allows the same physical address limit as under long mode.
https://en.wikipedia.org/wiki/X86-64
It could be because of processor technology (PAE). I am not a chip maker and can’t find the info any more, so we will leave it at that.
What is less nice is when the FPS drops because there is too much animation for DX9. […] But now DX12 brings over 70% improvement compared to DX11 and solves the CPU bottleneck, so it would be good to optimize the game. Every player would benefit from it in the long run, and people that do not want to upgrade their systems can still use the DX9 version. […]
The problem is that if you wanted to upgrade to DirectX 12 compatibility, you would have to rewrite the game’s 3D engine (i.e. the 3D engine embedded in gw2.exe and gw2-64.exe) largely from scratch to allow multithreading on the CPU and multithreaded communication with the GPU when a computer is capable of supporting DX12…
It should provide the same capabilities to DirectX 12 users as the engine provides to legacy (below DX12) users, while maintaining support for and upgrading both parts until the legacy part has diminished so far that you can remove it altogether, and without unfair advantages for PvP-ers in the league games…
This would create:
gw2.exe (32bit OS, any dx9+ GPU)
gw2-64.exe (64 bit CPU & OS, any dx9+ GPU)
gw2-dx12.exe (32bit OS, and a dx12 compatible GPU)
gw2-64-dx12.exe (64 bit CPU & OS, a dx12 compatible GPU)
All these files are platforms the whole game needs to run on… and all these files need maintenance…
This seems easy, but the original Guild Wars 3D engine is an old, extremely heavily modified DX8 Unreal engine (with Havok physics) 3D engine (if I’m correct, which I’m now pretty sure about regarding the name) which has seen 2 major rebuilds and a transition to DX9, and it is fully owned by A-Net… A-Net would have to develop a DX12 version with full backwards compatibility, which is a plan that would likely take more developer time than you and I would like to see taken away.
It is not a case of running to a 3D engine company and saying: “pls I need the next version…..” and then an easy recompile… (easy… lolz)
And I’d be all for an upgrade, as long as I can keep playing GW2 with updates and expansions… I’ve just seen a tiny bit of action in PvE land and there is still so much to do on other parts which are being addressed soon™, so I doubt forcing A-Net into making DX12 would be my first priority at this time, or at all…
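To make the “support both paths behind one engine” idea concrete, here is a purely hypothetical C++ sketch (invented names, not ArenaNet’s code): the game code talks to a single rendering interface, and a legacy backend and a DX12-style backend are chosen behind it at startup. Whether that ships as one binary or several, both backends still have to be written and maintained, which is exactly the cost being described above.

#include <cstdio>
#include <memory>

struct DrawCall { /* mesh, material, transform, ... */ };

class IRenderBackend {                       // everything the game code sees
public:
    virtual ~IRenderBackend() = default;
    virtual void BeginFrame() = 0;
    virtual void Submit(const DrawCall& dc) = 0;
    virtual void EndFrame() = 0;
};

// Legacy path: single-threaded submission, DX9-era state machine.
class LegacyDx9Backend : public IRenderBackend {
public:
    void BeginFrame() override { printf("dx9: begin frame\n"); }
    void Submit(const DrawCall&) override { /* set states, issue draw call */ }
    void EndFrame() override { printf("dx9: present\n"); }
};

// Modern path: command lists that several threads could record in parallel.
class Dx12Backend : public IRenderBackend {
public:
    void BeginFrame() override { printf("dx12: reset command lists\n"); }
    void Submit(const DrawCall&) override { /* record into a command list */ }
    void EndFrame() override { printf("dx12: execute lists + present\n"); }
};

std::unique_ptr<IRenderBackend> CreateBackend(bool dx12Capable) {
    if (dx12Capable) return std::make_unique<Dx12Backend>();
    return std::make_unique<LegacyDx9Backend>();
}

int main() {
    auto renderer = CreateBackend(/*dx12Capable=*/true);
    renderer->BeginFrame();
    renderer->Submit(DrawCall{});
    renderer->EndFrame();
    return 0;
}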
Been There, Done That & Will do it again…except maybe world completion.
(edited by PaxTheGreatOne.9472)
Dev: It can’t be done
Forum Person : pages and pages of how I feel I know better than you.
/end
pve, raid, pvp, fractal, dungeon, world clearing, legendary questing.. Zapped!
Dev: It can’t be done
Forum Person : pages and pages of how I feel I know better than you./end
Yup, you’re right…
/end
Been There, Done That & Will do it again…except maybe world completion.
So many words, with so little effect…
Programmer
I’m not sure if you’re able to answer these but I have two questions. You don’t have to get into the specifics.
How much work would it require to incorporate something like DX12 into GW2?
Would you have to re-write a lot of the code in order to do so?
Dev: It can’t be done
Forum Person : pages and pages of how I feel I know better than you./end
It’s not about whether it can be done or not.
It is more that ArenaNet and the devs don’t have plans to do it, because it is quite a difficult, long and resource-intensive path whose results are not worth all that it takes to do it.
But here there are some people who stubbornly refuse to understand that, bringing walls of posts based on false premises and not-so-well-understood concepts.
i7 5775c @ 4.1GHz – 12GB RAM @ 2400MHz – RX 480 @ 1390/2140MHz
(edited by Ansau.7326)
Anyone else find it hilarious that ANET devs hate their own forums so much that Jon Olsen actually wrote more characters and went to the trouble of linking to a Reddit post that has only a five-word response, rather than answer the question directly here? Awesome.
Anyone else find it hilarious that ANET devs hate their own forums so much that Jon Olsen actually wrote more characters and went to the trouble of linking to a Reddit post that has only a five-word response, rather than answer the question directly here? Awesome.
I noticed this, as well. Not that I think they hate the Official Forums; if so, he would not have posted. Just think it would have been easier to just post, ‘We have no plans to implement this (at this time).’
Nice, instead of reposting that one short sentence here you link to a 3rd-party page.
In the whole of this thread I don’t see a single major benefit DX12 would bring to GW2 being put forward as a reason Anet would want to do this, especially as it’s still only available to a minority of Windows gamers and the number of Win10 installations is only growing at a snail’s pace, in spite of Microsoft resorting to spamming underhanded ‘important’ updates to Windows 7/8 users attempting to coerce or trick them into ‘upgrading’ to a tablet OS.
I’d rather they focus on living world again.
Dev: It can’t be done
Forum Person : pages and pages of how I feel I know better than you./end
Yup, you’re right…
/end
You were more patient than I was. I salute you.
RIP City of Heroes
LOL; let’s stick then to the 10+ year old DX9, introduced with GW1 back in 2006…
‘would of been’ —> wrong
I have been wondering, did anyone measure the drawcalls of GW2?
Henge of Denravi Server
www.gw2time.com
OP, change the heading to “I WANT” please.
I’m not sure if you’re able to answer these but I have two questions. You don’t have to get into the specifics.
How much work would it require to incorporate something like DX12 into GW2?
Would you have to re-write a lot of the code in order to do so?
Just incorporate it you say?
I’m not a dev but you know, the answer to that is probably simple: very, very little work.
99% of games – including AAA titles – that were “next gen” with the newest DX have pretty much all done the same kind of crap ever since DX8-9: slap on a “next gen” shader and wooooooo, it’s DirectX 10, 11 or 12 or whatever! Of course the rest of the game is 99.9% coded with what are probably DX9 features.
I’m sure Anet can do the same. They could always give us async shaders for 5 more fps on AMD cards and 10 less fps on Nvidia cards. The game will look exactly the same, it’ll just perform either slightly better or slightly worse depending on what side in the GPU wars you took. Hey you asked for it.
I agree … DX 12 would be great.
Even DX 11 would be nice.
It would also be nice if we had Unreal Engine 4 implemented in this game for better performance and optimisation.
Sadly, any kind of upgrade like that would require way too many man-hours to actually implement.
My hope was that we would see those features in GW3, but they also crushed this dream, since there won’t be a GW3, but rather some kind of update like HoT was.
So we are stuck with DX9c and the GW1 engine. Like 10-year-old technology.
You will probably get a response like: the performance of the game doesn’t matter if the story is great, etc.
If it makes you feel any better, not even a 20-core CPU with quad Titan X SLI will get you stable 60 fps in massive WvW battles.
Corsair RM650x, Fractal Define S (with window panel)
Sad.
Guild Wars 2 is destined to end up like Lineage 2 in terms of performance. No matter what hardware people get, the performance will always be bad.
Windows 10
Sad.
Guild Wars 2 is destined to end up like Lineage 2 in terms of performance. No matter what hardware people get, the performance will always be bad.
My performance has been great the last four years. If I were to upgrade it would be even better. The same goes for other people. As they upgrade their computers, they will eventually be able to play on higher graphics settings. The graphics on the highest settings look infinitely better than WoW and that game has been out for over a decade.
Sad.
Guild Wars 2 is destined to end up like Lineage 2 in terms of performance. No matter what hardware people get, the performance will always be bad.
My performance has been great the last four years. If I were to upgrade it would be even better. The same goes for other people. As they upgrade their computers, they will eventually be able to play on higher graphics settings. The graphics on the highest settings look infinitely better than WoW and that game has been out for over a decade.
I don’t believe that.
Put all your settings on high and try to get 60 fps in WvW or at any populated event like boss kills, etc.
It’s just not possible.
And 144 FPS… big NOPE.
If the game could actually utilize hardware resources before it gives us FPS lag, that would be great optimisation.
But what do you see in GW2? CPU usage at 40%, GPU usage at 30%, but FPS dropping. Ahem, why don’t you use 100% of the CPU and GPU, GW2? Why?
We don’t actually need DX12. We just need this game to be better optimised, to actually use Hyper-Threading, and to utilize 100% of the GPU and CPU if it needs to.
Sadly this 10-year-old engine can’t do that, and won’t ever be able to.
Corsair RM650x, Fractal Define S (with window panel)
In my opinion, exactly that lack of optimisation will cause the end of GW2.
And then the developers will be like: “Why don’t you want to play anymore? We have great updates and much new content.”
We even had server merges recently. Meaning? Well, we have fewer players than before. Players are losing interest in this game, and don’t tell me that game performance has nothing to do with it. Sure, it’s not just that, but a fair number of players quit this game because of the FPS drops. We are in 2016, and if the game can’t deliver at least those 60 fps… well, that’s just sad.
Corsair RM650x, Fractal Define S (with window panel)
My performance has been great the last four years. If I were to upgrade it would be even better. […]
I don’t believe that. Put all your settings on high and try to get 60 fps in WvW or at any populated event like boss kills, etc. It’s just not possible. […]
Read what I said again instead of inferring something that I didn’t say.
Dev: It can’t be done
Forum Person : pages and pages of how I feel I know better than you.
/end
If a dev says “It can’t be done” I take it with a grain of salt. Because over the years I have heard a lot of devs saying “it can’t be done” until it was done (for whatever reasons).
Other MMO game companies with an “old” engine (a more or less single-threaded engine for DX9) upgraded their engines to be multi-core/multi-threaded and to also support DX12. So those companies saw a commercial benefit in this development effort.
For GW2 I would not see the biggest performance boost in DX12 itself, but in the optimization (and partly redesign) of the game engine.
However, Mike O’Brien does not say “It can’t be done” but instead that they have no plans to do this. That’s his decision.
GW2 runs more than fine on at least a Haswell i3 with a GTX 750 Ti and CPU-intensive settings toned down.
There are no excuses but bad hardware.
i7 5775c @ 4.1GHz – 12GB RAM @ 2400MHz – RX 480 @ 1390/2140MHz
What they mean to say is that they have too much time invested into creating useless gem store products (like wings, gliders and backpacks) rather than actually releasing updates to the game itself.
Doesn’t matter. Maybe if we ask them to add it as a gem store product we will see results.
As for those who say I said rubbish: I don’t. If you cannot understand the relevance of what I say, you have much more to learn. As for the UEFI comment to PaxTheGreatOne, it was about his comment about having upgraded from Vista up to Windows 10, and wondering why DX12 is not available on all OSes, and why, when you buy a new PC, they send you the cheap graphics card model that is low standard. As for the comment of Elden Arnaas.4870: Windows 10 is a free upgrade until July 29th; if you did not upgrade before that date then you will have to buy the Windows 10 OS or a new computer with Windows 10 on it. It is not that they will make you pay for what was free, like you seem to think. I agree with you on the part about old systems needing some investment to be kept current. You complain about articles that provide information; is it because you cannot understand the relevance? And as for what you said before, that it is not as simple as that to make a program multithreaded: I agree with you, there are some functions that need to be properly designed by the devs and tested, but that is part of a dev’s job. That is why I did not mention it, unless you want me to give you around 100 pages on the subject, since there are many programming languages that could be used by the software. Just to give you an example of what we are talking about:
Many programming languages support threading in some capacity. Many implementations of C and C++ support threading, and provide access to the native threading APIs of the operating system. Some higher-level (and usually cross-platform) programming languages, such as Java, Python, and .NET Framework languages, expose threading to developers while abstracting the platform-specific differences in threading implementations in the runtime. Several other programming languages also try to abstract the concept of concurrency and threading from the developer fully (Cilk, OpenMP, Message Passing Interface (MPI)). Some languages are designed for sequential parallelism instead (especially using GPUs), without requiring concurrency or threads (Ateji PX, CUDA).
A few interpreted programming languages have implementations (e.g., Ruby MRI for Ruby, CPython for Python) which support threading and concurrency but not parallel execution of threads, due to a global interpreter lock (GIL). The GIL is a mutual exclusion lock held by the interpreter that can prevent the interpreter from simultaneously interpreting the applications code on two or more threads at once, which effectively limits the parallelism on multiple core systems. This limits performance mostly for processor-bound threads, which require the processor, and not much for I/O-bound or network-bound ones.
Other implementations of interpreted programming languages, such as Tcl using the Thread extension, avoid the GIL limit by using an Apartment model where data and code must be explicitly “shared” between threads. In Tcl each thread has one or more interpreters.
Event-driven programming hardware description languages such as Verilog have a different threading model that supports extremely large numbers of threads (for modeling hardware).
https://en.wikipedia.org/wiki/Thread_(computing)#Multithreading
So of course I do not go into all the detail. The devs should know how to do their job, so I should not have to go into detail like that. I am talking about the basics of it, and even staying very basic, people say rubbish. I am telling you the truth; all the info I gave you, you can find on the web yourself. Rather than trying to shoot me down and saying rubbish, look at the information and how it is relevant. If you are not even willing to try to understand, why are you here? To make your opinion more valid than someone else’s? We are discussing a subject here that has many simple things and many complex things; if you cannot understand the simple things, why even try to go higher?
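As a concrete footnote to the C/C++ sentence in that quoted passage, here is a minimal, self-contained example of native threads in standard C++ (nothing GW2-specific, and no claim about how the game itself is structured): two std::thread workers sum the two halves of an array and can genuinely run on two cores at once.

#include <cstdio>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    std::vector<int> data(1'000'000, 1);
    long long left = 0, right = 0;
    auto mid = data.begin() + data.size() / 2;

    // Each std::thread maps onto an OS thread, so the two halves can run in parallel.
    std::thread t1([&] { left  = std::accumulate(data.begin(), mid, 0LL); });
    std::thread t2([&] { right = std::accumulate(mid, data.end(), 0LL); });
    t1.join();
    t2.join();

    printf("sum = %lld\n", left + right);   // 1000000
    return 0;
}

The hard part in a game engine is not spawning the threads, it is splitting the work so the threads rarely have to wait on each other, which is what “just make it multithreaded” glosses over.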
What?
Beats me, but I’m thinking of having all of that tattooed on my back. Chicks checkin’ me out gonna be like “omg you must be so smart… if nothing else!” xD
Many programming languages support threading in some capacity. Many implementations of C and C++ support threading, and provide access to the native threading APIs of the operating system. […]
What?
Beats me, but I’m thinking of having all of that tattooed on my back. Chicks checkin’ me out gonna be like “omg you must be so smart… if nothing else!” xD
They’re just copy and pasting from Wikipedia.
GW2 runs more than fine on at least a Haswell i3 with a GTX 750 Ti and CPU-intensive settings toned down.
There are no excuses but bad hardware.
Nah, it doesn’t. Performance is poor for how the game looks. Look at the performance in the xpac’s first zone; it’s awful. Then there are things like reflections: why on earth are they being rendered on the CPU?
I respect that Blizzard actually upgrades their engine to take advantage of newer hardware. WoW wasn’t always DX11, it used to be DX9 and I think DX8. The engine upgrade gave a big boost in performance as well.
Windows 10
“It can’t be done,” in this context almost always means, “with current funding and/or resource allocation.”
Nice, instead of reposting that one short sentence here you link to a 3rd-party page.
Considering how many times this question keeps getting asked, my bet is that he has this one bookmarked and a copy/paste is faster.
Put all your settings on high and try to get 60 fps in WvW or at any populated event like boss kills, etc. It’s just not possible. […] We don’t actually need DX12. We just need this game to be better optimised, to actually use Hyper-Threading, and to utilize 100% of the GPU and CPU if it needs to. Sadly this 10-year-old engine can’t do that, and won’t ever be able to.
You do not need 60 fps for a game to be playable. As long as it’s over 24 fps your brain will not tell you differently; once it drops below 24 fps, your brain will notice. This whole “the game must run at 60 fps or it’s garbage” is in fact garbage itself. I can play the game with everything on max and I never notice frame drops, ever. If you are seeing frame drops, you are doing something wrong.
|Seasonic S12G 650W|Win10 Pro X64| Corsair Spec 03 Case|
You do not need 60 fps for a game to be playable. As long as it’s over 24 fps your brain will not tell you differently; once it drops below 24 fps, your brain will notice. […] I can play the game with everything on max and I never notice frame drops, ever. If you are seeing frame drops, you are doing something wrong.
If you can’t notice the difference between 30 fps and 60 fps then I don’t know what to say, other than that you are unique.
Windows 10
The question is, is DX11/12 a selling feature? Because there is a cost to rebuilding the engine, a cost a developer normally pays when developing a new game. If they designed the engine to make it easy to add new graphics APIs, then that could possibly be added on the fly. But it comes at a cost, unless a developer is drowning in cash.
So maybe GW3.
RIP City of Heroes
How about Anet just sticks to spending money on making the game actually fun to play?
People throw their hands up in the air and wonder why AAA gaming is such kitten these days. Probably because gamers kitten and moan about graphical fidelity like that’s all that matters, and when they get just what they asked for, they only realize after the fact that they probably should have asked for a good game first.
It runs great on my Mac with all the graphics on high. To be honest, I’d rather have them port the game over to PS4.
What is the point in posting screenshots of the fps you get in game where there is barely anyone on the map? Those just show you are in game and mean nothing.
If you want to post meaningful screenshots, go into at least a 20v20 fight in WvW or a populated LA and post the fps you get.
Go to the Shaman in Wayfarer Foothills when the entire map is doing that world boss. There’s a good test.
The Shaman in particular is good proof that the visual effects engine in GW2 is actually broken and the reason for many slowdowns.
You can have 100 peeps in front with a clusterkitten of sparkly effects at 20 fps (expected) when it suddenly – often for about 5 seconds – will go to buttery smooth 60 fps. I’m not even joking, the sudden performance jump is jarring with no noticeable visual change; it’s still 100 peeps tossing effects in front. Then it goes back to “normal”.
For the most part, GW2 is often bottlenecked at the CPU, iirc. That’s why some people still can’t get above a certain threshold even if they have the “newest and bestest” gfx card.
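For anyone who wants to check the “CPU-bound” claim on their own machine, here is a generic, hypothetical sketch (plain C++ with stand-in functions, not tied to GW2 or any real overlay) of the usual approach: time how long the CPU spends preparing a frame versus the total frame time. If the CPU portion dominates while the GPU sits mostly idle, a faster graphics card will not raise the FPS.

#include <chrono>
#include <cstdio>
#include <thread>

// Stand-ins for real engine work; sleeps are used so the example runs anywhere.
void simulate_and_record_draw_calls() { std::this_thread::sleep_for(std::chrono::milliseconds(30)); }
void wait_for_gpu_and_present()       { std::this_thread::sleep_for(std::chrono::milliseconds(3)); }

int main() {
    using clock = std::chrono::steady_clock;
    for (int frame = 0; frame < 3; ++frame) {
        auto start = clock::now();
        simulate_and_record_draw_calls();          // CPU side: game logic + draw-call submission
        auto cpu_done = clock::now();
        wait_for_gpu_and_present();                // GPU side (very roughly)
        auto end = clock::now();

        auto ms = [](auto d) { return std::chrono::duration<double, std::milli>(d).count(); };
        printf("frame %d: cpu %.1f ms of %.1f ms total\n",
               frame, ms(cpu_done - start), ms(end - start));
    }
    return 0;
}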
page-eating bug fix/bump
Playing below 30 fps is fine if the game can maintain it, but if it fluctuates between 25 and 40 fps my eyes get tired easily; that is what always happens in GW2, at least on my mid-range machine. Changing graphics settings doesn’t help at all, so I might as well push it to highest quality. I hope there will be an optimization, especially for AMD CPUs.