New nVidia driver - 310.33 Beta (w/ AO fix)
in Account & Technical Support
Posted by: deltaconnected.4058
Installed yesterday, no performance gains, which was expected. AO works but seems to add some stutter, even at 60+ fps (vsync’d or not).
How long it takes for navigation to get a lock depends entirely on the quality of your receiver. Navigation doesn’t send anything to, or request any data from, the satellites (not counting the time/location “ping” they broadcast down).
http://en.wikipedia.org/wiki/Global_Positioning_System#Basic_concept_of_GPS
(edited by deltaconnected.4058)
c = 299 792 458 m/s
GEO = 35 786 000 m
dist/speed = 0.119 s one way (about 240 ms round trip, since the signal has to be broadcast up first) at the very best. But the usual network congestion, light being slower in the atmosphere, and not being directly under the satellite can easily double or triple that number.
At least that’s my take on it.
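A quick sanity check on those numbers (a sketch in Python, using only the constants quoted above):
C = 299_792_458       # speed of light in vacuum, m/s
GEO_ALT = 35_786_000  # geostationary orbit altitude, m
one_way = GEO_ALT / C         # ~0.119 s straight down
round_trip = 2 * one_way      # ~0.239 s, up first and then back down
print(f"one-way {one_way*1000:.0f} ms, round-trip {round_trip*1000:.0f} ms")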
If your board can support the 3470, it can support the 3570k and 2500k as well. Those with a little overclock will fare a lot better than the 3470 at almost the exact same price.
Wait however long you need and go for the i5-3570k or 2500k. Then OC it a tad.
in Account & Technical Support
Posted by: deltaconnected.4058
Not the perfect solution, but I’ve found logging in on my laptop (still won’t load), then trying back on the desktop usually helps when this happens.
in Account & Technical Support
Posted by: deltaconnected.4058
+1 for holding the fort.
Just to clear up the CPU usage: the 2.5 was only from my example in LA (Lion’s Arch). Depending on where you look it might be 1.5, 3.5, or even 4.0, but assuming the GPU is not the bottleneck, one (or more) of those threads will be using very close to the maximum allotted to a single core. A few too many flaming infractions, so I don’t post much any more :p
(eg. my new Jormag benchmark comes very close to maxing out my 2500k @ 4.8)
http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx
Per-thread usage. One will be at a percent just under 100/#cores.
in Account & Technical Support
Posted by: deltaconnected.4058
Shadows ultra → high. It’s been like this since the BWEs; not even 3x 580s will help.
in Account & Technical Support
Posted by: deltaconnected.4058
Whatever’s on sale. HD7770, 550Ti, 650Ti… sub-$100 will not get far at all.
in Account & Technical Support
Posted by: deltaconnected.4058
The scheduler is part of Windows; all GW2 can do is the affinity trick, but I’ve only seen that help very marginally on Bulldozer (start /AFFINITY 55 or 1A or 05 /B path-to-gw2.exe). No difference on a friend’s 3770k machine.
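For anyone curious what those hex masks mean: each bit of the mask selects one logical CPU (bit n = CPU n). A quick sketch in Python to decode them:
for mask in (0x55, 0x1A, 0x05):
    cpus = [n for n in range(8) if mask >> n & 1]
    print(f"0x{mask:02X} -> logical CPUs {cpus}")
# 0x55 -> [0, 2, 4, 6], 0x1A -> [1, 3, 4], 0x05 -> [0, 2]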
in Account & Technical Support
Posted by: deltaconnected.4058
@Alteris, I think that screenshot is probably showing package temperature and not core, which was always something like 4-5°C cooler on my 955BE server box.
in Account & Technical Support
Posted by: deltaconnected.4058
If your temps are near-freezing as you suggest, why are you complaining about 100% GPU usage?
in Account & Technical Support
Posted by: deltaconnected.4058
With overclocking, the cheaper option between the 2500k and the 3570k; it’ll come down to the luck of the draw how high it can go. Without, the 3570k. Both should work no problem on a Z77 board.
in Account & Technical Support
Posted by: deltaconnected.4058
Problem from day 1 was that the only requirements ANet’s given were the minimum. Nothing about recommended.
When you do upgrade, stay away from AMD. Intel’s been miles ahead in single-threaded performance since the Core series, and judging by the Piledriver APU reviews, this likely isn’t going to change for a while.
in Account & Technical Support
Posted by: deltaconnected.4058
Read my post here
in Account & Technical Support
Posted by: deltaconnected.4058
Software can’t set hardware on fire. It can only point out obvious flaws in cooling.
in Account & Technical Support
Posted by: deltaconnected.4058
And why exactly shouldn’t it be maxing out?
If your PC or GPU can’t deal with the heat, either underclock it, turn down your house/apartment heating (or turn on the AC), or limit frames to 30. A card locking up is a straight-up hardware issue, not a game issue.
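For what it’s worth, a frame limiter isn’t magic. Here’s a minimal sketch of what one does (Python, with a stand-in render call, nothing GW2-specific):
import time

def render_frame():    # stand-in for the real per-frame work
    pass

TARGET = 1 / 30        # 30 fps cap
while True:
    start = time.perf_counter()
    render_frame()
    spare = TARGET - (time.perf_counter() - start)
    if spare > 0:
        time.sleep(spare)  # sleep off the leftover frame time instead of loading the GPU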
in Account & Technical Support
Posted by: deltaconnected.4058
So you run a slightly overclocked 6970, did zero stress testing for temperatures, and now complain that it’s getting too hot… Am I reading this right?
in Account & Technical Support
Posted by: deltaconnected.4058
It would help to know where these drops occur. If this is 25 fps in WvW during a zerg, or during a PvE boss zerg, that looks about right. There isn’t a CPU on the market that can run ultra + constant 60 in WvW, and it’s very unlikely we’ll see any large changes to the engine (especially not the 100-300% people expect). Lowering settings is all you can do.
As for outdated or not, in a few months the Qxxxx and Exxxx will be over 5 years old. Since Yorkfield, we’ve had Nehalem, Westmere, Sandy, and now Ivy. As nice as it would be for PC hardware to have a shelf life similar to kitchen appliances, that just isn’t the case. And if it were we’d probably see as much improvement in performance as the compressor in your fridge has had in the last 20 years — none. Not that playing Everquest is necessarily a bad thing…
(Based on hwbot, I’d put Yorkfield at about 33% slower clock-for-clock than Ivy).
If you have no money to spend, you get what you can afford.
If you do have money to spend, stay far and away from AMD.
Not quite. Pretty much the only use for changing device states is power management (wake, sleep, suspend, and manufacturer-defined D1/D2 states).
If those GPUs are showing up in device manager, check your SLI bridge. Then try those two separately. And if it still doesn’t work, then I don’t know, because I’ve never once had an issue with tri-580s.
in Account & Technical Support
Posted by: deltaconnected.4058
Maybe placebo, but I’ve found that to help in all the Ascalon zones. At least it feels smoother.
in Account & Technical Support
Posted by: deltaconnected.4058
In that case, bump up the thread and let em know: https://forum-en.gw2archive.eu/forum/support/tech/nVidia-Surround-UI-Placement-Cutscenes-Character-Selection/page/2
Off-topic, but shutting down doesn’t have to mean clicking “Shutdown” at all. One way is to set the system’s soft-off state (ACPI G2/S5) through a PCI call; all it takes is ring0 (eg. administrator) access. Another is to flag the PROCHOT MSR and then send #THERMTRIP on the system bus (might not work on AMD machines).
Likewise, a hard lockup can be achieved by sending a PCI call that sets a device’s power state to the state it’s already in (eg. GPU D0 -> D0). This overwrites the current memory space with what the driver defines as ‘fresh’ and has extremely unpredictable effects (usually driver problems).
in Account & Technical Support
Posted by: deltaconnected.4058
Did you try deleting local.dat (in my documents)?
127.x is a local (loopback) address, no need to worry about that. 6112 is the game data port (for a lot of other games too).
Since there’s no way to predict when each packet will arrive without jumping into the future, that QoS applies to the uplink only. It works by tagging each packet with a numerical priority; when tc starts queuing packets, it sends the higher-priority ones first.
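The tag itself is just a field in the IP header that tc (or a router) can match on. A minimal sketch of setting it from Python – the destination address here is made up, 6112 is the game port mentioned above, and Linux honors IP_TOS while Windows generally ignores it:
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0xB8)  # DSCP EF (46) shifted left 2 bits
s.sendto(b"ping", ("203.0.113.1", 6112))              # tc can now queue this by priority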
1) How do you know what is or isn’t being culled if it has no impact on the final pixel output? GW2 gives you no polygon or texture info.
2) Software doesn’t cause hardware to overheat. It exposes an obvious flaw in your cooling.
http://images.bit-tech.net/content_images/2011/01/intel-sandy-bridge-review/sandy-bridge-die-map.jpg
If the i5 and i7 are different chips (different silicon layouts), which one is this die shot, and find me the other.
Because as far as I know, one of those LLC circuits is etched away and HT is disabled at the microcode level.
Sparkfly is one of the few zones in the game that pushes all 3 of my 580’s to or very close to 100% usage each no matter where I go. Top card peaks at 88c, lowest at 78, all settings (minus shadows) maxed. If I can keep them under 90, no other well ventilated card should have issues.
http://www.anandtech.com/bench/Product/288?vs=287
I don’t see a difference between the SB i5-2500k and the SB i7-2600k.
Based on anecdotal evidence from the 2500k/2600k overclocking threads, I don’t even think they’re higher binned either.
See if there’s an option to disable the iGPU in BIOS.
in Account & Technical Support
Posted by: deltaconnected.4058
Happens when I open a couple of threads in new tabs. Firefox 16.0.1.
Find me just one person running this game smoothly on a 7-year-old notebook. Just one. And pulling these “smooth” framerates in WvW or LA, not looking at a wall in an empty PvE environment.
Bonus points for finding me one person with a 3.5GHz+ Sandy or Ivy whose problems aren’t an obvious issue with their system.
I honestly can’t wait to see the whining on Crysis 3’s launch from the laptop/outdated/budget hardware crowd.
@OP: disable the GTX600 series turbo and frame limiter.
No. Either the driver’s default fan profile doesn’t make sense and needs to be controlled by something like Afterburner, or there’s a ventilation problem with the case/heatsink.
Furmark will only damage GPUs that are getting too hot (VRegs, memory, or core). As long as the card is properly ventilated this shouldn’t happen, so something might’ve changed between then and now.
Reason I suggested this is that it’s impossible, from a software perspective, to have that kind of impact on hardware. The only way is if it actively writes to hardware-mapped memory (EC or registers), which I know for a fact it doesn’t.
HWiNFO64. Run in sensor mode, configure, and when you click on the entries there’s RIVA Tuner OSD. Play around with the settings till you find something you like.
(Entries are read top-down and appended to the end of a line; the label will be whatever the first element on the line is set to. I renamed mine to CORE#.)
Out of curiosity, what happens to your temperatures if you run both prime95 (blend) and furmark at the same time?
These threads seem to crop up after every update…
As for reinstalling, probably not. If resetting in-game settings (by renaming local.dat) didn’t help, check the usual culprits – PCI-e bandwidth, temperatures, PSU (underclocking). Otherwise it’s placebo.
What Andy said. First time I heard about these “bad” calls too lol.
As for why you don’t see 100% total CPU load, look at my screenshot.
Add up the usage of all the threads and it will equal the total for GW2.exe. Notice the thread that’s using a little less than 25% – that is your bottleneck. Because of concurrency, a thread can’t run on more than one core at a single point in time, which means there will always be one thread using x%, where x = 100/#-of-cores (physical + virtual). Think of this as an upper bound, since each millisecond spent in a Wait state doesn’t count towards usage.
The time each iteration of that loop takes, whether it’s issuing the draw calls or preparing the data another thread needs to issue them, is inversely related to your in-game FPS. More uops per cycle (5ish for Bulldozer vs 8ish for Sandy) and higher frequency will both give linear increases until the GPU or memory becomes the bottleneck.
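If you’d rather see this without Process Explorer, here’s a rough sketch using Python and psutil that samples each thread’s CPU time twice and prints its share of total CPU (the Gw2.exe process name matches the path used elsewhere in this thread; the 5-second window is arbitrary):
import time, psutil

proc = next(p for p in psutil.process_iter(['name']) if p.info['name'] == 'Gw2.exe')
ncpu = psutil.cpu_count()     # physical + virtual cores
before = {t.id: t.user_time + t.system_time for t in proc.threads()}
time.sleep(5)
for t in proc.threads():
    pct = (t.user_time + t.system_time - before.get(t.id, 0)) / 5 * 100 / ncpu
    print(f"thread {t.id}: {pct:4.1f}% of total (per-thread cap ~{100/ncpu:.1f}%)")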
(edited by deltaconnected.4058)
Installed 306.89 yesterday (r306_41-12), no difference on any of my machines. They’ve reused release notes since r300, not a lot of relevant info since 306.23. SLI has worked in GW2 for me since the betas.
Windows Vista/Windows 7
[3D Surround, SLI], GeForce 500 series: With Surround and SLI enabled, after resuming from suspend mode and then playing video on 5 Internet Explorer tabs, the display driver unloads when disabling SLI. [878245]
[SLI], GeForce GTX 285M: After enabling SLI mode, the NVIDIA Control Panel -> Adjust Desktop Size and Position page does not show the connected display images to select. [724229]
(edited by deltaconnected.4058)
in Account & Technical Support
Posted by: deltaconnected.4058
Didn’t have to manually clear settings on mine, but I have a feeling this should help everyone else who still has it stuck on the left after the update.
in Account & Technical Support
Posted by: deltaconnected.4058
Top one was fullscreen – everything lined up where it should be. Bottom was windowed – still off to the left.
Tri-SLI GTX580
306.23 (16xAF forced in nvcpl)
3x 1920×1080
5760×1080 no bezel correction
Display resolution – 5760×1080 Fullscreen
Display interface – normal
in Account & Technical Support
Posted by: deltaconnected.4058
If it still doesn’t work in fullscreen, which I see it doesn’t, then I don’t know. Maybe delete/rename local.dat (under My Documents/Guild Wars 2/) to reset in-game settings, reset the GW2 driver profile (dunno if the newer AMD drivers have this yet; I use all defaults in nVidia), or reinstall drivers with a little praying.
That’s like saying a mouse with 5 extra buttons on the side can also get you banned, because you can strafe/kite easier with your left hand while using your right thumb for abilities.
Pretty sure if those were against the TOS they wouldn’t be pushing out updates to fix UI placement and cutscene stretching. I’d expect something more like SC2 where the game just doesn’t have support for ultrawide resolutions (1920×1080 is as high as fullscreen goes, and windowed will still be 1920×1080 and on the left monitor only).
in Account & Technical Support
Posted by: deltaconnected.4058
5760×1080, nVidia Surround fullscreen (not windowed). 3x GTX580s with 306.23. Didn’t have to change anything, logged in and good to go.
If restricting the number of physical/logical cores helps, then either the brick isn’t supplying enough power, the laptop has an internal amp sensor, or there’s a problem with the way those temperatures were measured (it would help to know what you used).
Based on what I see running a 2500k @ 4.8, those look fine.
in Account & Technical Support
Posted by: deltaconnected.4058
Without the quotes around the whole command, but you do need them around “path to/gw 2/gw.exe” if the path has spaces.
start /AFFINITY 55 /WAIT /b E:\GW2\Gw2.exe
(2A and 05 are the affinity masks for the FX-6000 and FX-4000 series.)
It’s not the 30% the benchmarks show, but it’s a noticeable increase on a friend’s rig using the same side-by-side comparison I did in Lion’s Arch. I’m guessing this will heavily depend on how much you have running in the background too.
The only reason I did a quick 3.2 and 4.8 test is to show that what you get on your system doesn’t have to correspond with their system and testing method. I’m not saying they’re wrong, just that it’s not a 100% reproducible statistic.
(edited by deltaconnected.4058)
in Account & Technical Support
Posted by: deltaconnected.4058
1) I stated how to fix the FX8000’s problem in my second post, which you obviously did not read:
“start /AFFINITY 55 /B /WAIT c:\path\to\gw2.exe”
2) The 1090T has 6 cores. The FX8000 has 4 with hyperthreading. Likewise, the FX4000 is really a dual-core CPU. I don’t care what the spec sheet says; I trust the technical data more.
3) I redid my comparison of 3.2 vs 4.8GHz. And it’s still a linear increase at 1024×768 because it was CPU-limited to begin with, even at 5760×1080.