Showing Posts For deltaconnected.4859:
I’ve tried explaining in my two posts here the two bugs that cause this. Tl;dr is that you need a specific guild aura buff already applied on you before you can get presence. The most reliable ways: waypoint from the same keep to the same keep at a distance far enough to trigger a load screen, waypoint from another objective of the same level (should work across maps even), or figure out which side to run into an objective from (if any).
Waypointing from an objective that you just claimed (while being inside the objective) is weird; when running into one, internally the game applies two Aura buffs onto you – one invisible that tends to be a level lower than the one you claimed with, and the visible one that shows up on your buff bar. But when you claim it, you only get the single visible Aura buff. Probably a workaround that wasn’t updated for the new buffs or for claiming. I don’t have any guilds I can claim with to test that out 100% so it’s just my hunch looking at my logs.
That said, the waypointing from the same keep to the same keep at a distance far enough to trigger a loading screen (around 8000 according to map tooltip) has worked for me every time. Just to make sure I explained myself right: waypointing from citadel to bay should give Aura with no Presence. Then run to the south-east wall of bay, making sure you still have the Aura buff on you. Then waypoint back to bay with a loading screen. Should have the presence buff on you (same thing can be done from NE outer gari wall to gari wp, or from south hills gate to hills wp).
I’ve reported this but not sure if anet’s aware (yet).
Bug #1 is that the new post-fix Aura buffs have higher IDs than Presence’s, so the guild Aura buff has to already be on you before Presence can get applied. That can mean running to a tower that has the Aura and then waypointing to the keep, or running outside of loadingscreen-less wp range and then waypointing back while still having the Aura buff on you.
Bug #2 is that the area used to calc where Presence is applied on a lot of objectives is different than Aura. For example the “center” of Presence for EBG blue keep is more SE than Aura, so running into it from SMC side (from roughly the arch) will give you Aura first then Presence second, but running in from spawn side will fail to give you Presence and then only apply Aura.
Happens to me and several others as well, not necessarily at the same time. You can wait 120 seconds for whatever’s failing to load to time out, but it’ll happen every waypoint til restarting the client anyway. Looking at -maploadinfo shows the loaded-model count stuck at some number lower than the model total. When it does finally get past it, the NPC models will be missing/invisible (nameplates only) and occasionally other things, like your weapons, will be too.
17833742: Error: Map load hang on STATE_AGENT_STREAM detected: IsWorldReady: 1, CharLoadingCount 7 <- this is the only line that’s in the log that doesn’t appear in a good load.
This has been discussed to death and they have firmly said NO!
I find it interesting that Anet would give a firm “no” and at the same time notify the client of all physical attack/heal data going on around it… specifically the skill, the source, the target, and for how much it hit. For everyone, by everyone, within a radius bigger than what you can target.
It may not be ToS friendly to get this data, however it’s helped me a fair bit to know who in my raid failed which mechanic and how many times, all without them needing to have any external program running. Those who know what I’m referring to know that it does not include condition ticks and some traits, however it’s still partially useful to eyeball the damage and compare it to a benchmark you can establish run to run to see how well someone is doing or what effect shuffling parties has.
We already have accurate 3rd party DPS meter addons, all we need is anet to say that it’s all right to use them
Have they stated that it is not? My general thought was that if it didn’t give you a distinct advantage or play the game for you it was okay.
Writing a 100% accurate personal DPS meter is trivial – a bit of engine byte patching can get you as-they-happen combat strings fed right to your own combat processor. The problem comes down to the method used… it’s essentially a hack. While I know these tools are around, chances are you won’t be finding them in the open for that reason.
It’s in the ToS of the game that you cannot use combat log data or something to that effect.
“Use, obtain or provide data related to operation of the Game, including but not limited to:
software that reads areas of computer memory or storage devices related to the Game;
software that intercepts or otherwise collects data from or through the Game;”
7.d., 8.c., and 8.e are also worth noting: https://www.guildwars2.com/en/legal/guild-wars-2-user-agreement/
In other words, if you sneeze too close, you’ll probably have violated something in there. Just running the game is breaking the ToS – there’s no other way for Windows to get the entry point or build an exports table without reading. Make that double if you have any background process like anti-malware.
It’s wording like that that translates to “we can do what we want whenever we want for whichever reason” for anet that pretty much guarantees we’re about as likely to see a gw2 addons section on curse as we are to see an official in-game API for mods (and with it DPS meters). And yes, sticking purely to what’s written, even the screenshot meter could be considered as collecting data from the game.
That said, I’d love to see something official, or at least something to not make me second guess using any mod.
Ya want to say the devs are not competent enough to add this support or that arenanet/Ncsoft are too cheap to invest in the game fine. But please stop with the FUD. Honestly there is no reason to not do this. As it won’t hurt anything and it won’t force anything. It just adds options. Not sure why anyone is against options.
You can add ‘likely do profiling of their own’ to your two reasons.
“There is no reason to not do this” is also a terrible argument… there’s no reason to not do a lot of things for the game – more armor skins, more hairstyles, reviving WvW, OCE servers, even an addon system so I don’t have to write my own and risk a tampering ban for Read/WriteProcessMemory() for something as simple as an accurate personal DPS meter – all of which take up a portion of two limited resources: time and money. Knowing that DX12 won’t help performance and trusting that their art won’t gain from it either, a better question would be ‘what reason is there to do this?’. More options is fine as long as creating those options isn’t sacrificing resources that could be used to actually add or improve something.
The problem is quite visible if you look at it properly. As a programmer I get it, it’s a huge undertaking to re-write code, but that’s more of a Microsoft problem; they should be writing APIs that do NOT require complete re-writes of code, period.
Some ideas 10 years ago may not be so great today, and get dropped (d3dx9). Likewise some ideas today didn’t exist 10 years ago, so you won’t inherently be taking advantage of everything new that comes around. Such is development.
Forgot to add to my post above – making your engine compatible with DX_ was only step 1. Step 2 would include changing/redesigning and retesting again for each new feature of DX_ that you want to take advantage of.
Objects aren’t thread-safe? Well then rewrite them to support multithreading!
Thread-safe doesn’t mean faster or more efficient. On one hand, a thread-safe object can be used across threads without worrying about race conditions; on the other, the semaphores and locks used to make that happen can carry serious performance penalties even when threaded.
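To picture that trade-off, here’s a toy C++ sketch (hypothetical code, nothing to do with GW2’s actual objects): a perfectly thread-safe counter where every caller serializes on one lock, so four threads get no throughput win over one and still pay the locking overhead.

#include <mutex>
#include <thread>
#include <vector>

// "Thread-safe" counter: no races, but every Add() queues on the same mutex,
// so threads spend their time waiting on each other instead of working.
struct SafeCounter {
    std::mutex m;
    long long value = 0;
    void Add(long long x) {
        std::lock_guard<std::mutex> lock(m); // all callers serialize here
        value += x;
    }
};

int main() {
    SafeCounter c;
    std::vector<std::thread> workers;
    for (int t = 0; t < 4; ++t)
        workers.emplace_back([&c] { for (int i = 0; i < 1000000; ++i) c.Add(1); });
    for (auto& w : workers) w.join();
    return c.value == 4000000 ? 0 : 1; // correct, just not any faster than one thread
}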
Likewise, for everyone wanting GW2’s workload to be spread over more than the 3-4 cores it is now (just look at processexplorer while playing…), ANet knows (reddit):
There are conscious efforts in moving things off the main thread and onto other threads (every now and then a patch goes out that does just this), but due to how multi-threading works it’s a non-trivial thing that takes a lot of effort to do. In a perfect world, we could say “Hey main thread, give the other threads some stuff to do if you’re too busy”, but sadly this is not that world.
Think of each frame like a dependency tree… you have to calculate what you see to know what to draw, you have to know what to draw to know what assets to load, you have to know what to draw to know what effects can be culled etc. No amount of crying about threads and cores will change that. Those dependencies must be done in that order, and in the case of GW2, that’s a pretty tall (serial) tree compared to something wide (parallel) like video encoding. Yes, sometimes things end up on that tree that don’t need to be there, and if they happen to be on the longest path, moving them off can result in some small gains. I’m sure they also have ideas for better parallel designs than what it ended up being. But, like shipping with a new DX, design changes of that scale are not likely to be worth the effort this far into a game’s lifespan.
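Here’s a rough sketch of that chain with made-up stage names (not GW2’s actual pipeline, just an illustration); each stage consumes the previous one’s output, so extra cores can’t start stage N before stage N-1 finishes:

#include <cstdio>

struct FrameData { int visibleCount = 0; int drawCalls = 0; };

// Hypothetical stages - the point is the dependency order, not the contents.
void SimulateZerg(FrameData& f)      { f.visibleCount = 80; }              // positions, projectiles, skills
void CullVisible(FrameData& f)       { f.visibleCount -= 20; }             // needs simulation results
void BuildDrawList(FrameData& f)     { f.drawCalls = f.visibleCount * 4; } // needs the visible set
void SubmitDraws(const FrameData& f) { std::printf("%d draws\n", f.drawCalls); }

void Frame() {
    FrameData f;
    SimulateZerg(f);   // longest stage in a big fight
    CullVisible(f);    // can't start earlier: depends on SimulateZerg
    BuildDrawList(f);  // depends on CullVisible
    SubmitDraws(f);    // depends on BuildDrawList
}

int main() { Frame(); }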
But what really is required to update a game’s DX version? Just update the DX APIs, or the game engine itself as a whole? I want links to said information, not speculation, because few games are set to get DX12 in patches.
Seems you’ve forgotten one of the most important links on your crusade about DX12’s glorious performance…
https://en.wikipedia.org/wiki/DirectX
Microsoft DirectX is a collection of application programming interfaces (APIs) for handling tasks related to multimedia, especially game programming and video, on Microsoft platforms. Originally, the names of these APIs all began with Direct, such as Direct3D, DirectDraw, DirectMusic, DirectPlay, DirectSound, and so forth.
As that implies, you don’t just ‘replace some files and recompile something from DX_ to DX12’. You refactor or reconstruct everything that has any reference to functionality provided by DirectX (not just graphics) that’s changed or deprecated, then ensure that no other component in your entire system behaves differently after doing so. Turns out in a game engine more complex than Minesweeper, it’s an incredible amount of work.
For performance: it was already said. Everyone arguing DX11 (and 12) knows one of the most noticeable improvements comes from multithreaded rendering – being able to keep deferred device contexts on many threads instead of just one. Turns out you can use this knowledge for some basic profiling. Doesn’t take much more than some creative thinking and a debugger to start on this.
http://i.imgur.com/dgwY5Es.png
In practice it takes a bit more work than the above, as Direct3DCreate9 will only get you a device context. What you need to do after is find and put breakpoints on the ::BeginScene or ::EndScene routines in that context, which are guaranteed to be called once and only once per frame in the renderer loop. Or find yourself a proxy/debug D3D9.dll to skip that part. When you find the thread that does the drawing, you can compare its ID in something like ProcessExplorer and see that, like Johan said, it is not the bottleneck. It’s the 2nd or 3rd thread on that list. Which means that, if you consider a super simple pipeline like this,
http://i.imgur.com/L2H4Aky.png
reducing the DX9 GPU drawing overhead will have zero impact on how long the underlying CPU calculations that prepare the frame take, and therefore, zero impact on your observed FPS in the CPU-bottlenecked case of GW2.
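For the curious, this is roughly what the hooked side of that looks like once you’ve detoured the device’s EndScene entry by whatever method you prefer (proxy DLL, debugger, vtable patch) – a hedged fragment, not a full implementation and not anything GW2-specific:

#include <windows.h>
#include <d3d9.h>
#include <cstdio>

// EndScene runs once per frame on the thread that submits draw calls. Logging
// the thread ID here tells you which thread in Process Explorer is the renderer;
// if that thread isn't the one pegging a core, DX9 submission isn't the bottleneck.
typedef HRESULT (STDMETHODCALLTYPE* EndScene_t)(IDirect3DDevice9*);
static EndScene_t g_realEndScene = nullptr; // filled in by whatever hooking method you used

HRESULT STDMETHODCALLTYPE HookedEndScene(IDirect3DDevice9* device)
{
    static DWORD lastTid = 0;
    DWORD tid = GetCurrentThreadId();
    if (tid != lastTid) {                              // only log when it changes
        std::printf("render thread id: %lu\n", tid);
        lastTid = tid;
    }
    return g_realEndScene(device);                     // hand off to the real EndScene
}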
So please… can we let these DX11/DX12 threads die already…
From someone with knowledge about the game engine and what it can and can’t do.
https://www.reddit.com/r/Guildwars2/comments/3ajnso/bad_optimalization_in_gw2/csdnn3n
Or just read that if you have no understanding of what I typed out above.
5 tank 5 healer setups using high/low toughness weapon swap as an aggro mechanism between them will let you wail on VG until it dies even past the enrage timer (it is only 200% and the current unavoidable damage doesn’t pressure zerker+1 player sponges) but it’ll fall flat on the next two because of the non-timer DPS/mechanic checks. Last I checked my groups haven’t cleared a boss with 10 full glass full DPS trait setups either, so I’d say the full party full glass zerker meta is as viable as full defense.
The problem I see is that if raids were balanced around 95th percentile execution at, say, 1500 toughness, then it would be “1500 toughness meta” instead of “zerker meta”, and we’d be right where we are now. Challenging content doesn’t just mean new challenging mechanics, it also means reinforcing existing ones such as ’don’t stand in red circles’. Defensives negate half that and therefore take the “challenging” out of “challenging group content” until we end up at just “group content”, or world bosses. And if raids become the next world bosses, what sense of achievement or accomplishment would there be for killing one?
While I do agree that the game letting you choose your own stats may be misleading for anyone hoping to cover up ‘I can dodge a tell 50% of the time’ with ‘I can double up my toughness and live’ while maintaining the same effectiveness, here, where the combat system is a measure of skill, that means you are doing less damage. And that makes damage output a reflection of individual skill. And since Anet stated that this is balanced around the best, that makes damage the best measure of who clears it and who does not. There’s just no argument that surviving mechanics longer is just as skillful when those mechanics pose no threat to you anyway.
Can they release a 2-man or 10-man story mode with no loot? Sure. Will enough people run it to justify the development time? Probably not.
Shutdown and not lockup or reboot pretty much rules out software. I’d say you don’t see this on other games because other games don’t load your CPU nearly as hard.
750W might be alright for powering SLI 970s and a 4790k at stock (I say might but I wouldn’t be running performance range SLI/XF on anything less than 850-1000). Given that this always happens in certain “spots” it’s possible that it could be from a lack of power. Fastest way to check that would be as mentioned – disable SLI or go for a hefty (30-40%) underclock on your GPUs and run through those same spots.
72c on haswell is also fine; 95c is your throttling point and 100c is your shutdown point. Most of the stress on the chips comes from thermal cycling more so than heat (cold→hot→cold etc.), so it’s still in your best interest to not run it so high, hence you’ll see people cite the low to mid 80s as the “safe” range.
‘Sensitive to overclocks’ happens because everyone’s definition of ‘stable’ is different. Some do narrow-op stress tools (prime, IBT, everything else in this category). Others specific games. Chances are that no matter what you do some part of the die isn’t actually stressed; the most popular case of haswell and up is no one bothers to use the FMA/AVX options (prime has them) because they add ~.1v and major heat. Given the 1D crashes people are seeing with the x64 client it’s safe to assume that GW2 does use these for optimization (at least on a compiler level).
This doesn’t apply here since you’d be seeing watchdog or WHEA BSoDs or lockups/reboots if edge stability was the case. Shutdown from overclocks mostly happens when your chip is well past its clock wall and won’t do that MHz no matter how many volts you run through it.
Disk errors are probably the last obscure but still possible shutdown cause so a chkdsk /f c: wouldn’t hurt. HTH
• Do not debate Customer Support decisions or actions. Threads or posts designed to announce, appeal, or contest your own or another player’s suspension or account termination—be it forum or game account—will be removed without notice.
So, yeah! I really like this last one, because how does it look to continue to break rules that one agrees to, while trying to push so hard on how a reinstatement doesn’t send the right message to those who break the rules. Kind of a double standard huh?
I post about the obvious flaws knowing that a mod can dish out infractions if they feel like, and I can guarantee I won’t be contesting it here or sobbing on reddit either. Sharing accounts (and expecting nothing to happen) because you don’t see some of the deeper reasons why it’s in place are quite different from a double standard.
Personally I don’t see account sharing as ever ‘bad’, ever.
A single account can only be played once at a time so it matters not at all who’s playing it at any point in time IMO.
Frankly I’ve always viewed the ban on sharing by traditional sub-based MMOs as nothing more than corporate greed!
Someone uses hacks on 3 alt/f2p/dummy accounts. 1 gets banned 2 do not. By your reasoning they are three different people. Same someone continues to use said undetected hacks on his main because they are, for the time at least, undetected.
Sure, the debate about why account sharing is bad and a lot more severe than trolling on the forums, and the debate about how other account owners might not know that to even have a chance of appeal they have to sob on reddit after their forum access is revoked, may be against the forum code of conduct. But the reason it’s dragged on this long is the inconsistency that’s taking the worst of both worlds. If every ban should be manually appealed, that defeats the purpose of automatic ban waves. Likewise giving special treatment to only a subset of the automatically banned accounts will give a really “toxic” and disheartening feeling to those that don’t get it.
^Make sure you have SP1 installed and all updates after. It may be one of the “Update for Windows 7” ones under Optional as well, I’m not sure.
Edit: if you’re on an intel build, it probably wouldn’t hurt to have the latest chipset drivers installed too.
1D is an invalid-instruction exception. For the most part this happens because CPU feature-set detection is done with assembly (the CPU reports what it supports), but Windows’ “valid” list depends on Windows updates.
Eg. you upgraded your rig with a Haswell CPU or newer but are still running Win7 without SP1 and the updates after it (or worse, any version of XP). The game will try using the new instructions because the hardware says they’re supported, Windows doesn’t know about them, thinks it’s a corrupt exe, and kills it.
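For the curious, a rough sketch of what a ‘proper’ AVX check looks like (assuming MSVC intrinsics; this is illustrative only, not GW2’s actual detection code): the CPU advertising AVX via CPUID isn’t enough – the OS also has to enable XSAVE state saving for the YMM registers, which is exactly what pre-SP1 Win7 lacks.

#include <intrin.h>
#include <cstdio>

// Returns true only when both the CPU supports AVX and the OS saves YMM state.
// Running AVX code when this is false raises an illegal-instruction exception
// (exception code 0xC000001D - the "1D" in those crash reports).
bool AvxUsable()
{
    int regs[4];
    __cpuid(regs, 1);
    bool cpuHasAvx   = (regs[2] & (1 << 28)) != 0; // ECX bit 28: AVX
    bool osUsesXsave = (regs[2] & (1 << 27)) != 0; // ECX bit 27: OSXSAVE
    if (!cpuHasAvx || !osUsesXsave)
        return false;
    unsigned long long xcr0 = _xgetbv(0);          // XCR0: what state the OS saves
    return (xcr0 & 0x6) == 0x6;                    // bits 1+2: XMM and YMM state
}

int main()
{
    std::printf("AVX safe to use: %s\n", AvxUsable() ? "yes" : "no");
}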
The double standard and message this sends is pretty cringeworthy. I do like some of the suggestions though… our daily-collecting alt accounts could be converted to timeshares, reap whatever gold comes in to the account, maybe collect a couple $$ for rent, and plead “sorry I didn’t know” when it inevitably gets banned from someone else’s actions. Cry loud enough and you’ll get a second chance. Even a third and fourth chance if some posters in this thread are any indication.
If you share your GW2 login, clearly you value it less than something financial or educational or workplace related, otherwise you wouldn’t be sharing it in the first place. Any consequence that happens because of it is your fault and your fault alone, and whatever judgment Anet makes about it is their right entirely. It’s their support team spending time on users that broke the agreement instead of those with legitimate issues – of which there are guaranteed to be plenty, from bad key batches to map rewards to progress resets.
And honestly, for anyone trying to say “I forgot account sharing is in the rules”, have you ever read an agreement that doesn’t explicitly disallow sharing login information? Or for that matter, read through even the headings to get an idea of what the generic agreement every game account ever states?
Another positive about the x64 client (or maybe something in a recent patch): ultra shadows no longer has major drops when turning the camera. It still hurts FPS a bit, as it should, but it works now.
Q: Is there a hard limitation how much ram the 64 bit client will use?
None. It asks the OS for more memory, it gets more memory. If there’s no memory to get, it cleans out unused allocs; if it still can’t get more, it crashes in similar form as before.
I would also be interested to know whether there are certain graphic card configurations that the 64bit client can’t cope with. I think people have asked about SLI already, I would like to know whether the game can correctly identify and use a crossfire configuration (since I run an ATI card).
Make sure it’s named Gw2.exe (rename the 32bit one instead) or crossfire/SLI won’t “know” it’s loading GW2’s profile.
Why though? What is the instability? What typically causes it? Why does it work fine for some and not for others? These are question that no one EVER seems to be able to answer. ANet blames the hardware, nVidia/eVGA blames the software or other non-GPU hardware, the techs where I bought the computer blames ANet because all their test show the hardware is clean…it’s a vicious circle while my computer continues to crash along.
The biggest question is Will the x64 version of the client help? You say, “no”, but what does ANet say? Can’t make it much worse I guess since I typically crash every 15 minutes or so.
I wish I knew. Never touched a driver in a debugger or looked into the docs on Windows’ framework for writing one. All I’ve got is my four cards, win10 x64, and a bunch of tests across many versions. Since r355, SLI+surround doesn’t work right – 50%/50% scaling and major drops (not other games, just GW2). 353.49 was the last version that was ok for perf. Likewise, on my 780’s, I noticed BSoDs at random in-game since 353.06 or whatever the first r353_00 one was. This is all strictly testing in GW2, no other games or synthetics.
Also I think it was mentioned already, but it feels like map load times on x64 got a MAJOR improvement. 4770k @ 4.5, 16GB 1866C9, 840 pro, 980ti SLI @ 5760×1080. LA is down from around 28s to 18s, VB loads in 3s, Tarir in 8s.
Can’t say I experienced any problem with 780 SLI for over two years, I actually don’t remember a single crash that wasn’t an unstable OC til r355 (and couldn’t downgrade low enough after upgrading to the 980ti’s). I do agree on staying on old drivers though… not looking forward to doing some minor ‘changes’ to SWBF’s client to skip the forced driver check when it launches…
Anyway, outside of a 1second hiccup every now and then, so far so good on the x64 client.
The “something” is just “attempted NVidia optimizations” lol. Nothing to do with GW itself. I’d list how painful surround has been as an experience (start menu far left on win10 since TPs, can’t disable surround to try non-surround without reinstalling, TW3 more broken with no fix in sight that makes the GW experience a pleasure, which says quite a bit given SLI+surround is broken for the newer drivers for it, among other less noticeable things) but it’d take up more than this page sooo…
Whatever you do just avoid the r355 branch and newer. Doubly so if you’re on SLI (it’s completely broken for it). Ideally 347.88 as that was the last time I had no NVidia-related crashes, and 353.49 if you’re on 980ti’s because it’s the newest driver that doesn’t suck for performance.
If you’re getting nvlddmkm.sys crashes, try downgrading to 347.88 (980 and older) or 353.49 (980ti). That’s caused by something else so I doubt switching to x64 would help.
FYI for SLI/surround users (might apply to xfire too): make sure you rename the 32bit client if you wish to keep it, and drop the 64bit one as ‘Gw2.exe’. Or change the exe name for the profile in nvidiainspector or something.
Modify CE’s kernel driver a bit so the signature doesn’t match the public one and recompile (it IS open source). GW2 will allow it and you can go ham on replacing whatever routines you want. I think blizzard tried this blacklisting in one of their older games, SC1 maybe, but I think everyone who’s played that knows how it turned out…
Basics of security here is you assume the game client is shipped to enemy territory and will be ripped to shreds no matter what you do. Everything of importance has to be done on the servers – and when validation isn’t feasible (due to design or resources), bring out the banhammer.
As I said many times, it’s very difficult to obfuscate this information. What you are proposing is not effective either, while it has the potential to cause headaches for innocent people.
Which comes down to opinion. Personally, I’d rather be stricter on ToS and rest knowing that there isn’t a surge of cheaters in the modes I play (and the complaining that comes with it on both the qq hacker and omg unfair but not really ban fronts) because of the cheats that would be released as a result. If you played WvW and tried to catch the teleporting thieves and superman mesmers you might feel differently, and not be so sympathetic to people who break rules (however harmless).
Find me a way to prevent me from knowing what got me banned and I’ll agree with there being no need to ban every account based on association.
There are ways to implement server side validation that will not cause noticeable latency. Let’s please drop this though. It’s a discussion I care not to have here.
Skills need targets/locations. Locations need validation. That right there is latency. But sure, let’s put the technicals aside.
I’m still waiting for how you would prevent me from knowing exactly what got me banned to make all my future cheats (more) undetectable. Because as it stands, you would rather give the cheaters lenience and unban accounts that are in violation of the ToS which to me just doesn’t make sense.
And I am getting technical because I know it isn’t possible to “just plug it”. You say you worked security so you should know this too. You can’t hope to prevent someone from changing runtime memory, all you can do is ban whatever they use to change it. I also gave an extremely simple solution to the character weight edit: server side movement validation. Except that if you think skill lag is bad now, it’s no secret that adding MORE calculations will only make it worse.
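A toy example of what that validation boils down to (hypothetical code, not Anet’s implementation) – and note it’s a per-player, per-movement-update cost, which is exactly where the extra server load comes from:

struct Vec3 { float x, y, z; };

// Reject any reported position that implies moving faster than the allowed cap.
// false -> snap the client back to the last valid position and/or flag it.
bool ValidateMove(const Vec3& prev, const Vec3& next, float dtSeconds, float maxSpeed)
{
    float dx = next.x - prev.x, dy = next.y - prev.y, dz = next.z - prev.z;
    float distSq  = dx * dx + dy * dy + dz * dz;
    float allowed = maxSpeed * dtSeconds;
    return distSq <= allowed * allowed;
}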
Find me a way to prevent me from knowing what got me banned and I’ll agree with there being no need to ban every account based on association.
Because there is NOOOOOOOOOOOOOOOOOOOOOOOOO way to determine it is the same person with 100% accuracy. Let’s put this in another way. Someone killed a person and the police just arrests everyone that was in the neighborhood and charges them with murder, simply because they WERE in the neighborhood. This is exactly why they get false positives; their method is faulty and not accurate at all. Why say people lie and put anet above that? Sorry to burst your bubble, anet lies too, but you just argue the same thing over and over regardless.
Of course it’s not possible to determine who’s behind the keyboard – that’s what I said we agree on. A better comparison here is you have a 9pm curfew (representing the ToS). At 10pm, 5 people were at the scene of the murder. While only 1 might be guilty of murder, all 5 are breaking the law. And in Anet’s house and in the house where the keyboard is the person, all 5 get arrested.
Please do answer my question though; if you unban every single account because you can say it was someone else, what’s the point of banning or anti-cheat at all?
It’s not very valuable information though. Most people use just one account, and when they get banned they tend to let the developer know anyway… angrily. It’s very difficult to obfuscate such things.
Most cheats work by exploiting something that was not thoroughly implemented during development. When people are banned and the exploit is plugged, the cheat won’t work anymore. (generally speaking, if you can detect it, you can fix it) You can’t really keep this hidden from the cheat’s developer.
If we’re talking macros, in most cases, they are not developed as cheats but can be used as such. I don’t think Razer/Logitech really cares what Guild Wars’ macro policies/detections are.
Ever used OllyDbg? CE? IDA? <insert debug tool of choice>?
No, most competent cheats modify something while the game is running from intended to unintended. Let’s say your jump distance and height is based off your character’s weight (which in the case of GW2 it is… just add a breakpoint for the well-known EPs on Havok). While the game is running, you use CE to change your weight to 1/10th of its value. All of a sudden you can jump 10x higher – or over the walls in WvW.
Can you validate all movement server-side? Of course. But only where performance allows, which is not everywhere and why cheats exist.
edit: so I, as a cheat developer, was told that the account that used CE got banned but the account that used OllyDbg did not. All of a sudden I know I should not use CE for all future cheats. That’s as valuable as valuable gets to me.
You are assuming things here. I’m a developer and did work on the security aspect of a few very public software projects.
In this particular case, no, there aren’t “hundred other ways to uniquely track someone”. You cannot reliably track people, you can track accounts, network endpoints and computers. Who is using the said computer is for the most part unknown to the tracker (unless you get into some crazy stuff like turning on webcams and running face recog software on the captured images ^^).
Good. That means we both agree that “someone” in this context of account credentials isn’t a real life person, but rather the keyboard that’s used to enter them. Trying to argue otherwise, as was stated, would be arguing “my twin used my PC while I was gone” which won’t work irl or here.
So I have to ask; knowing that telling the cheat developer which account was banned tells him extremely valuable information about the cheat detection deployed, why would you not ban every account associated with this instance of cheating?
I think that Anet needs to give 100% absolute feedback to the users they ban, with server side logs stating what they did. It won’t prevent them from coming on the forums, but it may help those that are hit by ban waves that did nothing wrong (such as running linux, you guys have banned quite a few running Linux lol).
This is possibly the worst idea I’ve heard (sorry). It would be nice for every user except the most important one – the developer of the cheats. Let’s say as this developer you run 4-5 $10 or F2P accounts to test various file or runtime mods. By doing this you’ve just learned which account was caught and which method tripped the security alarm, and made future cheats undetectable until Anet can figure out how to detect the next method used. Between kernel drivers and dll overrides, there’s no winning this battle.
This is also why every account tied to the “collection” of the perceived owner has to go no matter what; doing anything else is anti-cheat suicide. Will it catch people who legitimately share PCs at home (not internet cafes – look up determining user permissions)? No doubt. But there is no alternative. You only log in to your account from places you trust absolutely; otherwise you accept the inherent security risk and it’s no different. Would you log into your banking site on someone else’s PC? Same applies here.
I think there might be a small point you’re missing:
Tyu did not share his account. People shared their accounts WITH HIM. This makes a huge difference.
Look at his reddit posts. He logged into her account, she logged in to his.
This is way too harsh! You catch a cheat? You can the account. You don’t ban all accounts that were recently played from that IP! I’ve worked at a game publisher before and this is a ridiculous policy to have. Imagine this IP was in an internet cafe or a PC bang. You’d be banning random people’s accounts. Or in this case someone whose infractions are little to none.
Not only does this help the cheat developer immensely, as someone who worked at a publisher I expect you to know that there’s a hundred other ways to uniquely track someone that isn’t with IP. And not just what’s mentioned in this thread. Unless you never worked on the security aspect in which case it has no meaning here.
Before anyone suggests about sending in ID or anything like that; Anet has no reference to trust because you never sent in ID before. Meaning you could play the ‘evil twins’ argument as a single person then borrow your neighbour’s brother’s IDs (for a price I’m sure) to pass off as two people and Anet wouldn’t be able to tell.
And what about Internet Cafes? Good grief, the idea of banning by association even if very vague, is just stupid.
Hint: Guest account. It has a predictable SID and a predictable name, making it perfect for cafes. Or a guest-equivalent with matching numbering. Oh wait…
With the ability to create multiple user accounts on a computer, Microsoft doesn’t demand one user per computer. Just one computer per key. So Computer GUID isn’t a foolproof one either.
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\ProfileList
Two teenagers may have to share a computer. One cheats, the other doesn’t. But according to ANet their accounts share MachineGUID key. Yet they don’t share accounts. And could quite possibly even use the same credit card if a parent holds the purse strings for all credit card purchases.
That’s the unfortunate part. If they don’t have their own accounts (and this is assuming SID is tracked), then saying ‘my brother cheated plz unban’ won’t cut it. For all intents and purposes you are the same person – otherwise no one will be banned ever.
Again, IP address has nothing to do with identity. Here are two reg keys I found by googling that are available with zero effort (reading one takes a couple of API calls – see the sketch below):
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\ProfileList
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Cryptography\MachineGuid
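A minimal read sketch (hypothetical example code, nothing to do with whatever Anet actually uses), just to show how little effort a unique, IP-independent machine identifier takes to obtain:

#include <windows.h>
#include <cstdio>
#pragma comment(lib, "advapi32.lib")

int main()
{
    char guid[64] = {0};
    DWORD size = sizeof(guid);
    // Per-install GUID created when Windows is installed; same value for every
    // user account on that machine.
    LONG rc = RegGetValueA(HKEY_LOCAL_MACHINE,
                           "SOFTWARE\\Microsoft\\Cryptography", "MachineGuid",
                           RRF_RT_REG_SZ, NULL, guid, &size);
    if (rc == ERROR_SUCCESS)
        std::printf("MachineGuid: %s\n", guid);
    else
        std::printf("RegGetValueA failed: %ld\n", rc);
    return 0;
}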
Your computer logs in to one account, then that same computer logs into another. From where is irrelevant. Anet has no physical presence at this computer, so what do they do? Refer to my example about using a friend’s unlocked PC and account for why “it was my friend” won’t fly. By your expectations that cheating account would be unbanned on the sole word of “it was my friend”.
Or perhaps you forgot that the P in PC stands for Personal?
How did you determine that he owned all 9 accounts, and that you didn’t just ban a pile of people? I am still concerned about the whole IP address issue.
Because each of these accounts is linked to another one (A logs in to B, B logs in to C, C logs in to D), and somewhere in that chain, one of them cheated. How can Anet know if it’s sharing or 1 person who forgot to swap? They can’t. It’s highly unlikely they’re flagged off of IP or geoIP (because of mobile); there’s lots of ways to uniquely identify, with varying degrees of confidence, the installation or user or hardware.
Um, but you do understand that neither a computer nor an IP address can simply be used on their own to determine the identity of just one single person.
No, they can’t. Doesn’t make you any less responsible; what if you lend your car to a roommate who totals it? That’s your deductible and your insurance going up. The “it wasn’t me” argument won’t hold up anywhere, no reason to expect it would here. Any implementation of a local install ID or machine ID or even user SID is more than enough to track “ownership” – it’s unlikely to appear anywhere else in the world (and can very easily be cleared – eg. temporary/expiring/guest users on shared PCs).
I’m afraid that there may be the slight possibility that one day I could end up being banned for somebody elses actions, if the wrong situation were to arise.
Won’t be a problem with somebody else’s actions if you never exchanged any account detail with them, no? Or think of this situation: ‘I left my computer logged in with my account on autoplay. My friend used my account and cheated and now I’m banned’. As has been mentioned by almost everyone here – no account sharing, no problem. The difference between legit and non-legit is indistinguishable without physical presence, which is why these are in the ToS.
Computers are not infallible, nor are people.
Computers won’t be wrong. The implementation might. In this case the implementation caught account sharing dead on so it’s working as intended. Nothing can be done when the user does something to adversely affect it.
Try that in court, and unless the prosecution is saying the inability to reveal evidence is because of national secrets, the case won’t fly.
And what would your case be? That Anet refused to provide you the service after discovering the terms of using it were breached?
Defend or argue all you’d like. User agrees to terms, terms get broken, Anet takes appropriate action. If you look at the 2nd post this was the second time this happened, so I wouldn’t be surprised if they simply didn’t want to deal with this giant sharing mess again now and possibly again in the future.
And he replied that he doesn’t even have 9 accounts. Hence my initial post that this needs further research from a GM. Bluntly removing access from your accounts and ignoring any ticket made is not really respectful.
He bought the game for crying out loud, the least he could receive is some thorough support that isn’t copy pasting a “you were caught cheating” message to everyone asking what they were banned for.
Read the rest of the thread, or just my posts. Account sharing makes it nigh impossible to identify whose account is really whose. He (maybe unfortunately and maybe unknowingly, we’ll never know) logged in to an account that was used for cheating. This means that he’s now flagged as a cheater. When they decided to ban the person behind the cheating, this banned every account in that “circle”. Some his, some friends, some cheaters, some cheaters’ friends. This is why you don’t share accounts.
And since it sounds like this would be the second time the account would be getting reviewed after breaking the “no account sharing” clause in the ToS, I don’t consider Anet’s decision to make it final unfair at all.
Understandable if it were not for the fact that support refuses to give the guy information as to why he was banned. If I suddenly ended up being banned and was told “you know what you did!” while I did nothing at all, you’d bet I’d put a thread on every forum or a post on every medium available to me. I didn’t spend €300+ over the years to end up getting banned.
Cheating on 2 of your 9 accounts got all 9 of them banned.
Logging in to an account that was used to cheat is a pretty good reason to suspect one of cheating.
How do you determine which accounts belong to the same user?
Any official word on “this is what we use to identify” will be a big red warning sign for potential cheaters on how to get around it so it won’t happen. Just assume there are dozens of sources of information: Windows GUID, MAC, and install ID (both for Windows and stored to your client settings) to name a few.
I am also concerned with the whole “this is final” attitude, and not giving users a chance to defend themselves, or even telling them what exactly they are accused of. I am sure you could find that 9 out of 10 users are violating the ToS in some way; half aren’t aware of it or are using something to prevent killing their left mouse button, or software that came with their mouse/keyboard etc.
People regularly break the speed limit too. Sometimes you get unlucky, but I like to think the majority of the time it’s for good reason.
I bet that whoever owns the dirty account you logged in to can answer where the other 4 clean + 1 dirty account come from; chances are it’s a group of two or three people who frequently share them. By sharing you ‘associated’ yourself with it.
(Of course, like Elieanna said, that’s assuming what’s said here is all truthful)
Which is where the disagreement lies. Banning all of someone’s accounts because they broke the rules on some of them is ridiculous, with a couple of possible exceptions (for example, abusing exploits on one account to feed gold/materials to a main account).
Make F2P account, send just enough to level to 80, go 1vZerg in WvW and win a few times, get banned, repeat with new account. Ad infinitum. We don’t have access to the logs so our best guess can only come from tyu’s word (which may or may not be true).
Say I have 3 accounts and decided to share one with a friend. I get caught. Do I deserve to have all of my accounts banned or just the one that was shared/had the rules broken? Personally, I’m solidly in the “just the one account, if at all” camp. The ToS are agreed to on a per account basis; how can they be applied across multiple accounts?
You were the one that originally agreed to the ToS, not your friend. It’d still be account sharing.
It wasn’t his friend logging on his computer, he actually said he logged into his friend’s account on his computer. That is a HUGE difference.
Just taking things at face value, that’s interesting that it would get his account banned at all; if anything, I’d think it would be the friend’s account. Not that they’d even be able to get you for account sharing unless you’re a moron and admit to it ingame, or the login location of the account bounces around the country at a rate faster than it’s possible to travel and they have some reason to look at it.
Instead of having the 9 accounts tyu logged in to, it likely would’ve ended up with the accounts his friend logged in to banned. As for catching account sharing, see my copy/paste of an easy to read registry key that’s unique for each Windows installation.
I highly doubt they ban for account sharing.. i have never heard a single example of that.
Yep. It’s not the account sharing itself that gets you banned, but it’s what identifies the perceived owner. And if that owner is deemed to be cheating, you are banned through it.
All this does not prove that i have not cheated but surely it is at least enough to have a second look at this decision.
Pretty much this. All I can say is hopefully it does get a second look, and that the best way to not get banned like this in the future is to not log in to other accounts.
For a time me and my roommate gamed on the same computer on different accounts. How is it being checked that it is in fact 2 ppl using this computer?
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Cryptography\MachineGuid
edit: sorry, misread. That’s why they would be flagged as the same person.
So you are saying because i logged in to another account for less than a few hours from my IP which is static and has been the same for my accounts – and they used a cheat i am banned as well?
Exactly. Whatever’s being used to detect it doesn’t care how long or who. IP may only be part of it, if at all, hence why I mentioned Windows unique crypto GUID. The “owner” of the accounts is flagged as the same.
But… there are chat logs between both these accounts and me that show we are different people – hell, just 2-3 days ago we were doing an adventure together (catching bugs). How on earth does someone control 3 accounts at the same time to do such complex things as that adventure.
I’m not saying the method is perfect. It never will be, which is why they sometimes get reviewed. I’m only giving an example why automated (and even manual similarity checking) would flag you as cheating.
I am just saying while you may be correct, there are ways to see if we are the same person or not. Hell, I can even tell you when we met each other just this year.. they are new members to my guild :/
The problem is how long it would take to manually review chat, guild, and every other associated log with every banned account. Just not feasible.
Although you might not “own” 9 accounts, the software has to assume you do. Think about a situation like this – a cheater owns 2 accounts, one legit and one for fun. The one for fun will always be connected through a VPN behind a somehow-or-other ‘faked’ Windows GUID to avoid having anything linking the two accounts together. Then one day he forgets to hide his real GUID (or use a VPN).
At that point, in the eyes of anti-cheat, you are considered the same person. How does someone differentiate the above from someone who logs in to a friend’s account? They can’t. VPNs are useful for more than just masking identity (draconian school firewalls that have DPI running on HTTP/S traffic, your carrier’s peering with GW2’s datacenter being overloaded, etc), and the GUID gives the illusion of different machines. That’s why the clause about account sharing is in the ToS – so there is no ambiguity.
Otherwise that cheater would be able to do the exact same complaining as above in hopes of getting unbanned without spending extra $$.
So after almost a full day at work, we continue to have absolute silence from arenanet, yet last time we had half the issues (looking at https://www.youtube.com/watch?v=AFms7vTndQ0) they were disabled within hours of being published.
No acknowledgement of exploiting the golem buff.
No acknowledgement of exploiting golem cloning.
No communication if this is intended or not.
No telling us if we should be reporting exploiters or simply accepting that it’s part of the game now.
No warnings if abusing it risks a ban.
Nothing.
Why do we pay for gems again?
I wouldn’t bother with the exploits@arena.net email, it’s like a black hole for info that nothing ever comes out of. After reporting golem duping (among other things) several times there and to staff on here, it makes me sad to see it still in the game.
Going by a video from the pvf forums that I won’t link, it looks like there’s at least one person (or small group of people) that’s either really lucky or figured out a way to dupe without the need for spamming enter. If it’s the latter, then I think anet has a LOT more to worry about than just golems and we can probably see this floating around til HoT or later.
It is pointless.
Until better optimization, Intel is the best choice for this game. All I said: better API, better framerate. I mentioned MANTLE because it is a low-level API on PC.
@deltaconnected.4859
You didn’t totally understand me
that’s why i want to see MMOs like elder scrolls online on consoles
… it’s not noticeable in situations where another (eg. physics) engine component requires more CPU time than the drawing component. And that’s the big bold text of Mantle – 9x more draw calls per unit of time than D3D.
When the bottleneck is not draws, Mantle won’t speed anything up. Please read up on what Mantle is and where it fits in to game design. Doesn’t matter if this is on PC’s or consoles.
In fact, World Of Warcraft had such improvements made with even DX11 and it changed nothing with large fights, proof has been shown earlier in the thread.
It’s not that there weren’t improvements, I’m sure it ran better than before, but it’s not noticeable in situations where another (eg. physics) engine component requires more CPU time than the drawing component. And that’s the big bold text of Mantle – 9x more draw calls per unit of time than D3D.
But FlyingBee just can’t seem to grasp the concept of there being more than graphics to a game engine. He seems to be under the impression that “if X used Mantle instead of D3D/openGL, the framerate will be significantly better at all times” which is just false. Improvements for GW2 (and WoW) will come from redesigns to how the game ‘handles’ large amounts of players, not how they’re drawn.
Here’s a mathematical way of trying to explain it since my other two attempts have clearly failed for him;
Let’s assume the zerg calculation (locations, projectiles, everything) time is 40ms per frame, and draw queue time is 8ms. This means the entire frame time is 48ms, for roughly 21FPS.
Let’s say we now introduce Mantle which cuts the draw queue time to 0ms. There is still 40ms of calculations to do, or else the drawing commands will not have up-to-date vertex/coordinate lists. This puts us at 25FPS.
Why did it run so well in Star Swarm? Because there’s nothing else going on. The pre-calcs consist of only simple AI, and the rest is draw time. Instead of reducing the 1 in a 1:5 ratio, it’s now reducing the 5.
over 45%…
I did test FX 4300 directX vs Mantle (3.4Ghz).
Same spot, same settings: 32 FPS vs 64 FPS.
Starswarm benchmark is mostly same as GW2 – D3D11 only 2 cores, Mantle up to 8 cores!
FX 4300 3.5Ghz
D3D11 – 21 FPS – 50-60% usage
MANTLE – 61 FPS – 100% usage
i3 4330 3.5Ghz – 60 FPS – 100% usage (mantle)
Let’s say that you get with FX 4300 3.5Ghz about 10-15 FPS in huge fights.
MANTLE should get you 29-43.5.
OC to 5.0Ghz – about 40% FPS boost.
personally i want to see same API as Mantle or Mantle running on NVIDIA/AMD systems in GW2
1) Comparing GW2 (an MMO with network, player interaction, animation and other components I can’t think of) to Star Swarm (a demo) is comparing apples to oranges. “If X can do this Y can too” is not how things work. Here’s a quote from the first review, since I assume you didn’t read my example:
Battlefield 4 has multiple CPU tasks going on here, not the least of which is the simulation itself, so in the case of our 4C/4T setups it’s likely we’ve stumbled onto a situation where the game is more strongly CPU-bound by the simulation and other aspects of the game than it is the submission of draw calls.
Replace BF4 with GW2, and simulation with zerg tasks.
2) The many benchmarks suggest a lot less than a 100% gain, and closer to 10-30% for Mantle in BF4.
I already did by purchasing an i7 4770K. I’m satisfied.
+1 for that solution. No more Mantle arguments :p
I hate double-posting, but may as well for Mantle; AMD advertised it like D3D11 with some additional low-level control options that expand on multiple render device contexts.
AMD claims it increases BF4 framerate by 45% (https://www.youtube.com/watch?v=Ms16uGxQzSY in the CES teaser) in CPU-limited situations. Even if Arena is able to get the same numbers out of it, 45% over 20fps in wvw = 29fps. People will complain just as hard because it’s still too small a difference to notice.
The majority of the performance improvements for D3D11 come from multithreaded rendering (NOT programming). That is, instead of having one CreateDevice() for your application like in D3D9, you can create deferred (“virtual”) device contexts on each of your threads – letting you submit draw work from any of them instead of only the primary.
Imagine a simple, theoretical, two-part rendering process: 1-terrain, 2-player. The terrain has an effect on the player. The D3D9 way of doing this is [CPU:terrain][CPU:player][Draw:terrain][Draw:player]. D3D11 lets you split this so that one thread does [CPU:terrain][Draw:terrain], and another can start [CPU:player][Draw:player] as soon as the first thread is done [CPU:terrain]. If we assume all four pieces are equal weight, we just got an fps boost of 33% because the player calcs and the terrain draw are done in parallel. If the player did not depend on terrain, we could do both tasks completely in parallel (100% boost).
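If it helps, this is roughly what that pattern looks like with D3D11 deferred contexts (a toy sketch, not GW2’s renderer – stage names and structure are made up): each worker records into its own deferred context, and only the final ExecuteCommandList calls touch the immediate context.

#include <d3d11.h>
#include <thread>

void RecordTerrain(ID3D11DeviceContext* dc) { /* dc->Draw(...) calls for terrain */ }
void RecordPlayers(ID3D11DeviceContext* dc) { /* dc->Draw(...) calls for players */ }

void RenderFrame(ID3D11Device* dev, ID3D11DeviceContext* immediate)
{
    ID3D11DeviceContext* dcTerrain = nullptr;
    ID3D11DeviceContext* dcPlayers = nullptr;
    dev->CreateDeferredContext(0, &dcTerrain);
    dev->CreateDeferredContext(0, &dcPlayers);

    ID3D11CommandList* clTerrain = nullptr;
    ID3D11CommandList* clPlayers = nullptr;

    // Both threads record in parallel - but the win only exists if the CPU work
    // feeding each context is itself independent enough to overlap.
    std::thread t1([&] { RecordTerrain(dcTerrain); dcTerrain->FinishCommandList(FALSE, &clTerrain); });
    std::thread t2([&] { RecordPlayers(dcPlayers); dcPlayers->FinishCommandList(FALSE, &clPlayers); });
    t1.join();
    t2.join();

    // Submission still funnels through the immediate context, in order.
    immediate->ExecuteCommandList(clTerrain, TRUE);
    immediate->ExecuteCommandList(clPlayers, TRUE);

    clTerrain->Release();  clPlayers->Release();
    dcTerrain->Release();  dcPlayers->Release();
}

(Error handling omitted for brevity.)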
Unfortunately, as the scene becomes more complex, there are more dependencies in place and less things that can be done at the same time, making it a performance boost only in very specific situations.
I have a gut feeling that large fights (model physics, animations, 3D particle effects, projectile physics, and w/e else is there) aren’t one of them, in WoW or GW2.
(edit: that’s not to say there won’t be any improvement from D3D11 in this case, rather it’ll be small enough to not make much of a difference. The only “fix” would be a whole game rewrite, which at this point is likely impossible)
Probably off-topic now, but some things to note about WoW’s multicore and DX11 patch…
Did it help in two zergs (50-60 total) going at it in Alterac? No – 28fps.
Did it help in a zone raid (40) on Sha of Anger? No – 18fps.
Did it help in keeping 60fps in org/terrace with an LA-amount of people? No (unfortunately no SS here).
Everywhere else the game would be running >60 so I didn’t bother, most settings ultra-ish.
Those were taken with a 3570k @ 4.6GHz / 3x GTX580. Afterburner monitoring info top-right.