Showing Posts For Xerol.1578:
Tried to log in today and got the 45:6:3:2212 “account closed” message. I wanted to file a ticket, but the support page just gives the following:
Page not found
Host
en.support.guildwars2.com
URL
/
Remote Address
71.246.93.135:59189
SpawnSrv/301.4032601 Instance/0.419547832
The issue isn’t directly with either Anet or Verizon. If anyone’s been following the net neutrality news, Verizon and Level 3 have had a dispute over paying for and carrying, among other things, Netflix traffic. Level 3 is a backbone carrier and pretty much all traffic goes through them (or other backbone carriers, but for east coast Verizon customers it’s pretty much always L3). The end result is that Level 3 is giving other traffic priority over Verizon customers’ traffic. (This is also not limited to just FIOS; DSL traffic is similarly affected.)
-The VPN solution worked because the traffic either (a) did not appear to originate from a Verizon customer, and therefore got better priority, or (b) was routed around the Level 3 backbone (traceroutes on my end showed the VPN traffic going through the Level 3 backbone just fine, which points to (a)).
-The HOSTS file solution works because you’re going to a different server altogether, and not going through the east coast Level 3 network.
-I’m not entirely sure why the DNS solution worked for some people and not others. It didn’t work for me; I was already using Google DNS. I suspect there’s some sort of geolocation going on with the CDN, and it might be happening at the DNS level – so a new DNS server will, for some people, make either the client or the server try a different patch server than usual, resulting in a different route that circumvents Level 3.
As I posted in the other thread, the primary issues are with two Level 3 nodes:
(Hop #6) – 4.68.62.133 (ae16.edge1.washingtondc12.level3.net) – Samples: 3901 – Average ping: 138ms – Packet loss: 91.1%
(Hop #10) – 4.69.156.41 (ae-2-52.edge2.Newark1.Level3.net) – Samples: 3849 – Average ping: 142ms – Packet loss: 68.7%
These were done a couple hours after the May 20 patch hit, around 6pm ET. Trying to ping the same servers (174.35.10.10 and 174.35.10.73) again now (2am ET) I’m getting the same route as before, but the packet loss is still present, albeit at a lower intensity. This fits with people having problems during peak hours.
So I think the actual problems are twofold:
1) The Verizon-Level3 dispute is hurting Verizon customers.
2) Congestion is happening during peak hours as well.
And the solutions, as tried and tested by many people:
1) Use a VPN
-or-
2) Add a line to the HOSTS file.
edit:
Once again at 12:00 a.m. eastern standard time. Anet’s download ran as it normally would. Download took a few seconds rather than hours or nothing at all. I hope this helps, Anet.
This adds some more credence to both the congestion and routing theories; either the congestion is much less at this time, meaning less prioritization is going on, or some router along the way was going through its nightly reboot and you got routed differently.
(edited by Xerol.1578)
Getting a bit off-topic here, but from working at a laptop repair shop, one of the first things I learned is that the only job of frontline support is to tell you “no”.
More on-topic, I just wanted to point out that this isn’t FIOS-specific, I’ve been having the same issues on DSL (and the VPN worked great, thanks).
Confirmed by a ton of people in this thread (the posts from today): https://forum-en.gw2archive.eu/forum/support/tech/Stuck-at-0-KB-s
Also having issues and also using Verizon. I’ve never had issues before though. Just with this patch. Only downloading at ~50k/sec and eventually it drops to 0.0 and never recovers. Starts download over each time.
Baltimore Maryland here.
Just FYI, it does keep any files it has already downloaded; if you made it past the patcher-patch you should see the “Files Remaining” count stay where it was when you restarted the patcher.
I’ve been having issues getting the patch as well. Using pingplotter (a handy tool for diagnosing certain network issues) the problem seems to be with two level3 nodes between myself and the patch server. I’m in Baltimore on Verizon.
(Hop #6) – 4.68.62.133 (ae16.edge1.washingtondc12.level3.net) – Samples: 3901 – Average ping: 138ms – Packet loss: 91.1%
(Hop #10) – 4.69.156.41 (ae-2-52.edge2.Newark1.Level3.net) – Samples: 3849 – Average ping: 142ms – Packet loss: 68.7%
So I’d be inclined to say the problem’s with either Level 3’s routers or Verizon’s choice of routes.
Restarting the patcher seems to help; it’ll run at full speed for about 5 minutes before dropping back to the 1-3 KB/s range.
Got the login server error a few times while trying to run 1-1 in SAB, then logged back in and I’ve been stuck on the SAB loading screen for a good 15 minutes now, with no way to even exit the game.
e: Can’t ping one of the servers the client is connected to; it gets lost somewhere on your end.
(edited by Xerol.1578)
Ran the numbers again for NA since it was quite early when I did the earlier run. Doesn’t change the odds by a lot (for Maguuma at least, the only one I really look at) but it does have some effects.
Now with EU again! Also I finally rebooted (RIP 8-week session) so I’m running more trials again. First attachment NA, second EU.
There’s probably a better way to communicate this to the player base, since most people won’t see it unless they’re digging into the numbers as deeply as I have been. Given the starting rating, deviation, and volatility numbers, it’s possible to calculate a target value for each server (actually a target ratio, since the total number of points scored each week varies by up to 10%) which shows the expected performance of each server in a matchup. For the hypothetical “Mag in T1” matchup this would be (using 600,000 points in a week as the basis):
JQ - 264033 (44.0055%)
SoR - 259574 (43.2623%)
Mag - 76393 (12.7321%)
Due to what’s either a flaw in my calculator or a flaw in using glicko2 for 3-way matchups, each server goes up 0.494 rating points in a week with these scores (instead of staying exactly the same). Any server scoring more than these values would subsequently gain more rating than the others, giving each server in a matchup a target to show whether they’re under- or over-performing their rating. I’m working on a webtool to show these values (which has had some setbacks, otherwise it’d be out by now).
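For anyone who wants to play with the idea, here’s a rough Python sketch of how targets along these lines could be derived. It assumes the standard Glicko-2 expected-score formula and, as a deliberately crude stand-in, treats the pairwise outcome as a plain point ratio; ArenaNet’s actual score-to-outcome scaling isn’t public, so this won’t reproduce the exact numbers above, but it shows the shape of the calculation. The ratings and deviations at the bottom are hypothetical.

import math

SCALE = 173.7178  # standard Glicko-2 scale factor

def g(phi):
    # Glicko-2 weighting term for an opponent's rating deviation
    return 1.0 / math.sqrt(1.0 + 3.0 * phi * phi / math.pi ** 2)

def expected(r1, r2, rd2):
    # Standard Glicko-2 expected outcome of server 1 against server 2
    mu1, mu2, phi2 = (r1 - 1500.0) / SCALE, (r2 - 1500.0) / SCALE, rd2 / SCALE
    return 1.0 / (1.0 + math.exp(-g(phi2) * (mu1 - mu2)))

def target_shares(servers, total_points=600000):
    # servers: list of (name, rating, deviation)
    # Toy assumption: the outcome fed into the rating update is the raw point
    # ratio between two servers, so the break-even odds of the first server
    # over server i are expected(0 vs i) / expected(i vs 0).
    name0, r0, rd0 = servers[0]
    shares = {name0: 1.0}
    for name, r, rd in servers[1:]:
        odds = expected(r0, r, rd) / expected(r, r0, rd0)
        shares[name] = 1.0 / odds
    norm = sum(shares.values())
    return {n: (s / norm, round(total_points * s / norm)) for n, s in shares.items()}

# Hypothetical ratings/deviations, just to show the output format
print(target_shares([("JQ", 2100, 175), ("SoR", 2095, 175), ("Mag", 1780, 183)]))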
There are two ways to communicate this to the player base – either show the targets in the WvW UI, or scale the displayed points by the equality ratios. In the above example, that would show all servers with 200,000 points at the end of the week – a literal tie, because that’s what it is in the context of the rating system. Or you could have a pie chart of total points earned, with ghost lines or something similar showing the ratios each server needs to “win”.
tldr: You don’t have to have the most points at the end of the week to win. Communicated correctly, the rating system does a good job of allowing servers with vastly different participation and coverage levels to compare performance over the course of a week.
The main problem with one up, one down, and I think I’ve elaborated on this in the past, is when you get into situations where there are very large gaps in performance between servers. For the sake of argument, suppose there are 9 servers with the following ratings (and assume, for the moment, that ratings correlate strongly with skill/coverage):
1 - 2100
2 - 2050
3 - 2030
4 - 2000
5 - 1850
6 - 1700
7 - 1580
8 - 1550
9 - 1520
Another issue with one up, one down, which immediately becomes apparent, is that ratings mean nothing.
But that’s not the main problem. Pick any three servers from 1-4, and they’ll all be matched somewhat closely. But any of 1-4 will give 5 a hard time, and 6 will be left in the dust. Let’s assume that things play out about as well as you’d expect with these matchups, and one-up-one-down results in the 2nd week of matchups being as follows:
1 - 2110
2 - 2050
4 - 2010
3 - 2020
5 - 1850
7 - 1590
6 - 1690
8 - 1550
9 - 1510
The Tier 1 and Tier 3 matchups don’t look all that bad: #6 will probably come out safely ahead in T3, and #4 might have a hard time but could be competitive. #1 and #2 will almost always end up together, which doesn’t really change anything over the current system (and possibly makes it worse, since at least right now it’s possible for #1 and #2 to get mixed down into T2 once in a while – which also isn’t ideal, but consider that pre-randomization you’d be getting the same matchups every week no matter what).
Server #5 is where it gets messy. They’ll always be facing someone from the meta-T1 (#s 1-4) with little to no chance of winning a matchup to move up, and even if they did, they’d get completely crushed the next week and move back down. And none of the meta-T3 servers (#s 6-9) are likely to challenge them enough to push them down into T3, where #5 would end up steamrolling everybody and moving back up again anyway.
“But,” you say, “people will transfer and server performance will change over time.” While true, this is still a very slow process. And while this candle is burning slowly, most matchups will either stay the same or oscillate between two different matchups, leading to the “odd week, even week” problem that a lot of mid-tier servers had pre-randomization.
Is the current system perfect? No. But it’s getting better every week. The deviation values, which determine the range of random matchups, have been steadily decreasing, by about 0.7 points/week. This may not sound like much, but in the 8 weeks I’ve been running simulations (see the “NA Potential Matchups X/YY” threads in the matchups forum) the likelihood of a matchup happening “on-tier” (e.g. the T3 matchup being #s 7, 8, and 9) has doubled or more for most mid-tiers, even with servers moving up and down in ranking every week.
The amount of rating change from a matchup is still enough to account for transfers and improvements, and (although I haven’t run the numbers) I think it may actually result in servers that have major changes (big guild transfers or coverage changes) being placed in more appropriate matchups faster than in the old system. Yes, you get the oddball matchup once in a while*, but at least it’s variety, and the setup of the rating system still allows servers to move up/down as appropriate even when fighting way out of their skill/coverage level.
*This has yet to happen, but every week I dump the results of the simulation to files. For Maguuma this week, here’s one way down in the list of possible matchups:
0.18440% - 1844/1000000
Tier 1
Jade Quarry
Sanctum of Rall
Maguuma
Likely to happen? No. Possible? Yes. And a quick back-of-the-envelope calculation shows that Mag would gain rating by scoring 80,000 points or more on the week. And 80,000 points would be about what I’d expect of Mag against the top two servers, and in my mind this shows that the system works because it normalizes away factors like numbers and coverage.
[continued in next post due to character limit]
Are you all staring at the map 24/7, or have I just been playing the wrong way all this time by looking at the field in front of me?
Also don’t say goodbye because there’s about a 1.6% chance of a rematch, the 17th most likely matchup for Maguuma, which isn’t really all that low. (This week’s matchup had a 1.34% chance of coming up and was #25 most likely for Mag.)
I’m going to be away today and tonight so I’m posting these early. Since they’re missing nearly a full day of play they might be a bit more inaccurate than usual.
I’ve been doing these for quite a few weeks now, and the deviation values are starting to drop considerably for a lot of servers. This is resulting in the ‘canonical’ matchups for the higher tiers (where there is more rating separation) becoming more and more likely, but even those are still relatively low chance (34.8% for T1, 15.1% for T2, 10.2% for T3). The middle tiers are still quite a mess.
Maybe I’ll rewrite the program next week to use less RAM so I can run more than a million trials.
Only ran a million trials this week because my swap file doesn’t seem to want to allocate enough space for 2 million. (Also, Kerbal Space Program REALLY does not like being shoved into the swap file.)
Here are some estimates for the next matchups. 2.38% chance for a rematch.
Almost a 2% chance of being matched up against JQ? That could get interesting.
Quite a bit more than 2%, actually – that’s just the most likely way we’ll end up matched with them if it happens. I don’t have it add up which individual servers are most likely to be faced (maybe I should). Full list here.
I’ll probably stop posting the “top matchups” since they just tend to be misleading – a 4% top matchup means there’s a 96% chance any OTHER matchup will happen. There are just too many possibilities to say who anyone’s going to face with any certainty, T1/T8 excepted.
This matchup was pretty unlikely, doesn’t even show up in the top 27 for Maguuma (it’s #28).
1.42435% – 28487/2000000
Tier 2
Tarnished Coast
Maguuma
Sea of Sorrows
Ignore the color info, it’s wrong.
Reading up on glicko again, the deviation is supposed to represent ONE standard deviation. Here’s what that looks like:
So far I’ve been using a flat (uniform) distribution for random numbers. I’m assuming this is what ArenaNet is using, although they haven’t specified.
What if they used a normal distribution, though? I ran two tests: one where the server deviation represents 3 standard deviations of difference, and one where it represents 2 SDs. The 3 SD case actually represents a lower spread – basically 99.7% of the random rolls stay within +/- deviation, whereas in the 2 SD case 95% stay within +/- deviation. The main difference is that about 65% of the rolls land within 1/3 or 1/2 of the server deviation, respectively – much more centrally clustered. Taking the server deviation as 4 or 5 SDs would make it even more centralized, with the downside of a lot less variation in matchups (in extreme cases, especially as server deviation trends downward, almost all matchups would be exactly what they would have been before randomization).
The other thing about doing it this way is that, in theory, any server can roll any number, but it’s very very unlikely. (Edit: Made a 4 SD example as well.)
I think taking the server deviation as either 3 or 4 standard deviations (or somewhere in between – it could be fine-tuned and doesn’t need to be an integer) and using a normally distributed random variable would work a lot better. The change on the server side would simply be swapping the [-1..1] random function for one that produces a normally distributed number in terms of standard deviations.
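If it helps to visualize the difference, here’s a little Python sketch of the two approaches – the uniform roll I’m assuming the current system uses, and a normal roll where the deviation is treated as k standard deviations. The 1800-rating/180-deviation server is made up.

import random

def roll_uniform(rating, deviation):
    # Assumed current behaviour: uniform roll in [-1, 1] scaled by the deviation
    return rating + random.uniform(-1.0, 1.0) * deviation

def roll_normal(rating, deviation, k=3.0):
    # Proposed: treat the deviation as k standard deviations, so with k = 3
    # about 99.7% of rolls stay within +/- deviation (95% with k = 2), but
    # the rolls cluster much more tightly around the unrandomized rating.
    return rating + random.gauss(0.0, deviation / k)

# Quick comparison for a hypothetical server at 1800 rating, 180 deviation
rolls_u = [roll_uniform(1800, 180) for _ in range(100000)]
rolls_n = [roll_normal(1800, 180, k=3.0) for _ in range(100000)]

def share_within(rolls, width):
    return sum(abs(x - 1800) <= width for x in rolls) / len(rolls)

print("uniform rolls within +/-60:   ", share_within(rolls_u, 60))  # ~0.33
print("3 SD normal rolls within +/-60:", share_within(rolls_n, 60))  # ~0.68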
I think, by this point, you can’t really compare ratings between NA and EU – each is its own closed system, so a 200 point spread in NA might be a “tier-level” difference in skill and/or coverage, while 100 points in EU might represent the same. EU servers have higher deviation overall as well, meaning it contributes even more to the spread of potential matchups.
That’s not to say NA can’t have its share of extremely lopsided matchups; here are two that came up for Maguuma:
0.30036% – 8260/2750000
Tier 1
Sanctum of Rall
Blackgate
Maguuma
0.15062% – 4142/2750000
Tier 5
Maguuma
Borlis Pass
Anvil Rock
Also, don’t take the “top match” too seriously – Maguuma’s most likely match is still only sitting at a 4% chance, so there’s a 96% chance we DON’T get that matchup.
Here is a corrected NA, using fairly recent numbers. I don’t think they’ll move much between now and reset (and I’d rather be playing before reset than crunching numbers anyway).
I had a typo in one of my sort functions that was causing the sorting (and thus the estimated ranking) to be off by 1 in about half of the cases. Pretty sure I’ve got all the bugs worked out; doing new EU guesstimates now.
I will run the numbers for EU again about 1 hour before reset, and NA about 2 hours before reset.
I was never able to get my calculated ratings to match mos.millenium.org or yours either, but I assume it’s due to rounding errors. I think a big part of the problem is that ArenaNet doesn’t publish exact ratings and deviations; the numbers they publish at https://leaderboards.guildwars2.com/en/eu/wvw are all rounded to 4 decimal places and I think that rounding accounts for the differences we see.
-ken
Mine were matching Anet’s numbers to within 0.005 when they only had 3 decimal places posted, and 0.0005 with 4. Millenium, prior to the leaderboards/API going up, had actually accumulated quite a bit of error for a while by recycling their own numbers week after week and not correlating them with the officially posted numbers. I also checked my numbers against some confirmed-working general Glicko calculators, and they came out within floating point error.
The most important thing is making sure you have the order of operations correct; that caused me a ton of problems when first putting mine together, which is why I broke it down into so many steps. It could be broken down even further, although combining or breaking up different operations might affect precision on the very low end, but when you’re doing as many operations as Glicko2 requires, those errors add up quickly.
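For reference, here’s the step-by-step structure I mean, as a self-contained Python sketch of a single Glicko-2 rating-period update, following Glickman’s paper. The system constant TAU is a guess – ArenaNet hasn’t published theirs – and the example numbers at the bottom are made up.

import math

SCALE = 173.7178  # Glicko-2 scale factor
TAU = 0.5         # system constant; ArenaNet's actual value is unknown
EPS = 1e-6

def g(phi):
    return 1.0 / math.sqrt(1.0 + 3.0 * phi * phi / math.pi ** 2)

def E(mu, mu_j, phi_j):
    return 1.0 / (1.0 + math.exp(-g(phi_j) * (mu - mu_j)))

def glicko2_update(rating, rd, vol, results):
    # results: list of (opponent_rating, opponent_rd, score) with score in [0, 1]
    mu, phi = (rating - 1500.0) / SCALE, rd / SCALE
    opps = [((r - 1500.0) / SCALE, d / SCALE, s) for r, d, s in results]

    # Step 3: estimated variance of the rating based only on game outcomes
    v = 1.0 / sum(g(pj) ** 2 * E(mu, mj, pj) * (1.0 - E(mu, mj, pj))
                  for mj, pj, _ in opps)
    # Step 4: estimated improvement
    delta = v * sum(g(pj) * (s - E(mu, mj, pj)) for mj, pj, s in opps)

    # Step 5: new volatility (iterative procedure from the paper)
    a = math.log(vol ** 2)
    def f(x):
        ex = math.exp(x)
        return (ex * (delta ** 2 - phi ** 2 - v - ex)
                / (2.0 * (phi ** 2 + v + ex) ** 2)) - (x - a) / TAU ** 2
    A = a
    if delta ** 2 > phi ** 2 + v:
        B = math.log(delta ** 2 - phi ** 2 - v)
    else:
        k = 1
        while f(a - k * TAU) < 0:
            k += 1
        B = a - k * TAU
    fA, fB = f(A), f(B)
    while abs(B - A) > EPS:
        C = A + (A - B) * fA / (fB - fA)
        fC = f(C)
        if fC * fB < 0:
            A, fA = B, fB
        else:
            fA /= 2.0
        B, fB = C, fC
    vol_new = math.exp(A / 2.0)

    # Steps 6-8: new deviation and rating, converted back to the original scale
    phi_star = math.sqrt(phi ** 2 + vol_new ** 2)
    phi_new = 1.0 / math.sqrt(1.0 / phi_star ** 2 + 1.0 / v)
    mu_new = mu + phi_new ** 2 * sum(g(pj) * (s - E(mu, mj, pj))
                                     for mj, pj, s in opps)
    return mu_new * SCALE + 1500.0, phi_new * SCALE, vol_new

# Made-up example: one server against two opponents with 0-1 "scores"
print(glicko2_update(1800.0, 180.0, 0.75,
                     [(2100.0, 175.0, 0.15), (1750.0, 185.0, 0.55)]))

Keeping the intermediate steps separate like this makes it much easier to compare values against another implementation when the final ratings disagree.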
oh, http://xerol.org/gw2/what-if.html is yours? that’s an excellent tool and I used it to debug my own rating calculator (the detailed breakdown really helped me a lot). since your calculated ratings match mine (and we both match mos.millenium.org) I’m pretty sure we’re all doing that part right.
I think my tool might actually have a bug in it – the 3rd server is always off by a little bit compared to Millenium, but it’s usually less than 0.01 so I haven’t worried about it. I need to update my other site to account for randomized matchups; right now you can’t plug-and-play scores since it assumes the tiers are unrandomized.
FoW actually has a small (~2.5%) chance of ending up in T8, so they’re not always paired up with Vabbi. What Vabbi rolls doesn’t matter at all for matchups (although they may flip between red and blue on occasion). FoW’s performance this week actually has an effect. Based off my earlier calculations (from scores ~9am ET today) they’ll have a rating around 690 and a deviation about 217. For them to not be matched with Vabbi, they need to roll higher than two other servers. The two most likely candidates for that are Blacktide (~1103 rating, 229 dev) and Whiteside Ridge (1190, 182).
Assume FoW rolls at the top of the roll range. This will add 257 to their randomized rating, putting them around 948. Blacktide needs to roll in the bottom 43% to get below 948, and Whiteside needs to roll…well, when I plug the numbers in, they can’t roll lower than ~967. Nor can Arborstone roll low enough. So I have to wonder where those ~2.5% of rolls that came out with FoW at the 24 seed actually came from. Maybe some anomaly from not starting with a sorted server list, although I don’t know why that would affect it. Time to run more tests…
Edit: Yep, that was it. Generating corrected results again…
(edited by Xerol.1578)
Somehow I completely overlooked that in their explanation post. Rerunning things now.
Of course there’s no indication they’re actually using 1.0 and 40 as the parameters; they just gave those as an example. If a dev could confirm or deny, it would be very helpful.
edit: Fixed & Updated. See first post.
(edited by Xerol.1578)
Xerol, my numbers are very different from yours. would you mind posting the code you used to calculate base ratings and the code you used to calculate matchup ratings?
I’m thinking that either your code or my code (or possibly both) are wrong and I’d like to track down why so that we both produce similar results.
attached are the base ratings I used for my matchup calculations; they are predicted ratings based on live scores from a couple of hours ago.
-ken
My code is terribly written and barely even readable by myself so I’ll just explain the process.
I used numbers from millenium with my single match calculator (which shows the new deviation/volatility scores) to get the new ratings and deviation for the next week. The ratings I got agreed with what millenium had within 0.001, and I’ve checked this tool against posted ratings with past scores to verify that it is accurate. For the record, here’s the resulting data I had from the NA servers as of 7am ET today:
1,Sanctum of Rall,2194.217,176.434,0.741,1
2,Blackgate,2181.708,170.258,0.736,1
3,Jade Quarry,2117.046,170.594,0.735,1
4,Tarnished Coast,2013.035,182.921,0.739,2
5,Dragonbrand,1956.766,178.986,0.741,3
6,Fort Aspenwood,1875.687,178.834,0.738,3
7,Sea of Sorrows,1797.509,185.074,0.756,2
8,Maguuma,1769.354,182.97,0.752,3
9,Yak’s Bend,1690.237,170.607,0.737,4
10,Kaineng,1688.495,172.55,0.767,4
11,Crystal Desert,1681.059,179.546,0.747,2
12,Ehmry Bay,1655.469,181.671,0.764,4
13,Borlis Pass,1395.871,179.659,0.742,5
14,Stormbluff Isle,1394.844,186.433,0.764,5
15,Anvil Rock,1350.646,176.75,0.737,5
16,Darkhaven,1232.168,172.042,0.743,6
17,Isle of Janthir,1217.504,179.222,0.763,6
18,Gate of Madness,1159.356,175.038,0.74,7
19,Northern Shiverpeaks,1155.672,173.548,0.735,6
20,Sorrow’s Furnace,1113.432,175.11,0.765,7
21,Henge of Denravi,1087.719,177.439,0.74,8
22,Devona’s Rest,1080.806,176.863,0.746,7
23,Ferguson’s Crossing,894.601,174.808,0.749,8
24,Eredon Terrace,851.438,176.86,0.747,8
Then, for 2 million iterations, my program generates a random number from -1 to 1, multiplies it by the (new!) deviation, and adds it to the server’s (new!) rating. (And this may be where I’m wrong – I don’t think they’ve divulged the exact method of randomizing ratings; they only gave this method as an example.) This gives me 2 million lists of servers, each of which is then sorted, and from those I derive tiers and matchups.
For each roll, and for each server, I determine which other servers are in the same tier. For memory efficiency I just use a bitfield for this, so if a particular roll comes up with Maguuma (rank 8) in 7th, DB (rank 6) in 8th, and Kaineng (rank 10) in 9th (after the random roll), the bitfield would look like 000000000000001010100000_2 (672 decimal). A matchup where Maguuma rolled 8th and DB rolled 7th would look EXACTLY THE SAME, so it’s color-agnostic. This gives me a unique number for a particular matchup on a particular tier, which is why my results will have the same matchup showing up in different tiers.
For each roll, these numbers are stored, and later counted up to get the most common matchups for each server. To easily pull out the matchups by server for making the graphs, each server stores its matchup bitfield for each roll, which means a bit of duplication, which is where my application gets memory-hungry and why I can only do about ~2.4 million rolls at most at a time. I suppose I could discard most of the data after doing a particular roll but I’d probably have to rewrite the program from scratch to do this, since it’s organized in the stepwise fashion detailed above.
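If it helps, here’s a stripped-down Python sketch of that whole loop – roll, sort, slice into tiers of three, encode each tier as a color-agnostic bitfield, count. It only uses a handful of servers with rounded numbers from the table above (the real run uses all 24), and it assumes the uniform +/- deviation roll described earlier.

import random
from collections import Counter

# A few (name, rating, deviation) entries, rounded from the table above;
# the real program uses all 24 servers.
servers = [
    ("Dragonbrand", 1956.8, 179.0),
    ("Fort Aspenwood", 1875.7, 178.8),
    ("Sea of Sorrows", 1797.5, 185.1),
    ("Maguuma", 1769.4, 183.0),
    ("Yak's Bend", 1690.2, 170.6),
    ("Kaineng", 1688.5, 172.6),
]

ROLLS = 100000
TIER_SIZE = 3
bit_of = {name: 1 << i for i, (name, _, _) in enumerate(servers)}

def simulate(servers, rolls=ROLLS):
    matchup_counts = Counter()
    for _ in range(rolls):
        # Assumed randomization: rating + uniform[-1, 1] * deviation
        rolled = sorted(servers,
                        key=lambda s: s[1] + random.uniform(-1.0, 1.0) * s[2],
                        reverse=True)
        for t in range(0, len(rolled), TIER_SIZE):
            tier = rolled[t:t + TIER_SIZE]
            # Color-agnostic bitfield: one bit per server, so red/blue/green
            # permutations of the same three servers collapse to one key.
            key = 0
            for name, _, _ in tier:
                key |= bit_of[name]
            matchup_counts[(t // TIER_SIZE + 1, key)] += 1
    return matchup_counts

counts = simulate(servers)
for (tier, key), n in counts.most_common(5):
    members = [name for name, _, _ in servers if key & bit_of[name]]
    print("Tier %d: %s -- %.2f%%" % (tier, ", ".join(members), 100.0 * n / ROLLS))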
The variation could just be because we started with different numbers, or because we used different randomization methods, or because my sample size is much smaller.
(edited by Xerol.1578)
Just realised why that’s wrong. SoS couldn’t get randranked higher than SoR, but they could be in the same “bucket”. Still, it’s highly unlikely. Actually figuring out which matchups can’t happen is a bit of a pain since it also depends on what the other 22 servers roll. It’s very unlikely, though.
Working on EU right now.
I had my computer roll a bunch of virtual dice and these came out. They’re based off the current scores for this week, about an hour before this post, so they may slide around a bit before reset, but it’s late enough in the week that the score ratios likely won’t change much.
Some notes on the image and methodology:
-Doesn’t predict what color anyone will be. So every permutation of colors for 3 servers on the same tier is grouped together (the alternative had me getting 1999983 unique matchups out of 2000000 rolls)
-The rankings/tiers given are the result of the randomized matchups. The graphs show the likelihood of a server being “ranked” at a given rank after randomization.
-Right side shows the 27 most likely matchups. I have every matchup that came out saved in a file, broken down by server, if anyone wants to see the results for their server let me know.
-Two million rolls is about as much as I can do in a single run, and it’s not currently set up to easily aggregate multiple runs. I may, in the future, do this, so the results can become more averaged.
-NA Only for now, I can do EU too if there’s demand.
Some general notes on matchups:
-The average deviation is decreasing. After last week the average was 183.3531; this week it’s 177.2590 (estimated). This makes servers less likely to get matchups far outside their ranking.
-The decreased deviation is making the T4-T5 gap more significant. Basically there are two supertiers on NA right now, 1-12 and 13-24. This is most significant for servers close to the gap: Ehmry Bay (est. Rank 12 after this week) has less than a 10% chance of being placed in a matchup with T5 and lower servers, and Borlis Pass (est. Rank 13) has about a 10% chance of matching with T4 and above.
-Running the numbers from the end of last week, the T3 matchup for this week had a 0.4% chance of happening. The exact same matchup could’ve happened as T2 with an 0.8% chance, or even as T4 with an 0.002% chance. But, if the dice are feeling saucy, basically anything can happen.
-Well, not anything. Servers are still limited by their deviation in how far they can “move” via randomization. Sanctum of Rall (est. T1 after this week, rating 2194.2, deviation 176.4) could not possibly roll lower than 2017.7, while Sea of Sorrows couldn’t roll higher than 1982.6 (est. T7, 1797.5, 185.1), so they could never actually match up.
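That last note is really just a range check. Here’s a quick Python sketch of it under the same uniform-roll assumption (the hard bounds are rating +/- deviation); strictly speaking it only tells you whether the lower server can ever out-roll the higher one – as the correction further up the page notes, whether they could share a bucket also depends on what the other 22 servers roll.

def roll_range(server):
    # Hard bounds of the assumed uniform roll: rating +/- deviation
    rating, deviation = server
    return rating - deviation, rating + deviation

def can_outroll(lower, higher):
    # Can the lower-rated server ever roll above the higher-rated one?
    # Only if the top of its range reaches the bottom of the other's.
    return roll_range(lower)[1] >= roll_range(higher)[0]

sor = (2194.2, 176.4)   # Sanctum of Rall: est. rating, deviation
sos = (1797.5, 185.1)   # Sea of Sorrows
print(roll_range(sor))        # roughly (2017.8, 2370.6)
print(roll_range(sos))        # roughly (1612.4, 1982.6)
print(can_outroll(sos, sor))  # False -- SoS can never roll above SoR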
Edit: Updated with corrected (+40) deviations and both NA and EU graphs!
Edit 2: I think I’ve worked out all the bugs. Fixed charts attached.
(edited by Xerol.1578)
Does the dolyak looting bug (sets your supply to 10 if you have more than 10 supply, from a +5 guild bonus) also affect you if you’ve got extra supply from the rank bonus?
What are the post-reset deviation and volatility numbers?
My calculators have been updated with the latest numbers.
http://xerol.org/gw2/what-if-all-na.html
http://xerol.org/gw2/what-if-all-eu.html
A few more details, since everything reset with the patch, and we were still unable to clear it tonight:
-Prior to the patch, at least 10 days before, North and South reinforcements had arrived. At Rally, the event blurb said “waiting for central”.
-Central was stuck at “The mega lasers destroyed the undead bone ship. The Pact will soon assault the krait structures at Stygian Deeps.” No NPCs would spawn at Stygian or anywhere else for Central.
-After reset, we cleared the event with only Central and South reinforcements. North was about 75% towards Rally when someone started the final push, and the Northern NPCs disappeared.
-After succeeding, the Northern NPCs reappeared at the temple. With the event still active. The karma merchant was stuck in the dialogue tree from the beginning of the event.
-We let the risen re-take the temple, and tried again. We cleared north and south, and the event wouldn’t start. We then cleared central, which was stuck waiting for someone to clear the giant shark, and that proceeded smoothly.
-Around this time, someone mentioned MORE reinforcements coming from North. This event (apparently) failed.
-Now, at Rally, there’s reinforcements from North, Central, and South; all 3 reinforcement leaders say there’s still preparations to be made, and the final push will not start.
It might be worth looking into that buggy North reinforcements event, since it seems like it may not have completely reset when the temple was lost. This is evidenced by a second group of northern reinforcements starting off.
Okay, this is ready to release. It’s not great (could use a little CSS care, if anyone wants to step up to it) but it’ll tell you the ratings for the next matchup within +/- 2 rating points (if anyone can see something wrong with my implementation of glicko2, let me know).
So what is it? It’s an instant ratings calculator!
http://xerol.org/gw2/what-if-all-na.html (change ‘na’ to ‘eu’ for europe)
All you need to fill in is the “in-game points” column, and maybe hit the Calculate button (it should automatically update, but doesn’t always work, so just hit the button to make sure everything updates). A couple things I found out while putting it together:
-Actual points don’t matter; it’s the ratio of points that matters (see the quick check after this list). So if you want to input points quickly, just enter the thousands (20 222 49 instead of 20xxx 222xxx 49xxx) and you’ll get pretty much the same results.
-Total points earned ranges from around 580k (in the lower tiers) to 610k (T4 and above), if you want to play around with end-of-week estimates.
-Because of the scaling function (see this handy thread), once you get a ratio of about 5:1 over a server, beating them by more doesn’t help much. Which is why, after about Monday, you pretty much have an idea of what the next week will look like.
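And a tiny check of the “only the ratio matters” point – the full scores here are made up, but entering just the thousands barely moves anyone’s share of the total, which is what the rating update is working from:

def shares(scores):
    total = sum(scores)
    return [s / total for s in scores]

full = [20431, 222874, 49102]   # hypothetical full scores for a week
thousands = [20, 222, 49]       # what you'd actually type in

for f, t in zip(shares(full), shares(thousands)):
    print("%.4f vs %.4f (diff %.4f)" % (f, t, abs(f - t)))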
Things I’m still working on:
-Persistent scores that don’t need to be manually verified
-More what-if analysis (hence the page name), like estimating how many points a server needs to maintain or move up in position
-Automatic end-of-week updates
-Historical data
-Making it look better (I’m a coder not a designer)
Bugs/notes:
-Ratings calculation isn’t perfect, but it’s pretty close – after last week I calculated ratings of 1670.306, 1339.256, and 1341.473 for last week’s T5 matchup. The actual results were 1671.007, 1338.760, and 1341.406. It seems especially inaccurate when servers are making larger jumps.
-Doesn’t always auto-update when typing in or pasting scores.
-I have to wait for the “wvw ratings thread” to be posted for the week before I can update it. This is because there’s some black magic going on where I can’t get the right volatility/deviation numbers myself. Normally it seems to be posted shortly after reset although this week’s thread didn’t get posted until Tuesday, which is why I’m releasing it now and not on Friday, when I had it ready.
Feedback/suggestions welcome!
Edit:
Regarding the current rankings on millennium.org, remember that those were the first updates for this match-up. That means that those scores were uploaded during the first 3 hours after reset — smaller numbers tend to cause more radical swings in the rankings, but stabilize over the course of the week. For example, I took this SS after submitting a score ~30 mins after reset. The rankings at the end of the week normalized as numbers became larger. You won’t be able to fully grasp how things are going to play out until Monday.
I actually wrote a little (private, for now) app that lets you put in scores and get instant ratings out, and I was using the latest screenshotted scores to get the numbers I posted above. Also, Millenium’s calculations are actually wrong somewhere. I was initially getting numbers a couple of points off from Millenium’s, so I went and found some (verified accurate) Glicko2 calculators, and my numbers were correct while Millenium’s were the ones that were wrong. It’s a very small error (no more than 1.5 rating points), but with the close scores possible this week, something to keep in mind.
T3? I’ve been crunching numbers all night and there’s a scenario where we end up in T2 next week:
-Maguuma beats IoJ about 6:1 and YB about 3:1 (the MG-YB score actually matters very little in this scenario).
-FA/DB both end up with a lead over CD in T3. Doesn’t need to be much.
-TC loses badly on T2 (I haven’t checked the score lately, but an early score indicates this might happen).
That puts all of Mag, TC, and CD around 1780-1800 rating, and at that point a few thousand WvW points either way decides which of those ends up 6th, 7th, or 8th. Right now TC is hanging around 1820 and CD around 1800, so this isn’t too likely.
I’ll put together something that lets me generate these numbers easier later on, but right now here’s the latest ranking estimates I have for next week, based off the latest scores I checked:
7. CD (1803.1)
8. Maguuma (1773.9)
9. FA (1750.6)
10. DB (1748.9)
11. YB (1657.8)
12. Kaineng (1560ish, don’t have exact scores) or IoJ (1533.5), so probably Kaineng.
Are the posted deviation/volatility numbers rounded or truncated? I’m trying to write my own calculator and I’m getting numbers that are off by a couple of points.
e: Also, any chance of getting these as a CSV for easier parsing?
e2: After checking with some other Glicko2 calculators, it seems as if my numbers are accurate and Millenium’s are off somewhere. So all’s good. CSV request stands, though.
(edited by Xerol.1578)
I’ve been writing an “RPG Soundtrack” as a side project for a while. (Making the RPG itself is also a long-term goal, but not one I’m anywhere near being able to finish.) When I found out GW2 has custom playlists, I took a bunch of them, edited some, and put them together into a compilation. Here’s the description I wrote up a while ago:
This is a bunch of tracks by me, organized for use as custom music in Guild Wars 2. Included are 20 tracks, playlists to put in the game’s /Music folder, and an instruction file. I may add more tracks in the future; these may just be added to this release or published separately.
There’s some cross-over between the playlists, since I think some of the tracks work well in multiple places and the game doesn’t let you be very specific about what plays where. There’s cross-over in the game as well (“NightTime” gets played in cities and in the wild at night, for example).
A couple notes on how GW2 is handling the custom tracks:
1 – It seems to play the first track in a playlist a lot more than the rest, and the second track gets played a ton too. There doesn’t seem to be a fix for this right now (except maybe editing the playlists and randomizing the order from time to time), but once the initial rush dies down I might send in a ticket.
2 – NightTime seems to replace both Ambient and City at night.
3 – The Underwater tracks don’t seem to want to play unless you’re underwater a few seconds before the previous track ends. And they tend to cancel out if you surface, too.
HOW TO DO the whole adding-to-game thing:
1 – Read the instructions, that has detailed info on everything you should need to do.
2 – Bandcamp is awesome and let me upload the playlist files, so the step listed here is no longer necessary. Yay, Bandcamp!
3 – Enjoy.
I am not affiliated in any way with ArenaNet, NCSoft, Guild Wars, or Jeremy Soule. All tracks are written and produced by myself, mostly from other projects of mine.
You can get the official GW2 soundtrack here: http://www.directsong.com/mobile/productdetails.php?productid=2250
tl;dr version: Everything’s included to just plop the files into the custom playlist folder, playlists and all. I’ve since fixed the playlist thing by adding a short ‘intro’ track that’ll make the playlists randomize correctly.
You can grab the whole thing over here: http://xerol.bandcamp.com/album/gw2custom