Rating System (Elo Based) Development
So many threads mention the leaderboard squatters who play a handful of games and then sit in the top slots. Anet is silent about this, even though it makes the already poor class balance pale in comparison.
All:
I think what I am really referring to is that the current rating system (whether it is based on Elo or not) could take in a lot more statistical analysis to make it more versatile. Now, I am still unsure what they are putting into the model, but I am always telling my colleagues/employees at work about any modeling: "Garbage in equals garbage out." I am not saying it is garbage; what I am saying is that if you do not account for the right parameters in the correct proportions going in, the results can range from slightly to wildly inaccurate and may lead you to the wrong conclusion.
I am sure the developers and staff at Anet are enjoying their holidays and time off right now, and I do want to thank them for improving things over time. A lot of people give them a hard time when, in the background, they are probably hard at work trying to make this more "fair" for everyone. In the end, I am mostly curious about what they did and considered, and if they wanted, I would give them some input from an engineer/statistician's point of view. That's all.
Happy Holidays!
1.) They have always used Elo-style ratings (it's actually Glicko-2) for your MMR. If you are a new player to ranked, it starts at 1200. If you start playing ranked this season, it starts as your old MMR averaged with 1200 before placements (they hard-reset deviation and volatility); see the sketch after this list. Only in previous seasons was matchmaking also constrained to a pip range on top of your MMR (no longer the case this season).
2.) The 13-0 people are pro-league players' alt accounts who have used matchmaking manipulation to get there. That's another discussion in and of itself.
3.) Sounds like that would cause rating inflation in the end.
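To make point 1 concrete, here is a minimal sketch of what that seasonal soft reset could look like. This is hypothetical: the class and field names are my own, and the deviation/volatility defaults (350 and 0.06) are the Glicko-2 paper's conventions, not ArenaNet's actual values or code.

```python
# Hypothetical sketch of the seasonal soft reset described in point 1.
# Defaults follow the Glicko-2 paper's conventions, not ArenaNet's code.
from dataclasses import dataclass
from typing import Optional

NEW_PLAYER_RATING = 1200.0
DEFAULT_DEVIATION = 350.0    # hard-reset every season
DEFAULT_VOLATILITY = 0.06    # hard-reset every season

@dataclass
class Glicko2Rating:
    rating: float = NEW_PLAYER_RATING
    deviation: float = DEFAULT_DEVIATION
    volatility: float = DEFAULT_VOLATILITY

def season_reset(old: Optional[Glicko2Rating]) -> Glicko2Rating:
    """New accounts start at 1200; returning players get their old MMR
    averaged with 1200, with deviation and volatility fully reset."""
    if old is None:
        return Glicko2Rating()
    return Glicko2Rating(rating=(old.rating + NEW_PLAYER_RATING) / 2.0)

# Example: a 1700-rated veteran would enter the new season at 1450.
print(season_reset(Glicko2Rating(rating=1700.0)))
```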
If you read up on the Glicko-2 algorithm you will see it adjusts on wins and losses and on whom (team MMR) you won or lost against. If you try to take many more variables into account you start trashing Occam's razor (which I'm sure you are familiar with). Indeed, the PvP programmers have admitted to tinkering with other variables over the four years since release and have found that sometimes less is more.
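For anyone curious what "adjusts on wins and losses and on the team MMR you faced" looks like as math, here is a simplified Glicko-1-style step (it skips Glicko-2's volatility machinery, and the numbers in the example are made up). The point is that only the result and the opposing team's rating and uncertainty enter the update.

```python
# Simplified Glicko-1-style update for a single rating period against one
# opponent (an enemy team's average rating). Not the full Glicko-2 step.
import math

Q = math.log(10) / 400.0

def g(rd: float) -> float:
    """Dampens the impact of an opponent whose rating is uncertain."""
    return 1.0 / math.sqrt(1.0 + 3.0 * (Q ** 2) * (rd ** 2) / math.pi ** 2)

def expected_score(rating: float, opp_rating: float, opp_rd: float) -> float:
    return 1.0 / (1.0 + 10.0 ** (-g(opp_rd) * (rating - opp_rating) / 400.0))

def update(rating: float, rd: float, opp_rating: float, opp_rd: float,
           score: float) -> tuple:
    """score is 1.0 for a win, 0.0 for a loss, 0.5 for a draw."""
    e = expected_score(rating, opp_rating, opp_rd)
    d2 = 1.0 / ((Q ** 2) * (g(opp_rd) ** 2) * e * (1.0 - e))
    new_rd = math.sqrt(1.0 / (1.0 / rd ** 2 + 1.0 / d2))
    new_rating = rating + Q * (new_rd ** 2) * g(opp_rd) * (score - e)
    return new_rating, new_rd

# Example: a 1200 player (RD 200) beating an average 1300 enemy team gains
# more rating than they would for beating a weaker team.
print(update(1200.0, 200.0, 1300.0, 100.0, 1.0))
```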
Thank you for the input.
Real quick: Occam's razor is an interesting principle (I appreciate the throwback to my college days). Without going into great detail, the problem is that it touches on the basis of the scientific method and hypothesis development, which can carry over into numerical simulation and model theory. While it suggests selecting the fewest variables, the difficulty is determining that in the first place. There are plenty of systems/models that use a ton of variables; they are all needed, and usually no more than needed, unless there is a halo of uncertainty that then has to be accounted for. So ideally, trial and error is applied for an appropriate amount of time (in the background by the Anet developers).
I do appreciate you informing me what the system was and that I am not crazy. Thanks again and I hope we can hear back from them at some point.
In a team game the only way to assess someone's skill toward the team objective is to consider the team objective only. Any auxiliary measurements (such as kills, damage done, etc.) all have the problem that they can be exploited, thereby compromising the team objective and hence the spirit of the game.
I somewhat agree with you, but there are ways to make those secondary variables part of the normalization rather than the major factor(s); see the sketch below. I realize this is not an easy problem, and I said as much in my first post, but I still hold the opinion that there are ways to account for this without them becoming "exploits". I do have to say that if they really did something like that, it would have to be masked and never disclosed to the players so that it would not become a problem, and that is a problem in itself, because it raises exactly the kinds of questions we already have. Always interesting.
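Purely as a hypothetical illustration of "secondary variables as part of the normalization rather than the major factor", here is one way a bounded contribution modifier could nudge the rating delta without ever outweighing the win/loss result. Nothing here reflects what ArenaNet actually does; the function, the stat, and the 10% cap are all assumptions.

```python
# Hypothetical illustration only: a secondary contribution stat nudges the
# rating change, but the win/loss result still dominates because the nudge
# is clamped to +/- cap of the base delta.

def adjusted_delta(base_delta: float, contribution: float,
                   team_avg_contribution: float, cap: float = 0.10) -> float:
    """Shift the win/loss rating delta by at most +/- cap * |base_delta|,
    based on how a player's contribution compares to the team average.
    Farming the stat can never flip or outweigh the match result."""
    if team_avg_contribution <= 0:
        return base_delta
    ratio = contribution / team_avg_contribution          # 1.0 = team average
    modifier = max(-cap, min(cap, (ratio - 1.0) * cap))   # clamp to +/- cap
    return base_delta + abs(base_delta) * modifier

# Example: a win worth +15 becomes +15.75 for an above-average contributor,
# while an AFKer on a -15 loss drops to -16.5 instead of getting a free ride.
print(adjusted_delta(15.0, 600.0, 400.0))
print(adjusted_delta(-15.0, 0.0, 400.0))
```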
John, the issue is that there should be some benefit to contribution. An AFKer can get the same points as someone who worked their tail off all game, and I think that's wrong.