What I would change in the ranking system
This doesn’t work because:
- You would need to play a large number of games for the calculations to be anywhere near accurate, skill-wise. A single day of games per player for these calculations to settle is little more than the “luck of matchmaking”, even more so than it currently is.
- Comparing average end scores between the winning and losing teams has been proposed, but it falls short. The average losing score (according to Anet’s statistics) is almost always in the 300–350 range. Not to mention that the Foefire map will heavily skew the results: a 250+ lead over another team makes the game look like a much larger point difference than it really was. Averaging a player’s end score across wins and losses hardly correlates with that player’s skill rating… there’s still no efficient way to calculate an individual player’s MMR other than what’s already in place.
Anet could make Leagues stricter, but then the “stuck in MMR hell” playerbase would become ever more irate about not progressing through the leagues, and more irate still about not gaining a darn thing once they actually reach the division they’re supposed to be in, again halting progression.
Honestly, I felt the league progression was right on the money. Competitive players progressed quickly while casuals reached Diamond/Legendary towards the end of the season. I think Anet would like to keep this type of league progression. That said, something could still be done for players who stay in a certain division. Maybe something like: play 75 games to complete the current division – you earn the rewards but you don’t actually advance to the next division.
Thanks for your input. Let’s work through your objections.
We first have to agree that the skill of an individual player can be estimated from the outcomes of the games he played in. Otherwise, we would have to reject any automated ranking, which would be impractical.
Given that, we assume that each end score has an unknown probability distribution of which the skill of the player is but a single variable. Therefore our estimate must take into account the outcomes of many games if we aim to isolate the effect of the player. In that respect you are right. However, only a relatively small number of games is necessary, as the distribution of the average score will quickly approach a Normal distribution (the famous bell curve). In the absence of bias in team formation, the average opponent team’s score would follow a Normal distribution centered around the expected team score over all possible team formations (something Anet can estimate with high accuracy, as they have access to all games), while the player’s own team score would follow a Normal distribution centered around an expected score highly correlated with the player’s skill. 100 games may seem a small number, but I assure you that it is sufficient from a statistical standpoint IF there is no bias in team attribution AND the population remains constant.
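A minimal sketch of that convergence, assuming (purely for illustration) a skewed, made-up per-game score model rather than anything from Anet’s data:

```python
import random
import statistics

random.seed(42)

def one_game_score(skill):
    """Hypothetical per-game score: the player's skill plus skewed noise."""
    return skill + random.triangular(-150, 100, 40)  # deliberately not Normal

TRUE_SKILL = 350

# By the central limit theorem, the spread of the average over n games
# shrinks like 1/sqrt(n), so the estimate tightens quickly.
for n in (10, 25, 100):
    averages = [statistics.mean(one_game_score(TRUE_SKILL) for _ in range(n))
                for _ in range(2000)]
    print(f"n={n:3d}  mean of averages={statistics.mean(averages):6.1f}  "
          f"spread={statistics.stdev(averages):5.1f}")
```

Even though the per-game noise is skewed, the averages line up around the true expected score, and the spread narrows as the number of games grows.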
To address your concern, we could wait until a player has participated in a certain number of games in a division, say 25, before he becomes eligible to move up. That way, his estimated skill would reflect the actual conditions of the division, and we could compare it with some confidence against the skill of that population. After 100 games without moving up, the player’s estimated skill could be considered an accurate reflection of his rank within that division.
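As a sketch, the gating rule could be as simple as this (the 25/100 thresholds are the numbers from this post, and `division_skill_cutoff` is a hypothetical parameter, nothing official):

```python
MIN_GAMES_FOR_PROMOTION = 25   # games in the division before moving up is possible
GAMES_FOR_STABLE_RANK = 100    # after this many games, treat the estimate as settled

def promotion_status(games_in_division: int, estimated_skill: float,
                     division_skill_cutoff: float) -> str:
    """Decide what to do with a player under the rule described above."""
    if games_in_division < MIN_GAMES_FOR_PROMOTION:
        return "ineligible: not enough games in this division yet"
    if estimated_skill >= division_skill_cutoff:
        return "eligible for promotion"
    if games_in_division >= GAMES_FOR_STABLE_RANK:
        return "rank within this division considered accurate"
    return "keep playing: estimate still settling"
```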
About the luck of matchmaking: there isn’t any. The current algorithm is clearly not a random assignment within the population of the division; if it were, there would not be such long winning or losing streaks. Therefore, we have to assume that there is always a possibility of bias introduced by matchmaking, and we have to neutralize it. Otherwise the outcomes of matches are not independently distributed and the above argument no longer works (no bell curve, no way to estimate skill with any degree of confidence). That’s why it is so important to discriminate between wins and losses and give them equal weight in any estimate of the player’s skill.
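One way to give wins and losses equal weight, sketched here as a plain 50/50 blend (my reading of the idea, not any official formula):

```python
from statistics import mean

def estimate_skill(win_scores, loss_scores):
    """Blend the win-average and the loss-average with equal weight, so a
    lopsided win/loss ratio (e.g. a streak handed out by the matchmaker)
    cannot drag the estimate around."""
    return 0.5 * mean(win_scores) + 0.5 * mean(loss_scores)

# A player on a losing streak: 5 wins against 20 losses.
wins = [480, 500, 455, 510, 470]
losses = [330, 310, 350, 325, 340] * 4
print(estimate_skill(wins, losses))  # 407.0 – unmoved by the 5:20 imbalance
```

Because each side contributes exactly half, piling on more losses (without any change in how the player actually performs in them) leaves the estimate where it was.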
You say that the average losing score is almost always in the 300–350 range. I say that this is to be expected over all games and is in no way an argument against using score as an estimate. On the contrary, the average score of losing games involving a particular player would be tightly distributed around a precise value that is highly correlated with that player’s skill. That is the precise property upon which I base my suggestion.
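A quick illustration of that property, with a made-up model in which a player’s skill nudges the losing-team score (the 0.3 coefficient and the noise level are invented for the demo):

```python
import random
import statistics

random.seed(7)

def losing_score(player_skill):
    """Made-up model: losing-team score centered near 325, shifted slightly
    by the player's skill, with plenty of per-game noise."""
    return random.gauss(325 + 0.3 * (player_skill - 300), 40)

for skill in (250, 300, 350):
    avg = statistics.mean(losing_score(skill) for _ in range(100))
    print(f"skill={skill}: average losing score over 100 losses = {avg:.1f}")
```

The averages all sit in the familiar 300–350 band, yet after 100 losses they separate by skill, which is exactly the signal the estimate relies on.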
Regarding the Foefire map, you raise a valid point. Killing the opposite lord can cause a large score difference. However, I argue that players take that into consideration when they pick a strategy; you will occasionally see a team reverse the outcome of a match by giving up on defending the capture points and going for the opposite team’s lord. In any case, any difference in score distribution between game types will average out and approach a Normal distribution after a number of independent games. Therefore I cannot accept your conclusion. There could be a bias if players focused on specific game types, but then you could simply take that into account, just as we did with the win/loss imbalance bias. But I don’t believe it’s an actual problem.
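To see why the occasional blowout washes out, here is a toy model (the margins, the 25% Foefire rate, and the 250-point lord swing are all invented numbers):

```python
import random
import statistics

random.seed(1)

def game_margin(foefire_rate=0.25):
    """Made-up model: a modest score margin in normal games, with a large
    lord-kill swing added in a fraction of Foefire games."""
    margin = random.gauss(120, 60)
    if random.random() < foefire_rate:
        margin += 250  # hypothetical lord-kill blowout
    return margin

# Five independent 100-game samples: the averages cluster around the same
# expected value (120 + 0.25 * 250 = 182.5) despite the blowouts.
samples = [statistics.mean(game_margin() for _ in range(100)) for _ in range(5)]
print([round(s) for s in samples])
```

The blowouts add variance, not a per-player bias: over enough independent games they shift every player’s average by the same amount, so they cancel out of any comparison between players.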