Like many other PvPers, I’ve been thinking about the new algorithm that has subjected me to countless ridiculous losses (the funniest I had was something like 500v24) and equally ridiculous wins.
As I said in a previous post, the matchmaking algorithm should accomplish two objectives:
1. rank players based on their actual skill (i.e. discriminate/classify players)
2. make for fun matches
The current algorithm probably accomplishes #1, at least judging by very competent players who went on endless winning streaks and climbed very quickly. However, I feel that matches are less fun because they are either very easy to win or not even worth fighting. The situation has improved slightly (I feel) over the last 24-48h, but I don’t know whether this is going to continue or how it is going to end.
My input, as a PhD bioinformatician, would be to consider two additional elements:
1. Matches that are very well balanced are the ones whose outcome is most informative: if your prediction algorithm is worth anything at all, then a match ending 500vs100 is probably a huge waste of time for both teams. The algorithm already knew, more or less, what was going to happen and gained no significant knowledge from this match. However, if team A vs team B is rated at 50% win probability, then the actual outcome of the match matters, because the algorithm has no idea what is going to happen (this is what it means to predict a 50% win probability, i.e. a coin flip). Obviously, one should allow for some degree of luck/flexibility, meaning that a win at 500vs498 is not the same as a win at 500vs440 (a clear win, but a balanced match). So, my argument is that closely balanced matches are fun AND provide more information where it matters, i.e. between teams that are quite close in skill (see the first sketch after this list).
2. The evolution of skill measures could be based on a Bayesian “learning” framework, with divisions built a posteriori once the skill has been estimated with reasonable confidence (see the second sketch after this list). This is a bit technical, but the basic idea is that players start with a broad range of expected skill, say an MMR of 2000 +/- 2000 (i.e. anywhere from complete newb to pro player). Every match refines our understanding of a player’s skill, both as a mean and as an uncertainty: what is the “minimum” skill we expect this player to have at 99% probability, and what is the “maximum” skill he could possibly have? So the estimate progressively tightens to something like 1650 +/- 500, then 1973 +/- 30, etc. To compute divisions, you just award bonuses and “tiers” every time the 95% minimum (i.e. the mean minus the 95% confidence margin) goes over some preset value or over some quantile (percentage) of the population. You can also give out a “PvP present” every day based on where the player stands, for example, so that low-ranking players get some daily bonus after their first daily match or every time their mean increases (or their uncertainty decreases). The advantage of this solution is that there is no need to fix tournament dates: a Bayesian framework progressively improves the quality of the prediction over time, so you can reset it to zero if you want, or just keep a “perpetual” leaderboard and distribute titles or whatever every month.
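To make point #1 concrete, here is a minimal Python sketch (purely illustrative, nothing from the actual game): the expected information revealed by a match outcome is the binary entropy of the predicted win probability, which peaks exactly at 50% and collapses toward zero for lopsided predictions.

```python
import math

def expected_information(p_win):
    """Binary entropy: expected information (in bits) revealed by a
    match outcome when the matchmaker predicts win probability p_win."""
    if p_win in (0.0, 1.0):
        return 0.0  # outcome already certain, nothing left to learn
    return -(p_win * math.log2(p_win) + (1 - p_win) * math.log2(1 - p_win))

print(expected_information(0.50))  # 1.00 bit  -> the most informative match possible
print(expected_information(0.90))  # ~0.47 bits
print(expected_information(0.99))  # ~0.08 bits -> a near-certain stomp teaches almost nothing
```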
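And for point #2, a toy sketch of the Bayesian “learning” idea. Everything here (the grid, the Elo-style logistic likelihood, the tier thresholds) is a hypothetical illustration, not how the game works: keep a skill posterior on a discrete grid, multiply it by the match likelihood after every game, and award a tier only once the ~95% lower bound clears a preset value.

```python
import numpy as np

GRID = np.linspace(0, 4000, 801)  # hypothetical range of possible MMR values

def new_player():
    """Broad starting prior: 2000 +/- 2000, i.e. anywhere from newb to pro."""
    prior = np.exp(-0.5 * ((GRID - 2000) / 2000) ** 2)
    return prior / prior.sum()

def win_likelihood(skill, opponent_mu, scale=400.0):
    """Elo-style logistic probability of winning, for each skill on the grid."""
    return 1.0 / (1.0 + 10.0 ** ((opponent_mu - skill) / scale))

def update(posterior, opponent_mu, won):
    """One Bayesian update: multiply the prior by the likelihood of the
    observed result, then renormalize."""
    like = win_likelihood(GRID, opponent_mu)
    posterior = posterior * (like if won else 1.0 - like)
    return posterior / posterior.sum()

def summary(posterior):
    """Current estimate as mean +/- standard deviation (e.g. 1973 +/- 30)."""
    mu = float((GRID * posterior).sum())
    sigma = float(np.sqrt((((GRID - mu) ** 2) * posterior).sum()))
    return mu, sigma

def tier(posterior, thresholds=(1200, 1600, 2000, 2400)):
    """Award a tier once the ~95% lower bound clears a preset value."""
    mu, sigma = summary(posterior)
    lower = mu - 2 * sigma  # roughly the 95% "minimum skill" bound
    return sum(lower > t for t in thresholds)

# Toy usage: a player who keeps beating ~2100-rated opponents.
p = new_player()
for _ in range(20):
    p = update(p, opponent_mu=2100, won=True)
print(summary(p), tier(p))  # the mean rises, sigma shrinks, tiers unlock
```

The grid posterior is of course the brute-force version; real rating systems get the same effect with cheap closed-form Gaussian updates (TrueSkill works in that spirit).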
Anyway, there is a rich literature on machine learning for classification and on Bayesian models, but in the end I think it should be feasible to both make for tight matches between teams of comparable skill AND put every player where he belongs.
That said, let me add that GW2 is a great game. I hope you’ll find a fun and fair solution!