Re: School Rankings
Posted: February 16th, 2021, 2:23 pm
It seems to me that the largest tournaments (particularly MIT) are drowning out other competitions. According to the spreadsheet, nearly every team that attended MIT had by far its best performance of the year there, which seems suspicious; consequently, most teams that did not attend are underrated. The problem seems to be giving tournaments different "weights" and then making teams' scores for a tournament some fraction of that weight. That makes it impossible for a stellar performance at a medium-sized meet to make a big difference. Maybe the "weight" could be standardized so that it corresponds to a given superscore?
A simple way of doing this (I'm sure you can think of a better system, this is just an example lol) would be to assign a score of 100*W/SS for a given tournament, where W is the weight of the tournament and SS is the team's superscore. A team's superscore will be higher at larger, more heavily-weighted tournaments, so this would give medium-sized tournaments a chance to make a difference while still reserving the highest scores for the largest tournaments. For instance, if team X sends equal-caliber squads to a 60-team tournament and a 120-team tournament with similarly strong fields, the weight of the latter would be about twice that of the former, but team X's superscore would not quite double at the larger tournament, making the larger tournament worth more.
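To make the arithmetic concrete, here's a minimal sketch in Python, with invented weights and superscores (the function name and the numbers are hypothetical, not taken from the actual rankings spreadsheet):

[code]
# Hypothetical sketch of the proposed scoring (names and numbers invented,
# not from the actual rankings spreadsheet).

def tournament_score(weight, superscore):
    # Proposed score: 100 * W / SS. A lower superscore (better placements)
    # gives a higher score, scaled by the tournament's weight.
    return 100 * weight / superscore

# Team X at a 60-team tournament (weight ~60) vs. a 120-team one (weight ~120).
# The weight doubles, but an equal-caliber team's superscore grows less than
# linearly with field size, so the larger tournament ends up worth more.
print(tournament_score(60, 150))   # 40.0
print(tournament_score(120, 250))  # 48.0
[/code]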
This would also give an intuition for what differences in the rankings actually mean: a team with twice another's rating would be expected to score half as many superscore points at a given tournament, assuming a full stack. Under the current system, if New Trier has a score of 71 and Naperville North has a score of 31 (strange considering the UChicago results), it is unclear how they would be expected to fare against one another in competition, beyond New Trier doing better. Of course, different ranking systems are ideal for different data sets. I think the current system is well adapted to produce reasonable results for the top 5 teams in the nation, but beyond that it loses some of its accuracy.
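To illustrate that intuition with made-up numbers: rearranging the proposed formula, a team's expected superscore would be SS = 100*W/rating. At a meet with weight W = 60, a team rated 50 would be expected to superscore around 100*60/50 = 120 points, while a team rated 25 would be expected around 100*60/25 = 240 points: twice the rating, half as many points.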