In the Information Technology industry, it is known as garbage in, garbage out, or GIGO. It is the tenet that the output from a computer system or mathematical algorithm is only as good as the input. Technology cannot make wine from lemons. It can only make lemonade.

That’s the problem with the RPI (Rating Percentage Index) method of calculating Strength of Schedule (SOS) and rankings: the inputs are all garbage.

RPI attempts to calculate the strength of Team A, an admitted unknown, by first calculating its winning percentage (wins divided by total games played) and giving that number, a number whose worth is precisely what we are trying to determine, a weight of 25% in its algorithm. So, we start with the unknown value we are attempting to solve for as part of the answer. Sorry, but that doesn’t sound mathematically logical.

Since RPI doesn’t know the value of Team A’s record, it calculates the winning percentage of Team A’s opponents by summing all of their wins and dividing by all of their games played. Now, the value of each of those won/lost records is just as unknown as the value of Team A’s won/lost record (in fact, RPI will go through the same illogical process with each of those teams), but Team A’s opponents’ winning percentage is given a weight of 50% and added to Team A’s winning percentage. So, now we have Team A’s unknown value added to approximately thirty other unknown values and we haven’t solved for anything yet (in the mathematical sense).

Step three is to calculate the winning percentages for all of Team A’s opponents’ opponents’ won/lost records, give them a weight of 25%, and add those hundreds of unknown values to the ~31 unknown values we already had in the algorithm.
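For the record, the whole calculation reduces to one weighted sum. Here is a minimal sketch in Python, following the simplified description above; note that the official NCAA formula also excludes games against Team A itself when computing its opponents’ winning percentage and weights home and road results differently, but the weighted-sum structure is the same.

```python
# A minimal sketch of the RPI weighted sum described above. Simplified:
# the official NCAA formula also excludes games against Team A when
# computing OWP and weights home/road results differently.

def winning_pct(wins, losses):
    """Wins divided by total games played."""
    games = wins + losses
    return wins / games if games else 0.0

def rpi(team_wp, opponents_wp, opponents_opponents_wp):
    """RPI = 25% WP + 50% OWP + 25% OOWP."""
    return 0.25 * team_wp + 0.50 * opponents_wp + 0.25 * opponents_opponents_wp

# Hypothetical inputs: a 20-10 team whose opponents play .600 ball
# and whose opponents' opponents play .550 ball.
print(rpi(winning_pct(20, 10), 0.600, 0.550))  # ~0.604
```

Every input to that sum is one of the unknown values described above.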

We still don’t have a single known or absolute value in the calculation, but that’s it: a string of unknown values is thrown together in an abominable stew to produce a meaningless number that will mislead selection committee members as they seed the brackets and sports fans as they fill out their March Madness brackets.

Beyond the lack of mathematical logic, the biggest problem with RPI is the premise that won/lost records have meaning when judging the quality of a team. We know better. Wins and losses are pass and fail grades and tell us nothing about how well either team played. We only know that the winner played relatively better than the loser. We also know that a team can play poorly and yet win or play well and still lose. Ergo, if we are judging how good a team is, wins and losses can’t be part of the equation. Just as in school, we must use numerical grades representing playing performance to differentiate wins and differentiate losses. That is the basic premise of the Relative Performance Grading system (RPG).
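RPG’s actual grading formula isn’t reproduced here, but a toy example shows what pass/fail records hide and numerical grades preserve (the grades below are invented for illustration):

```python
# Toy illustration only; these grades are invented, not RPG's formula.
# Pass/fail records hide information that numerical grades preserve.
games = [
    # (team, result, hypothetical performance grade on a 0-100 scale)
    ("Team A", "W", 68.0),  # won, but played poorly
    ("Team B", "L", 85.0),  # lost, but played well
]

# Ranking by record puts Team A first; ranking by grade puts Team B first.
by_record = sorted(games, key=lambda g: g[1] == "W", reverse=True)
by_grade = sorted(games, key=lambda g: g[2], reverse=True)
print("By record:", [g[0] for g in by_record])  # ['Team A', 'Team B']
print("By grade: ", [g[0] for g in by_grade])   # ['Team B', 'Team A']
```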

This year the committee has stopped using RPI directly but still uses it indirectly in something it calls the quadrant system. To discern good wins and bad losses, the committee ranks the 351 Division I basketball teams by their (illogical and misleading) RPI and then sorts each team’s games into four quadrants based upon the opponent’s ranking and the game’s venue.

In quadrant I we have:

  • Home games against teams ranked 1–30
  • Neutral site games against teams ranked 1–50
  • Away games against teams ranked 1–75

In quadrant II we have:

  • Home games against teams ranked 31–75
  • Neutral site games against teams ranked 51–100
  • Away games against teams ranked 76–135

In quadrant III we have:

  • Home games against teams ranked 76–160
  • Neutral site games against teams ranked 101–200
  • Away games against teams ranked 136–240

In quadrant IV we have:

  • Home games against teams ranked 161–351
  • Neutral site games against teams ranked 201–351
  • Away games against teams ranked 241–351

Wins in quadrant I are good; losses in quadrant IV are bad. Presumably wins in quadrant II are pretty good and losses in quadrant III are pretty bad.
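For reference, the quadrant rules above reduce to a small lookup table. Here is a sketch (the venue labels and function name are ours, not the committee’s):

```python
# A sketch of the committee's quadrant rules listed above.
# Venue labels and the function name are illustrative, not official.

QUADRANT_CUTOFFS = {
    # venue: (quadrant I max rank, quadrant II max, quadrant III max);
    # quadrant IV is everything beyond the third cutoff.
    "home":    (30, 75, 160),
    "neutral": (50, 100, 200),
    "away":    (75, 135, 240),
}

def quadrant(opponent_rpi_rank, venue):
    """Return 1-4 for a game, given the opponent's RPI rank and the venue."""
    q1, q2, q3 = QUADRANT_CUTOFFS[venue]
    if opponent_rpi_rank <= q1:
        return 1
    if opponent_rpi_rank <= q2:
        return 2
    if opponent_rpi_rank <= q3:
        return 3
    return 4

print(quadrant(28, "home"))     # 1: home vs. a top-30 team
print(quadrant(28, "away"))     # 1: away vs. a top-75 team
print(quadrant(150, "home"))    # 3: home vs. a team ranked 76-160
print(quadrant(150, "neutral")) # 3: neutral vs. a team ranked 101-200
```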

My first question would be: Why abandon the RPI and then use it as the basis for the quadrants? My second question would be: What is the science behind the subdivisions? Is it anything more than a guess?

RPG uses the Massey Composite Ratings (https://www.masseyratings.com/cb/compare.htm) to establish the relative strength of the 351 Division I teams. Massey averages 67 ranking services to create a wisdom-of-the-crowd consensus. RPG then subdivides the 351 Division I teams into seven categories of relative strength (a sketch of the mapping follows the list):

  • Category I = 1–12, elite teams
  • Category II = 13–37, teams that are ranked by someone or could be ranked
  • Category III = 38–75, competitive teams, mostly from power conferences
  • Category IV = 76–100, low-level power conference teams and good mid-major teams
  • Category V = 101–150, bad power conference teams and average mid-major and minor conference teams
  • Category VI = 151–250, weak teams no matter the origin or conference affiliation
  • Category VII = 251–351, teams so weak they don’t matter
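In code, the mapping from Massey composite rank to RPG category is a simple threshold lookup (a sketch; the function name is ours):

```python
# A sketch of RPG's seven categories, keyed off the Massey composite rank.
import bisect

CATEGORY_UPPER_BOUNDS = [12, 37, 75, 100, 150, 250, 351]
CATEGORY_NAMES = ["I", "II", "III", "IV", "V", "VI", "VII"]

def rpg_category(massey_rank):
    """Map a Massey composite rank (1-351) to RPG Category I-VII."""
    # bisect_left finds the first upper bound that is >= the rank.
    i = bisect.bisect_left(CATEGORY_UPPER_BOUNDS, massey_rank)
    return CATEGORY_NAMES[i]

print(rpg_category(5))    # I   (elite)
print(rpg_category(90))   # IV  (low-level power / good mid-major)
print(rpg_category(300))  # VII (teams so weak they don't matter)
```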

These subdivisions weren’t chosen at random. After measuring playing performances throughout the season, RPG knows that the teams within each category play equally well, and that each category plays better than the one below it: teams 1–12 outplay teams 13–37, teams 13–37 outplay teams 38–75, teams 38–75 outplay teams 76–100, and teams 76–100 outplay teams 101–150.

You’ll notice that our subdivisions of the top 150 teams are far more granular than the quadrants because there are significant playing ability differences among the top 150 teams. We’ve scientifically measured the difference in playing ability of teams in each of our categories. We know exactly how much better a Category I team is than a Category II team, how much better a Category II team is than a Category III team, and so on.

We can tell you that the biggest difference in playing ability is between Category VI (teams ranked 151–250) and Category VII (teams ranked 251–351). The last 101 teams can only beat one another. The second biggest difference is between Category I (teams ranked 1–12) and Category II (teams ranked 13–37). If you watch the wire service polls, you might think that all ranked teams are similar, since they jump up and down the rankings with each win or loss, but this season the twelve elite teams have separated themselves from other good, but not great, teams. We can also tell you that there is little difference between Category V (teams ranked 101–150) and Category VI (teams ranked 151–250). Beyond the top 100, there’s just a clump of undistinguished teams that lose regularly to better teams.

As we compare categories against quadrants, we see that teams of considerably different ability are lumped together in each quadrant. Home games against teams ranked 13–30 are considerably easier than games against teams ranked 1–12, but they’re lumped together in quadrant I. Home games against teams ranked 76–100 are considerably tougher than home games against teams ranked 101–160, but they’re lumped together in quadrant III.

Other arbitrary subdivisions seem to have been used for neutral site and away games. Our third question would be: Is there any scientific evidence that neutral site games are easier than away games, or is it just a supposition? RPG knows that teams play precisely 7.5% better at home than they do anywhere else. At neutral sites like Maui, the Bahamas, Alaska, etc., we treat the games as non-home games for both teams.
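In practice, that venue adjustment can be expressed as a simple discount on home performance grades. Here is a sketch (the grade scale and function name are assumptions, not RPG’s published code):

```python
# A sketch of RPG's stated venue adjustment: teams play 7.5% better at
# home, so a home grade is discounted before being compared with road
# and neutral-site grades. The 0-100 grade scale is an assumption.

HOME_EDGE = 0.075  # the "precisely 7.5%" home advantage stated above

def venue_adjusted_grade(raw_grade, venue):
    """Discount home grades; neutral sites count as non-home for both teams."""
    if venue == "home":
        return raw_grade / (1 + HOME_EDGE)
    return raw_grade  # "away" and "neutral" are treated alike

print(round(venue_adjusted_grade(86.0, "home"), 1))     # 80.0
print(round(venue_adjusted_grade(80.0, "neutral"), 1))  # 80.0
```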

It’s unfortunate that the contrivance of the quadrants will influence the Selection Committee as it seeds teams in brackets, but we won’t pay any attention to quadrants as we make our bracket predictions.