About RPG

Relative Performance Grading

Since the first college basketball game, since the first college football game, fans have struggled to answer one very simple question: How good is my favorite team? And the follow-on question: How good is my favorite team relative to all other teams? In other words, where should my favorite team be ranked, and when should it be favored to defeat another team? These two questions form the basis for all the good-natured arguments fans have with their friends as each season progresses.

When fans attempt to predict the winners of all sixty-three NCAA tournament games, or attempt to predict the four teams selected for the College Football Playoff field, the task becomes nearly impossible. For example, the odds against picking sixty-three game winners (a perfect bracket) in the NCAA Men's Basketball Tournament are 128 billion to 1.
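
For scale, here is a hedged back-of-the-envelope check (my illustration, not a calculation from the book): if every tournament game were a pure coin flip, a perfect bracket would be one outcome out of 2^63, roughly 9.2 quintillion. The far smaller 128 billion figure presumably assumes a picker with basketball knowledge rather than random guessing.

```python
# Back-of-the-envelope check, illustrative only; the 128 billion figure is the author's.
# If each of the 63 games were a fair coin flip, a perfect bracket would be
# one outcome among 2**63 equally likely brackets.
coin_flip_odds = 2 ** 63
print(f"Random-guess odds: {coin_flip_odds:,} to 1")        # 9,223,372,036,854,775,808 to 1
print(f"Odds quoted for an informed picker: {128 * 10**9:,} to 1")
```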

These tasks are daunting because there is no scientific measurement of how good a team is at any point in a season. A team is good if it plays the game well and a team is better than other teams if it plays the game better than the other teams. Seems simple enough, but without a scientific measure of playing performance, all we have is conjecture. The polls are conjecture. Expert opinions are conjecture. Game predictions are conjecture. Conjecture is a guess without a factual basis. Every year the polls are proven wrong by actual game results. Every year expert predictions are proven wrong by actual game results. The 2017 college football preseason polls have recently been released and the only thing we know for certain is that they are wrong and that the end-of-season rankings will look nothing like the preseason rankings.

For fans and experts alike, the shorthand substitute for a quantitative measure of playing ability is the won/lost record. However, the won/lost record is the most deceptive of all measures of playing performance. We know instinctively that a ten-win/two-loss record in the MAC isn't as good as the same record in the SEC or ACC, but by how much? And how do we distinguish between two 10-2 records inside the SEC or ACC?

Won/lost records are like pass/fail grades on tests in school. To rank two students with the same number of passes and fails, we need to see the letter grades (A, B, or C for "wins"; D or F for "losses") or, better yet, the numerical grades that distinguish one A from another or one B from another. A win is produced when one student (team) receives a higher grade on a test (game) than another student (opposing team), but the actual grades, the measure of how well the student (team) knows the material (plays the game), can vary from good to poor for either or both students (teams). For example, a team can get a win for a D grade if its opponent gets an F grade; despite the win, neither team is very good. This is called "winning ugly." On the other hand, a team can take a loss with an A grade if its opponent earns a higher A grade, yet both teams are very good. When the grades for an entire school year (season) are summed into an overall grade, all students (teams) in the class (NCAA) can be ranked and the valedictorian (the best team in the country) can be identified.
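
To make the analogy concrete, here is a minimal sketch with invented numbers showing how identical pass/fail (won/lost) records can hide very different performances:

```python
# Two students (teams) with identical pass/fail (won/lost) records.
# The numeric test grades are invented purely for illustration.
student_a = [91, 88, 95, 58]   # three passes and one fail, earned with strong grades
student_b = [62, 65, 61, 40]   # three passes and one fail, earned while "winning ugly"

passes_a = sum(grade >= 60 for grade in student_a)
passes_b = sum(grade >= 60 for grade in student_b)
print(passes_a == passes_b)            # True: the pass/fail records are identical
print(sum(student_a), sum(student_b))  # 332 vs. 228: the summed grades separate them
```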

Therefore, college basketball and college football need a method of grading playing performance that assesses team quality more accurately than the won/lost record. To grade playing performance, we must identify the factors that determine the outcome of every game, and we must have statistics that measure how well a team did the things that produce victories. The first place to look was the mountain of statistics already collected to record the action in every game. The disappointing finding was that the traditional statistics collected and published about college basketball and football games have nothing to do with winning and losing. Traditionally, we have been measuring effort (yards and first downs gained, steals and rebounds) rather than results (points on the scoreboard). In football, teams can gain more yards and rack up more first downs and still lose the game. In basketball, teams can shoot a higher percentage from the floor, grab more rebounds, and still lose the game. These traditional statistics are therefore not deterministic: they do not determine the outcome of games.

As a result, the deterministic factors, the factors that do determine the outcome of games, had to be identified for both basketball and football. There are five factors that decide winners and losers in every college basketball game, and eleven factors that decide winners and losers in every college football game. New statistics then had to be invented to represent these factors and produce a numerical grade for each team's playing performance in a game. Grades are then adjusted for strength of opponent (incremented for good opponents, decremented for weak ones) and for playing location (incremented for playing on the road or at a neutral site). The team with the better grade is always the winner and receives a "W", but the grades can vary from good to poor depending upon how well the teams played.
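
As a rough illustration of that pipeline, here is a minimal sketch. The factor scores, weights, and adjustment amounts below are invented placeholders, since the actual RPG factors and formulas are reserved for the books:

```python
# Illustrative sketch only: the real RPG factors, weights, and adjustment
# rules are not published here, so every value below is a stand-in.

def raw_grade(factor_scores, weights):
    """Combine per-factor performance scores into a raw grade for one game."""
    return sum(score * weight for score, weight in zip(factor_scores, weights))

def adjusted_grade(raw, opponent_strength, location):
    """Adjust a raw grade for opponent quality and playing location.

    opponent_strength: a hypothetical measure such as the opponent's average
        grade to date, positive for good opponents and negative for weak ones.
    location: "home", "road", or "neutral".
    """
    grade = raw + opponent_strength        # increment for good opponents, decrement for weak ones
    if location in ("road", "neutral"):    # increment for not playing at home
        grade += 1.0                       # placeholder bonus
    return grade
```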

When the grades for all games played are added together, teams can be compared and ranked accurately. More importantly, the outcome of a future game between any two teams can be predicted with a factual basis for the prediction. The deterministic factors, their statistical representations, and their algorithmic calculations comprise what we call the Relative Performance Grading (RPG) system.
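
Continuing the same toy sketch (hypothetical structure, not the book's actual algorithm), season-long comparison and prediction would look something like this:

```python
from collections import defaultdict

def season_totals(game_grades):
    """Sum each team's adjusted game grades into a season grade.

    game_grades: iterable of (team, adjusted_grade) pairs, one per team per game.
    """
    totals = defaultdict(float)
    for team, grade in game_grades:
        totals[team] += grade
    return totals

def rankings(totals):
    """Rank teams best to worst by cumulative grade."""
    return sorted(totals, key=totals.get, reverse=True)

def predict_winner(totals, team_a, team_b):
    """Predict a future game by comparing season grades: a factual basis for the pick."""
    return team_a if totals[team_a] >= totals[team_b] else team_b
```

Ranking by summed grades rather than by counted wins is what allows two 10-2 teams to be separated.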

This means that fans can fill out NCAA tournament brackets with some confidence that they are picking winners for good reasons. It also means that fans can level criticism at the College Football Playoff selection committee for reasons other than, "They didn't pick my favorite team."

The five factors that determine the outcome of every college basketball game are identified and explained in my book 128 Billion to 1, which will be released on December 12, 2017. Also detailed in the book are the rankings for the 2016-2017 college basketball season, which expose the mistakes made by the NCAA Men's selection committee, explain why Wisconsin's victory over top-ranked Villanova wasn't an upset, and explain how Gonzaga parlayed a string of weak performances into a championship game appearance.

The eleven factors that determine the outcome of every college football game will be identified and explained in my book Lies, Damned Lies and Statistics, which will be released in 2018.

On this website, weekly football and basketball rankings will be published, along with explanatory blog posts, from September through April of each year.