2020 Projection Systems Comparison – A Game Theory Approach

Introduction

Back in 2018, I introduced a game theory approach for comparing baseball projection systems. I am proud to say that the article was nominated for Baseball Article of the Year by the Fantasy Sports Writers Association (FSWA). The game theory methodology is now back for its third straight year.

This approach is not the standard projections comparison that most others embark on. The typical comparison relies on some type of statistical measure – calculating least squares errors, performing chi-squared tests, or perhaps even hypothesis testing. My method does not use any of those capable techniques.

Instead, I look to determine the profitability potential of each projection system by simulating what would have happened in a fantasy auction draft. In short, I game the projections.

What do I mean by this?

Let’s think about what happens in a fantasy baseball draft auction.

Suppose that Mike Podhorzer (or anyone who exclusively uses his Pod projections) walks into a rotisserie auction league prior to the 2020 baseball season. Let’s say that Mike decides to participate in an NFBC auction league. Mr. Podhorzer would take his projections and run them through a valuation method to obtain auction prices. He would generate a list that looked something like this …

Pod Projected Values: Max Scherzer 43, Christian Yelich 42, Mike Trout 41, Gerrit Cole 41, Ronald Acuna 38, Jacob deGrom 38, …, Evan Longoria 1, Aaron Hicks 1, Matt Barnes 1, etc.

In addition to the raw projected values generated by the Pod projections, Mike would then establish a price point that he is willing to pay for each player. There might be a premium that he will pay for the top ones, and a discount that he expects to save on lower cost players. He may be willing to bid up to $46 on Max Scherzer (valued at $43), but would only pay $1 for a $4 Gavin Lux, etc.

Players would now fall into two simple categories:

  1. Players that are too expensive to buy. [Auction cost > Price Point]
  2. Players that can be purchased. [Price Point >= Auction Cost]

#1 is easy to deal with – Mike won’t purchase them. If the cost to buy them during the auction exceeds what Mike is willing to pay, then he will simply pass on the player and will cease to bid. Typically, some 65-80% of all auctioned players will fall under this category.

For #2 – We can break this down a little further:

  • Players that Mike will purchase [exactly 1 out of 15, or 6.7% of all auctioned players].
  • Players that Mike will not purchase [somewhere between 13-28% of all auctioned players].

Mike cannot draft/buy all players that fall under his acceptable price point. He only has room for 23 players on his roster. Hopefully, and if he drafts successfully, he will end up as close as he can to the 23 players that will earn him the largest profits in the aggregate.

Now, there will be a number of players in the pool who will have a similar auction price & price point. If Podhorzer did his valuations correctly, he should be indifferent to choosing any specific one of them – all things being equal.
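To make the buy/pass logic concrete, here is a minimal sketch in Python. The price points echo the Scherzer and Lux examples above, while the auction costs and the helper function are purely illustrative – none of this is the code behind my actual analysis.

```python
# A minimal sketch of the buy/pass split described above.
# Price points echo the Scherzer/Lux examples; the auction costs are hypothetical.
price_points = {"Max Scherzer": 46, "Gavin Lux": 1}    # max the drafter will pay
auction_costs = {"Max Scherzer": 44, "Gavin Lux": 4}   # hypothetical market costs

def classify(player: str) -> str:
    """Category 1: too expensive to buy.  Category 2: can be purchased."""
    if auction_costs[player] > price_points[player]:
        return "too expensive - pass"
    return "purchasable - may bid"

for name in price_points:
    print(name, "->", classify(name))
```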

Measuring Success

Now that we have set the stage at the 2020 NFBC auction, let’s continue with measuring the success of Podhorzer’s projection system. I re-introduced these concepts last year, but it is worth going through yet again.

Let’s say that Jared Cross (or someone exclusively using the Steamer projections) is one of the other teams competing in this auction against Mr. Podhorzer. It is late in the auction, and both Jared & Mike need one more outfielder.

Here are the $2-3 OFs who are still available to be purchased in the auction:

Available OFs
Player              Pod   Steamer   Auction Market Value   2020 Earnings   Profit
A.J. Pollock          4         1                      2              24       22
Andrew McCutchen     10         8                      3              19       16
Shogo Akiyama         9         1                      3              -2       -5
Sam Hilliard         -1         4                      3              -3       -6
Kevin Kiermaier       1         1                      2               4        2

For 2020, A.J. Pollock and Andrew McCutchen turned out to be the most accretive players among this group of outfielders. A team purchasing Pollock would have amassed a $22 profit. Andrew McCutchen owners would have gained $16. If a team had purchased Sam Hilliard, they would have realized a $6 loss.

Question: Which fantasy team is in the best position here – Mike/Pod’s team or Jared/Steamer’s team?

The answer is Mike/Pod’s Team.

All five of these players are similarly priced by the market. Mike had higher values than the market for three players (Pollock, McCutchen, Akiyama), and Jared had two higher values (McCutchen, Hilliard). Mike will end up buying one of his three possible players, and Jared will purchase one of his two possible options. They will pass on the players whom they value below the market.

Being in the same draft, Mike & Jared might have pushed up Andrew McCutchen’s auction price a bit over the fantasy baseball average auction values. Yes, perhaps Podhorzer would have had a bigger propensity to purchase Akiyama over Pollock. But in the long run – if players are randomly nominated*, and if Mike/Jared were in different auctions, by definition – all of these players would still only be purchased for an average of $2-3.

*As an aside, auction player nominations are extremely important. You can read my 2020 LABR Mixed Auction League Draft Recap, which goes through the game theory behind making auction nomination decisions.

If Podhorzer happened to buy Pollock or McCutchen, he would have profited. If he happened to have bought Akiyama, he would have suffered a loss. On average, his projections would have earned him a $9.3 profit for his final outfield roster slot (see below).

Jared’s random player on the other hand, could only be McCutchen or Hilliard. If he had purchased McCutchen, he would have profited. If he had bought Hilliard, he would have suffered a loss. On average, Jared’s projections earned him only a $2.0 profit.

Podhorzer had a 67% chance to pick a profitable player, while Jared only had a 50-50 shot.

OF Profitability
                                    Pod   Steamer
# Players to Buy                      3         2
# Profitable Players                  2         1
# Unprofitable Players                1         1
Total Gains                          33        10
Total Losses                         -5        -6
Total Profit                         28         4
% Profitable                        67%       50%
% Unprofitable                      33%       50%
Avg Gain Per Profitable Player     16.5        10
Avg Loss Per Unprofitable Player     -5        -6
Average Profit Per Player           9.3       2.0

Looking only at these five outfielders, anyone employing the Pod projections instead of Steamer would have had a better chance to pick a profitable player and would have earned more expected profit. For this limited example, one would rather have been using Mike’s projections than Jared’s.
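The expected value reasoning above is easy to express in code. Below is a minimal sketch, using made-up candidate profits rather than the exact figures from the table, just to show the calculation:

```python
# Sketch: expected profit for a drafter's final roster slot, assuming the drafter
# ends up with one player chosen at random from the candidates their system
# would buy at market price. The profit figures below are hypothetical.

def expected_slot_profit(candidate_profits):
    """Average realized profit across the players a system is willing to buy."""
    return sum(candidate_profits) / len(candidate_profits)

system_a_candidates = [20, 15, -5]   # three buyable candidates
system_b_candidates = [15, -6]       # two buyable candidates

print(round(expected_slot_profit(system_a_candidates), 1))   # 10.0
print(round(expected_slot_profit(system_b_candidates), 1))   # 4.5
```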

The Projection Systems

We will now extend the above process, for all auctioned players during the past season. Below are the projection systems that I have analyzed for 2020:

2020 Baseball Projection Systems
Projection System   Creator
Pod                 Mike Podhorzer
ATC                 Ariel Cohen
THE BAT             Derek Carty
Razzball            Rudy Gamble
Steamer             Jared Cross
ZiPS                Dan Szymborski

The members of the projections panel for 2020 are identical to last year’s complement. I will once again be comparing the ATC, THE BAT, Steamer and ZiPS projection systems, which were available earlier this year on FanGraphs. I will also be including my colleague Mike Podhorzer’s Pod projections as per usual, as well as Rudy Gamble’s Razzball projections.

Methodology

The game theory methodology of comparison is almost identical to last year. For completeness, I will once again list the details of the procedure.

1) Start with the raw projections data (AB, H, HR, IP, K, etc.). For this analysis, I have assembled each projection system’s stats as of about July 22, 2020.

2) Produce a projected value for each player, by system. For this valuation, I use my own auction calculator, which follows a Z-Score methodology (similar to the FanGraphs auction calculator). So that I can best compare projected values to “market,” I use the NFBC main event settings (15 teams, mixed AL/NL, $260 budget and positions, standard 5×5 scoring). I also assume that players are eligible only at their original 2020 positions + any positions that they were expected to gain in the first 2 weeks of the season.
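For readers curious about the general mechanics, here is a minimal sketch of a generic Z-Score hitter valuation in Python. This is not my actual auction calculator – the league settings come from the description above, while the hitter budget share, the roster math and the function name are simplifying assumptions of my own.

```python
import pandas as pd

# A generic Z-Score valuation sketch - NOT my actual auction calculator.
# Assumes a DataFrame of projected hitter stats with columns R, HR, RBI, SB, H, AB.
# League settings follow the article (15 teams, $260 budgets); the hitter budget
# share and the 14-hitter roster math are simplifying assumptions.

def zscore_hitter_values(proj: pd.DataFrame,
                         teams: int = 15,
                         hitters_per_team: int = 14,
                         hitter_budget_share: float = 0.67,
                         budget: int = 260) -> pd.Series:
    pool = teams * hitters_per_team                      # rostered hitter slots
    df = proj.copy()
    df["AVG"] = df["H"] / df["AB"]

    # Counting categories: plain z-scores across the projected player set.
    z = pd.DataFrame(index=df.index)
    for cat in ["R", "HR", "RBI", "SB"]:
        z[cat] = (df[cat] - df[cat].mean()) / df[cat].std()

    # Rate category: weight batting average impact by at-bats.
    avg_impact = (df["AVG"] - df["AVG"].mean()) * df["AB"]
    z["AVG"] = (avg_impact - avg_impact.mean()) / avg_impact.std()

    total_z = z.sum(axis=1)

    # Value above replacement: the last rostered hitter is the baseline.
    replacement = total_z.nlargest(pool).min()
    surplus = (total_z - replacement).clip(lower=0)

    # Scale surplus value to the dollars available above the $1 minimum bids.
    dollars_available = teams * budget * hitter_budget_share - pool
    return 1 + surplus / surplus[surplus > 0].sum() * dollars_available
```

Pitchers would be handled analogously with W, SV, K, ERA and WHIP, and a real calculator would typically iterate the z-scores over only the draftable pool rather than the full projection set.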

3) Adjust the projected player values to obtain a Price Point for each player. For this, I have assumed the following:

Projection Price Points
Projected Price   Price Point
< $1              Do Not Buy
$1 to $4          $1
$5 to $9          $3 Discount
$10 to $14        $2 Discount
$15 to $19        $1 Discount
$20 to $27        At Cost
$28 to $35        $1 Premium
$36 to $40        $2 Premium
> $40             $3 Premium

For example, if Steamer projects a player for $17 – I assume that the maximum that it would pay for the player is $16. If it projected a player for $42 – I assume that it would pay up to $45. Any player below replacement will not be purchased in this exercise.
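For those who prefer code to tables, here is a short sketch encoding the price point schedule above, assuming whole-dollar projected values; the function name is simply illustrative.

```python
# A direct encoding of the Projection Price Points table above.
# Returns the maximum price a system would pay, or None for "Do Not Buy".

def price_point(projected_price: int):
    if projected_price < 1:
        return None                  # Do Not Buy (below replacement)
    if projected_price <= 4:
        return 1                     # $1 to $4 -> pay $1
    if projected_price <= 9:
        return projected_price - 3   # $3 discount
    if projected_price <= 14:
        return projected_price - 2   # $2 discount
    if projected_price <= 19:
        return projected_price - 1   # $1 discount
    if projected_price <= 27:
        return projected_price       # at cost
    if projected_price <= 35:
        return projected_price + 1   # $1 premium
    if projected_price <= 40:
        return projected_price + 2   # $2 premium
    return projected_price + 3       # $3 premium

assert price_point(17) == 16   # the Steamer example in the text
assert price_point(42) == 45
```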

4) Obtain an Auction Price. We will use an average auction value (AAV) for each player. For this, I am using actual NFBC AAVs for auctions in the month of July. I previously debated using March data instead of July market data – but decided on using information as close as possible to the season. The projections used in this analysis match the timeframe accordingly.

For the players who were not drafted or who were only drafted as a reserve in the NFBC, we will assume that they will not be bought in this exercise. Even though ATC projected Ji-Man Choi for $3, and THE BAT projected Aaron Hicks for $2, we will not include either of these players, as they were undrafted by NFBC auction participants.

5) Compute the rotisserie player values for this season. This will represent what a player was worth in 2020. It is computed using the same methodology as above in #2. Despite the short season, these will be formulated on an annualized basis.

Note that for all of the above, I have let the Z-Score method determine the inherent Hitter/Pitcher split of the total auction dollars. This will differ from the NFBC AAVs, which are typically pitcher heavy (and were 64/36 this past year).

6) Players were then “purchased” for a system if their Price Point was higher than the player’s AAV.

Terminology – We will identify a player as “purchased” as long as they appear to be a bargain for the given system.

I then tracked the number of players purchased who were profitable, the number of players purchased who were unprofitable, and their respective gains and losses.
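Putting steps 3, 4 and 6 together, the bookkeeping might look something like the sketch below. The record layout, the sample numbers and the function name are purely illustrative, and the -$5 floor on final values is the adjustment described later in the losses section.

```python
# A sketch of the purchase rule and the profit/loss bookkeeping for one system.
# Each record holds the system's price point (from step 3), the NFBC AAV, and the
# player's final 2020 roto value; names and numbers here are purely illustrative.

players = [
    # (player, price_point, aav, final_2020_value)
    ("Hypothetical Player A", 16, 14, 21),
    ("Hypothetical Player B", 5, 6, -9),
    ("Hypothetical Player C", 1, 5, 12),
]

def tally(players):
    bought = []
    for name, price_pt, aav, final in players:
        if price_pt < aav:              # step 6: pass on players priced above the point
            continue
        capped_final = max(final, -5)   # -$5 floor on final values (see Losses section)
        bought.append((name, capped_final - aav))
    gains = sum(p for _, p in bought if p > 0)
    losses = sum(p for _, p in bought if p <= 0)
    return {
        "purchased": len(bought),
        "profitable": sum(1 for _, p in bought if p > 0),
        "total_gains": gains,
        "total_losses": losses,
        "total_profit": gains + losses,
        "avg_profit_per_player": (gains + losses) / len(bought) if bought else 0.0,
    }

print(tally(players))
```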

Results

First, let’s look at the number of players that each system would “buy.” To get a sense of where the projection systems spend their money, displayed below are the number of players that would be bought by each system among the top N players, ranked cumulatively by AAV.
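The cumulative counting behind that exhibit is straightforward. Here is a minimal sketch, assuming a draft pool sorted by descending AAV with a per-system purchase flag (the data layout is my own illustration):

```python
# Sketch: count how many players each system "buys" within the top N by AAV.
# Assumes `draft_pool` is a list of dicts sorted by descending AAV, each carrying
# a boolean purchase flag per system, e.g. {"player": "...", "aav": 43, "ATC": True}.

def cumulative_buys(draft_pool, systems, checkpoints=(50, 100, 150, 200, 300)):
    counts = {n: {s: 0 for s in systems} for n in checkpoints}
    for n in checkpoints:
        for player in draft_pool[:n]:
            for s in systems:
                counts[n][s] += bool(player.get(s))
    return counts

# Example with hypothetical flags:
pool = [{"player": "Player A", "aav": 43, "ATC": True, "Steamer": False},
        {"player": "Player B", "aav": 40, "ATC": False, "Steamer": True}]
print(cumulative_buys(pool, ["ATC", "Steamer"], checkpoints=(1, 2)))
```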

Pod, ATC and THE BAT distributed their players differently than last year – away from the top, and towards the middle & bottom. Razzball and ZiPS produced a similar distribution to their norms, while Steamer bought more players early on than it typically does. As the number of players purchased is heavily dependent upon the market, it is possible that the NFBC market simply moved more towards the excellent Steamer system.

As we have seen in previous years, ZiPS advocates more of a stars and scrubs strategy than any other system. Steamer and Razzball had a larger focus on the players ranked 50-100. Pod, ATC and THE BAT had the largest focus on players ranked 150-200. In the end, each system gave the green light to purchase approximately 30% of the total player pool, which was up slightly from last year’s 26%.

Onto profitability …

In the below:

GREEN colored figures represent the more successful projection results. RED colored figures represent less successful results. The “All players” column displays the figure for purchasing every player.

Due to this year’s short season, hit rates were abnormally low, especially for elite players.

Out of the most expensive 50 players of the auction, only 7 turned a profit. This is smaller than the typical 10 that do so (20% success rate), and was likely due to the variance of the short season. Profitable players inside the top 50 included Shane Bieber, Trea Turner, Fernando Tatis and Jose Ramirez.

Pod, ATC, THE BAT and Razzball were not able to identify any of the seven profitable players of the top fifty. Steamer had the virtue of purchasing Shane Bieber, and ZiPS acquired an accretive Xander Bogaerts.

In the top 50-100 range, THE BAT was king – with a 42% success rate in that range (a 33% overall top 100 success rate). ZiPS was abysmal in the top 100 – showing just a 17% success rate. The 17% is lower than the “All Players” figure of 19%, which means that one would have had a higher success rate by simply bidding randomly on every player within the top 100.

In total, ATC was able to identify 66 of the 146 profitable players – the most of any system. Its success rate was an excellent 46%, the highest of any projection system. Steamer took 2nd place for the total player pool. ZiPS finished last with only a 36% success rate.

ATC was the clear winner in the 200-300 rank range. It was able to identify 22 profitable players, the most of any projection system. ATC had a massive 58% hit rate for that part of the curve.

Judging by the number of green boxes in the quantity exhibit, generally speaking – Steamer had the best success across all player ranges for 2020.

Now let’s look at players purchased for a loss. We expect most of the top players to be unprofitable. The rate table shown here is the complement of the profitable one above (all percentages will sum to 100%). The new information here is the quantity of failures. Pod, ATC and THE BAT purchased the fewest failures.

If we pair the number of green boxes from both the quantity and rate charts, it is clear that THE BAT was best in 2020 at avoiding traps, with Pod not far behind. ATC did a great job in the lower priced players (after player 200), but not as well with expensive players this year.

ZiPS and Razzball were the laggards, with more red boxes than anyone else.

Now onto the magnitude of player acquisition …

For the magnitude of gains, it will greatly help to first see how profitability has changed over time.

While the frequency of successes has deteriorated year over year, the dollars of profit earned per player have improved. In fact, other than in a few portions of the top 150, the magnitude of gains has also improved over 2018 levels.

It is difficult to fully tell whether projections were more successful this year vs. the NFBC drafters (and hence, the NFBC AAV), or if it was purely the result of the increased variability due to the short season. I believe that the latter reason had the larger influence. With more variance present, deviations from projections were larger both above and below. Since we are only capturing the higher portions in this exhibit, we should expect to see larger peaks.

ATC was the worst performer of any system inside the top 150. Almost all of the other projections produced 2-2.5x the amount of profit that ATC walked away with, both in total and on a per-player basis. However, the magnitude of ATC’s gains after pick 150 was tremendous, and overtook the field by pick 250.

Razzball was the winner for almost every range below pick 200. It also performed quite well in its endgame picks. Razzball’s magnitude of gains was incredibly consistent up and down the curve.

This year’s gain per profitable player chart was very telling of the optimal drafting strategy that each projection system should have employed.

Look at the tightness of Razzball & THE BAT’s profitability. Other than in the first 50, the figures stay within a narrow 9.2 to 10.9 band. It means that similar gains (on profitable players) were available to Razzball & THE BAT drafters up and down the curve. Drafting off of either of these projection systems gave users more draft flexibility.

ATC & Pod’s profitability showed a large tilt towards the middle/end of the player pool. Using those two projection sets, player picks should have been shifted away from the higher valued players. I personally advocated skewing one’s roster construction towards more of the middle-middle players during this short season – which aligned well with both ATC & Pod’s intrinsic strategy.

THE BAT seemed to provide the best blend of flexibility and value in this aspect.

Onto losses …

For the unprofitable players, as always, an important adjustment has been made to the figures. All 2020 final values have been capped at -$5. That is, we will not let a player’s obtained value in 2020 fall below the threshold of -$5. A player who was injured all season, or who was clearly droppable, should not be penalized with an overly negative final valuation, which would skew results. I have previously written about the concept of capped values more in depth here.

For 2020, the results for losses provide us more separation between the projection systems. ZiPS not only lost the most value up and down the player pool, but it also lost the most on a per-player basis. Razzball did a great job (relative to the others) in avoiding traps in the top 100 players, but not as well later on.

ATC was the clear winner this year up and down the entire player pool. At every depth, it was the best in class on a per-player basis. Especially at the top-200 level, ATC saved owners almost $2 of loss per player versus most of the other projection systems.

Onto total profitability …

Now comes the part where we put it all together, and look at the total profitability by projection system. All of the dollars gained are added up, and all of the dollars lost are subtracted out. It is the total summary of system profitability.

Before we look at the individual projections, let’s first check in to see how profitability on the whole has fared over the past few seasons.

Compared to prior seasons, there is an enormous drop in the overall level of profitability for 2020. Due to COVID (yet again), projections inherently contained more variability than in a normal season. In 2020, a top-X drafted player was far less likely to end the season as a top-X player. More than anything else, profitability this year was all about minimizing attrition.

While the above chart depicts the “All Players” profitability perspective, take a look at how the average projection system fared year over year:

At some depths, the average projection system lost almost twice as much value per player as it ever had before! That is absolutely telling of the drafting landscape of 2020.

Below are the total profitability charts by projection system:

ATC, THE BAT and Pod were the only projection systems to turn a profit for the total player pool. ATC returned $128 of profit, which comes to almost $1 of profit per player. No other system came close.

THE BAT and Razzball did well inside the top 100 players. Steamer did well with players 100-150. Pod did fabulously with players 150-200, and then ATC dominated thereafter.

The fact that ATC was able to gain $247 of value across all players after pick 300 tells us that ATC was able to identify a large amount of value in players who were either not drafted by other systems, or who were essentially final round selections.

Looking up and down the player pool, there is no universal winner. Overall, ATC seemed to be the most accretive for fantasy owners, while THE BAT and Pod were the runners-up for 2020. ZiPS was clearly the projection system to avoid. Steamer was the pure frequency winner of 2020.

Three-Year Results

A nice feature of this analysis is that the currency is auction value dollars. These figures are annualized and are comparable year over year. Yes, the variance of 2020 makes its distribution somewhat different from normal, but in the end, even ’20 was still a zero-sum game.

In other words, a victory in 2020 was worth just as much as a victory in 2019. If you regularly play in a fantasy baseball home league where the winner receives $600 – the same was true this year. Finishing 1st in 2020 was worth the same as finishing 1st in 2019. A $100 buy-in for 2019 was a $100 buy-in for 2020. Dollars of profit are comparable across seasons, even if one season featured a slightly different player or statistical landscape, or was of a shorter duration.

Let’s take a look at results over the past three seasons on a profit per player purchased basis. Doing so will smooth out projection system results over a longer period of time (2018-2020).

Over the past three seasons, we can see that ATC has offered the most profit on a per-player-purchased basis. In fact, at many player depths, ATC has been twice as good as most of the other projection models.

THE BAT has shown the 2nd best three-year returns on investment. Pod ranks as the 3rd best, with Steamer ever so slightly right behind. Pod and Steamer have a remarkably similar shape of profitability, especially in the middle of the curve. I find this interesting, being that they are completely different types of projection systems. Pod is a fully manually projected set, while Steamer is an automated/formulaic one.

I will argue that the most crucial range for a projection system to find bargain players is the 100-300 player range. That is where fantasy leagues are won. ATC has been far ahead of the pack in that area on a three-year basis. While any one projection-season can have aberrations at various parts of the curve, ATC has shown the best long-term return on investment.

Finally, note that each projection system (at almost every level) has beaten the “All Players” perspective over time. Using projection systems is clearly the way to draft, as opposed to simply going by ADP/AAV alone.

Most Profitable Players

Just for fun, let’s take a look at a few of the most profitable hitters and pitchers from 2020, along with the projection systems that projected their highest values.

2020 Most Profitable Players
Player              AAV   2020 Earnings   Profit   Top Projected System   Runner-Up System
Luke Voit             6              36       30   ATC                    Razzball
Teoscar Hernandez     1              31       30   ZiPS                   THE BAT
Trent Grisham         1              26       25   Steamer                ATC
Jose Abreu           19              43       24   Razzball               ATC
Wil Myers             5              28       23   THE BAT                Pod
A.J. Pollock          1              24       23   THE BAT                ATC
Mike Yastrzemski      2              25       23   Pod                    ATC
Dominic Smith         0              23       23   ZiPS                   Pod
Marcell Ozuna        21              43       22   ZiPS                   THE BAT
Kole Calhoun          1              23       22   ZiPS                   ATC
Devin Williams        0              25       25   ATC                    Razzball
Zach Plesac           1              21       20   THE BAT                ATC
Trevor Bauer         19              38       19   ZiPS                   THE BAT
Jeremy Jeffress       0              17       17   ATC                    ZiPS
Marco Gonzales        0              17       17   ZiPS                   Pod
Kenta Maeda          12              29       17   Pod                    THE BAT

As you can see, each projection system had several highly profitable player picks this year. ATC shows up 9 times, THE BAT and ZiPS each appear 7 times, and Pod shows up 5 times.

Assorted Notes & Method Limitations

  • ZiPS does not project saves. For the ZiPS projections, I simply used the Steamer saves.
  • As mentioned in the past, playing time estimates are vitally important to a projection system, and are directly factored into this method. Systems with poor playing time figures but good rate stats per unit of playing time are penalized in this analysis; it is the raw counting stats that are used to evaluate player values.
  • A possible future analysis using this game theory method would be to evaluate projection set rates alone. I would do so by holding one set of playing time projections constant. For example, I would use ATC playing time for all systems, and apply each projection’s rates. We could also test playing time in a similar manner by using one set of rates (say Steamer), and varying the PT. All said, a fantasy baseball valuation encompasses both elements, which is exactly what this analysis seeks to evaluate.
  • The intrinsic Z-Score hitter/pitcher split of the auction dollars between the various systems was much closer together than in prior seasons. ZiPS was the heaviest towards pitchers at 63/37, while Steamer was more tilted towards hitters at 67/33.
  • The player value distribution for ZiPS was far more realistic this season than it has been in the past. In prior years, ZiPS would show some players valued over $50. It is hard to tell whether ZiPS made a change, or if it was an artifact of more variability embedded in the short season.
  • In this analysis, the categorization of a “Profitable Player” vs. an “Unprofitable Player” is defined by whether the final accumulated roto value exceeded the initial draft price. However, if a player purchased for $35 returned $34 of value, I would hardly call that a failure. Not only should there be a market pricing curve (as I already have), but there should also be a success curve. A $5 loss on a $40 player should be categorized as a win, whereas a $1 gain on a $2 player could be construed as a loss. I might look into this in a future revision of my methodology.
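As a rough illustration of what such a success curve might look like, here is a short sketch. The breakpoints are entirely invented for illustration and are not part of the methodology used above.

```python
# A hypothetical "success curve": expensive players get some slack before a pick
# is called a failure, while cheap players must clear a surplus. The breakpoints
# below are invented for illustration only.

def required_return(price: float) -> float:
    """Minimum final value for a purchase at `price` to count as a win."""
    if price >= 30:
        return price - 5        # e.g. a $40 player returning $35 is still a win
    if price >= 15:
        return price - 2
    if price >= 5:
        return price            # mid-priced players must roughly break even
    return price + 2            # e.g. a $2 player must return ~$4 to be a win

def is_success(price: float, final_value: float) -> bool:
    return final_value >= required_return(price)

assert is_success(40, 35)       # a $5 loss on a $40 player counts as a win
assert not is_success(2, 3)     # a $1 gain on a $2 player still reads as a loss
```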

Once again, I hope that you have found this game theory method of evaluating projection systems to be different, yet insightful. While other more statistically based methods are certainly valid, no one method is perfect and without limitations. In choosing the projection system(s) to incorporate into your fantasy preparation, this article should add one additional point of reference.

Please comment below if you have any thoughts on either my method or the conclusions drawn.





Ariel is the 2019 FSWA Baseball Writer of the Year. Ariel is also the winner of the 2020 FSWA Baseball Article of the Year award. He is the creator of the ATC (Average Total Cost) Projection System. Ariel was ranked by FantasyPros as the #1 fantasy baseball expert in 2019. His ATC Projections were ranked as the #1 most accurate projection system over the past three years (2019-2021). Ariel also writes for CBS Sports, SportsLine, RotoBaller, and is the host of the Beat the Shift Podcast (@Beat_Shift_Pod). Ariel is a member of the inaugural Tout Wars Draft & Hold league, a member of the inaugural Mixed LABR Auction league and plays high stakes contests in the NFBC. Ariel is the 2020 Tout Wars Head to Head League Champion. Ariel Cohen is a fellow of the Casualty Actuarial Society (CAS) and the Society of Actuaries (SOA). He is a Vice President of Risk Management for a large international insurance and reinsurance company. Follow Ariel on Twitter at @ATCNY.
