# 2018 Projection Systems Comparison – A Game Theory Approach

__Introduction:__

Today, I will introduce a game theory approach for comparing baseball projection systems. This venture will not be a typical statistical analysis. I won’t be using any Chi-squared tests, nor will I calculate Type I or Type II errors. I won’t be evaluating MSEs or the like.

Instead, I will look to determine the profitability potential of each projection system by simulating what *would* have happened in a fantasy auction draft. In short, I’ll play a game.

*What do I mean by this?*

Let’s think about what happens in a real fantasy [pun intended] draft auction.

Suppose that Derek Carty himself (or anyone exclusively using THE BAT) walks into a roto auction prior to the 2018 baseball season – say it is the NFBC. Derek takes his projections and runs them through a valuation method to get *auction prices*. He generates a list that looks something like this …

__THE BAT Projected Values:__ Clayton Kershaw 45, Mike Trout 43, Chris Sale 42, Giancarlo Stanton 40, Corey Kluber 40, Max Scherzer 35, Jose Altuve 34, Nolan Arenado 32, … , Greg Bird 1, Kurt Suzuki 1, etc.

Derek will then establish a *price point* that he is willing to pay for each player. There might be a premium that he will pay for the top ones, and a discount that he expects to *save* on lower cost players.

Players can now be classified into two simple categories:

1. Players that are too expensive to buy. [Auction Cost > Price Point]
2. Players that can be purchased. [Price Point >= Auction Cost]

#1 is easy to deal with – Derek won’t purchase them. Typically, some 65-80% of all auctioned players will fall under this category.

For #2 – We can break this down a little further:

- Players that Derek will purchase [exactly 1 out of 15, or 6.7% of all auctioned players].
- Players that Derek will not purchase [somewhere between 13-28% of all auctioned players].

Derek cannot buy every player that appears to be a bargain – he only has room for 23 players on his roster. Many players will have a similar auction price and price point. If his valuations are correct, he should be indifferent among any of these specific players.
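The buy/pass decision above reduces to a single comparison per player. Here is a minimal sketch, using hypothetical player names and dollar figures (not from any actual projection system):

```python
# Classify players by comparing a system's price point to the auction cost.
# All names and values below are hypothetical, for illustration only.
players = [
    {"name": "Player A", "price_point": 44, "auction_cost": 47},
    {"name": "Player B", "price_point": 12, "auction_cost": 10},
    {"name": "Player C", "price_point": 1,  "auction_cost": 1},
]

for p in players:
    # Category 1: too expensive (Auction Cost > Price Point) -> pass.
    # Category 2: purchasable (Price Point >= Auction Cost) -> candidate to buy.
    p["decision"] = "pass" if p["auction_cost"] > p["price_point"] else "buy"

print([(p["name"], p["decision"]) for p in players])
```

Note that "buy" only marks a candidate: with 23 roster spots, a system still cannot purchase every apparent bargain.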

__Measuring Success:__

Let’s say that Mike Podhorzer (or someone exclusively using Pod) is one of the other teams competing in this auction against Mr. Carty. It is late in the auction, and both Mike & Derek need an outfielder.

Let’s take a look at a few possible OF choices:

Player | THE BAT | Pod | Auction Value | 2018 Earnings | Profit |
---|---|---|---|---|---|
David Peralta | 5 | 6 | 3 | 22 | 19 |
Mark Trumbo | 5 | 0 | 3 | 0 | -3 |
Aaron Altherr | 5 | 0 | 1 | -5 | -6 |
Josh Reddick | 6 | 9 | 2 | 4 | 2 |

David Peralta obviously turned out to be the best player of the group for 2018. A team purchasing him would have amassed a $19 profit.

__Question:__ *Which fantasy team is in the best position here – Mike’s or Derek’s?*

The answer is *Mike’s Team*.

 | THE BAT | Pod |
---|---|---|
# of Acceptable Players to Buy | 4 | 2 |
# of Profitable Players | 2 | 2 |
# of Unprofitable Players | 2 | 0 |
Total Gains | 21 | 21 |
Total Losses | -9 | 0 |
Total Profit | 12 | 21 |
% Profitable | 50% | 100% |
% Unprofitable | 50% | 0% |
Avg Gain Per Profitable Player | 10.5 | 10.5 |
Avg Loss Per Unprofitable Player | -4.5 | 0 |
Average Profit Per Player | 3 | 10.5 |

Derek has all four players similarly priced. He will end up buying one of the four players, likely randomly. If he happens to buy Peralta or Reddick, he will profit. If he happens to buy Trumbo or Altherr, he will suffer a loss. On average, he stands to make a $3 profit.

Mike’s random purchase, on the other hand, could only be Peralta or Reddick. Since his valuations of the other two are below replacement, Mike’s projections ensure that he will never purchase Trumbo or Altherr. On average, Mike stands to make a $10.5 profit.

Looking only at these four outfielders, anyone employing the Pod projections instead of THE BAT would have earned more expected profit. For this limited example, I would rather have been using Mike’s projections than Derek’s.
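The expected-profit figures above can be reproduced with a short script. This is an illustrative sketch using only the four outfielders and the dollar figures from the table (a system is treated as willing to buy any player whose projected value is at least the auction value):

```python
# Expected profit per purchased player for each system, using the four
# outfielders from the table above (all figures from the article).
ofs = [
    # (name, THE BAT value, Pod value, auction value, 2018 profit)
    ("David Peralta", 5, 6, 3, 19),
    ("Mark Trumbo",   5, 0, 3, -3),
    ("Aaron Altherr", 5, 0, 1, -6),
    ("Josh Reddick",  6, 9, 2, 2),
]

def expected_profit(system_idx):
    # Pool of players the system would consider buying (value >= auction cost);
    # the system picks one of them at random, so average the profits.
    pool = [p for p in ofs if p[system_idx] >= p[3]]
    return sum(p[4] for p in pool) / len(pool)

the_bat = expected_profit(1)   # all four buyable: (19 - 3 - 6 + 2) / 4 = 3.0
pod = expected_profit(2)       # only Peralta and Reddick: (19 + 2) / 2 = 10.5
print(the_bat, pod)
```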

__Projection System Comparison:__

Let’s extend the above process, for all auctioned players, for various 2018 projection systems.

Projection System | Creator |
---|---|
Pod | Mike Podhorzer |
ATC | Ariel Cohen |
THE BAT | Derek Carty |
Steamer | Jared Cross |
ZiPS | Dan Szymborski |

For 2018, I will be comparing the ATC, THE BAT, Steamer and ZiPS projection systems, which were available earlier this year on FanGraphs. In addition, I will also include Mike Podhorzer’s Pod projections, which he has graciously provided to me.

__Methodology:__

1) Start with the raw projections data (AB, H, HR, IP, K, etc.). For this analysis, I have assembled each projection system’s stats as of about March 20, 2018.

2) Produce a projected value for each player, by system. For this valuation, I use my own auction calculator, which follows a Z-Score methodology (similar to FanGraphs’ auction calculator). I assume NFBC standard settings (15 teams, mixed AL/NL, $260 budget and positions – 9 P, 2 C, 1B, 2B, 3B, SS, CI, MI, 5 OF, U). I also assume that players are eligible only at their original 2018 positions, plus any positions that they were expected to gain in the first 2 weeks of the season.

3) Adjust the projected player values to obtain a Price Point for each player. For this, I have assumed the following:

Projected Value | Price Point |
---|---|
< $1 | Do Not Buy |
$1 to $4 | $1 |
$5 to $9 | $3 Discount |
$10 to $14 | $2 Discount |
$15 to $19 | $1 Discount |
$20 to $29 | At Cost |
$30 to $39 | $1 Premium |
>= $40 | $2 Premium |

For example, if Steamer projects a player for $17 – I assume that the maximum that it would pay for the player is $16. If it projected a player for $41 – I assume that it would pay up to $43. Any player below replacement will not be purchased in this exercise.
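The price point adjustment in step 3 can be expressed as a small function. This is a sketch; the bucket boundaries come directly from the table above:

```python
# Step 3 as code: map a system's projected dollar value to the maximum
# price it would pay, using the bucket table from the article.
def price_point(projected_value):
    if projected_value < 1:
        return None                   # below replacement: do not buy
    if projected_value <= 4:
        return 1                      # end-game players go for $1
    if projected_value <= 9:
        return projected_value - 3    # $3 discount
    if projected_value <= 14:
        return projected_value - 2    # $2 discount
    if projected_value <= 19:
        return projected_value - 1    # $1 discount
    if projected_value <= 29:
        return projected_value        # at cost
    if projected_value <= 39:
        return projected_value + 1    # $1 premium
    return projected_value + 2        # $2 premium

print(price_point(17), price_point(41))  # 16 43, matching the examples above
```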

4) Obtain an Auction Price. We will use an average auction value (AAV) for each player. For this, I am using the average of a set of actual NFBC online auctions run by Andy Saxton that Todd Zola of Mastersball had provided.

For those players who weren’t drafted or who were only drafted as a reserve, we will assume that they will not be bought in this exercise. Even though ZiPS projected Miguel Andujar for $4, and ATC projected Pedro Strop for $5 (both being profitable in 2018), we won’t include either of these players, as they were undrafted.

5) Compute the 2018 full season player values. This represents what a player was worth in 2018. It is computed using the same methodology as above in #2.

Note that for all of the above, I have let the Z-Score method determine the inherent Hitter/Pitcher split of the total auction dollars. This will differ from the NFBC AAVs, which are typically more pitcher heavy (the H/P split was 63/37 this past year).

In a recent Twitter poll that I conducted, I asked whether fantasy players typically match league H%/P% tendencies or do their own thing. 78% either voted to use their own splits (58%) or to only slightly skew their tendencies towards the league (20%). Thus, for this analysis I have decided not to adjust any H/P splits; I let the Z-Score methodology run its course. [I have also run this analysis fully adjusting H/P splits to match the NFBC AAV, and there wasn’t much of a difference. Some players who were overvalued became undervalued for a system, and vice versa.]

6) Players were then “purchased” for a system if their Price Point was higher than the player’s AAV.

Terminology – We will identify a player as “purchased” as long as they appeared to be a bargain for the given system.

I then tracked the number of players purchased who were profitable, the number of players purchased who were unprofitable, and their respective gains and losses.
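Steps 4 through 6 can be sketched as a single pass over the player pool. The function and field names below are my own, and the sample rows are hypothetical; the purchase rule (Price Point above AAV) and the -$5 cap on final values (an adjustment described in the Results section) are from the article:

```python
# "Purchase" a player when the system's price point exceeds the AAV,
# then tally gains and losses against 2018 final values.
def simulate(players):
    gains = losses = n_profit = n_loss = 0
    for p in players:
        if p["price_point"] is None or p["aav"] is None:
            continue  # below replacement, or undrafted: never purchased
        if p["price_point"] <= p["aav"]:
            continue  # not a bargain for this system
        # Cap the 2018 final value at -$5 so dropped/injured players
        # don't skew the loss totals.
        profit = max(p["earned_2018"], -5) - p["aav"]
        if profit > 0:
            n_profit += 1
            gains += profit
        else:
            n_loss += 1
            losses += profit
    return {"profitable": n_profit, "unprofitable": n_loss,
            "gains": gains, "losses": losses, "total": gains + losses}

# Hypothetical sample rows, for illustration only.
sample = [
    {"price_point": 16, "aav": 12, "earned_2018": 25},   # bargain, profitable
    {"price_point": 8,  "aav": 5,  "earned_2018": -20},  # bargain, bust (capped)
    {"price_point": 3,  "aav": 7,  "earned_2018": 30},   # too expensive: skipped
]
print(simulate(sample))
```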

__Results:__

First, let’s look at the number of players that each system would “buy.” To get a sense of *where* the projection systems purchase their players – displayed are the number of players that would be bought by each system, for the top N cumulative players, ranked by AAV.

ZiPS seems to have bought more top 100 players than any other system. ATC bought more lower end players. THE BAT focused on middle valued players. In the end, each system gave the green light to purchase roughly 30% of the total player pool.

Onto profitability …

__In the below:__

GREEN colored figures represent more successful results. RED colored figures represent less successful results. The “All players” column displays the figure for purchasing every player.

Out of the most expensive 50 players of the auction, only 10 turned a profit. This is not atypical. Profitable players inside the top 50 include Mookie Betts, Francisco Lindor, Jose Ramirez and Justin Verlander.

Of the top 100 players, ZiPS identified 6 of the total 22 players who were profitable, while ATC only identified 2. However, ZiPS purchased 35 players while ATC only purchased 9. ATC’s success rate was 22%, while ZiPS’s was only 17%.

In total, THE BAT was able to identify 72 of the 153 profitable players – the most of any system – and its success rate was an excellent 49%. Only ATC was slightly more successful, at 50%.

Now let’s look at players purchased for a loss. We expect most of the top players to be unprofitable. The most expensive 100 players turned a loss 78% of the time.

Steamer had the most trouble of all systems in the top 100: 17 of its 19 players in this range were purchased at a loss – an 89% failure rate. I would argue that the top 50 or 100 is more important than the All (459) range here. It is like the adage – you can’t win a draft in your first few picks, but you can lose it there.

By the look of the saturated green colors above, ATC was the best system at avoiding failed picks in 2018.

Now onto the magnitude of player acquisition …

Previously, we only looked at the sheer quantity of players purchased; but what is also important is the value gained from these players. Buying 4 players who each earn $1 of profit is not as helpful as buying 1 player who earns $20 of profit.

The gain on a per player basis, to me, is the most important item to look at. THE BAT is clearly the winner here, with ATC solidly in 2nd place. THE BAT was able to scoop up an average profit of $9.0 on the top 300 most expensive players. Steamer and ZiPS had the lowest profitability, with Pod smack in the middle.

For the unprofitable players, one important adjustment has been made to the figures: 2018 final values have been capped at -$5 (minus 5 dollars). A player who was injured all season, or who was clearly droppable, shouldn’t be penalized with an overly negative 2018 final value, as this would skew the results. The player should still be dinged for being below replacement, so a cap of -$5 was chosen and imposed.

Again, I draw your attention to the average losses per player, rather than to the total. For the top 50, ATC purchased the fewest players, yet on average, its losses would cost you $19.5 per player. ZiPS made the most purchases, but they only lost you $13.6 on average. For the next 50 (50 to 100), ATC was the best system, as its full top 100 total beats 2nd-place Pod by $1.6.

Later in the draft, it is ATC and THE BAT who were the best at avoiding major losses, with Pod coming in 3rd.

Now, let’s look at total profit for the various systems. All of the dollars gained are added up, and all of the dollars lost are subtracted out. It is the total summary of system profitability.

It seems odd at first that there are so many negative values here, but it does make sense. There were 91 undrafted players who ended up above replacement, pushing value out of our original auction pool.

Looking up and down the player pool, ATC is the clear winner. THE BAT is a close runner-up. ATC earned an average per player profit for fantasy auctions of $1.1. THE BAT earned $1.0, Pod lost $0.2, Steamer lost $0.6 and ZiPS lost $2.8.

To get a sense of what that means, on average, in a $260 auction – ATC would have drafted $285 of total value, THE BAT $283, Pod $255, Steamer $246, and ZiPS $196.
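Those roster-value figures follow directly from the per-player averages: the $260 budget plus 23 roster spots times the average profit per player, rounded to the nearest dollar:

```python
# Roster value = $260 budget + 23 roster spots * average profit per player.
# Average-profit figures are from the article.
avg_profit = {"ATC": 1.1, "THE BAT": 1.0, "Pod": -0.2, "Steamer": -0.6, "ZiPS": -2.8}
roster_value = {system: round(260 + 23 * p) for system, p in avg_profit.items()}
print(roster_value)
```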

Finally, for fun – let’s look at the most profitable players for 2018 (players with an AAV only), with the projection systems that had the highest values for them.

Player | AAV | 2018 | Profit | Top Projected System | Runner-Up System |
---|---|---|---|---|---|
Blake Snell | 7 | 35 | 28 | Steamer | ZiPS |
Trevor Story | 11 | 38 | 27 | THE BAT | Pod |
Javier Baez | 11 | 37 | 26 | THE BAT | ATC |
Blake Treinen | 10 | 31 | 21 | ATC | Pod |
Christian Yelich | 26 | 46 | 21 | ZiPS | Steamer |
Michael Brantley | 3 | 23 | 21 | THE BAT | Steamer |
Mitch Haniger | 5 | 25 | 20 | ATC | THE BAT |
David Peralta | 3 | 23 | 20 | Steamer | Pod |
Scooter Gennett | 5 | 25 | 19 | THE BAT | ATC |
Matt Carpenter | 7 | 24 | 18 | THE BAT | Pod |
Eugenio Suarez | 8 | 25 | 17 | THE BAT | ZiPS |
Stephen Piscotty | 2 | 18 | 16 | Steamer | ATC |
Josh Hader | 3 | 19 | 16 | THE BAT | ATC |
Yasmani Grandal | 4 | 19 | 15 | ZiPS | ATC |
Miles Mikolas | 2 | 18 | 15 | Pod | Steamer |
Matt Chapman | 3 | 18 | 15 | Steamer | ATC |

As you can see, each projection system had several highly profitable player picks this year. ATC and THE BAT show up 8 times each, Steamer shows up 7, Pod 5 and ZiPS 4 times.

__Assorted Notes & Method Limitations:__

- There are many methods of determining the adequacy and virtues of a projection system. Each method has its own biases.
- ZiPS does not project saves. For the ZiPS projections, I simply used the Steamer saves.
- Player time estimates are vitally important to a projection system. They are directly factored into this method. Systems which have poor playing time figures are penalized here, as the raw counting stats are used to evaluate player values. A system that has the perfect SB/PA rate but missed badly on plate appearances would fare far worse than a system which missed on the SB rate but got the playing time correct.
- This analysis was done irrespective of player position, or statistical balance – i.e., I didn’t make sure that systems had to balance either condition.
- If a system projected 48 SBs for Khris Davis with 0 HRs, as opposed to 48 HR and 0 SB, it might still come up with a similar price valuation in this exercise. You can certainly argue that such a projection missed horribly. You could also argue that since the buying decision would still be the same, it doesn’t matter. That’s for a later debate.
- ZiPS had the lowest H%/P% split, but it was the closest to the NFBC AAV split. It was the only system with a lower H%/P% split than the NFBC. THE BAT was by far the highest, with a 72/28 split.
- This game theory-based analysis is a rules-based, “push-button” process. A player will be purchased if circumstances meet certain criteria. [How else could we automate this?]
- ZiPS valued Clayton Kershaw at $57. A reasonable fantasy player would likely not spend $57 on his top pitcher. Some of the projected value figures here might need to be adjusted or smoothed to more closely reflect the auction market curve. Although I kept the value-to-price-point conversion uniform, the true conversion likely differs by system.

I hope that you find this game theory method of evaluating the 2018 projection systems both different and insightful. I am interested to hear your thoughts and/or conclusions on both the method and results.

Ariel is the 2019 FSWA Baseball Writer of the Year. Ariel is also the winner of the 2020 FSWA Baseball Article of the Year award. He is the creator of the ATC (Average Total Cost) Projection System. Ariel was ranked by FantasyPros as the #1 fantasy baseball expert in 2019. His ATC Projections were ranked as the #1 most accurate projection system over the past three years (2019-2021). Ariel also writes for CBS Sports, SportsLine, RotoBaller, and is the host of the Beat the Shift Podcast (@Beat_Shift_Pod). Ariel is a member of the inaugural Tout Wars Draft & Hold league, a member of the inaugural Mixed LABR Auction league and plays high stakes contests in the NFBC. Ariel is the 2020 Tout Wars Head to Head League Champion. Ariel Cohen is a fellow of the Casualty Actuarial Society (CAS) and the Society of Actuaries (SOA). He is a Vice President of Risk Management for a large international insurance and reinsurance company. Follow Ariel on Twitter at @ATCNY.
