Are Foul Balls Good or Bad?

I’ve had the question written on my whiteboard forever: are foul balls good or bad? It’s a glass-half-empty, glass-half-full conundrum. The former group might think a foul ball is simply a barely missed opportunity for in-play contact. The latter group might view that same event as a positive: that the poor quality of contact on a foul ball is indicative of an ability to induce poor contact in general, and that it’s not inherently different from a swinging strike.

In my heart of hearts, it makes more sense to me that a foul ball is closer to in-play contact than not. Considering the diameters of a bat and a ball, and the nearly physically impossible feat of connecting the two in motion, a foul tip misses by mere inches, whereas a swinging strike, fully sans contact, can miss by feet. From the pitcher’s standpoint, then, a hitter getting a piece of the ball would seem to make the glass look more half-empty than otherwise.

I wanted to finally tackle the subject, but I didn’t really know how. I first looked at the outcome of the pitch directly following foul and non-foul pitches, but the results were noisy (although, to be fair, I may have missed clear patterns in that noise). I imagine the effects of a foul ball are not exclusive to the next pitch; rather, they may manifest two or three or even four pitches deeper into the plate appearance. A full pitch-sequencing analysis might be prohibitively difficult, at least for someone like me who lacks the brainpower or mental stamina to pull it off.

Instead, I opted for something a little easier yet arguably just as telling. Using all Statcast data from the start of the 2017 season, I grouped every plate appearance into bins according to how many foul balls it generated. Then I looked at two very basic outcomes, strikeout rate (K%) and weighted on-base average (wOBA), expressed as averages within each foul ball bin.
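
For the curious, here’s roughly what that grouping looks like in pandas. To be clear, this is a sketch of the approach rather than my actual code; it assumes a Baseball Savant-style pitch-level export, and the file name, column handling, and foul definition are all placeholders.

```python
import pandas as pd

# Sketch only: assumes a Savant-style pitch-level export (one row per
# pitch) with columns such as game_pk, at_bat_number, pitch_number,
# description, events, woba_value, and woba_denom. File name is a
# placeholder.
pitches = pd.read_csv("statcast_2017_onward.csv")
pitches = pitches.sort_values(["game_pk", "at_bat_number", "pitch_number"])

# Flag foul balls. Whether foul tips and foul bunts belong here is a
# judgment call; this sketch counts plain fouls only.
pitches["is_foul"] = pitches["description"].eq("foul")

# Collapse pitches into plate appearances. The outcome (events) and the
# wOBA bookkeeping live on the final pitch of each PA, so "last"
# (which skips nulls) picks them up.
pa = pitches.groupby(["game_pk", "at_bat_number"], as_index=False).agg(
    fouls=("is_foul", "sum"),
    n_pitches=("pitch_number", "size"),
    event=("events", "last"),
    woba_value=("woba_value", "last"),
    woba_denom=("woba_denom", "last"),
)

# Bin foul counts at 4+, then compute wOBA and K% within each bin.
# (Strikeout double plays are ignored here for brevity.)
pa["foul_bin"] = pa["fouls"].clip(upper=4)
summary = pa.groupby("foul_bin").apply(
    lambda g: pd.Series({
        "wOBA": g["woba_value"].sum() / g["woba_denom"].sum(),
        "K%": g["event"].eq("strikeout").mean(),
    })
)
print(summary)
```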

The results, at first, were damning:

All PAs
# Fouls    wOBA    K%
0          .368    12.0%
1          .288    31.3%
2          .258    39.9%
3          .283    38.5%
4+         .300    35.0%
SOURCE: Statcast

Pretty clearly, plate appearances with zero foul balls result, on average, in significantly lower strikeout rates and better hitter production. However, it occurred to me the results for plate appearances with zero foul balls would be biased by virtue of plate appearances ending before a pitcher even has a chance to strike out the hitter — that is, in two pitches or fewer. (Plate appearances with one foul ball likely face a similar bias, but less dramatically so, I presume.)

Accordingly, I removed all plate appearances that lasted fewer than three pitches, implicitly allowing pitchers the opportunity to strike out the hitter. It’s a small, imperfect adjustment, but it’s better than nothing. The results are shown below. (Note that, because plate appearances with two or more foul balls take at least three pitches to complete, the outcomes for those rows have not changed.)
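
In terms of the sketch above, that cut is a one-liner (again, assuming the hypothetical pa table from before):

```python
# Keep only plate appearances of three or more pitches, so every
# remaining PA at least theoretically could have ended in a strikeout.
pa_min3 = pa[pa["n_pitches"] >= 3]
```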

PAs ≥ 3 Pitches
# Fouls    wOBA    K%
0          .350    21.3%
1          .281    33.9%
2          .258    39.9%
3          .283    38.5%
4+         .300    35.0%
SOURCE: Statcast

Controlling for this bias certainly helped pitcher outcomes in zero-foul-ball plate appearances, nearly doubling the strikeout rate and shaving a few ticks off the average wOBA. But, all told, it still appears that accruing at least one foul ball in a plate appearance is better for the pitcher than none.

Now, it’s possible that all foul balls are doing at this point is telling us the number of guaranteed strikes per plate appearance. With zero foul balls, a pitcher is guaranteed no strikes; with one foul ball, at least one; with two foul balls, at least two, putting the pitcher on the cusp of a strikeout. It’s no surprise pitcher strikeout rates escalate so rapidly from zero to one to two foul balls.

Alex Fast of PitcherList (with whom I ping-ponged ideas for this post) asked what these numbers look like exclusively in two-strike and full counts. Normalizing the number of strikes (and balls) within each foul ball bin helps address the bias discussed in the previous paragraph.
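
One way to implement Alex’s cut, continuing the same hypothetical sketch: keep only plate appearances that saw at least one pitch thrown in a two-strike (or full) count. Savant’s balls and strikes columns record the count before each pitch; whether this exactly matches the cut behind the tables below is my assumption.

```python
keys = ["game_pk", "at_bat_number"]

# PAs that reached a two-strike count at some point.
two_strike_keys = pitches.loc[pitches["strikes"] == 2, keys].drop_duplicates()
pa_two_strikes = pa.merge(two_strike_keys, on=keys)

# PAs that reached a full (3-2) count.
full_count_keys = pitches.loc[
    (pitches["balls"] == 3) & (pitches["strikes"] == 2), keys
].drop_duplicates()
pa_full = pa.merge(full_count_keys, on=keys)
```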

Two-strike counts:

Two Strikes
# Fouls    wOBA    K%
0          .319    33.8%
1          .290    35.5%
2          .287    37.3%
3          .298    35.5%
4+         .312    33.8%
SOURCE: Statcast

The outcomes are much more equitable this time around, although they still slightly favor the pitcher in one- and two-foul plate appearances. Interestingly, the relationship is slightly parabolic: pitcher outcomes improve through one and two foul balls but worsen slightly through three and four (or more). The differences between bins may not be statistically significant, but if they were, it would suggest some element of tug-of-war in the power dynamics between the hitter and pitcher within the plate appearance.

Full counts:

Full Counts
# Fouls    wOBA    K%
0          .392    25.7%
1          .380    27.9%
2          .385    29.4%
3          .389    27.3%
4+         .414    28.6%
SOURCE: Statcast

All outcomes here are significantly worse for the pitcher, given each plate appearance’s proximity to a walk, but they seem more equitable than in any other cross-section of plate appearances. Granted, when you’re in a full count, all a foul ball does is let you live to see another full count. I’m not sure how much meaning can be found here.

Maybe the most important question is not related to two-strike counts and strikeouts. What happens when we look at some form of the inverse — say, hitter production on non-strikeout and non-walk plate appearances? In doing so, we’d be isolating all balls in play to find differences in hitter production.
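
In the sketch’s terms, that amounts to dropping the strikeout and walk events before summarizing; the event strings here are Savant’s, though which fringe events to keep (hit-by-pitches, say) is a judgment call.

```python
# Drop strikeouts and walks, isolating PAs decided by contact.
non_k_bb = pa[~pa["event"].isin(
    ["strikeout", "strikeout_double_play", "walk"]
)]
```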

PAs Ending in Neither K nor BB
# Fouls    wOBA    K%
0          .392    0.0%
1          .384    0.0%
2          .381    0.0%
3          .406    0.0%
4+         .403    0.0%
SOURCE: Statcast

The differences are so small that I would hardly say foul balls affect hitter production on contact one way or the other. Still, the table above includes two-strike counts, during which hitters are inherently disadvantaged. What about all counts with fewer than two strikes, regardless of plate appearance result?
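
One reading of that cut, continuing the sketch: keep plate appearances that never saw a pitch in a two-strike count, which is why only the zero- and one-foul bins survive below. Whether that’s exactly the right filter is my assumption.

```python
# PAs that never reached a two-strike count; reuses keys and
# two_strike_keys from the earlier snippet.
flagged = pa.merge(two_strike_keys, on=keys, how="left", indicator=True)
pa_short = flagged[flagged["_merge"] == "left_only"].drop(columns="_merge")
```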

PAs < 2 Strikes
# Fouls    wOBA    K%
0          .422    0.0%
1          .425    0.0%
SOURCE: Statcast

Effectively no difference.

So, where does that leave us? I thought a foul ball might be some kind of bad omen for a pitcher, as it suggests a hitter was closer to putting a ball into play than not. However, foul balls seem to have no discernible effect on a hitter’s contact quality. Furthermore, the gains a pitcher sees within a plate appearance generally disappear in two-strike counts.

Overall, the results suggest foul balls do little more than guarantee an extra strike for the pitcher, inching him closer to a strikeout, with little influence on what might be called the “power dynamics” of the plate appearance. A foul ball does not put a hitter in more control; if anything, it puts the pitcher in more control, simply by virtue of having one more strike. The foul ball is effectively no different from a swinging or called strike until two-strike counts, when it merely keeps the hitter alive.

Interesting. Boring, but interesting.

As a footnote, there are probably other (better) ways of looking at this. But from a high level, this approach fails to bring resolution to the half-empty/half-full quandary.





