Grab the David’s sunflower seeds and a baseball cap: it’s MLB season. While we’re all eager to see how the season unfolds over the next 2,400 games or so, let’s not forget how this year’s players ended up on the field in their respective jerseys for the 2019 season.
Scouts are the gatekeepers for most players. Tasked with scouring the world for talent to recruit the best players for their professional teams, they analyze a player’s every move. Prior to today’s analytics-heavy environment, scouts relied on old-school ways of collecting these various statistics. When it comes to recruiting young players, every team has had more misses than hits. Data from Baseball Prospectus reveals that from 2006 to 2008, only 13.6% of players who entered the minor leagues ended up making the majors.
The Ringer recently wrote a series on the data the Cincinnati Reds scouts collected and utilized. In retrospect, the Reds have lessons to learn about how to collect and leverage data, especially for one of their key metrics, Overall Future Potential (OFP)1. OFP is a team-based standard used by scouts to estimate an amateur player’s potential professional career outcome. While this was just one data point for the Reds, it was poorly measured off the bat, resulting in skewed data and poor data-based decisions for the team.
As the manufacturing sector enters the fourth industrial revolution, highly focused on digitization and performance analytics, these baseball lessons apply there, too. While a scout needs to be a “detective, bloodhound, and diplomat”, a manufacturing leader needs the same skills to navigate the challenges of shop floor data today. Understanding how data is derived, and where the flaws are in the collection process, will help leaders leverage data effectively for decision-making.
Scouts weren't required to document information on the players they didn't like, resulting in a lack of OFP grades at the bottom end of the 20-80 scale and an incomplete picture of the data. While this saved time, it limited a scout's records to only positively reviewed players, completely omitting negative opinions and making the scout's evaluation methods difficult to assess.
While key data points should be prioritized, the scope of the data is equally important. Manufacturers need to weigh the pros and cons of the data collection methods used today to understand the comprehensiveness of the results generated. For example, manufacturers should determine whether overall equipment effectiveness is measured by a machine and/or production line, to give the best picture of performance.
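The OEE question above can be made concrete. A common formulation (not specific to any one manufacturer) computes OEE as availability × performance × quality; the sketch below uses hypothetical numbers to show why machine-level OEE alone can paint too rosy a picture of a serial production line.

```python
# Illustrative sketch: OEE = availability * performance * quality.
# Machine names and factor values below are hypothetical.

def oee(availability, performance, quality):
    """Overall equipment effectiveness as the product of its three factors."""
    return availability * performance * quality

# Hypothetical machine-level factors for a three-machine serial line.
machines = {
    "press":   oee(0.95, 0.90, 0.99),
    "welder":  oee(0.92, 0.88, 0.97),
    "painter": oee(0.90, 0.93, 0.98),
}

for name, score in machines.items():
    print(f"{name}: {score:.1%}")

# In a serial line, losses compound: multiplying the machine OEEs gives a
# line-level figure well below any single machine's score.
line_oee = 1.0
for score in machines.values():
    line_oee *= score
print(f"line (serial, compounded): {line_oee:.1%}")
```

Each machine here looks healthy in isolation, while the compounded line-level figure tells a different story, which is exactly why the scope of measurement matters.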
Scouts often misused the OFP scale when evaluating players. In the original OFP scale, one standard deviation was a difference of ten points, meaning a scout could account for 99.7% of the population by rating anyone within three standard deviations of the mean. In reality, scouts were more conservative in their grading and rated players very closely to one another. The grades lacked a normal distribution, resulting in grade compression across players.
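Grade compression is easy to see in the numbers. A minimal sketch, using a hypothetical set of grades from an overly conservative scout, shows how the observed spread can collapse far below the ten-point standard deviation the scale was designed around:

```python
# Sketch of grade compression on the 20-80 scale, where one standard
# deviation was intended to be 10 points. Grades below are hypothetical.
import statistics

intended_sd = 10

# A conservative scout rates nearly everyone "about average".
compressed_grades = [48, 50, 50, 52, 50, 49, 51, 50, 52, 48]

actual_sd = statistics.stdev(compressed_grades)
print(f"intended SD: {intended_sd}, actual SD: {actual_sd:.1f}")
# An observed SD far under 10 means the grades barely separate players:
# the distribution no longer matches the scale's normal-curve design.
```

With almost every grade pinned near 50, the metric stops doing its job of distinguishing future stars from organizational filler.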
It’s important to reinforce metrics in the way they are intended to be collected. Setting best practice within teams will help drive consistency in data and allow for improved assessment of the issue at hand.
Data is used to inform decision-making, but how do we know if the metrics actually correlate with the anticipated outcome? The Reds’ reports for pro and amateur players went beyond the standard set of skills and tools to include metrics such as “Hit Style” that had a strong correlation to career outcomes. Other statistics, like “Pitcher Aggression”, had very minimal correlation. In hindsight, this is partially due to the vague terms (things like careless, competitor, timid, bulldog) that players were rated on as part of the metric. Scouts had varying opinions on how to evaluate this, ultimately resulting in skewed data.
Understand the makeup of key metrics: how reliable are the input methods? How is the data collected? What other variables could be contributing to the outcome? When possible, manufacturers should proactively seek to confirm that their metrics correlate with the expected outcomes. For example, how is labor cost calculated as a percentage of COGS? Overhead cost as a percentage of COGS? What other factors are influencing the correlation and outcomes?
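Checking whether a metric tracks an outcome is a small computation, not a leap of faith. The sketch below uses entirely hypothetical data to contrast a well-defined metric with a vaguely defined one, in the spirit of the “Hit Style” versus “Pitcher Aggression” comparison above:

```python
# Sketch: before trusting a metric for decisions, measure how strongly it
# tracks the outcome you care about. All data below is hypothetical.
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical ratings for eight players against the same career outcome.
outcome      = [1, 2, 2, 3, 4, 4, 5, 6]
hit_style    = [2, 2, 3, 3, 4, 5, 5, 6]  # well-defined, tracks the outcome
pitcher_aggr = [4, 1, 5, 2, 4, 1, 5, 3]  # vague terms, noisy ratings

print(f"hit style r:          {pearson_r(outcome, hit_style):.2f}")
print(f"pitcher aggression r: {pearson_r(outcome, pitcher_aggr):.2f}")
```

A metric whose correlation with the outcome hovers near zero, as the vaguely defined one does here, is adding noise to the decision rather than signal.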
1: The OFP (overall future potential) metric is composed of 5 key metrics related to hitting, power, speed, throwing, and fielding. Players are evaluated on a 20-80 scale for each, and the grades are added up to get the player's overall future potential. Ideally, the scale accounts for 99.7% of players within 3 standard deviations of the mean.