After you’ve decided to make the shift from pen-and-paper evaluations to software, you may face a tough decision: which platform to go with.
The two most reputable companies you will likely consider are TeamGenius (that’s us, hi!) and Skillshark.
Although at the core, both companies focus on helping organizations with player evaluations and tryouts, there is a stark difference in the way each calculates player rankings.
While ranking players might seem like straightforward math, Skillshark deploys a “data normalization method” that, although well-intentioned, produces a different final player ranking than TeamGenius and its player value ranking method.
To demonstrate the differences between how TeamGenius and Skillshark rank players, let’s start with a hypothetical evaluation.
Coach Jim and Coach Steve are evaluating a group of 9 players at soccer tryouts. Both have coached soccer before, have a baseline understanding of the game, and know what to look for when evaluating players.
Coach Jim was assigned to a station that tested each player’s technical ability, and Coach Steve was assigned to a station that tested each player’s shooting ability. Both were instructed to score each player on their respective skill using a scale of 1-10.
The difference between Jim and Steve is that Jim has no problem giving players a low score if they lack ability in a given skill. Steve, however, grew up in Minnesota and doesn’t want to give any player a low score because he’s too “Minnesota nice”. The lowest score he will give is a 7.
Here’s how the evaluation turned out:
You’ll notice that although Jim and Steve have slightly different scoring tendencies, both are consistent and give seemingly fair scores.
The math used to rank players in the above example is what we commonly see organizations use — add up the scores and rank players based on the total.
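That total-score approach can be sketched in a few lines of Python. The scores below are illustrative only, not the ones from the article’s tables:

```python
# Illustrative 1-10 scores (not the article's actual table).
# Jim uses the full range; Steve never scores below 7.
jim = {"Player A": 3, "Player B": 9, "Player C": 6}
steve = {"Player A": 7, "Player B": 10, "Player C": 8}

# Add up each player's scores, then rank from highest total to lowest.
totals = {p: jim[p] + steve[p] for p in jim}
ranking = sorted(totals, key=totals.get, reverse=True)
print(ranking)  # highest total first
```

With raw totals, every evaluator’s point counts the same, so each coach’s judgment carries through to the final ranking unchanged.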
Let’s put that same set of scores through both the TeamGenius and Skillshark player ranking systems.
First up is Skillshark:
They use a data normalization method that rescales each evaluator’s scores based on that evaluator’s own range of scores. You’ll notice that both Jim’s and Steve’s scores are rescaled, even though both scored players on the same 1-10 scale set by the organization.
Coach Steve’s low score of 7 is now equal to 0, and because his range is narrower than Coach Jim’s, each of his rescaled points has an outsized effect on the Total Adjusted Points column.
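Skillshark’s exact formula isn’t published here, but the behavior described matches a standard min-max rescaling within each evaluator’s own observed range. A minimal sketch of that assumption:

```python
def rescale(scores):
    """Min-max rescale one evaluator's scores to 0-1 using that
    evaluator's own lowest and highest marks.  This is an assumed
    formula that matches the described behavior, not Skillshark's
    published math."""
    lo, hi = min(scores.values()), max(scores.values())
    return {p: (s - lo) / (hi - lo) for p, s in scores.items()}

# Steve's narrow 7-10 range: his lowest score of 7 becomes 0,
# even though a 7 was meant to be a decent mark on the 1-10 scale.
steve = {"Player A": 7, "Player B": 10, "Player C": 8}
print(rescale(steve))
```

Notice that a 1-point gap in Steve’s scores moves a player a third of the rescaled range, while the same 1-point gap from an evaluator who used the full 1-10 scale moves a player only a ninth of it.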
Next, here is the same data set using TeamGenius:
TeamGenius uses a player value ranking method that calculates the average score for every criterion on a per-athlete basis. Each athlete then receives an overall score on a scale of 0-100, which spreads the data out for easier comparison. For example, if one of the athletes above had received a 10 on both skills, they would have received an overall score of 100.
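As a rough sketch, assuming the 0-100 score is simply the average of the per-criterion averages scaled by 10 (the exact scaling isn’t spelled out in this article):

```python
def overall_score(criterion_scores):
    """criterion_scores maps each criterion to the list of 1-10 scores
    the athlete received for it.  Average each criterion, average the
    averages, then scale the 10-point result to 0-100.  The scaling
    step is an assumption consistent with the 10-and-10 -> 100 example,
    not TeamGenius's published formula."""
    criterion_avgs = [sum(v) / len(v) for v in criterion_scores.values()]
    return sum(criterion_avgs) / len(criterion_avgs) * 10

# A perfect 10 on both skills yields the article's example of 100.
print(overall_score({"technical": [10], "shooting": [10]}))  # 100.0
```

Because every score is averaged on the scale the organization set, no single evaluator’s range compresses or inflates anyone else’s marks.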
Giving each player a 0-100 score helps organizations make tough decisions when there are large numbers of players, multiple evaluators, and criteria involved.
Although not shown in this example, TeamGenius handles measurable (objective) test results, like a 40-yard dash, by age group to better incorporate them into an overall player ranking. This is where data normalization makes sense, because a good 40-yard dash time for a ten-year-old isn’t a good benchmark for an eighteen-year-old.
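One way that kind of age-grouped handling can be sketched is scoring each time against its own age group’s range. This is a hypothetical within-group min-max scoring, not TeamGenius’s published formula:

```python
from collections import defaultdict

def score_dash_by_age(results):
    """results: list of (player, age_group, time_in_seconds); lower is
    faster.  Each time is scored 0-10 against that age group's own
    range, so ten-year-olds aren't compared with eighteen-year-olds.
    A hypothetical sketch, not TeamGenius's actual math."""
    times = defaultdict(list)
    for _, group, t in results:
        times[group].append(t)
    scored = {}
    for player, group, t in results:
        lo, hi = min(times[group]), max(times[group])
        # Fastest in the group gets 10; if everyone ties, everyone gets 10.
        scored[player] = 10.0 if hi == lo else (hi - t) / (hi - lo) * 10
    return scored
```

Under this sketch, the fastest ten-year-old and the fastest eighteen-year-old each earn the top mark for their own group, even though their raw times differ.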
TeamGenius vs. Skillshark comparison:
After taking a look at the final player rankings for each platform, you’ll notice that Player 9, who had the best overall score across both evaluators, finishes as the second-ranked player under Skillshark. Player 3 jumps to number one because they received a 10 from Steve, which is now disproportionately worth more than any other score.
In the original evaluation, Player 5 and Player 6 were tied for fourth in the overall rankings. This is because both evaluators felt their skill level was average compared to the group. This overall ranking stays consistent when using TeamGenius.
Looking at Player 5 and Player 6 in the Skillshark example, Player 5 drops to fifth overall, while Player 6 stays fourth. Player 4 drops from second overall to a tie for fourth.
By using data normalization for final player rankings, Skillshark essentially “stack ranks” players from highest to lowest. This overrides each individual coach or evaluator’s judgment and ignores the scoring range set by the organization.
As you can imagine, with more evaluators, players, and criteria, the small discrepancies shown in the above example become more significant.
Player evaluations are crucial to the success of every organization. Ranking players in an unintended manner can have trickle-down effects felt throughout the entire season by placing kids on a team they aren’t ready to compete on from a developmental standpoint.
The goal of this article is to be transparent about the different player ranking methods used by TeamGenius and Skillshark. Choosing a player evaluation platform is a big decision for any organization, and it’s important to know how the platforms differ beyond cost and functionality.
If you have any questions about TeamGenius, you can learn more at teamgenius.com.