In the interest of over-analyzing the recruiting thing, I decided to take a closer look at UT's class. This was partly because I'm new to the recruiting game and wanted to learn more, partly because I'm a total geek and I do this kind of thing, and partly because I had some suspicions about the rankings I wanted to check over.
First things first: I used data from Scout.com for a very simple reason - it was easy. Everybody has their preferences between Scout and Rivals, so I simply took the one that I knew I could use without any trouble. I really don't have a reason to favor one ranking service over another.
Second things second: because the rankings are all done on perception and projection, they are necessarily fraught with error. How much? Too much for this kind of analysis to be very reasonable, but it's fun anyway. To make things worse, there's really no good way of estimating the error. Even if the rankings of past classes were compared to their on-field results, there are still the pesky problems of things like injuries, coaching, academics, schedules, and so on that might affect the perception of a class. So rather than try to estimate the error, I'll just assume it's massive and suggest that everything I say be taken with a shaker's worth of salt.
On to the show:
Before even analyzing the data, it seemed to be a good idea to see how Scout arrived at their [team] rankings. From their own statement, it's a combination of *Talent, Need and Balance*. Talent is self-explanatory. Need is their attempt to determine how well a recruit fits a team's need. Balance seems to be some kind of overarching factor for how well-built a team is: that is, whether there are any holes at any positions, and so on. Naturally, no more detail is given since the method is proprietary, but that's enough to go on.
The first insight (before even looking at the data!) is that the star-system is based only on talent. If a 4-star signs with a team that's absolutely overloaded at his position, he's still a 4-star. If he were to sign with a team that needs him with utmost desperation, he's still a 4-star. So, right away, we can note that talking about a player's stars only tells us about the evaluation of his talent, and nothing about how well he fits any particular team. This is worth noting for later.
With the Scout data in Excel, I first looked at the distribution of points.
The lowest bin only exists to account for poor Western Kentucky, whom Scout has ranked at the bottom by a fair margin. Otherwise, we can easily see that the rankings system tends to cluster teams together at the lower end, while spreading teams out at the upper end. Again, another useful insight. This would suggest that Scout's rankings use multiplicative factors in some fashion. (For example, when they account for Need in the rankings, a highly-needed player may be worth twice a player of average need. A completely unneeded player might not be worth anything. I don't know if that's how it works for them, but the distribution suggests it's a system similar to that.)
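For the curious, here's a toy sketch in Python of what a multiplicative scheme could look like. To be clear: the `need` multiplier and its range are pure invention on my part for illustration; Scout's actual formula is proprietary and unknown.

```python
# Toy model: a player's contribution = talent points times a "need" multiplier.
# This scheme is my own guess for illustration, NOT Scout's proprietary formula.
def player_value(talent_points, need):
    """need runs from 0.0 (team already overloaded at the position)
    up to 2.0 (team desperately needs the player)."""
    return talent_points * need

# Two recruits of identical talent, very different fits:
print(player_value(100, 2.0))  # 200.0 -- desperately needed
print(player_value(100, 1.0))  # 100.0 -- average need
print(player_value(100, 0.0))  # 0.0   -- completely unneeded
```

Multiplying factors together (instead of adding them) is exactly the kind of thing that produces a distribution shaped like this one: teams that score well on every factor pull far out in front, while everyone else bunches up at the low end.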
There are two things to take out of the distribution. First, if one class gets twice as many points as another, it's not necessarily twice the value. Second, the margin of error increases as the number of points increases. Those top few teams may have a margin of error of a few hundred points, while the lower teams may only have a margin of error of a few dozen points. (Not that we'd know the ranges; this is just for intuitive purposes.)
Now for a closer look. Since different teams have different class sizes, the next step is to compare points per player.
The distribution looks similar and may even be more bottom-heavy than the point totals. The lowest bin has a lower limit of 16, so the range here runs from 16 up to about 205. That means that the teams with the most value per player have players that are more than ten times as valuable to their team as those at the bottom. Think about that for a second. Does that make sense? Sure, the top teams got players with much better projections, but are the players for the bottom-end teams of 1A football really only a tenth as valuable as the players for the top end? That just doesn't make sense. So we can definitely see the spread inherent in the points.
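The arithmetic behind that ten-times claim, for anyone who wants to check it (the 205 and 16 endpoints are the per-player extremes from the Scout data; everything else is just division):

```python
# Per-player extremes from the Scout data.
top_avg = 205     # best per-player average among ranked teams
bottom_avg = 16   # lower limit of the lowest bin

ratio = top_avg / bottom_avg
print(round(ratio, 1))  # 12.8 -- the top team's players rate nearly
                        # thirteen times the bottom team's players
```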
What this means is that differences in points between teams at the high end mean less than they do between teams at the low end. So, rather than worrying about who's number 1, the more telling metric is who are in the top 10, or 20, or so. It's more informative to group the results than to compare adjacent teams. Not to take any thunder from Alabama, but that means that their class is effectively equal to Notre Dame's and Miami's. That's still a really, really good projection; it just doesn't sound as sexy to say "We're in the number 1 group!" as it does to say "We're number 1!". (Besides, any chance to refer to Notre Dame as "number 2" should always be taken.)
Now I'll just look at the schools in the major conferences. There is a decided advantage in recruiting for the major conferences that we all understand, and it's very easy to see numerically in the rankings. Separating the majors from the mid-majors, a Oneway plot (just a plot, for the non-statistics types) looks like this.
As an aside, an interesting thing happens with the mid-majors. Most of their dots are so tightly clustered together that the top-ranked mid-major actually appears as a statistical outlier. Translation? Given the performance of the mid-majors, the top-ranked mid-major had a class above and beyond the expectation for this group. That lucky team? Fresno State. Make of it what you will (or won't).
So, here's the graph for the majors. [Ooh! Orange! -- ed.] It's still a similar trend, but there is a decided bottom-end tail. (The bottom-dweller? Washington State.)
This graph does turn out to be a little more useful. There's more of a clustering in the middle of the graph, which is more in-line with how we tend to think of distributions like this. Additionally, it's a better graph to use to discuss Tennessee's performance against our competition, so to speak. Where is UT? Their player average is 128.9, so UT would be in that middle "125" column. Who does that compare UT to? Well, the next-highest score is 131.1 for Colorado, aided largely by nabbing the top-rated running back in the nation. The next lowest score is really not any lower than UT's at all: a 128.9 for Florida State.
Wait. FSU?!? Didn't they just have a top-10 class? [Checks on Scout.com. Yup.] Not just a top-10 class, but a top-five class. Does that mean the only difference between UT and FSU is the number of players? Sort of. Remember that the score accounts for "*Talent, Need and Balance*". It's a bit much to carry it that far, but it is fair to say that our recruits are comparable to FSU's.
Ok, who's top? Ohio State scores the highest at 205, followed by USC and Notre Dame at 200. Then a rather large drop occurs: Georgia is fourth at 179. Alabama, you ask? 12th at 150. Miami? 16th at 139. Yup, Miami's 3rd-ranked class is only a smidge higher per player than UT's. They simply got more. Speaking of UT, their per-player ranking is 21st.
All right, all right. What about the SEC? After all, why would we even care about the rest? Ok, I'll talk. In this fashion, UT ranks 5th in the SEC. The order is Georgia (179), Florida (169), LSU (150), Alabama (150), UT (129), Auburn (109), Spurrier (107), Arkansas (107), MSU (96), Kentucky (86), Ole Miss (72), and Vanderbilt (54). In pretty picture form:
So UT is not quite with the elite four, but not quite with anybody else, either.
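One way to make that "grouping" idea concrete is to start a new tier whenever the gap down to the next team gets big. Here's a sketch; the SEC per-player averages are the real numbers from above, but the 19-point gap threshold is an arbitrary knob of mine (picked, admittedly, because it matches the eyeball read of the graph), not anything Scout publishes.

```python
# Group ranked scores into tiers: start a new tier whenever the gap
# to the previous team exceeds a threshold. The threshold is my own
# arbitrary choice for illustration, not part of Scout's methodology.
def tier_teams(scores, gap_threshold):
    """scores: list of (team, score) pairs, sorted high to low."""
    tiers = [[scores[0]]]
    for prev, curr in zip(scores, scores[1:]):
        if prev[1] - curr[1] > gap_threshold:
            tiers.append([curr])        # big gap: start a new tier
        else:
            tiers[-1].append(curr)      # small gap: same tier
    return tiers

# SEC per-player averages from the Scout data, as listed above.
sec = [("Georgia", 179), ("Florida", 169), ("LSU", 150), ("Alabama", 150),
       ("Tennessee", 129), ("Auburn", 109), ("South Carolina", 107),
       ("Arkansas", 107), ("Mississippi State", 96), ("Kentucky", 86),
       ("Ole Miss", 72), ("Vanderbilt", 54)]

for tier in tier_teams(sec, 19):
    print([team for team, _ in tier])
# Prints three tiers: the elite four, Tennessee alone, then everybody else.
```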
So why does this all matter? Well, remember that UT didn't have a lot of slots available. The '05 and '07 classes were both huge and stocked with a lot of starters (or potential starters). There wasn't a whole lot of room to bring in people. Meanwhile, places like Miami, FSU, and Notre Dame all need to overhaul their teams and start over. (Translation: lots of opportunities for freshman starters.) Places like Alabama and Michigan have huge coaching turnover, and incoming players who fit the new systems better than the returning players feel they have a good chance to start. So it's not so much that UT didn't ~~draft~~ recruit as well as some of the top-10 programs, but that they didn't ~~draft~~ recruit as many. Update [2008-2-8 10:54:58 by Joel]: The word "draft" in this context only applies to schools in the state of Alabama. -- ed.
Granted, UT could have used about 2 more on the D-line and about 3 more on the O-line to be really, truly happy. But other than that, the incoming class looks about as good as a typical major-conference class.
Oh, and here's one more item to think over. UT has two incoming players who aren't accounted for on any ranking system. Courtesy of a well-written article by Jeffery Stewart, you'll note that we get Brandon Warren, a Florida State transfer who was a 4-star tight end in '06 and actually contributed nicely for the Seminoles that year. The other is Demetrice Morley, a five-star DB who was actually part of UT's '05 class. (Morley gets an '08 scholarship after sitting out the '07 campaign.) So we are giving two more schollys this year than what the boards account for, though it's debatable whether you should count them for this year's class. If you did include them in the total point rankings, UT would certainly be top 25 somewhere with only 20 players. If you included them in the per-player rankings, UT would probably land close to 15. Not bad for a year with fewer slots available, huge coaching turnover, and unending off-field problems.