
Computer Ranking in Cross Country?

by Bill Meylan (TullyRunners Webmaster)

(update draft: January 2001)

Warning: This article will put most people to sleep ... it's boring unless you are really interested.


What is computer ranking in cross country? Is it really necessary? I will answer the second question first ... No, computer ranking of runners is definitely not necessary ... coaches and fans can certainly live without it. But I personally find it interesting and useful ... besides, I'm a scientist, and analyzing information with a computer is something I enjoy. A computer is simply a tool that helps in the evaluation process. Coaches already evaluate runners by how well they train and look in practice and races. This type of evaluation comes from coaching experience and is something a computer can never do. However, a computer is very helpful in organizing information, doing math and storing information for later use ... and these abilities can have useful applications in cross country and track & field. Common examples are the scoring programs used to organize, score and report results at many cross country and track meets ... the computer result listings make it easier for coaches, athletes and fans to see the actual results ... you can do it by hand, but it's a lot easier and faster with a computer. This article briefly describes my computer ranking process for cross country and how it helps in rating running performances.

Computer evaluation of runners is not a mysterious or magical process ... it is simply a logical series of steps that allows runners to be rated relative to each other. Coaches do it all the time without computers ... they look at race times, they look at who-beat-who, and they speculate about who is improving and who isn't ... using their experience and judgment, they can do a very good job of ranking runners. Usually, coaches consider only their own runners and some competitors. The advantage of a computer ... the ability to consider many runners and teams, all at the same time.

I Need Data!

The computer evaluation process starts with data ... hopefully, lots of data ... the more data, the better (as long as it's good data). What kind of data? ... results from large cross country meets ... a runner's name and school (for identification), their race time, and their overall placing in the race. This data is entered into a computer database (my Section 3 databases are available here on-line on the Database web-pages). This is the "raw" data (data as it actually happened ... unmodified by computer or human interference). By itself, the "raw" data for an individual runner is quite interesting ... viewing the data for a number of sequential races, you can get a reasonable idea of a runner's ability. [Note ... the data in the databases can be used directly to make a program booklet that lists the performance data for runners in a race by school (similar to a horse racing program) ... doing this, a casual XC fan can get some idea of who the runners are and what they have accomplished]. More importantly, the "raw" data is the necessary starting point for my ranking method!

Standardized Data?

Now I have "raw" data, so what do I do with it? The objective is to rank runners relative to each other. If all runners race on the same course on the same day, relative ranking is straightforward, even with separate races for different classes ... the different race times can be merged into a single list ... it's not perfect (nothing is), but it works fairly well ... coaches do it all the time after an invitational or sectional race ... they want to know how their team would have done in the "other" race. I like merging race times ... but I need to know more ... I want to know how to rate different runners racing on different race courses on the same day ... I also want to know how to rate the same runner racing on different courses on different weekends ... I want to rate each individual race for each individual runner relative to all other runners where possible. To do this, I use a process called "data standardization" ... data standardization is a fancy name for simply converting "raw" race times to "standardized" race times for a standard race course. What does this really mean? ... I think some analogies in track & field can illustrate the process of converting "raw" race times to "projected" race times.

(1) a runner runs 1500m in 4:00 ... how fast would he have run 1 mile?
(2) my distance runner runs 5000m in 18:00 ... how fast can she run 8000m?

Track coaches have conversion factors available that can reasonably answer these questions and make other time/distance estimates. The conversions are not perfect, but they don't need to be ... they just need to be reasonable. The conversion factors are easily derived by examining lots of "raw" data from lots of runners and finding averages.
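As a rough sketch (in Python), a conversion like the 1500m-to-mile example above is just a multiplicative factor applied to the raw time ... the factor value below is an illustrative assumption, not an actual coaching table:

```python
# Illustrative sketch of a time/distance conversion. The factor is a
# made-up example, not a real coaching table value.
def convert_time(seconds, factor):
    """Project a race time to another distance via a multiplicative factor."""
    return seconds * factor

# Hypothetical factor: 1500m -> 1 mile of roughly 1.08 (the mile is ~7.3%
# longer, plus a small slowdown for the extra distance).
mile_est = convert_time(4 * 60, 1.08)   # 4:00 for 1500m
print(f"{int(mile_est // 60)}:{mile_est % 60:04.1f}")
```

Real conversion tables are built from lots of raw data, but the mechanics are the same ... one well-chosen number per distance pair.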

Now back to cross country ... if I run 19:00 on the FM race course, how fast would I run on the SUNY Utica Tech race course? That's the type of question that "data standardization" can answer ... and that's exactly what I do to "raw" race times ... I convert them to "standardized" (projected) race times for SUNY Utica Tech. The overall conversion process requires sufficient raw data; otherwise, you cannot "project" standardized times. I use "standardized" race times to rank runners relative to each other.

A complete discussion of the conversion process is too long for this article. In general, it's boringly statistical in nature (identification of patterns and averages in the data). This is where a computer is really helpful ... it can keep track of many runners and statistics simultaneously. However, in simple terms, a "standardized" race time is obtained by taking the actual race time ("raw" race time) and adding or subtracting how fast the race was relative to the standard race course. My standard race course is SUNY Utica Tech as it existed on Nov 6, 1999 (date of the Section 3 sectional championships) ... any race course can be selected as a standard course (it doesn't matter) ... I selected the 1999 sectional course because everybody in Section 3 was there (maximum amount of raw data for a single day) and the weather and race course were consistently good all day.
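In code, the standardization step itself is tiny ... nearly all of the work is in estimating the correction. A minimal sketch (the course correction value is hypothetical):

```python
# Minimal sketch of "data standardization": a raw time is shifted by how
# fast that day's race ran relative to the standard course. The numbers
# here are hypothetical examples.
def standardize(raw_seconds, course_correction):
    """Convert a raw race time to a standard-course equivalent.

    course_correction > 0 means the race ran faster than the standard
    course, so time is added back; < 0 means slower, so time is subtracted.
    """
    return raw_seconds + course_correction

# e.g. a 19:00 (1140 s) run in a race judged 20 seconds faster than the
# standard course standardizes to 19:20 (1160 s).
print(standardize(1140, 20))
```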

Now the real key to my method is determining how fast a race is relative to the standard race course. Some methods (such as the method used to rank runners in the Syracuse newspaper) use a pre-determined list of all XC race courses that sectional runners compete on, with an average speed rating for each course (actual race times are corrected by finding the difference in the average ratings from the list) ... this is simply not accurate enough for my method! Average race course speeds do not adequately account for weather conditions (heat, cold, rain, wind, mud, etc) and other factors. This may surprise some people ... but my method does not need to know where a race was run to rate how fast or slow it was relative to the standard race course. I use two separate methods to make this determination: (1) a graphical-statistical method and (2) a "reference runner" method. An example graphical determination is illustrated in the article "How Much Faster Was the SUNY Utica Course at the 2000 Sectionals Compared to the 1999 Sectionals??" ... that article also contains additional information about my method. More about graphical determinations is available in the article "Graphical Determination of Race Course Speed".
The "reference runner" solution assumes most individual runners are reasonably consistent in their speed ability ... it requires a database of their previous races (which I have) ... for as many runners as possible, you determine the time difference between the appropriate database speed and the current race, then examine the statistical variance ... exceptionally bad or exceptionally good races for individuals are excluded from consideration. I then compare the graphical and reference-runner determinations to derive a final correction factor ... for example, the SUNY Utica course was 20 seconds faster at the 2000 sectionals compared to the 1999 sectionals. "Standardized" times for each runner are then converted to "speed ratings" ... for a description, see the article "Standardized Race Time Ratings".
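The reference-runner idea can be sketched like this, assuming each reference runner has a database baseline (standardized) time and a time from today's race ... the trimming rule and every number below are invented for illustration, not my actual procedure:

```python
import statistics

# Sketch of the "reference runner" idea (illustrative, not the actual
# method): the typical (baseline - today) difference across many runners
# estimates how fast today's course ran; exceptionally good or bad
# individual races are trimmed before averaging.
def course_correction(baselines, todays_times, outlier_cutoff=30.0):
    diffs = [b - t for b, t in zip(baselines, todays_times)]
    typical = statistics.median(diffs)
    # drop races far from the typical difference (bad/great individual days)
    kept = [d for d in diffs if abs(d - typical) <= outlier_cutoff]
    return statistics.mean(kept)   # seconds faster than standard (if > 0)

baselines = [1100.0, 1145.0, 1210.0, 1300.0, 1350.0]
today     = [1082.0, 1120.0, 1192.0, 1285.0, 1450.0]  # last runner: bad day
print(round(course_correction(baselines, today), 1))
```

Here the differences are 18, 25, 18, 15 and -100 seconds; the -100 outlier is excluded and the remaining four average to a 19-second correction.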

Now I realize that some people will not agree with the concept of "standardized" race times ... and there are some valid concerns. But experience and the statistical results show the method holds up well. And one primary point needs to be emphasized ... the conversion process does not need to be exact ... it just needs to be reasonably close.

Computer Simulated Races

This section describes computer simulation of cross country races and how I perform them. A race simulation begins by selecting all runners in a race ... I need each runner's identity (name/school) and their "standardized" race times (actual race times cannot be used). Using this information, the computer performs an operation that statisticians call a "Monte Carlo Analysis" ... it sounds complicated, but it's really quite simple ... I will explain a basic simulation step-by-step:

(1) the computer looks at each runner's list of standardized race times, randomly selects one race time (using a random number generator), and assigns that race time to that runner for that race,
(2) after the computer randomly selects a race time for each runner, the race times are sorted and the "order of finish" for that one race is derived ... since each runner has multiple standardized race times, the "order of finish" for one random race is only one possible "order of finish" and there can be many possible "orders of finish" ... so,
(3) I save the "order of finish", then
(4) I let the computer run thousands of random races and save the "orders of finish" for all of them, and
(5) the computer totals the final results.
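The steps above can be sketched in a few lines of Python ... the runner names and standardized times here are invented, and this version tallies only the winner of each random race rather than saving full orders of finish:

```python
import random
from collections import Counter

# Sketch of the Monte Carlo race simulation described above; data invented.
def simulate(runners, n_races=10000, seed=1):
    rng = random.Random(seed)
    wins = Counter()
    for _ in range(n_races):
        # (1) randomly pick one standardized time per runner
        race = {name: rng.choice(times) for name, times in runners.items()}
        # (2)-(3) sort to get this race's order of finish; tally the winner
        order = sorted(race, key=race.get)
        wins[order[0]] += 1
    # (4)-(5) after thousands of races, total results as win percentages
    return {name: 100.0 * wins[name] / n_races for name in runners}

runners = {
    "A": [1130, 1142, 1138],   # standardized times in seconds
    "B": [1135, 1140, 1150],
    "C": [1128, 1160, 1155],
}
print(simulate(runners))
```

Saving the whole sorted order each iteration (instead of just the winner) gives the full distribution of finishes for every runner, which is what the averaged "order of finish" below is built from.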

The final results can take different forms ... typically, the "orders of finish" are averaged to yield a single "order of finish". But a Monte Carlo Analysis allows you to see much more ... from a mathematical viewpoint, you can see a "distribution" of results. For example, consider a 1999 race that includes Christine Campbell (Waterville), Colleen Eccles (JD), Jackie Kosakowski (Sauquoit) and Lia Cross (Skaneateles) ... four evenly matched runners ... if I consider just these four runners, a computer simulation might indicate that Christine Campbell will win 34% of the time, Colleen Eccles 29%, Jackie Kosakowski 24%, and Lia Cross 13% ... that's the winning distribution for each runner for that specific computer simulation.

Standardized race times are the data that drive a computer simulation. In the step-by-step example above, the computer randomly selected actual standardized race times ... but the computer can be programmed to do something a little different if I choose. Here are some examples:

(1) I can restrict the computer to randomly select from only the most recent standardized times ... this may be a better gauge of current fitness in some cases.

(2) I can graph each runner's standardized times with a "best-fit" equation and randomly select any point (race time) within the range on the graph.

(3) I can run a simulation I call a "predictive" simulation ... here, I apply actual standardized times to some equations that attempt to predict a "most-likely" range of race times for the next race ... I then randomly select from within the predictive range ... [I can hear professional statisticians groaning in the background]
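Variant (2) can be sketched by fitting a straight line through a runner's standardized times (simple least squares) and sampling anywhere along the fitted trend instead of from the raw times ... the season times below are invented, and this linear fit stands in for whatever "best-fit" equation is actually used:

```python
import random

# Sketch of variant (2): fit a line through a runner's standardized times,
# then sample a time anywhere along the fitted trend. Data is illustrative.
def fit_line(times):
    """Ordinary least-squares line through (race index, time) points."""
    n = len(times)
    xbar = (n - 1) / 2
    ybar = sum(times) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in enumerate(times))
             / sum((x - xbar) ** 2 for x in range(n)))
    return slope, ybar - slope * xbar   # slope, intercept

def sample_from_trend(times, rng):
    slope, intercept = fit_line(times)
    x = rng.uniform(0, len(times) - 1)   # any point within the season range
    return intercept + slope * x

rng = random.Random(7)
season = [1160, 1150, 1148, 1141, 1137]   # an improving runner
print(round(sample_from_trend(season, rng), 1))
```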

Actual Race Predictions

In 1999, my only real interest was assessing Tully's chances in the Sectionals (versus Beaver River) and States (versus Bronxville). I collected race data solely for this purpose. For the Sectionals, my computer simulations found a statistical dead-heat between Tully and Beaver River ... anybody following this match-up throughout the season already knew it was very close; they didn't need a computer to tell them ... the final team score was Tully - 35 and Beaver River - 37 ... a computer simulation predicted Tully - 43 and Beaver River - 43, almost the exact score before runners from incomplete teams were removed from the scoring (didn't realize that Little Falls and Faith Heritage would be incomplete).

I can't gloat about the 1999 Sectional prediction ... the race was effectively a dual meet between Tully and Beaver River with a handful of other runners thrown in ... you only had to predict the relative positions of the Tully and Beaver River girls ... Tully Coach Michelle Franklin predicted a score of 40-40 without any help from a computer. For the 1999 State Class D Girls Championship contested at Westchester Community College, I predicted Tully would beat Bronxville by 3 points ... Bronxville beat Tully by 2 points ... the computer prediction had a margin of error of 6 points, so the computer prediction was accurate (within the margin of error); but that didn't make me feel any better, because Tully lost.

I opened this web-site about one month before the start of the 2000 XC season ... I decided to make my rankings, predictions and databases available to anybody who was interested. The season-long database entries allowed me to predict all eight Section 3 sectional races and three races at States (I didn't have enough free time to predict the other five State races) ... the sectional predictions & results are available at Boys 2000 Team Page and Girls 2000 Team Page ... the State predictions & results are available at Boys Class D 2000 State Page, Girls Class D 2000 State Page and Girls Class C State Page. I'm reasonably satisfied with the team prediction results. The computer correctly predicted the winners in 10 of the 11 races with acceptable accuracy ... the Class B girls sectional race was actually "too close to call", but the South Jefferson girls were predicted to be 3rd (with a 22% chance of winning), and they won. Overall team places and scores (top to bottom) were predicted fairly well.

For individual runner rankings, see the Girls 2000 Final Rankings Page and Boys 2000 Ranking Page ... these pages compare the actual sectional championship results with my ranking method as I applied it all season-long ... I think the results are reasonably good.

Concluding Remarks

I think computer ranking of cross country runners is interesting and fun. I also believe my process is both valid and relatively unbiased. But I also realize that computer ranking should not be taken too seriously. It is simply one tool available for evaluating running performances. Computer ranking should not be a replacement for qualified human judgment ... it's meant to help human judgment.

Speaking as a cross country fan, I'm glad computer predictions aren't perfect ... it would take all the fun and suspense out of big races ... I wouldn't be able to enjoy watching an "anxious" Coach Franklin before big races (she gets a little hyper). Besides, I really enjoy high school competition ... runner against runner, team against team. One reason I like cross country as a team sport ... the #5 runner is just as important as the #1 runner. My computer rankings help me identify runners who step up with exceptional individual performances ... and frequently, it's a #3, #4 or #5 runner that can go unnoticed. I hope to identify those runners and give them special recognition.