ABS Nationals ’16

The scoring for ABS Nationals continues to be a concern. As I see it, the scoring is simply not fair. It creates a kind of abstraction that obscures who is actually ahead and who appears to be winning, which is the primary complaint others have raised as well. That said, I’d like to focus on what can work for the future and what has worked in the past.

First, I’d like to explain the primary concern with the current scoring system. A simple example between two competitors goes like this:

In this scenario, who's ahead?


Though this is a simplified example and doesn’t include the geometric mean (which in this case yields the same result) for a competitor’s score, it shows how differences in performance are nullified and absorbed into an amalgamation of the whole comp. Meanwhile, viewers watching the comp would assume competitor B is the winner.
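To make the concern concrete, here is a small illustrative sketch, with all numbers invented, assuming the rank-based geometric-mean format: each competitor gets a rank per problem, and the geometric mean of those ranks (lower is better) decides placement. Notice that A’s huge margin on problem 1 counts exactly the same as B’s one-hold margins elsewhere.

```python
from math import prod

def geometric_mean(ranks):
    """Geometric mean of per-problem ranks (lower is better)."""
    return prod(ranks) ** (1 / len(ranks))

# Hypothetical per-problem ranks across four problems.
# A wins problem 1 by a huge margin (a top nobody else got);
# B edges A by a single hold on each of the other three.
# Rank-based scoring erases the size of those margins.
ranks_a = [1, 2, 2, 2]
ranks_b = [2, 1, 1, 1]

print(geometric_mean(ranks_a))  # ≈ 1.68
print(geometric_mean(ranks_b))  # ≈ 1.19, so B places ahead of A
```

The standout performance and the marginal ones collapse into the same unit of rank, which is exactly the “nullifying” effect described above.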

Why make it so complicated? The new system rewards competitors for doing consistently better than the others, across multiple problems. – Crank Chronicles

If this is what’s important in a competition, then yes, it’s a good idea. But my problem with it is that competitions of every form and type should reward effort above anything else. And effort is the obvious thing anyone sees when watching a competition. That effort is reflected in the actual output, the high point, that the climber achieves on each problem.

In the end, the competition still pushes competitors to try their hardest, since they need to be consistent, but it nullifies individual effort, the very essence of any competition. That individual effort doesn’t get reflected in the score, only in their overall placing relative to others. And here is where I have a bigger problem with the scoring: climbing is an individual sport in which the individual has zero interaction with other competitors, yet under this system they are in effect competing directly with invisible competitors. Each person is unaware of the ongoing battle between athletes. It’s as if two tennis players each played against a robot, and whoever plays better against the bot wins. Exciting! (Or, imagine a surfing event where multiple heat results are geometrically averaged before a winner is announced.)

This, of course, loses the spirit of competition. There are two scoring formats that do work, however. One has been tried many times and connects the setting to the scoring. The other hasn’t been tried yet, though the zone system comes close to it. First, the tried-and-true way: score each hold/move and set all the moves to be approximately equal in difficulty. This was, and still is, the standard for lead comps, but it doesn’t fit well with bouldering or real climbing. It’s as if the climbing were on a rotating wall of never-ending equivalent moves, and the person who rotates the wall the farthest wins.
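A minimal sketch of that lead-style format, assuming the simplest version where every hold carries a point value and the score is the cumulative value up to the climber’s high point (the values here are invented):

```python
def high_point_score(hold_values, highest_hold_controlled):
    """Lead-style scoring: each hold is worth points (set to be
    roughly equal in difficulty), and a competitor's score is the
    cumulative value of every hold up to their high point."""
    return sum(hold_values[:highest_hold_controlled])

# Hypothetical route of ten roughly-equal moves, 10 points each.
holds = [10] * 10

print(high_point_score(holds, 7))   # fell going to hold 8 -> 70
print(high_point_score(holds, 10))  # topped -> 100
```

Because every move is set to be equivalent, the score is effectively just distance travelled, which is why it maps poorly onto boulder problems built around distinct cruxes.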

The second scoring format is already “practiced” in difficulty bouldering sans competition, especially on longer problems. Each micro crux (micro sequence) is sussed out for difficulty and combined with the other micro sections to arrive at a total overall rating for the problem. As a comp format, this means the setting style can take any form. But it also means the setters will have to judge the relative difficulty of each micro crux and assign it points out of a set total per problem (the same total for every problem). This is obviously the most fair (assuming we get better at it over time), the most accurate, and easily the hardest to achieve. As you might guess, that’s why we have our current system: the setters don’t want to face the dilemma of scoring difficulty cruxes. If they did, they’d only get better and better, and that type of system would bring greater depth and sophistication.

Basically, what we have today is a serious lack of understanding of our sport and a bunch of patchwork attempts to quantify competition performance differences without really understanding the nuances and priorities of what we, as a body of doers, facilitators and observers, are actually doing. That is a harsh statement. But I’ve always set for and organized comps with the emphasis on getting boulderers to do the hardest thing. It wasn’t about numbers; it was about the display of power and skill that, at its apex, became the strongest memory of the competition. That’s why it’s worth watching: the defining action was credited accordingly, not demoted to a secondary element inside a geometric mean.

I used to think “tops” were a clear-cut thing for competitions, but now not so much. In normal outside bouldering (and trad climbing), topping out is part of the very spirit of climbing. In competitions, if the top is a jug, it doesn’t indicate anything special about “topping out”; it’s just a nice finishing point. If the top is super hard to match, it’s nothing more than another technical transition stopped short because the wall ended. In other words, nobody actually tops out, and boulderers know this all too well. It’s that finality that’s in the spirit of outside bouldering, because one summits. These outside references have a purpose: they frame the game we already know and love, the “real thing,” and they should help guide us forward in making the best competitions possible. Therefore, I’d say a flash isn’t worth extra points; doing the whole line on the first go is not quantifiably better than doing it on the second.

The future seems clear, but will it be worth fighting for?

Here is a link to Daniel Woods’ comments on scoring. Here is the USA Climbing opinion on their scoring choice.

Daniel Woods has some excellent points about the audience and about scoring from the competitors’ point of view. I’d add that a successful competition sport is one that has depth to it, too. Baseball has the stats. Football has the plays. Basketball has the transition game. And, as Woods says, you have a clear winner… something all sports have.

Woods also articulates the need to keep crux sections defined and scored; by using a simple 5-point, then 10-point, then top bonus, each problem isn’t scored on overall difficulty. That is reasonably justified, because the only level of climbing we should get in a competition is the best that competition offers at that level. Difficulty is therefore moot, and each problem should be equally challenging. This I agree with, yet I’d prefer that the flow of difficulty within the problem be reflected in the score distribution. The setters know this; it’s just a matter of applying numbers and some judgement. As if the entire process of setting weren’t about judgement anyway!
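For reference, a sketch of that flat bonus scheme as I read it, with the top-bonus value (15 here) being my own assumption since the post doesn’t specify one:

```python
def zone_score(reached_zone1, reached_zone2, topped):
    """Hypothetical flat bonus scheme: 5 points for the first zone,
    10 for the second, plus an assumed 15-point top bonus.
    Each bonus is earned independently as the climber passes it."""
    return 5 * reached_zone1 + 10 * reached_zone2 + 15 * topped

print(zone_score(True, False, False))  # fell above zone 1 -> 5
print(zone_score(True, True, True))    # topped -> 30
```

The values are fixed per problem regardless of how hard each section actually is, which is precisely the limitation discussed next.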

Also, to a non-climber, the 5- and 10-point sections would make the first crux appear only half as hard as the second. Numbers are very telling; they either clarify or confuse.

This approach also adds depth to each problem, not as tricks to be performed but, as anyone who’s ever worked a problem into submission knows, as a complex sequence with technical cruxes. With a fixed amount like 5 or 10, it reads more like skateboarding tricks. I say this not to demean anyone, but the audience will not really take the time to differentiate what one move or another means on the whole, especially non-climbers. So it should make sense to that non-climber: assign difficulty points distributed across moves or sections of moves (with a fixed total per problem) to reflect more precisely what the audience witnesses, in context with the struggle. Here is an example:
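A hypothetical distribution, with all section names and numbers invented for illustration, might look like this: a 100-point problem whose setters judge the opening moves easy, the roof crux severe, and the finish moderate.

```python
# Hypothetical 100-point problem; points distributed by the setters'
# judgement of each micro section's difficulty (all numbers invented):
sections = [
    ("slab intro, moves 1-3", 10),
    ("roof crux, moves 4-6", 50),
    ("lip turn, moves 7-8", 25),
    ("top-out, moves 9-10", 15),
]
assert sum(points for _, points in sections) == 100  # fixed total per problem

def score_at_highpoint(sections, completed_sections):
    """A competitor's score is the summed value of every micro
    section completed before falling."""
    return sum(points for _, points in sections[:completed_sections])

print(score_at_highpoint(sections, 2))  # fell turning the lip -> 60
print(score_at_highpoint(sections, 4))  # topped -> 100
```

Under this distribution, a climber who sticks the roof crux and then falls has visibly out-scored one who fell entering it, which matches what the audience actually saw.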




~ by r. mulligan on 2016/03/19.
