Brian Enos's Forums... Maku mozo!

USPSA Updated Production Classifiers



A shooter's classification is supposed to represent their ability against the best in the sport. This further reinforces my opinion that shooters are instead being compared to an imaginary person, an NPC.

 

Competitors are spending money chasing a classification bump, while the org is arbitrarily moving the goalposts.

 

I think USPSA is selling a product under false pretenses.


36 minutes ago, shred said:

This is because of the vast advantage 15 rounds gives them now or what?

Probably; it appears they are just pulling numbers out of thin air.
 

Other HHFs are being pushed up by people gaming the system:

  • Shooting the classifiers multiple times until they get a score they like.
  • Not setting the stage up correctly.
  • Hero-or-zero runs that don’t count if they are 5% below current classification.
  • Editing scores after the fact.

All of the above artificially raise the HHFs, which affects everyone. I doubt that many of the people who have posted 100% on some of these stages could repeat the performance on demand.


2 hours ago, BritinUSA said:

Probably; it appears they are just pulling numbers out of thin air.
 

Other HHFs are being pushed up by people gaming the system:

  • Shooting the classifiers multiple times until they get a score they like.
  • Not setting the stage up correctly.
  • Hero-or-zero runs that don’t count if they are 5% below current classification.
  • Editing scores after the fact.

All of the above artificially raise the HHFs, which affects everyone. I doubt that many of the people who have posted 100% on some of these stages could repeat the performance on demand.

you make some reasonable points, but the classification system obviously isn’t set up to reward those who shoot at a repeatable match pace. a good reason imho to ignore it and focus on getting better.


I wonder how many people (who pay their classification fees each week/month) know how flawed the process actually is. 
 

Perhaps instead of ignoring the problem USPSA could look into alternative methods of ranking competitors, and give the members a product that is worthy of their revenue/attention.


37 minutes ago, motosapiens said:

…. a good reason imho to ignore it and focus on getting better.

How can someone at the L1 level know if they're getting better if they don't have a reliable metric to measure their performance against?


13 hours ago, BritinUSA said:

How can someone at the L1 level know if they're getting better if they don't have a reliable metric to measure their performance against?

at my local matches we have good experienced shooters who show up. the rest of us try to get closer to them.

 

classifiers give a very flawed view of your possible improvement because some of them are easier than others and because the percentages change when enough people hero/zero and hq decides to screw the hhf up. i have found it to be a much more reliable indicator to simply compare my overall score to the winner. it may change slightly based on who shows up, but i can generally assume that whoever won had a good consistent match.

 

admittedly that method may not work if your local matches are sparsely attended and no one is very good.


58 minutes ago, motosapiens said:

classifiers give a very flawed view of your possible improvement because some of them are easier than others and because the percentages change when enough people hero/zero and hq decides to screw the hhf up.

This is all about fixing that.  With a little real statistics work (not "Eyeball 'em and take a SWAG"), you can remove the shenanigan runs and level out the scores so the % you shoot on any given classifier actually is representative of where you'd end up versus a few top shooters on that same stage.
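A sketch of what that statistics work might look like. This is hypothetical; the function name, the MAD-based cutoff, and taking the best surviving run as the HHF are my own assumptions, not USPSA's actual method:

```python
from statistics import median

def robust_hhf(hit_factors, cutoff=3.5):
    """Hypothetical sketch: flag runs sitting more than `cutoff` robust
    z-scores above the median as likely shenanigan runs, drop them, and
    take the best surviving run as the high hit factor."""
    med = median(hit_factors)
    # Median absolute deviation: a spread estimate a few wild runs can't skew.
    mad = median(abs(x - med) for x in hit_factors) or 1e-9
    # 1.4826 scales MAD to be comparable to a standard deviation.
    kept = [x for x in hit_factors if (x - med) / (1.4826 * mad) <= cutoff]
    return max(kept)
```

Given runs like [4.0, 4.2, 4.5, 4.1, 3.9, 9.8], the obvious 9.8 outlier gets dropped and the 4.5 run sets the HHF, while a clean 4.5 among similar scores survives untouched.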

 


The 'shenanigan' runs are a problem only in that they affect the HHFs for everyone else. Some of them are easy to spot, others less so. This is why I think the HHF set by the single best run at Nationals should be immutable.

 

Introduce each classifier at Nationals; the best run in each division sets the HHF, and it should never change for the life of that classifier (3-5 years).

 

If someone beats it at a local match, they get 100% on their classification, but the HHF should not change: we don't know if the stage was set up correctly, whether they practiced it for hours at a time, or whether the time was recorded correctly, etc.

 

It's tricky to prevent someone from gaming the system at the L1 level, but we can easily prevent those people from impacting the system for everyone else.

 

We would need a process to deal with new divisions; the best way would be to organize a match with the top shooters and run through as many classifiers as possible to set the HHFs. Admittedly this would be costly, but it would be more accurate than the current methodology.

Edited by BritinUSA

24 minutes ago, BritinUSA said:

The 'shenanigan' runs are a problem only in that they affect the HHFs for everyone else. Some of them are easy to spot, others less so. This is why I think the HHF set by the single best run at Nationals should be immutable.

 

Introduce each classifier at Nationals; the best run in each division sets the HHF, and it should never change for the life of that classifier (3-5 years).

 

If someone beats it at a local match, they get 100% on their classification, but the HHF should not change: we don't know if the stage was set up correctly, whether they practiced it for hours at a time, or whether the time was recorded correctly, etc.

 

If someone wants to game the system it’s tricky to prevent it at the L1 level, but we can easily prevent those people from impacting the system for everyone else.

 

We would need a process to deal with new divisions; the best way would be to organize a match with the top shooters and run through as many classifiers as possible to set the HHFs. Admittedly this would be costly, but it would be more accurate than the current methodology.

 

This will lead to a lot of paper GMs, which is one of the things people gripe about right now.

 

 


26 minutes ago, RJH said:

 

This will lead to a lot of paper GMs, which is one of the things people gripe about right now.

 

 

 

Wait a minute, I want to be a paper GM.  That sounds way easier than practicing.  Where do I sign up???

Umm, what is a paper GM?

 


3 hours ago, motosapiens said:

because the percentages change when enough people hero/zero and hq decides to screw the hhf up.

 

So far I'm not seeing a correlation between top scores and HHF.
Some classifiers literally don't have historical GMs or Hundos, much less current adjusted GMs or Hundos.

Personal hero-or-zero runs by local M/GMs that I witnessed did result in pretty high placements and even some current records, but none were higher than 105%.

There are some statistical anomalies, with people posting 110% and even 120%, but they're insignificant and don't affect any of the recommended HHFs.
 

I believe that a lot of the issues the classification system has right now can and will be solved.


5 minutes ago, Cuz said:

 

Wait a minute, I want to be a paper GM.  That sounds way easier than practicing.  Where do I sign up???

Umm, what is a paper GM?

 

 

That's a term for somebody who reshoots a classifier until they get classified in a higher class than they really are. It doesn't happen as much now as it probably used to, since they don't allow multiple reshoots on classifiers like they used to. People use paper GMs as a way to show the classifier system is completely broken.

 

 

But it turns out even paper GMs are still pretty good. And in reality, the only person they are hurting is themselves if they try to classify higher than their actual skill level. In other words, it's not really the issue that some people try to make it out to be.


imho, if you actually wanted to improve the classification system, the hhfs are only a small part of the problem. there are two much more important issues:

 

first is excluding your zero runs and only keeping the hero scores. i would suggest changing that to include all scores and computing based on the best 6 of the most recent 10, or similar. something like that would reward shooters who perform consistently instead of those who shoot 5 C scores that don't count for every gm run.

 

second is to stop relying on a fixed percentage of the hhf, and instead use a percentile approach, which would also be self-adjusting over time.
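The first idea above could be sketched roughly like this (hypothetical names and window sizes; not how USPSA actually computes anything):

```python
def classification_pct(history):
    """Hypothetical sketch: average the best 6 of the shooter's most
    recent 10 classifier percentages, with zeroed runs counted rather
    than discarded (history is oldest-first)."""
    recent = history[-10:]                      # most recent 10 scores
    best6 = sorted(recent, reverse=True)[:6]    # keep the top 6 of those
    return sum(best6) / len(best6)
```

One design wrinkle: a shooter who hero-or-zeroes (say six 100s and four zeroes in their last ten runs) still averages 100 under best-6-of-10, so the window and keep counts would need tuning if consistency is really the goal.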


Classification should also fluctuate; the reality is that a person's ability will decline over time compared to younger competitors. This needs to be reflected in the classification system.

 

If HQ is altering HHFs to limit the number of GMs, then it will be a perpetual problem under the current system, as classifications go up and never down.


2 hours ago, BritinUSA said:

 

Introduce each classifier at Nationals, the best run in each division sets the HHF, it should never change for the life of that classifier (3-5 years).

 

That is what they used to do in the pre-Foley era, despite whatever the official policy stated -- I had the data, and they never changed them.  The HHFs were set at Nationals or an Area match (with enough GMs) and then never changed, with very few exceptions.

 

It worked OK, although there were some screwed up HHFs there too.

 

For a start, I'd be happy if they just leveled out the current HHFs properly across the board to see how that does.

 

Interesting anecdote about Paper GM and repeated runs.  I used to attend a match that allowed unlimited classifier reshoots and saw many people shoot run after run trying to get the magic number. But, from watching, if it didn't happen in the first 2 or 3 tries, it was very unlikely to happen no matter how many more they did.

 


1 hour ago, RJH said:

 

That's a term for somebody who reshoots a classifier until they get classified in a higher class than they really are. It doesn't happen as much now as it probably used to, since they don't allow multiple reshoots on classifiers like they used to. People use paper GMs as a way to show the classifier system is completely broken.

 

 

But it turns out even paper GMs are still pretty good. And in reality, the only person they are hurting is themselves if they try to classify higher than their actual skill level. In other words, it's not really the issue that some people try to make it out to be.

Anyone that shoots a match below their class must be a paper GM or grandbagger who practiced classifiers...
Then anyone that shoots above their class is a sandbagger... an excuse for all the outliers, and voila! The system is perfect.
If it were up to me, I'd say get rid of the classifier stages. They don't jibe with matches anyway.
Use your score against the winner at L2-and-up matches... done. HHFs don't matter, classifiers don't have to be built to scale, and as stage designs change, the classes will change with them.

But from a financial point of view? Most people never shoot L2 and up, or maybe only 1 or 2. Lots of local shooters do care about their class, to the point that it's the only reason they pay dues.

 


An Elo ranking system would probably be a better fit for USPSA, though the analysis method might suffer when getting down to L1 matches. I think around 90% of members don't shoot outside of their state, so any process needs to take that into account.
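For reference, the core Elo update is tiny. This is a generic sketch, not a USPSA proposal; the K-factor and the idea of scoring each shooter pairwise against every other entrant at a match are my assumptions:

```python
def elo_update(rating_a, rating_b, score_a, k=32):
    """Standard Elo update for shooter A after meeting shooter B.
    score_a is 1.0 for a win, 0.0 for a loss, 0.5 for a tie; at a match,
    A could be scored pairwise against every other entrant on the results."""
    # Expected score from the logistic curve on the rating difference.
    expected_a = 1 / (1 + 10 ** ((rating_b - rating_a) / 400))
    return rating_a + k * (score_a - expected_a)
```

Note this only moves ratings relative to whoever actually shows up, which is exactly the sparse-L1-attendance concern: a shooter who only ever meets the same small local pool gets a rating that's hard to compare nationally.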

 

I don't think the current system works too well, and I suspect USPSA won't fix it. If people saw their classifications go down, they would be less likely to shoot classifiers, and that hits the bottom line. The classification system is a cash cow for the org.


Am I missing something?  A paper GM doesn't affect me.  I'm bothered by the C-class shooter who shoots at M or high-A level at a major, wins the class, and doesn't get bumped up.  Next match, I have to deal with the same people again!


52 minutes ago, motosapiens said:

the hhf’s are only a small part of the problem.

Yes, bad HHFs are only one of the problems, but they have to be solved before everything else. For example, fluctuating classifications will be completely broken if we don't fix the HHFs/percentiles first.

 

56 minutes ago, motosapiens said:

include all scores, and compute based on the best 6 of the most recent 10

Planned. First I need to implement the classification calculation to mimic USPSA's.

Then we can start playing with algorithms / strategies.

 

58 minutes ago, motosapiens said:

use a percentile approach, which would also be self adjusting over time. 

This is what the recommended HHFs use right now. Although, honestly, it looks like without some major breakthroughs they won't be changing much by self-adjusting. The graphs already look like classic normal distributions. Just click around the app and look at the data.

Speaking of which, I've just deployed an update with all public USPSA numbers, production HHF updates and 23-series classifiers: https://violent-beverley-howlermonkeys.koyeb.app Check it out.


11 minutes ago, CutePibble said:

 

This is what the recommended HHFs use right now. Although, honestly, it looks like without some major breakthroughs they won't be changing much by self-adjusting. The graphs already look like classic normal distributions.

have you found that some classifiers have a tighter distribution?

anecdotally, it seems the more difficult shooting challenges create a wider percentage spread between good shooters and average shooters, whereas others seem to group the scores more tightly at our local matches.


26 minutes ago, AKGrahamw said:

Am I missing something?  A paper GM doesn't affect me.  I'm bothered by the C-class shooter who shoots at M or high-A level at a major, wins the class, and doesn't get bumped up.  Next match, I have to deal with the same people again!

If the number of GMs gets too high, they may adjust the HHFs higher, which will make it harder for you to move up in classification.

 

If people are sandbagging and getting away with it then that's a problem that the org needs to acknowledge and rectify. If you have specific examples then please send them to the org and ask them why it has not been fixed.

 

It's possible that they are not checking for this kind of stuff.


2 minutes ago, motosapiens said:

have you found that some classifiers have a tighter distribution?

Shapes look slightly different for hoser vs. standards classifiers: the lower-end "tail" gets flattened in high-risk classifiers, with people zeroing them more often. And fixed-time ones look funny because they have whole-number HFs.
 

but the overall distribution itself looks pretty normal with enough data.
 

The scale just shifts around it depending on HHFs. I put dots at the intersections of the 1/5/15 percentiles and the corresponding %, and you can tell if a classifier is easy or hard depending on where the scores fall relative to the dots.
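Those percentile dots could be computed along these lines. This is a nearest-rank sketch under my own assumptions, not necessarily how the app does it:

```python
def top_percentile_hf(scores, pct):
    """Hypothetical sketch: the hit factor that only the top pct% of
    runs meet or beat (nearest-rank method), e.g. pct=5 gives the
    5th-from-the-top percentile dot."""
    ranked = sorted(scores, reverse=True)          # best runs first
    idx = max(0, round(len(ranked) * pct / 100) - 1)
    return ranked[idx]
```

If a shooter's raw hit factor lands above the 5% dot, the classifier is playing "easy" for them relative to the field; below the 15% dot, "hard".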

 


 

 

