Brian Enos's Forums... Maku mozo!

Everything posted by malobukov

  1. Should hero runs count? If yes, then everyone is expected to either hero-or-zero classifiers or accept a classification lower than what their current level allows them to get. There's quite a bit of difference between YouTube speed and speed one can replicate on demand, under major match pressure. If no, then how can you tell a hero run from just a good, solid, repeatable run? The current classification system discards results a certain threshold below one's current classification, presumably to prevent sandbagging and to avoid lowering the classification of people who just leveled up. If you care about "making GM" more than about winning your local match, this gives a strong incentive to hero-or-zero. And if you look at the distribution of hit factors, it's clear that many competitors do hero-or-zero on classifiers. This pushes HHFs up, especially for popular classifiers.
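The incentive can be shown with a toy simulation. Everything here is assumed: a deliberately simplified discard rule (runs below a fixed floor simply never count against you) and made-up skill numbers. It is a sketch of the asymmetry, not the actual USPSA flagging algorithm:

```python
import random

random.seed(42)

def expected_best_kept(mean_pct, sd_pct, n_runs, floor_pct, trials=10_000):
    """Average best *kept* score over many simulated seasons.
    Runs below floor_pct are discarded, so they never hurt the record
    (a simplified stand-in for the real flagging rule)."""
    total = 0.0
    for _ in range(trials):
        runs = [random.gauss(mean_pct, sd_pct) for _ in range(n_runs)]
        kept = [r for r in runs if r >= floor_pct]
        total += max(kept) if kept else floor_pct
    return total / trials

# Two hypothetical shooters with the same average skill (70%):
# one steady, one hero-or-zero.
steady = expected_best_kept(mean_pct=70, sd_pct=3, n_runs=8, floor_pct=65)
hero = expected_best_kept(mean_pct=70, sd_pct=15, n_runs=8, floor_pct=65)
print(f"steady: {steady:.1f}%  hero-or-zero: {hero:.1f}%")
```

Because bad runs are discarded while good ones stick, the high-variance strategy ends up rated above the steady one of equal average skill, which is exactly the hero-or-zero incentive.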
  2. Can you give a specific example? There are many 99-XX classifiers, and I'd rather not pull all of them again to avoid hitting the rate limiter. I checked 99-08 and got the same results as you (a hit factor of 8.14 corresponds to 101% with the old HHF and 86% with the new one). Imputation of the old HHFs was somewhat automated, so I trust them more, but the new ones are completely manual and so can indeed have errors.
  3. That might not be enough. The change in HHF among unpopular classifiers must be noisy simply because there's not enough data to figure out where the tail is. So some of the classifiers with big swings might actually be easier now, and some classifiers with no change might remain easy (but nobody shoots them, so nobody knows).
  4. Here's Production:

     Classifier   Old     New
     03-02        6       7.067
     03-03        6.972   8.086
     03-04        8.328   9.309
     03-05        9.998   11.75
     03-07        7.348   7.157
     03-08        8.762   9.647
     03-09        9.64    10.91
     03-11        5.91    6.853
     03-12        6.218   6.887
     03-14        120     93.9
     03-18        7.989   7.911
     06-01        8.264   8.361
     06-02        7.11    7.51
     06-03        14.92   15.94
     06-04        12.7    14.53
     06-05        11.69   13.26
     06-06        7.68    7.809
     06-10        8.475   9.92
     08-01        5.93    5.053
     08-02        6.773   6.753
     08-03        10.2    11.23
     09-01        120     91.5
     09-02        10.5    12.03
     09-03        7.7     8.629
     09-04        10.7    11.9
     09-07        6.5     6.264
     09-08        7.936   7.644
     09-09        90      86.8
     09-10        9.667   10.36
     09-13        8.25    8.055
     09-14        8.055   8.649
     13-01        9.343   9.105
     13-02        13.23   12.89
     13-03        5.289   4.886
     13-04        11.65   12.3
     13-05        10.84   10.59
     13-06        8.29    9.202
     13-07        10.75   10.08
     13-08        9.343   9.177
     99-02        6.905   6.792
     99-07        6.259   7.158
     99-08        8.058   9.392
     99-10        9.025   9.584
     99-11        10.26   11.56
     99-12        8.111   9.473
     99-13        8.74    8.946
     99-14        90      77.2
     99-16        9.017   10.05
     99-19        6.218   6.511
     99-21        11.17   11.91
     99-22        10.13   10.28
     99-23        12.7    12.83
     99-24        11.41   11.03
     99-28        9.276   10.43
     99-33        9.025   10.14
     99-40        120     101.2
     99-41        9.741   10.42
     99-42        9.1     10.03
     99-46        9.525   9.494
     99-47        5.964   5.67
     99-48        9.83    10.29
     99-51        7.152   6.095
     99-53        8.084   7.014
     99-56        8.075   8.196
     99-57        7.505   6.756
     99-59        6.508   5.247
     99-61        4.75    4.334
     99-62        11.59   12.56
     99-63        5.045   5.169

     Typed in by hand, might have some errors.
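To spot the biggest swings in a table like this, the relative change per classifier can be computed directly. Only a few rows are transcribed below (same hand-typed numbers, so the same caveat applies):

```python
# A few rows from the Production table above: classifier -> (old_hhf, new_hhf).
hhf = {
    "99-59": (6.508, 5.247),
    "99-11": (10.26, 11.56),
    "03-14": (120, 93.9),
    "99-40": (120, 101.2),
    "06-10": (8.475, 9.92),
}

# Sort by magnitude of the relative change; a higher HHF means the same
# hit factor now yields a lower percentage.
for clf, (old, new) in sorted(hhf.items(),
                              key=lambda kv: abs(kv[1][1] / kv[1][0] - 1),
                              reverse=True):
    change = (new / old - 1) * 100
    print(f"{clf}: {old} -> {new} ({change:+.1f}%)")
```

Running this over the whole table would give a quick ranking of which classifiers got easier or harder on paper.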
  5. USPSA web site has all results. It takes a bit of effort to comb through and aggregate, but the data is available. I'm only interested in Production, so this is what I'm tracking.
  6. In the last two months, El Prez (99-11) has 639 results in Production, more than any other classifier. A close second is 06-03 (Can You Count) with 607 runs. For contrast, during the same period 09-10 was shot 12 times, 99-61 14 times, and 99-59 16 times. The HHF for 99-59 changed from 6.508 to 5.247, a 19% difference. This leads me to believe that many of those changes are simply noise due to low sample size and questionable methodology (looking at the top N results instead of picking a high quantile and estimating it from the entire distribution). But the methodology is not published, so this is only a guess.
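The sample-size concern can be checked with a quick resampling sketch. Everything here is assumed: hit factors drawn from a made-up normal distribution, "top N" guessed as the average of the two best runs, and the quantile method taken as mean + 2.5 standard deviations, since the real methodology isn't published:

```python
import random
import statistics

random.seed(1)

def hhf_top_n(scores, n=2):
    # Guess at the unpublished method: average of the top-N hit factors.
    return statistics.mean(sorted(scores, reverse=True)[:n])

def hhf_quantile(scores):
    # Alternative: estimate a high quantile from the whole distribution.
    return statistics.mean(scores) + 2.5 * statistics.stdev(scores)

def spread(estimator, sample_size, trials=500):
    # Std dev of the HHF estimate across many simulated "two-month windows",
    # with hit factors drawn from a hypothetical N(7.0, 1.5) distribution.
    estimates = [estimator([random.gauss(7.0, 1.5) for _ in range(sample_size)])
                 for _ in range(trials)]
    return statistics.stdev(estimates)

for n in (14, 639):  # 99-61 vs El Prez run counts from the post
    print(f"n={n:3d}: top-N spread {spread(hhf_top_n, n):.2f}, "
          f"quantile spread {spread(hhf_quantile, n):.2f}")
```

With 14 runs, either estimate wanders by a large fraction of a hit factor from window to window, which is consistent with 19% swings being mostly noise; at 639 runs both settle down.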
  7. It's irrational. Eye protection that complies with MIL-PRF-31013, like the Revision Stingerhawk, costs peanuts compared to a decent scope. It's been tested, and it works. You can have Rx inserts if you need correction, with high-quality lenses of your choice. You can even order them online if you have the prescription. In a PRS match it's unlikely that a bullet fragment will come back at you, or that you'll get a pierced primer. But you don't have too many eyes either, and they don't grow back.
  8. Here's how the distribution by class looks in Production now:

     GM   1.9%
     M    5.1%
     A    7.7%
     B   18.2%
     C   25.4%
     D    9.2%
     U   32.5%

     Those are not official total numbers, just an aggregation of recent matches.
  9. It's a bit tricky to show all that data on one graph, but here's the closest I have. This is based on all 2017 area matches combined, in Production division. I don't have it for the nationals, but you can tell from the other two graphs shown before that it should be similar. R² ≈ 0.57, so classification does rank major match performance, albeit not perfectly. The dashed line is an OLS regression. It's easy to see why the slope is slightly below 1: note the point at (100%, 100%), where all match winners end up. That point is by definition above the average GM result, regardless of which kind of average you pick (mean, median, etc.). That does not mean the current classification system is perfect, either before or after the HHF update. I would actually prefer HHFs to be based on a high threshold estimated from the whole distribution (e.g., mean + 2.5 standard deviations). That way they wouldn't be affected so much by outliers (real or statistical), judgment calls, and hero-or-zero. Another alternative is to assign class by aggregated match rankings. There are algorithms like Plackett-Luce that are designed specifically for that purpose, but they lack the simplicity and transparency of the current system.
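The outlier sensitivity is easy to demonstrate with made-up numbers: 200 hypothetical hit factors plus one hero run. A max-based HHF jumps all the way to the outlier, while mean + 2.5 SD barely moves:

```python
import random
import statistics

random.seed(7)
runs = [random.gauss(7.0, 0.7) for _ in range(200)]  # hypothetical hit factors
hero = runs + [11.9]                                 # one outlier hero run

def hhf_robust(scores):
    # HHF as mean + 2.5 standard deviations of all runs.
    return statistics.mean(scores) + 2.5 * statistics.stdev(scores)

print(f"max-based:  {max(runs):.2f} -> {max(hero):.2f}")
print(f"mean+2.5sd: {hhf_robust(runs):.2f} -> {hhf_robust(hero):.2f}")
```

With a few hundred runs, one outlier shifts the mean and standard deviation by only a small amount, while the maximum tracks the outlier exactly.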
  10. There is some difference between classification and major match performance. The comparison below is for 2017 area matches combined, in Production division. I don't think it's disproportionate, though. Match percent is pegged to the winner of that particular match, so even GMs on average get less than 100% simply by construction.
  11. My current classification percentage in Production is 63.63%. If I re-calculate it today using the same hit factors and the new high hit factors, I'm at 58.75%, back in C class. The overall average across all remaining Production classifiers is a drop of about 2.9 percentage points (that's how much lower the new percentage is for the same hit factor).
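For anyone wanting to reproduce this kind of recalculation: percent on a classifier is just 100 × hit factor / HHF, and the on-record percentages are averaged. The hit factors below are invented for illustration, and the real system averages only the best scores on record, which this sketch skips (the HHFs are taken from the Production table earlier in the thread):

```python
# Hypothetical on-record runs: (classifier, hit factor).
record = [("99-11", 6.80), ("03-05", 6.40), ("99-08", 5.30), ("06-03", 9.60)]
old_hhf = {"99-11": 10.26, "03-05": 9.998, "99-08": 8.058, "06-03": 14.92}
new_hhf = {"99-11": 11.56, "03-05": 11.75, "99-08": 9.392, "06-03": 15.94}

def classification_pct(record, hhf):
    # Percent per classifier = 100 * hit factor / HHF, then a plain average
    # (the real system keeps only the best scores on record).
    pcts = [100 * hf / hhf[clf] for clf, hf in record]
    return sum(pcts) / len(pcts)

print(f"old HHFs: {classification_pct(record, old_hhf):.2f}%")
print(f"new HHFs: {classification_pct(record, new_hhf):.2f}%")
```

Since most new HHFs went up, the same hit factors produce a lower average, which is the mechanism behind the drop described above.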
  12. What does "10-1 group" mean, a ten-shot group excluding the worst shot? If so, this is actually very good.
  13. Had the same issue with several different extractors: factory, PD, Ben Stoeger's Pro Shop. The fix described by Shmella in this post worked. The key insight was that extraction happens when the barrel has already been unlocked and lowered. So it does not really matter how tightly the extractor holds the casing when the casing is centered relative to the firing pin, or how strong the extractor spring is. If the bottom (curved) part of the extractor claw cannot get close enough to the centerline because the extractor foot contacts the slide, the extractor can't get enough purchase on the rim. You do the shake test or see extractor marks on the bottom of the groove and think the extractor can reach deep enough, but that's only true when the casing is all the way up the bolt face. Filing just a little (0.1 to 0.2 mm) off the extractor foot turned out to be sufficient. The ejection pattern became predictable, and the number of stoppages went from 1 in 200 to 1 in 1,000. Tried with two different extractors.
  14. I'm having a similar problem with close targets. On distant targets I have enough time to see the front sight veer left and stop that movement. It is less pronounced if I position my strong (right) hand so that more of the knuckle of the middle finger is under the trigger guard. I suspect that when that knuckle sits a bit to the right, I tend to push the gun to the left with it. The downside is that this position makes it harder to reach the mag release, so I'm still trying to find a compromise.
  15. I'm using 4.0 grains of Alliant Sport Pistol with 125 grain round nose Blue Bullets at COAL 1.120 for PF of 130 out of Tanfoglio Stock 2.