Brian Enos's Forums... Maku mozo!

Everything posted by FingerBlaster

  1. Great question. The pi is WAY too slow to train. You need a TPU or a GPU with CUDA to do the training. My laptop has a CUDA-capable GPU, but I can't get it to work with python/TF, so I gave up. The method I use: I copy all the images to my desktop, zip them, and upload them to Google Drive. From Google Drive I load them into a free Google Colab session and train the model there using Google's GPUs; it takes about 10 minutes to train. Then I download the model and put it on the pi. Not ideal, but it works. If you have a desktop with an NVIDIA GPU you should have no problem training on your computer. Maybe with a USB Coral TPU the pi could do the processing, I honestly don't know. My gut feeling is an NVIDIA Jetson is most likely the fastest SBC to do the training with, since it has a CUDA GPU. The benchmarks I've read comparing the two say the Jetson is faster than the Coral TPU for TF applications. But I honestly don't know if it's fast enough to be practical. If it's going to take more than an hour to train the model, it's more practical to just keep using Colab. I do plan on testing this with a Jetson; in fact I feel like I'm at the point where I can order one now to try it. I also found a huge bottleneck in the system: it takes about a half second to capture a still frame from the camera I'm using, then another 1/4 second to write a mask over the primer. That's .75 of a second, give or take. Right now for manual classification it works fine, but it's not ideal once the system starts going in an automated mode. This is definitely a limitation of the camera or USB. I can get it to go quicker if I drop the resolution or change the mode of the camera. The issue with changing the camera from 'camera mode' to 'video mode' is it adds a lot of noise to the images, and if I drop the resolution the images wind up being too small after cropping them.
Once the components I need to add more motors are in stock, I'm going to order a new camera, which I was planning on doing anyway: going to try the Raspi HQ cam along with a macro lens. You don't need a high resolution if the lens has a small enough field of view. I was planning this anyway because I don't like using the tripod mount on the camera I have right now; it's too easy for the camera to move as is.
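To pin down where that .75 second goes, a small timing harness helps. This is just a sketch, not my actual code: the stage names and callables are placeholders you'd swap for the real capture and mask functions.

```python
import time

def time_stages(stages):
    """Run each (name, fn) stage once and return {name: seconds}.

    `stages` is a list of (label, zero-arg callable); the callables
    stand in for the real capture/mask steps, which are hardware-bound.
    """
    timings = {}
    for name, fn in stages:
        start = time.perf_counter()
        fn()
        timings[name] = time.perf_counter() - start
    return timings

# Hypothetical usage with the real pipeline (names are placeholders):
# timings = time_stages([("capture", grab_frame), ("mask", mask_primer)])
```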
  2. I also tried to improve the neural net's accuracy by blanking out the primer. The neural net doesn't know how to read; you show it hundreds if not thousands of pictures and it learns to identify features, and marks on the primers are features that are totally irrelevant. I used an image processing library that's able to identify circles in an image, and you give it parameters like how many circles and the max and min sizes. I did this because the brass isn't always in the same spot; sometimes it's at the top or bottom of the tube. Then I annotate a copy of the image by putting a mask over the primer. I'm still saving unmasked images to disk just in case this idea doesn't pan out as well as it seems to be. What's cool about this is it will also measure the size of the circle, so once I get to .45 it should be trivial to identify small primer pockets. I was able to just edit all the existing pictures I took, and add a little code to process the image in real time before passing it to TensorFlow to classify. I just ran through a batch of 200 pieces of brass with the "assisted training mode" I talked about in the video. The accuracy was amazing given I still have an incomplete training set. Not good enough to have it run on auto, but I only had to override its decision on about 10 pieces of brass. Even in this mode it's awesome. I can't wait until it's running full auto. At the moment I think the biggest issue is the distribution of brass that I have. These are how many pieces of each headstamp I have so far; as you can see the distribution is very heavily weighted towards FCNT and then another 3 headstamps. Ideally I would have an even distribution across the board.
1 : GFL
2 : A-Merc
2 : IMI
2 : MFS-380
2 : Midway
2 : Pierce
2 : RWS
3 : Aguila
3 : C-B
3 : PPU
3 : WCC
3 : WMA
4 : S&B
7 : Geco
18 : CBC
23 : Tula
46 : Empty
48 : Speer
58 : Blazer
65 : PMC
325 : WIN
344 : FC
436 : RP
1174 : FCNT
  3. Mounted the next flappy gate assembly; it worked perfectly, I had to do zero tuning, and frankly I think you could add 3-4 more before having to start adding delays to slow things down. I'm super pleased with the performance. The unfortunate part is that the $20 motor controller board I'm using is currently out of stock, as are the $2 DC motors I'm using to control the gates. So I can't add any more output buckets at the moment.
  4. Huzzah! It's working! I didn't have a chance to mount the second module, but the computer vision algo is working. I think it still needs a LOT more training to get more accurate (I still need to classify and tag a couple thousand more pictures), but now I can use a partially trained model to help the system train itself in a supervised mode. Basically fewer keystrokes. Here are 2 videos, one showing the mechanical design and the other showing the software, including a decently working model. You may need to wait until YouTube processes it in HD to be able to read the screen.
  5. I made 2 small tweaks to the CAD model and I'm printing another flappy gate assembly. The one I have works fine, just making a couple small improvements. I'll get to test mating them and running them together, and when done I should hopefully be able to sort 5 different headstamps. After I get all that working, I'll record a quick demo video of how everything works doing semi-manual sorting and acquiring images. The plan is to acquire an entire new set of training images, at least 500 of each headstamp. Which may be a challenge since most of my 9mm brass is the same headstamp of Federal non-toxic, from a couple buckets I got from a police range. The stupid stuff has a larger than normal flash hole... My Dillon case feeder keeps jamming: 9mm cases keep binding up between the bowl and the plate and stalling the clutch, so I have to babysit the thing constantly until I figure out what's wrong with it, or take a different case feeder off one of my other presses. That way I can just semi-manually sort brass while I'm sitting on the couch taking conference calls and the brass sorter is chilling in the basement.
  6. No need, but thank you. I want to get this working with one caliber before I move to another. I'm trying to limit variables like lighting, focal distances, etc., so any pictures you take may not work when I'm ready to test. And now I'm at the point where I can automate image capture, so why go through the pain. I have everything mounted to a more solid assembly, just a quick trip to Home Depot and some scraps I had. Actually no longer having any double-feed issues, and for the moment everything's going smoothly. My janky code is a huge bottleneck at the moment; once I get the image classification working better it's going to need some TLC. I'm really not a mechanically inclined person at all, and I have very limited programming experience, but I am really good with computers and tech in general. I honestly have no clue what I'm doing; I'm just plowing forward and I keep trying things until I find something that works. I've learned enough basic concepts to putz my way through thus far. So I really believe anyone with enough determination can get as far as I am today. I'm actually pretty happy with the mechanical portion of this, to the point where I think I'll be ready to share the files soon. I have a few more tweaks I want to make, but everything is working mechanically. I'll try to take a video to post in the next day or two, but here's the current progress/workflow: every time a piece of brass is "loaded" the camera takes a snapshot. I have a web page built right now that is a live stream of these snapshots, and it updates every time a new snapshot is taken.
(Eventually, and this is currently way out of my wheelhouse, I want the whole app to be managed through a web interface, so I'll have some more learning to do.) I then have a very basic python program that runs in a text console and processes each image: it predicts what headstamp it is (this part is not working well at all, maybe 25% accuracy at this point) and gives you the option to manually select what it actually is. After you select what it is, it saves and tags the image for training the model, then triggers the "flappy gates" for where the brass should go (this part works perfectly), and drops the brass. So manually selecting the brass, and the flappy gates sorting the brass, is working great. My code overall is slow; I know of a few issues I can address to speed it up, but first I need to get it working right before I get it working fast. The initial model I generated SUCKS. You could roll a die and get a better prediction of what the brass actually is. I've been trying to change the parameters and retrain the model over and over with no luck; I feel like I'm hitting a wall, which is frustrating given how great my initial test case was. I'm going to capture more training images and start from scratch. Maybe I don't have enough images? Maybe I have too many? Maybe something changed that I can't see and all the old images are junk. I have no idea. I really don't know enough about how TensorFlow image classification works to know the issue. I just bought a book on TensorFlow and I've started reading it; my hope is that I might learn something, but I'm going to keep putzing along and trying stuff. I feel like I'm really close.
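The predict-then-override step above boils down to a small loop. A rough sketch of the idea, assuming the usual folder-per-label layout that image classification training tools expect; every name here is a placeholder, not my actual code.

```python
from pathlib import Path

def classify_piece(image_bytes, predicted, ask_user, train_dir="training"):
    """One pass of the assisted loop: show the model's guess, let the
    operator confirm or override, then file the image under a folder
    named for the final label so it can feed the next training run.
    `ask_user(predicted)` returns the corrected label, or '' to accept
    the model's guess.
    """
    label = ask_user(predicted) or predicted
    dest = Path(train_dir) / label
    dest.mkdir(parents=True, exist_ok=True)
    # sequential filenames: 000000.jpg, 000001.jpg, ...
    path = dest / f"{len(list(dest.iterdir())):06d}.jpg"
    path.write_bytes(image_bytes)
    return label, path
```

After saving, the real program would fire the flappy gate mapped to `label` and drop the brass.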
  7. I'm not using any sensors right now, just relying on knowing the cases fall at a known speed: gravity. So they should all behave the same for the most part, unless the odd one gets hung up for some reason such as debris or case deformation. That said, I ran into a few issues running things on the edge of maximum speed with a full tube of cases. I think it was related to the weight, or the position of the baling wire not getting enough force on the cases... I was having 2 cases drop at a time; I think there wasn't enough friction between the baling wire and the cases... I didn't even try to find the sweet spot, I just doubled every single sleep between the servo control steps and stopped getting double feeds. I just want to do image acquisition and work more on the training and image classification portion of the project... I'll work on speed if I have downtime; otherwise the priority is classification now. I took about 1,000 pictures today of 4 different types of brass: FC, FCNT, WIN, RP. Those were all I had presorted and dry tumbled. I'll train the algo on these 4 brands, tumble some fresh brass, and see if I can identify these 4 brands plus the output bin. If it works I'll print and set up the flappy gates and see if I can start presorting at least a couple brands, then generate more training data and more brands to identify. Here's a video I took of today's progress; ignore the messy workbench. The contraption is haphazardly put together, including the ring light kind of just "hanging", and hooking it up to the case feeder was done with some tubing I had on hand plus a coupler I quickly sketched and 3d printed. Nothing more than a temporary assembly right now to take some pictures. I went through all the images captured and only a handful had to be thrown out.
  8. Thanks for the thoughts. I'm mixed on having the camera straight underneath or not; debris is definitely a concern. However, for the time being I'm going to keep things at an angle since I already have a mechanical prototype that seems to be working, and working probably faster than my case feeder can spit out brass. I spent a little time today cleaning up the code. My next block of free time will be spent hooking it up to my case feeder and working on model training and classification. Pending no major issues, I think a working prototype can be done with a couple more days' work. Of course that still might mean a few weeks in real-world time. After I'm satisfied with 9mm, 5.56 will probably be my next iteration. From there, other pistol and rifle calibers should require minor changes to the files. I definitely plan on handling .45 ACP and .308 as well. Later on down the line I'm strongly considering utilizing some hinged micro switches to detect the presence of brass. See the attached image.
  9. Sensors could be helpful; even a couple basic micro pressure switches would take out some of the guesswork. Might be worth considering. However, since I'm not sure a case feeder can even push cases fast enough, I'm unsure at the moment if it's necessary. Yeah, I didn't explain the buffer issue well. I request an image from the webcam, but the buffer has an image that's 1/6th of a second old. So the case is ready and in position for its close-up, but the buffer on the webcam has an image from 1/6th of a second ago when the case was still falling, making it appear that I took the shot early. Took me a while to figure out what was going on.
  10. Had a little time to work on image capture this afternoon. Was working on taking pictures and cycling brass. Since this is set up in my office right now while I work on the code, I'm limited to testing 5 pieces of brass at a time. At best speed, after a lot of tuning, I was able to capture images of 5 pieces of brass in 2.1 seconds, or about 2.4 pieces of brass a second. I'm running into 2 issues. 1.) The brass can only move through the chute so fast, and trying to tune the servo speed is guesswork. 2.) NERD s#!t: there is a small buffer with the USB camera and opencv, it seems to be about 4-5 frames, and I can't seem to turn it off. When I ask for an image it may be 1/6th of a second old, so the brass hasn't come to rest yet. If I flush 4 images out of the buffer before taking a capture it's fine, but it adds time. Otherwise I have to put a fairly long sleep before taking the image, which is a lot slower. While I have a couple ideas on the buffer issue, such as running a second thread that's constantly flushing the buffer, it's not a high priority at the moment since this works at a good enough speed. Next, if I'm right and I can classify an image in .3 to .5 of a second, I'm looking at about 2 pieces of brass per second [2.4 (to image capture and cycle) minus the .5 to classify]. Which I don't think is that bad: 7,200 pieces an hour. I'm honestly not even sure how fast the Dillon case feeder can spit out 9mm. Next step is to set this up in the basement with my case feeder, start taking a lot of pictures, and train a TensorFlow model.
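The second-thread idea from issue 2 above would look roughly like this: a background reader that drains the camera buffer continuously, so `read()` always hands back the freshest frame. A sketch only; it works with anything exposing `cv2.VideoCapture`'s `read() -> (ok, frame)` interface, so you could pass it `cv2.VideoCapture(0)` in the real rig.

```python
import threading

class LatestFrame:
    """Keep reading from a capture source on a daemon thread so stale
    buffered frames are drained; read() returns the newest frame seen."""

    def __init__(self, source):
        self._source = source
        self._lock = threading.Lock()
        self._frame = None
        self._running = True
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def _loop(self):
        while self._running:
            ok, frame = self._source.read()
            if ok:
                with self._lock:
                    self._frame = frame

    def read(self):
        with self._lock:
            return self._frame

    def stop(self):
        self._running = False
        self._thread.join()
```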
  11. I saw that, very cool. I have almost a dozen .30 cal ammo cans full of pulled bullets, and I found a few doing random samples that are the wrong weight, 124 vs 115 or 147 for example. So now I intend to weight-check every single one of them; something similar to what you did will be my next gadget. My concern with using a piece of glass is that it's going to get snotty and obscure the image. Consistent lighting is also really important, and another concern of mine is glass causing additional glare, not to mention debris getting on the glass. I haven't tested the image recognition algorithm on a raspberry pi yet, but my hope is that it will be capable of at minimum 2 to 3 images per second. An NVIDIA Jetson can do much more. I don't think the Coral TPU is as fast as the Jetson, but it's still much faster than a pi by itself. When I tested this on my computer, my model was classifying an image in .05 of a second, much faster than I can mechanically sort. An NVIDIA Jetson is very similar to a raspberry pi; I think they go for about $100, but they have a CUDA GPU that is very suited for neural net applications. Image classification speed, whether you run this off of a computer, a Jetson, or even a Coral, should not be a limiting factor; your rig that images and sorts the brass will be. Someone messaged me suggesting the cogged belt that the hasgrok sortinator 2000 uses would be a better idea, and it definitely would be, for a few reasons. I might approach a similar concept for v2 of this.
  12. Thanks! My first time taking on anything like this, and I'm learning multiple skills as I go. I'll include a basic walkthrough of setting it up once I get it working. Once I get a working prototype, all the source code and CAD files will be shared; once I put it out there, anyone can do with it as they wish. Hopefully some will help improve the design and provide feedback to me. I'm more than happy to collaborate with others. I really have no idea what I'm doing, and there are plenty of people out there much smarter than me that probably have great ideas. To start though, I really want to get a fully working prototype before I start sharing CAD files and source code, just as a personal goal to get this going. That's not a bad idea at all, thank you. It hadn't crossed my mind, and it was right in front of my nose!
  13. Put a little work in on the sorting mechanism. Still not done. I ran it a little harder today and ran into a few small issues. Going to make a few dimensional tweaks and hope that they're sufficient: some of my tolerances are too tight for what my 3d printer can produce, and I'm not providing the motor shaft enough surface area to interface with the flappy door, so the door popped off. Here are 2 videos showing how the flappy doors work, putting in 3 pieces of brass: one to go left, one to go straight down, and one to go right.
  14. So right now this is all conceptual. I have some core components I've tried that worked on a bench as isolated trials. For image capture, I'm using a USB web cam, and the plan is to use opencv to capture a cropped image. I just need to pop over to the hardware store to grab a 1/4-20 machine screw to mount the camera so I can start capturing images. I have 2 ideas on how to do image classification right now: 1) train a neural net using the tensorflow python library, or 2) use opencv to do a polar unwrap (i.e., unwrap the text on the headstamp so it's in a straightish line) and then use OCR to identify the text on the headstamp. The tensorflow idea is my top choice. It'll be more computationally taxing, but it has more potential: identify nickel-plated brass? And small vs large primer .45? I do have a working basic tensorflow model that had a promising accuracy rate during my trial.
  15. Worked around the issue of the servos going nuts and breaking all the mounts. Still working on the timing; running it pretty slow right now for testing purposes. Right now the plan is to mount the gate at an angle to take the pics, then have a deflector so the brass goes straight down. I am concerned that if the brass is moving at an angle, things will run a little slower than if the brass were falling straight vertical. Taking pictures that aren't perpendicular to the brass probably won't be a huge issue, but I'll have to test it eventually.
  16. One more update for today. This is the current iteration of the assembly that interfaces with the case feeder. Just finished the print and assembled it to take a pic, but I'm hesitant to run it until I fix the servo issue. Going to either use a converter to step up the logic signal to 5V or get new servos; hoping that takes care of the issue. Made a few small changes from the last version of this I printed, but the last version worked on the bench until 2 of the servos went haywire and snapped their mounts. I was able to continuously feed brass into it and have it spit it out 1 piece at a time. Once I fix the servo issue, I think it's ready to mount up along with a camera and start taking pictures. I'm not sure if the 18ga galvanized steel baling wire I'm using is the best tool for the job, but it seems to be working and was readily available at Home Depot.
  17. I think I figured out my random servo movement issue. The servos I'm using need 5V logic, but the controller I'm using outputs 3.3V logic. Going to order new servos that accept 3.3V logic input. I changed the design of the assembly that interfaces with the case feeder: the entire assembly will be caliber specific instead of trying to make it universal. Right now the focus is 9mm, since I have buckets full of it. This is easier for now to design and get to a prototype I'm ready to share; eventually I'll try to do something more universal, I have a couple ideas. I also had too many issues getting the servos and wire at the right angle, so I 3d modeled a mounting plate for the servos to align them properly with the tube, instead of just freestyling it with plywood... The first test seemed to work great, until the whole thing grenaded because the servos moved WAYYY further than designed and broke all the 3d printed mounts. I'm going to try a super small LED light ring, one of the small neopixel-type devices. Might be too small, but going to try the one that's about 1.5" OD and 1" ID.
  18. Made very little progress this weekend. Got everything assembled and mounted, and started trying to figure out the timing of the brass control. I didn't get very far, but I was able to get it to control the flow of brass, just not very well. I was also getting some unpredictable behavior from the servos that I have to look into: at times they would just randomly rotate to extreme angles and destroy the baling wire I was using. Unfortunately I spent most of the weekend with one of my cats at the veterinary emergency hospital; still not sure what's going on with her.
  19. I haven't posted this anywhere else yet. Once I get a prototype working I'll make a post on arfcom and a few other popular places. I also need to come up with a name for it. Didn't make any progress this week, too busy with life, but going to put another day into it this weekend. My goal is to get the module that takes brass from the case feeder and presents it 1 by 1 to the camera timed properly, get the camera mounted, and work on the code a bit for image capture and classification. I don't know, that's a good question, but I have another project I want to do to sort bullets by weight. I was thinking of utilizing my Mr. Bulletfeeder and a 3d printable robotic arm in combination with my lab scale that has a serial output.
  20. Apparently I can't screenshot, trying again. Also, I didn't answer your camera question. An endoscope could be a good idea, but right now I'm using a USB cam I got off amazon with a variable zoom lens and a manual focus that seems to work well. Trying to zoom in on that headstamp as much as possible and make sure it's crisp/clear.
  21. Yes, the wires/pins are moved by servos. The gates for the sorting mechanism are run off of cheap DC 130-style hobby motors. I'm not sure how the hobby motors will hold up long term, since they need voltage applied to stay closed (closed means no brass is going out of the gate), aka they're stalling out and generating heat... Right now (v3) they only swing up to about 80 degrees, so they want to fall open at rest (to divert brass into the output tube) and need more voltage than I'd like (about 50% PWM) to keep them closed. The next iteration of the sorting assembly will allow them to swing to more like 100 degrees (past 90), so they should need no voltage to stay closed, but I would still apply like 10% PWM for good measure. I don't want to use servos here on the gates, to keep costs lower, plus this is simpler to implement: you just press fit the shaft of the DC motor into the door, probably with a dab of glue for good measure on final assembly. I've tested this with 223 and 9mm so far and it worked fine, playing around with some basic code snippets to make it go left door or right door. It should be fine for .45, but I haven't tested any yet. Additional components so far are a pi 4 with a heatsink, a servo shield, and a DC motor shield from adafruit, wired up using ribbon cables. Once I clean up a few more things I'll post the files; the code right now is nowhere near done, all I have is a working TF model and some simple snippets I wrote to test the servo and motor control. Here's a picture of the various iterations, but you can see the nesting idea to expand it pretty easily to as many output gates as you want to print... The benefit over the hasgrok design is you don't have to wait for a pipe to swing around; the downside is it's a lot of 3d printer time to print all these parts. The version on the bottom with the mounted motor is the most recent. The 3d printer supports were still on the bottom part when I took the pic.
The idea is to cable clamp some flexible tubing on the end of these output tubes.
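The hold-voltage logic above is simple enough to sketch. The specific duty cycles are the rough figures from this post, not measured values, and you'd tune them on real hardware; the commented shield call is an assumption about how it would be wired up, not tested code.

```python
def gate_throttle(closed, swing_deg, hold_pwm=0.5):
    """Duty cycle needed to hold a flappy gate in place.

    With the current ~80 degree swing the door wants to fall open, so
    holding it closed takes real torque (~50% PWM per the post). Past
    90 degrees gravity does the holding, and a token ~10% is enough for
    good measure. Open is the rest position, so no drive is needed.
    """
    if not closed:
        return 0.0
    return 0.1 if swing_deg > 90 else hold_pwm

# Hypothetical use with a DC motor shield (names assumed, untested):
# kit.motor1.throttle = gate_throttle(closed=True, swing_deg=80)
```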
  22. I've printed 2 more iterations of the sorting module, so the 3rd one assembled and worked after adding in 3d printed supports! There are still a couple issues I've identified that I will have to work on, but it's functional. Next step is the module that interfaces with a case feeder and both controls the flow of brass and allows taking a picture. I have the first version printed, and on my table top it seems to work; I need to get some material from the hardware store. For now I'm going to use a series of wires/pins to control the flow of brass, but I'm kind of tempted to try something similar to the jaws on a Forster Co-Ax to grab the brass behind the rim for the final stage where it takes the picture.
  23. Throwing up a quick update. I got the first prototype of the sorting apparatus printed Monday. The tolerances and 1 key dimension were off, and I had some issues with bridging; I will probably need to print with supports, or maybe alter the design to negate the bridging issues. Either way, despite the fact that it needs some refinement, I'm on the right track. Frankly, for a first print prototype I'm really happy with how it came out. Hopefully I'll have some time to update the design this weekend and try again.
  24. Oh yes, first and foremost I'm building this for me, but I'm trying to be mindful of what others may want, as well as what "future me" may want. I'm posting publicly because I'm interested in feedback, especially from people with more know-how than I have, on what may be good and bad ideas, or things I didn't consider. Also, using these posts as a journal of sorts helps me organize my thoughts. I just ordered all the electronics equipment I'll need to get started on this, including another pi and enough motors, servos, and controllers to manage 4 output gates. About $200 in cost, plus the USB camera I'm currently using, which was about $80. The first arducam didn't cut it once I started designing it and figured out a test jig; I went with something with a manually adjustable zoom lens. There will be ways of cutting some cost in the future, but I went for ease of assembly, modularity, and some parts being a little overkill vs cost savings that would result in more work and possibly poor results. And I really think that someone with good mechanical skills could do most if not all of this without a 3d printer; nothing I'm printing is really that special, and I could envision the design working with a little know-how around a shop and some backyard fabrication. The output gate design is totally modular: the gates just stack on top of each other with a simple slip fit and 2 screws. Picture each gate module as the center section of a peace symbol. I'm going to start with 2 modules, which should allow me to classify 4 types of brass, plus if no gates fire the brass falls straight down into a 5th bin for "unknown brass". Right now I'm going to start off using a Dillon case feeder. That will certainly be a bottleneck that needs to be addressed in the future, and will definitely be the limiting factor in speed despite what the mechanism may be capable of. I haven't given it much thought yet as it's not a high priority.
I was thinking about the project last night and decided my next steps should be to work on the training module. I'm going to have 2 different training modes, supervised and unsupervised. After I get that done and get stuff printed and assembled, I'll work on the servo and gate timing, and if everything works, get some training going with more types of brass. Supervised: you dump unsorted brass into the case feeder, and as each piece of brass is loaded to have its picture taken, you assign what it is. Unsupervised: you presort brass and drop a couple hundred cases of the same headstamp in the collator, and we just tell it that every piece of brass it sees in this cycle is X. I'm not sure how many photos will really be required to get a good training model; that's something I'll have to mess with. That's all for now.
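The two training modes really only differ in where the label comes from, so they can share one capture loop. A sketch of that idea, with placeholder names; the `ask` hook is injectable so the same code could later be driven from a web UI instead of a console.

```python
def make_labeler(mode, fixed_label=None, ask=input):
    """Return a function image -> label for the two training modes.

    supervised:   the operator names each piece as it's photographed.
    unsupervised: brass is presorted, so every piece this run gets the
                  same `fixed_label` with zero keystrokes.
    """
    if mode == "unsupervised":
        if not fixed_label:
            raise ValueError("unsupervised mode needs a fixed label")
        return lambda image: fixed_label
    return lambda image: ask("headstamp? ")
```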
  25. Hope y'all aren't getting mad that I'm spamming updates, pretty excited about this. I retook all the images yesterday, about 900 images in total. Worked on the code yesterday and this morning and got a result that I would call a success. It's not perfect, but it works well enough, with a low error rate. Everything above 88% certainty was correctly categorized. I still had some miscategorizations, but very few, and they were also assigned a low certainty. Anything below some certainty threshold, I'm thinking 95%, will get thrown to the reject bin. I threw it a bunch of headstamps it had not been trained on, and it would have failed the certainty test on all of them (under 60%) and put them in the unknown bin. It will need further fine tuning, and I think I know some next steps to improve accuracy, but I'm mostly done with this part of the code for now. Once I get the device built, I can easily take a LOT more photos, retrain the model, and probably get some better results. There is still a lot of work to do from a coding perspective: controlling the electronics, getting all the timings right, and a graphical user interface. I was running some numbers on the performance I can expect. Using google colab with GPU acceleration I'm classifying each image in .04 seconds. I will need to convert everything to TFLite and see how it runs on a pi; my hope is .25 sec per image. If not, I'll order a Jetson and try that with GPU acceleration. I'm not there yet. The maximum speed of the mechanical sorting mechanism will be proportional to the number of output bins desired. I think with 8 output gates we could do 3-4 pieces per second, and almost double that speed if we shrink the main chute length to only handle pistol brass, or have fewer output gates.
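The certainty-threshold routing above is a few lines once the model hands back its softmax output. A sketch under that assumption; the 0.95 default matches the figure in the post but should be tuned against real miscategorizations.

```python
def route(probabilities, labels, threshold=0.95):
    """Pick a bin from the model's per-class probabilities: take the
    top class only when its certainty clears the threshold, otherwise
    send the case to the unknown/reject bin."""
    best = max(range(len(probabilities)), key=probabilities.__getitem__)
    if probabilities[best] < threshold:
        return "unknown"
    return labels[best]
```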