Brian Enos's Forums... Maku mozo!

Machine vision to sort by headstamp



For image capture I was just going to try a glass slide instead of the load cell, and have the cases pushed out onto it from the collator one by one, like I did with the sorter by weight, just with a glass slide in place of the load cell.

How fast one could get the image capture and logic to work would dictate the direction I went for the sorting device; there are lots of ways to do that part.


55 minutes ago, jmorris said:

For image capture I was just going to try a glass slide instead of the load cell, and have the cases pushed out onto it from the collator one by one, like I did with the sorter by weight, just with a glass slide in place of the load cell.

How fast one could get the image capture and logic to work would dictate the direction I went for the sorting device; there are lots of ways to do that part.

I saw that, very cool.  I have almost a dozen .30 cal ammo cans full of pulled bullets, and doing random samples I found a few that are the wrong weight, 124 vs 115 or 147 for example.  So now I intend to weight-check every single one of them.  Something similar to what you did will be my next gadget.

 

My concern with using a piece of glass is that it's going to get snotty and obscure the image.  Consistent lighting is also really important, and another concern of mine is the glass causing additional glare, not to mention debris getting on the glass.

 

I haven't tested the image recognition algorithm on a Raspberry Pi yet, but my hope is that it will be capable of at least 2 to 3 images per second.  An NVIDIA Jetson can do much more.  I don't think the Coral TPU is as fast as the Jetson, but it's still much faster than a Pi by itself.  When I tested this on my computer, my model was classifying an image in 0.05 seconds, much faster than I can mechanically sort.

 

An NVIDIA Jetson is very similar to a Raspberry Pi; I think they go for about $100.  But they have a CUDA GPU that is well suited to neural net applications.

 

Image classification speed should not be a limiting factor, whether you run this off a computer, a Jetson, or even a Coral.  Your rig that images and sorts the brass will be.

 

Someone messaged me suggesting that the cogged belt the hasgrok sortinator 2000 uses would be a better idea.  And it definitely would be, for a few reasons.  I might approach a similar concept for v2 of this.

Edited by FingerBlaster

Had a little time to work on image capture this afternoon; was working on taking pictures and cycling brass.  Since this is set up in my office right now while I work on the code, I'm limited to testing 5 pieces of brass at a time.

 

At best speed, after a lot of tuning, I was able to capture images of 5 pieces of brass in 2.1 seconds, or about 2.4 pieces of brass a second.

 

I'm running into 2 issues.

1.) The brass can only move through the chute so fast, and tuning the servo speed is guesswork.
2.) NERD s#!t: there is a small buffer with the USB camera and OpenCV, it seems to be about 4-5 frames, and I can't seem to turn it off.  When I ask for an image it may be 1/6 of a second old, so the brass hasn't come to rest yet.  If I flush 4 images out of the buffer before taking a capture it's fine, but that adds time.  Otherwise I have to put a fairly long sleep before taking the image, which is a lot slower.

 

While I have a couple of ideas on the buffer issue, such as running a second thread that constantly flushes the buffer, it's not a high priority at the moment since this works at a good enough speed.
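A second thread that keeps only the freshest frame is a common way around OpenCV's capture buffer.  Here's a minimal sketch of that idea; `read_frame` is a stand-in for whatever actually grabs a frame (e.g. `cv2.VideoCapture.read` on the real rig), and all names here are illustrative, not the project's actual code.

```python
import threading

class LatestFrameGrabber:
    """Background thread that keeps only the newest frame, so the main
    loop never sees a stale buffered one.  `read_frame` is a stand-in
    for the real capture call (e.g. cv2.VideoCapture.read)."""

    def __init__(self, read_frame):
        self._read_frame = read_frame
        self._lock = threading.Lock()
        self._latest = None
        self._running = True
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def _loop(self):
        while self._running:
            frame = self._read_frame()      # continuously drains the buffer
            with self._lock:
                self._latest = frame

    def latest(self):
        """Return the most recent frame, never an old buffered one."""
        with self._lock:
            return self._latest

    def stop(self):
        self._running = False
        self._thread.join()
```

Because the grabber reads continuously, a call to `latest()` returns whatever the camera saw most recently instead of the front of a 4-5 frame queue.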

 

Next, if I'm right and I can classify an image in 0.3 to 0.5 seconds, I'm looking at about 2 pieces of brass per second (2.4/s to image-capture and cycle, 0.5 s to classify), which I don't think is that bad: 7,200 pieces an hour.  I'm honestly not even sure how fast the Dillon case feeder can spit out 9mm.
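The ~2 per second / 7,200 per hour figure works out if the 0.5 s classification overlaps the mechanical cycle rather than running back to back. A quick sanity check (my assumption about the overlap, not a measurement):

```python
# Rough throughput check for the numbers quoted above.
capture_rate = 2.4        # pieces/second for image capture + cycling (measured)
classify_time = 0.5       # seconds per classification (worst-case estimate)

cycle_time = 1 / capture_rate   # ~0.42 s of mechanics per piece

# If classification runs while the next case cycles (pipelined),
# throughput is limited by the slower stage:
pipelined = 1 / max(cycle_time, classify_time)

# If everything happens back to back instead:
sequential = 1 / (cycle_time + classify_time)

print(round(pipelined, 2), round(pipelined * 3600))  # → 2.0 7200
```

Run sequentially it would be closer to 1.1 pieces per second, so overlapping the two stages is where the 2/s number comes from.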

 

 

Next step is to set this up in the basement with my case feeder, start taking a lot of pictures, and train a TensorFlow model.

 

4.jpg

5.jpg

1.jpg

2.jpg

3.jpg

Edited by FingerBlaster

Perhaps use sound, a touch sensor, or a light break to synchronize the picture to the case: have a picture frame requested, then let the case advance to the next sensor, where it will be known to be the case just pictured.

I think I'd test how long each picture takes to process, and see who arrives first: the case at the next sensor, or the case identified.

 

eta: my case feeder is definitely faster than 800 rounds per hour.  I have no way to easily test past that.  My guess is that with the feeder relatively full, it can go at 1,600.

 

And I see I misread your speed tests and buffer problems; the case will arrive first.

 

miranda

 

Edited by Miranda
forgot part of my answer

Sensors could be helpful; even a couple of basic micro pressure switches would remove some of the guesswork.  Might be worth considering.  However, at the moment, since I'm not sure a case feeder can even push cases fast enough, I'm unsure it's necessary.

 

Yeah, I didn't explain the buffer issue well.  I request an image from the webcam, but the buffer has an image that's 1/6th of a second old.  So the case is ready and in position for its close-up, but the webcam's buffer has an image from 1/6th of a second ago, when the case was still falling, making it appear that I took the shot early.  Took me a while to figure out what was going on.


I have no input on how to solve any of the software issues you might run into, but I'd like to help as much as I can.

I wouldn't worry about speed too much; the machine can run autonomously while you do other stuff in the meantime, still a lot of hours saved.

Maybe you can monkey-patch the buffer issue by only requesting a picture every 0.6 seconds?  Wouldn't that give the case enough time to get in position and stop moving/bouncing around?

Also, what will the machine do if it takes a blurry picture, or there is no case in position, or the case is upside down?  Will it wait for a case to arrive, or just open the channel so whatever is in there falls down into a "human inspection container"?

 

For taking the images I would also avoid having anything that can get dirty between the camera and the case.  I think feeding them as you plan right now is by far the easiest way of doing it.  Maybe have the mechanism at a slight angle so the camera is at 90 degrees to the case and any debris falls down, not onto your camera.  Then have the machine perform the following steps.

1. Feed case to picture station.

2. Take picture.

3. Flappy door opens to redirect the case to the waiting station.

4. Release case from the picture station, so it falls down and away from the camera to the "parking station".

5. Machine does its magic and figures out the brand of the case.

6. Open and close flappy doors in the sorting tunnel to direct the case to the correct exit.

7. Release case from the parking station, which lets it fall into the correct container.

 

If there is enough processing power, maybe you can have the machine start a new cycle from steps 1-3 while the first case is still waiting its turn to go down the sorting tunnel.
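The steps above can be sketched as one cycle of a simple control loop.  Every hardware call here is a hypothetical placeholder, not the project's real API; the point is only the ordering:

```python
# One cycle of the 7-step sorting sequence described above.  All the
# `hw` methods are made-up stand-ins for the real servo/gate control.

def sort_one_case(hw, classify):
    hw.feed_case()                 # 1. feed case to picture station
    image = hw.take_picture()      # 2. take picture
    hw.open_redirect_door()        # 3. flappy door redirects the case
    hw.release_to_parking()        # 4. case drops away from the camera
    brand = classify(image)        # 5. model figures out the headstamp
    hw.set_sorting_gates(brand)    # 6. set gates for the correct exit
    hw.release_from_parking()      # 7. case falls into its container
    return brand
```

With enough processing power, step 5 for case N could run while steps 1-3 execute for case N+1, which is exactly the pipelining idea.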

 

BTW: I hope to be able to make the needed changes to use the sorter for .223 brass.  I don't sort 9mm, as the Dillon 1050 will load mixed 9mm to my satisfaction without issues for several hundred cases.  But mixed .223 brass is harder to process and load, and I have primer seating issues every 10-50 rounds due to inconsistent swaging (my best guess).  I just sorted a bucket to see if this solves my problems.

 

Best regards from Switzerland

 


Thanks for the thoughts.  I'm mixed on having the camera straight underneath or not; debris is definitely a concern.  However, for the time being I'm going to keep things at an angle, since I already have a mechanical prototype that seems to be working, and probably working faster than my case feeder can spit out brass.  I spent a little time today cleaning up the code.

 

My next block of free time will be spent hooking it up to my case feeder and working on model training and classification.  Pending no major issues, I think a working prototype can be done with a couple more days' work.  Of course, that still might mean a few weeks in real-world time.

 

13 hours ago, Hansimania said:

I hope to be able to make the needed changes to use the sorter for .223 brass.

After I'm satisfied with 9mm, 5.56 will probably be my next iteration.  From there, other pistol and rifle calibers should require only minor changes to the files.  I definitely plan on handling .45 ACP and .308 as well.

 

 

Later on down the line I'm strongly considering using some hinged micro switches to detect the presence of brass; see the attached image.

51uFmrKvdHL._SL1002_.jpg


hmmmm...

OK, I can't see how it's possible to make an imaging and sorting system without at least 3 sensors.  The camera is one.  A way to know when to take the image is the second.  Depending on overall speed, at least one more (a third) to know you have the case synced to the image, for setting gates or awaiting image processing, if not both of these last two.

 

I would be very tempted to use optical or light-break methods for case position sensing; micro switches tend to wear and can interfere with cases in motion.  Dirt is an issue with either choice, buuuuut light can sense a case in a clear tube.  The camera may have an issue with an optical sensor.  Perhaps the light can be used to illuminate the case head, and the sensor sees the light go off as the case arrives.

 

planning my own I see I am doing.

 

miranda

 

 

 


I'm not using any sensors right now, relying instead on the cases falling at a known speed under gravity.  So they should all behave the same for the most part, unless the odd one gets hung up for some reason, such as debris or case deformation.

 

That said, I ran into a few issues running things at the edge of maximum speed with a full tube of cases.  I think it was related to the weight, or to the position of the baling wire not putting enough force on the cases.  I was having 2 cases drop at a time; I think there wasn't enough friction between the baling wire and the cases.

 

I didn't even try to find the sweet spot; I just doubled every single sleep between the servo control steps and stopped getting double feeds.  I just want to do image acquisition and work more on the training and image classification portion of the project.  I'll work on speed if I have downtime; otherwise the priority is classification now.  I took about 1,000 pictures today of 4 different types of brass: FC, FCNT, WIN, RP.  Those were all I had pre-sorted and dry-tumbled.  I'll train the algo on these 4 brands, tumble some fresh brass, and see if I can identify these 4 brands, with everything else going to an output bin.  If it works I'll print and set up the flappy gates and see if I can start pre-sorting at least a couple of brands, then generate more training data and more brands to identify.

 

Here's a video I took of today's progress; ignore the messy workbench.  The contraption is haphazardly put together: the ring light is kind of just "hanging", and hooking it up to the case feeder was done with some tubing I had on hand plus a coupler I quickly sketched and 3D printed.  Nothing more than a temporary assembly right now to take some pictures.  I went through all the images captured and only a handful had to be thrown out.

 

 

 

Edited by FingerBlaster

Wow, I am amazed at the progress.  If you let me know how I need to take pictures, I could manually take pictures of .223 brass.  I just hand-sorted 4 buckets of Frontier, S&B, GGG, and a small amount of other brands.

To counter the double feeds, can't you just adjust the pressure on the cases by lengthening or shortening your adapter hose?  That way you can play with the angle and the amount of brass that can be stacked, and therefore the brass will always feed at the same speed.

 

What you call a mess and a haphazardly put together machine is more than I could ever come up with.  I have a Mark7 autodrive coming for my 1050 Super and hope I can get that to work without too many issues.

 

If you want to add sensors later, why not use inductive sensors?  You might even be able to separate the steel and nickel-plated cases from brass, if I understand correctly?


On 4/17/2021 at 4:03 PM, Hansimania said:

I could manually take pictures of .223 brass

No need, but thank you.  I want to get this working with one caliber before I move to another.  I'm trying to limit variables like lighting, focal distances, etc., so any pictures you take may not work when I'm ready to test.  And now I'm at the point where I can automate image capture, so why go through the pain.

 

I have everything mounted to a more solid assembly; just a quick trip to Home Depot and some scraps I had.  I'm actually no longer having any double-feed issues, and for the moment everything's going smoothly.  My janky code is a huge bottleneck at the moment; once I get the image classification working better it's going to need some TLC.

 

I'm really not a mechanically inclined person at all, and I have very limited programming experience, but I am really good with computers and tech in general.  Honestly, I have no clue what I'm doing; I'm just plowing forward and trying things until I find something that works.  I've learned enough basic concepts to putz my way through thus far.  So I really believe anyone with enough determination can get as far as I am today.

 

I'm actually pretty happy with the mechanical portion of this to the point where I think I'll be ready to share the files soon.  I have a few more tweaks I want to make, but everything is working mechanically.

 

I'll try to take a video to post in the next day or two, but here's the current progress/workflow:

 

 

Every time a piece of brass is "loaded" the camera takes a snapshot.  I have a web page built right now that is a live stream of these snapshots and updates every time a new one is taken.  (Eventually, and this is currently way out of my wheelhouse, I want the whole app to be managed through a web interface, so I'll have some more learning to do.)

 

I then have a very basic Python program that runs in a text console and processes each image.  It predicts what headstamp it is (this part is not working well at all, maybe 25% accuracy at this point) and gives you the option to manually select what it actually is.  After you select, it saves and tags the image for training the model, then triggers the "flappy gates" for where the brass should go (this part works perfectly), and drops the brass.
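For the tag-and-save step, a common approach is to file each confirmed image into a folder named after its headstamp, since that directory-per-class layout is what TensorFlow/Keras image loaders expect.  A minimal sketch; the function and directory names are my own, not the project's actual code:

```python
import shutil
from pathlib import Path

def tag_and_file(image_path, predicted, operator_choice=None,
                 train_root="training"):
    """File a captured image under a directory named for its confirmed
    headstamp.  `operator_choice` of None means the model's prediction
    was accepted as-is.  The directory-per-class layout is what
    Keras/TF image-loading utilities expect."""
    label = operator_choice or predicted
    dest_dir = Path(train_root) / label
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / Path(image_path).name
    shutil.move(str(image_path), str(dest))
    return dest
```

Each operator override then automatically becomes a correctly labeled training example for the next retraining run.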

 

So manually selecting the brass, and the flappy gates sorting the brass is working great.

 

My code overall is slow.  I know of a few issues I can address to speed it up, but first I need to get it working right before I get it working fast.

 

The initial model I generated SUCKS.  You could roll a die and get a better prediction of what the brass actually is.  I've been trying to change the parameters and retrain the model over and over with no luck; I feel like I'm hitting a wall, which is frustrating given how great my initial test case was.  I'm going to capture more training images and start from scratch.  Maybe I don't have enough images?  Maybe I have too many?  Maybe something changed that I can't see and all the old images are junk.  I have no idea.  I really don't know enough about how TensorFlow image classification works to know the issue.  I just bought a book on TensorFlow and I've started reading it; my hope is that I might learn something, but I'm going to keep putzing along and trying stuff.  I feel like I'm really close.

 


I made 2 small tweaks to the CAD model and I'm printing another flappy gate assembly.  The one I have works fine; I'm just making a couple of small improvements.  I'll get to test mating them and running them together, and hopefully I should be able to sort 5 different headstamps when done.

 

After I get all that working, I'll record a quick demo video of how everything works doing semi-manual sorting and acquiring images.

 

The plan is to acquire an entirely new set of training images, at least 500 of each headstamp.  That may be a challenge, since most of my 9mm brass is the same headstamp, Federal non-toxic, from a couple of buckets I got from a police range.  The stupid stuff has a larger-than-normal flash hole...

 

My Dillon case feeder keeps jamming.  9mm cases keep binding up between the bowl and the plate and stalling the clutch, so I have to babysit the thing constantly until I figure out what's wrong with it, or take a different case feeder off one of my other presses.  That way I can just semi-manually sort brass while I'm sitting on the couch taking conference calls and the brass sorter is chilling in the basement.

Edited by FingerBlaster

Huzzah! It's working! 

 

I didn't have a chance to mount the second module, but the computer vision algo is working.  I think it still needs a LOT more training to get more accurate, like I still need to classify and tag a couple thousand more pictures, but now I can use the partially trained model to help the system train itself in a supervised mode.  Basically, fewer keystrokes.

 

Here are 2 videos, one showing the mechanical design and the other showing the software, including a decently working model.  You may need to wait until YouTube processes it in HD to be able to read the screen.

 

 

 


I’m following this and I’m amazed at your progress.  If this ever gets to the “buy the following parts, print the following parts, put it together like this and run this app on your computer” stage, I would love to put one together.  You’ve got me on the electronics and programming, but I can tinker all day long at assembling something.  Well done, simply great work.


16 hours ago, quiller said:

I’m following this and I’m amazed at your progress.  If this ever gets to the “buy the following parts, print the following parts, put it together like this and run this app on your computer” stage, I would love to put one together.  You’ve got me on the electronics and programming, but I can tinker all day long at assembling something.  Well done, simply great work.

Same here. This is awesome. 


Mounted the next flappy gate assembly; it worked perfectly, with zero tuning needed, and frankly I think you could add 3-4 more before having to start adding delays to slow things down.  I'm super pleased with the performance.  The unfortunate part is that the $20 motor controller board I'm using is currently out of stock, as are the $2 DC motors I'm using to control the gates.  So I can't add any more output buckets at the moment.

20210425_230932.jpg


I also tried to improve the neural net's accuracy by blanking out the primer.  The neural net doesn't know how to read; you show it hundreds if not thousands of pictures and it learns to identify features, and marks on the primers are features that are totally irrelevant.  I used an image processing library that's able to identify circles in an image; you give it parameters like how many circles to find and the max and min sizes.  I did this because the brass isn't always in the same spot, sometimes it's at the top or bottom of the tube.  Then I annotate a copy of the image by putting a mask over the primer.  I'm still saving unmasked images to disk, just in case this idea doesn't pan out as well as it seems to be.
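The masking step itself is simple once the circle is found.  In a pipeline like this the center and radius would typically come from a circle detector (OpenCV's `cv2.HoughCircles` is the usual choice, with min/max radius parameters as described above); this sketch only shows the NumPy masking, with the detector left out as an assumption:

```python
import numpy as np

def mask_primer(image, center, radius):
    """Zero out a circular region (the primer) so the model can't learn
    from firing-pin marks or primer branding.  `center` (x, y) and
    `radius` would come from a circle detector such as cv2.HoughCircles;
    here they are simply passed in."""
    h, w = image.shape[:2]
    yy, xx = np.ogrid[:h, :w]          # broadcastable pixel coordinates
    cx, cy = center
    inside = (xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2
    out = image.copy()
    out[inside] = 0                    # black out the primer circle
    return out
```

Since the detector also returns the radius, the same measurement can later flag small vs large primer pockets, as noted below.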

 

What's cool about this is that it also measures the size of the circle, so once I get to .45 it should be trivial to identify small primer pockets.

I was able to just edit all the existing pictures I took, and add a little code to process each image in real time before passing it to TensorFlow to classify.

 

I just ran through a batch of 200 pieces of brass with the "assisted training mode" I talked about in the video.  The accuracy was amazing given that I still have an incomplete training set.  Not good enough to run on auto, but I only had to override its decision on about 10 pieces of brass.

Even in this mode it's awesome.  I can't wait until it's running full auto.

Speer-2021-04-22 23.33.08.733.jpg

FC-2021-04-22 23.25.28.219.jpg

 

At the moment I think the biggest issue is the distribution of brass that I have.  Here are how many pieces of each headstamp I have so far; as you can see, the distribution is very heavily weighted towards FCNT and then another 3 headstamps.  Ideally I would have an even distribution across the board.

1 : GFL
2 : A-Merc
2 : IMI
2 : MFS-380
2 : Midway
2 : Pierce
2 : RWS
3 : Aguila
3 : C-B
3 : PPU
3 : WCC
3 : WMA
4 : S&B
7 : Geco
18 : CBC
23 : Tula
46 : Empty
48 : Speer
58 : Blazer
65 : PMC
325 : WIN
344 : FC
436 : RP
1174 : FCNT
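One standard way to soften a skew like this during training is inverse-frequency class weights, which most frameworks accept (e.g. the `class_weight` argument to Keras's `Model.fit`).  A small illustrative sketch using a few of the counts above:

```python
def class_weights(counts):
    """Inverse-frequency weights: total / (n_classes * count).
    Rare headstamps get large weights; dominant ones get small weights."""
    total = sum(counts.values())
    n = len(counts)
    return {label: total / (n * c) for label, c in counts.items()}

# A few of the counts from the table above:
counts = {"FCNT": 1174, "RP": 436, "FC": 344, "WIN": 325, "GFL": 1}
weights = class_weights(counts)
# FCNT's weight ends up far below GFL's, nudging the loss toward rare brands.
```

Weighting doesn't replace collecting more of the rare headstamps, but it keeps the dominant classes from drowning them out in the meantime.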

 

Edited by FingerBlaster

57 minutes ago, assnolax said:

That's great.  Does the raspberry pi training actually update the tensorflow model or after a training session do you use your desktop to create a new model with the newly sorted pictures added in? 

Great question.  The Pi is WAY too slow to train.  You need a TPU or a GPU with CUDA to do the training.  My laptop has a CUDA-capable GPU, but I can't get it to work with Python/TF, so I gave up.

 

The method I use: I copy all the images to my desktop, zip them, and upload them to Google Drive; from Google Drive I load them into a free Google Colab session and train the model there using Google's GPUs, which takes about 10 minutes.  Then I download the model and put it on the Pi.  Not ideal, but it works.

 

If you have a desktop with an NVIDIA GPU, you should have no problem training on your computer.

 

Maybe with a USB Coral TPU the Pi could do the processing; I honestly don't know.  My gut feeling is that an NVIDIA Jetson is most likely the fastest SBC to do the training with, since it has a CUDA GPU.  The benchmarks I've read comparing the two say the Jetson is faster than the Coral TPU for TF applications.  But I honestly don't know if it's fast enough to be practical; if it's going to take more than an hour to train the model, it's more practical to just keep using Colab.  I do plan on testing this with a Jetson; in fact, I feel like I'm at the point where I can order one now to try it.

 

 

I also found a huge bottleneck in the system: it takes about half a second to capture a still frame from the camera I'm using, then another quarter second to write a mask over the primer.  That's 0.75 seconds, give or take.  Right now, for manual classification, it works fine, but it's not ideal once the system starts running in an automated mode.

This is definitely a limitation of the camera or USB.  I can get it to go quicker if I drop the resolution or change the mode of the camera.  The issue with changing the camera from "camera mode" to "video mode" is that it adds a lot of noise to the images.  If I drop the resolution, the images wind up being too small after cropping.

 

Once the components I need to add more motors are in stock, I'm going to order a new camera, which I was planning on doing anyway: the Raspberry Pi HQ camera along with a macro lens.  I don't need a high resolution if the lens has a small enough field of view.  I was planning this anyway because I don't like the tripod mount on the camera I have right now; it's too easy for the camera to move as-is.


I had the day off. 

 

I worked on a little performance tuning today.  I can't get the mechanical system running any faster; if I do, it starts causing sorting or presentation malfunctions.  Otherwise the mechanical portion is working great.  I've put about 4,000 pieces of brass through it with no failures yet.

 

Image recognition is fast and working perfectly on FCNT and RP headstamps, so I'm using the partially trained model to my advantage.  I added a couple of lines of code so that if it sees either of those 2 headstamps and is 95% certain or more, it just sorts them with no input from me.  If certainty is under 95%, or if it sees something else, it waits for input.  It worked great; since about 50% of my brass is FCNT, that alone helped me get through the last of the bucket I've been working through for training in no time flat.
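That gating logic amounts to a confidence threshold over a whitelist of trusted headstamps.  A minimal sketch of the idea (the names here are mine, not the actual code):

```python
AUTO_LABELS = {"FCNT", "RP"}   # headstamps the model has proven reliable on
THRESHOLD = 0.95               # minimum confidence for hands-off sorting

def decide(prediction, confidence, ask_operator):
    """Auto-sort only when the model is very sure about a trusted
    headstamp; otherwise defer to the operator for a manual call."""
    if prediction in AUTO_LABELS and confidence >= THRESHOLD:
        return prediction
    return ask_operator(prediction, confidence)
```

As the model improves, more headstamps can be promoted into the trusted set until the whole run goes hands-off.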

 

I then put a batch of 150 pre-sorted FCNT and RP cases in there, mixed together.  It actually paused on a few cases that were the wrong headstamp, so I had to redo the test a few times.  But once I got a clean run of 150 cases, it took about 191 seconds.  Roughly 1 case every 1.3 seconds, or about 2,800 cases an hour.

 

There are definitely 2 places in the code to save time.  As mentioned earlier, the next big place to shave time will be image acquisition; if I can cut that in half, I should be able to do a case per second.  I think I can also save another tenth by threading 1 or 2 functions.  Otherwise, I don't think there are any big chunks of time left to shave, and anything more will be diminishing returns and not worth it.

 

I still need more brass of other headstamps to do more training.  I have plenty of brass; I just want to run it through the tumbler to clean it up first.  The issue is that the distribution of brass I have is very heavily weighted towards 4 headstamps, so I have to go through a ton to get a small percentage of useful data.  However, I'm starting to be able to run through all the brass faster now.

 

I want to try the different camera and a different lens, and I still want to put a different LED light on there to illuminate the brass, one I can control with the RasPi to turn on and off.  I have one; I just haven't soldered it up and tried it yet.

 

I ordered a Jetson today, and also ordered a new lens for my current camera.  Waiting for a few other things to come in stock on Adafruit before I order the HQ RasPi cam.

 


2800 an hour sounds pretty fast to me already.  

 

I've never bothered to sort 9mm brass unless doing load development, but I can hand-sort and cover shipping on some of the under-represented brass to round out your model.

 

Will camera/lighting changes require taking all new pictures and retraining? 

 

Does patina matter? I've had some range pickup that even after wet tumbling never got shiny shiny.  

 

 

 


Yeah, I agree 2,800/hr is RIPPING.  But if there's low-hanging fruit to make it go faster, why not?  Plus larger cases, like 5.56mm, will go slower.

 

I sort all my brass by headstamp; I've had surprises with cheap 9mm cases that are stepped.  And especially now, in my case, the NT brass I got from the police range has larger-than-normal flash holes for the non-toxic primers to ignite better, but they do affect the pressure curve...  Does it make a difference in my shooting?  Probably not.  But as you can see, I'm a little crazy.

 

Patina shouldn't matter; actually, I want a mix, everything from bright shiny new cases to pretty heavily patina'd cases.  I was actually surprised during training that it was able to classify cases I honestly had a hard time reading because they were so worn.  So I want to cover the gamut of the type of stuff it will see in the real world.  But I wouldn't try to run brass through that hasn't at least been dry tumbled first, especially if it's nasty, not to mention making sure there's no debris in the brass to gum up the works.

 

As far as camera/lighting, I really don't know.  Here's my gut feeling: if you change the camera and the lighting, it will probably decrease the accuracy of the system, but it'll still work.  What I'll do once I change the lens and camera is use the old training model to assist me in taking pictures of brass that I'd then use to retrain the model.  Not to mention I already have a lot of brass sorted into buckets now, so I can just dump a bucket of "RP" brass in there, tell the app it's only RP brass, and let it go full auto and take pictures only.

 

Once I have a sufficiently trained model and share all the code, if you build the design using the exact same camera and lighting hardware, and if I design standards for mounting everything so it's all in the same relative positions as mine, I imagine my model should work for you, or at least work as a starting point.  Fact is, I just put a piece of plywood down, said "that looks like a good spot to put the camera", and eyeballed it.

 

As far as brass goes, that might be a big help, depending on what mix of headstamps you have in your bin.  I do have a couple of smaller buckets that I need to work on; they should have a better mix of brass in them.  I'll hold off on saying yes until after I get the new camera, though.

 

I'm probably going to start designing the mechanical side of things for 5.56 soon.  I actually don't think there's too much work to do; I'm 75% sure the sorting flappy gate assembly, the way I designed it, should handle 5.56 pretty well.  I just need to redesign the part that lets me take pictures of the brass.  I could easily make caliber-specific parts, but swapping back and forth may be annoying, so I want to try to do something a little more universal/modular.  I have about 30% of an idea that I just need to flesh out.

 


Thanks!  I'll keep that in mind and put an ask out once I'm ready.  I just put some more brass in the tumbler, and it's stuff I didn't get from the police range, so hopefully it'll lead to good data.

 

I ordered some components from Digikey so I can run up to 12 buckets.  I also substituted a slightly more expensive motor, $6 instead of $2, since the one I was using is out of stock.  Same voltage but more torque, which could be a good and a bad thing; we'll see if it's worth it.  I'm running some more 3D prints to get ready for it.  Also have a new camera and lens coming along with the Jetson.  If everything arrives before the weekend, I may have some updates by Sunday/Monday.

 

I manually dropped some .308 cases through the flappy gates and it handled them no problem, so .223 will be no sweat.  I just need to get the presentation module done to be able to feed and image the brass.

Edited by FingerBlaster
