In my professional life I write software for the manufacturing industry: I am an engineer and write code for computer-operated equipment (lathes and mills, commonly known as CNC machines).
For a sense of scale, let me say that most CNC programs are under 250 lines of code. In addition to controlling the cutting tools, the programs will check for operator input mistakes: Tools can only be adjusted within a narrow range, anything outside that range and the machine won't run. Did the operator slow it down to check something and forget to turn it back to 100%? Machine won't run. And so on. Idiot-proofing, dimensional checks and feedback, torque monitoring, etc.
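The kind of guard described above can be sketched in Fanuc-style Macro B. This is a minimal illustration, not code from an actual machine; offset variable numbers (like #2001 here) and the 0.005 tolerance are control-specific assumptions, though #3000 is the standard Macro B alarm variable:

```
(GUARD: STOP IF TOOL 1 WEAR OFFSET IS OUTSIDE +/- 0.005)
#100 = #2001                      (READ TOOL 1 WEAR OFFSET - VARIABLE NUMBER IS CONTROL-SPECIFIC)
IF [ABS[#100] GT 0.005] GOTO 900  (OUT OF RANGE - JUMP TO ALARM BLOCK)
(... NORMAL CUTTING CYCLE RUNS HERE ...)
GOTO 999
N900 #3000 = 1 (TOOL 1 OFFSET OUT OF RANGE)  (WRITING #3000 RAISES AN ALARM AND HALTS THE MACHINE)
N999 M30
```

If the operator dials an offset past the allowed window, the program never reaches the cutting cycle; it alarms out with a message instead, which is exactly the "machine won't run" behavior described above.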
Before we even let the customer see their new machine, we have already run it through 8 hours of hands-off auto cycling of the program. We have also run each cutting tool through enough parts to ensure the cutting conditions are optimal. Then, for the customer, we run an additional hands-off production run of 8 hours or 35 pieces (whichever takes longer), do a 100% inspection of every feature out to 5 decimal places, and follow that with some statistical analysis to measure capability. Once the customer is happy, we ship the machine and repeat all of this on their floor. We then spend a few days going over the statistical analysis, followed by a week of training for their operators. Only then is the machine ready to produce the parts that make sure your car door latches with 18 lbs of force rather than 19 lbs.
Oh, yeah... we provide the computer code to the customer as well, every line commented for clarity.
Doesn't it seem like voting software, which likely is thousands of lines of code, should be made open-source and go through some sort of approval process before being used for real? Isn't this software vetted or tested or examined at all?
Edit: I should clarify... I am not claiming that voting software and CNC programs are similar in architecture, language, layout, complexity, or structure. My point is: if a fairly simple g-code program and its performance are vetted this thoroughly by the end user, at multiple points in its development and prove-out, then why in the hell isn't the software that determines how my vote is recorded given the same level of scrutiny? I didn't realize my example was too convoluted for so many snowflakes.
[–] screamingrubberband [S] 2 points (+2|-0) ago
That is a loaded question... I work for a division of a machine-tool OEM, but am not too involved in 'normal' machine sales. Our machines have thermal-expansion checks in the ballscrews and some level of automatic backlash compensation; both run in the background, beyond what the control itself adjusts. Because of that, our service techs mostly replace worn-out motors, seals, and boards, and only rarely rebuild spindles or re-finish/scrape bedways.
Our division typically sells machines that are single-part specific for high-volume runs (automotive, aerospace... like that), with custom workholding and automatic offsetting from a part-specific custom in-process gage. One of our customers has had seven machines in production for going on eight years, making 80-lb cast iron housings. The same 2 parts, over and over and over. Our service department has been called there exactly once, to perform a ball-bar test on each machine as part of the customer's preventative-maintenance schedule. They were all within 5 microns except the vertical that drills and taps the fastening holes for a bearing cap (2 per housing); it has machined the same four holes in the same location so many times that there is a 15-micron step in the bedways near the edge of the table opposite the tool changer, because the machine has NEVER traveled there.
So... almost never!
[–] lipids ago
Thanks for the reply. Interesting stuff.