Apologies if this isn't the right place for this question; please direct me elsewhere if that is the case :)
I was having a discussion with my boss, who has experience (but no formal education) in software engineering, about a member field in an object that indicates whether the object is active.
The objects in question are built by employees (imagine something like antivirus rules), who specify the object's parameters, attributes, and purpose.
Each object is then compiled into a binary format and distributed to our customers' software installations via mirrors and the software updater.
The object contains a field which indicates whether it is active or inactive. This was added in the early design and development phases because we theorized we might need to turn the object off at the endpoint of distribution.
The objects are stored in separate files on customer software installations, so turning an object off would mean actively rewriting the binary object file to adjust the field inside. There was no real expected use case for this internal 'active' field, and the customers aren't expected to know anything about these objects. So it was expected that nobody and nothing would ever turn these objects off -- especially because you could just delete the file to the same effect, and an update of the file would turn the object back on regardless of whether you had deleted it or toggled the field.
The idea that we would need to toggle the object at the endpoint was quickly dismissed once the interface reached completion: it is connected to a database which houses the objects and records whether each object is active or inactive.
The interface simply does not push an object out for distribution if it is listed as inactive in the database -- something like the sketch below.
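For concreteness, here is a minimal sketch of that gate, assuming a simple SQLite schema; the names (`compile_rule`, `push_to_mirrors`) and the schema itself are hypothetical stand-ins, not our actual code:

```python
import sqlite3

def compile_rule(definition: str) -> bytes:
    # Hypothetical stand-in for the real binary compiler.
    return definition.encode("utf-8")

def push_to_mirrors(rule_id: int, blob: bytes) -> None:
    # Hypothetical stand-in for the real distribution step.
    print(f"pushing rule {rule_id} ({len(blob)} bytes)")

def publish_rules(db_path: str) -> None:
    """Push every rule marked active in the database; skip the rest."""
    conn = sqlite3.connect(db_path)
    try:
        # Hypothetical schema: rules(id INTEGER, definition TEXT, active INTEGER)
        for rule_id, definition, active in conn.execute(
            "SELECT id, definition, active FROM rules"
        ):
            if not active:
                continue  # the single gate: inactive rules are never distributed
            push_to_mirrors(rule_id, compile_rule(definition))
    finally:
        conn.close()
```

In this scheme the database's `active` column is the single source of truth: an inactive object is never compiled or distributed, so nothing downstream ever sees it.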
Now, fast forward to our conversation: we were discussing the purpose of this field in the compiled object file.
I stated that we could simply remove the field from the compiled object, because if an object is being pushed out at all, it must be enabled in the interface anyway.
My boss said that I am assuming UI programmers don't make mistakes, and that having a second line of defense would prevent the use of an object that wasn't intended to be pushed out but was pushed out somehow in error.
I said I didn't really consider 'developer mistakes' to be a reason to implement a feature (at least in this case), and that mistakes should generally be caught by testing/debugging/QA rather than worked around with side features.
After all, it is the interface that populates the 'active' field of the compiled object in the first place. So if the interface erroneously pushed out an object that wasn't supposed to be active, there's a large possibility the compiled object's internal field would also say it was active, because the same interface populated it. (Unless, of course, the bug is elsewhere, like in the interface's comparison of the active field.) The sketch below illustrates what I mean.
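To illustrate, here is a hypothetical version of the 'safety' my boss is describing (the `MAGIC` header, the one-byte flag layout, and the function names are all made up for this example). The endpoint loader checks the embedded flag before using the object; my objection is visible in `compile_object`, where the flag is written from the same 'active' state that the push decision reads, so most plausible bugs would corrupt both at once:

```python
import struct
from typing import Optional

MAGIC = b"RULE"  # hypothetical 4-byte header for the compiled object

def compile_object(payload: bytes, active: bool) -> bytes:
    # The interface writes the same 'active' value it used to decide
    # whether to push, so a wrong value here usually implies a wrong
    # push decision too -- the two failures are correlated.
    return MAGIC + struct.pack("<B", int(active)) + payload

def load_object(blob: bytes) -> Optional[bytes]:
    """Endpoint-side check: refuse objects whose embedded flag is off."""
    if not blob.startswith(MAGIC):
        raise ValueError("not a compiled rule object")
    (active,) = struct.unpack_from("<B", blob, len(MAGIC))
    if not active:
        return None  # the 'second line of defense' my boss argues for
    return blob[len(MAGIC) + 1:]
```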
My boss said, "Don't think of it as a 'feature' but a safety," while also suggesting that my stance on the matter was naive and that the statement 'mistakes should be caught by testing/debugging/QA' certainly isn't true in the real world.
Ultimately this is such a small issue that I'm not aiming to argue whether or not the field is retained; I'm just curious about the principle behind the stance my boss took in the discussion, and about what other professionals in the industry have to say on this.
My reasoning for this being a bad approach is:
If you assume that testing/debugging/QA cannot catch even the simplest of bugs, and that you must code in extra features to protect against these simple bugs -- isn't that indicative of a deeper issue in the development process?
Furthermore, if you have to code in extra features to protect against bugs, what happens when those extra features have bugs of their own? Do you program in even more features to protect against possible bugs in the features that protect against bugs?