History is marbled with the blood of those who have been wrongly killed by heavily armed AIs.
Farthest from significant oversight, and often maintained by those who don’t have the first clue what they’re doing, the High Consequence Intruder Deterrence systems found on many spacecraft are among the armed AIs with the highest accidental body counts.
It’s all down to false positives: the variety of identities ascribed to individuals, based on their relative position and value in numerous corporate hierarchies, means that setting up a “whitelist” of people the security system should ignore is a non-trivial problem. In other words, while you might want to tell the AI “this person is a friend, don’t hurt them”, the markers of that person’s identity might shift for countless reasons beyond the comprehension or capacity of the AI. Their parent company might change hands (this tends to happen on an almost continual basis) and impose a new set of identification standards, forcing them to deactivate their old vascometrics package and install a new one that’s incompatible with the ship’s security system. A shift in role might mean their DNA is altered in such a way that they become biologically unrecognisable to the AI.
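Purely as an illustration of that brittleness (every name, ID, and data structure below is invented for this sketch, not drawn from any real system), here is what a whitelist keyed on exact identity markers looks like: the moment a vascometrics package is reissued or a DNA profile changes, a known friend silently stops matching.

```python
# Hypothetical sketch: a whitelist keyed on exact identity markers.
# Any change to a marker (new vascometrics package, altered DNA profile)
# silently turns a known friend into an unknown, i.e. a target.

WHITELIST = {
    # crew member -> (vascometrics package ID, DNA profile hash); all invented
    "j.okonkwo": ("VASC-0457-A", "dna:9f3b2c"),
    "r.tanaka":  ("VASC-1121-C", "dna:41aa07"),
}

def is_friend(name: str, vasc_id: str, dna_hash: str) -> bool:
    """Return True only if every stored marker matches exactly."""
    markers = WHITELIST.get(name)
    return markers is not None and markers == (vasc_id, dna_hash)

# Before the parent company changes hands: recognised.
print(is_friend("j.okonkwo", "VASC-0457-A", "dna:9f3b2c"))  # True

# After the handover reissues the vascometrics package: treated as an intruder.
print(is_friend("j.okonkwo", "VASC-8810-F", "dna:9f3b2c"))  # False
```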
The general solution to this is to set looser identification parameters: facial recognition, certain other biometric elements such as voice, and so on. The parameters by which such a system sorts friends from threats are fixed when the system is set up. The trouble is that while those parameters stay the same, life drifts and creeps. Situations change, purposes change, personnel change. Since most systems are expected to be set up and forgotten, and the expertise needed to keep a system like this tuned effectively is extremely expensive, most AIs of this nature have some degree of iterative self-update built into their identification algorithms, mapped onto chaos-based calculations. On this basis, security AIs often try to anticipate social and functional shifts without user intervention and adjust their identification parameters accordingly. Unfortunately, this feature tends to bloom into a logic that no organic lifeform working within the symbolic order could hope to understand, leaving a security system whose behaviour is eccentric at best and fatally unpredictable at worst.
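As a rough sketch of why that self-updating behaviour ends badly (the scoring function, the drift rule, and every number below are invented purely for illustration), imagine an acceptance threshold the AI nudges toward whatever it has been seeing lately, with nobody reviewing it after day one:

```python
import random

# Hypothetical sketch: loose biometric matching with a self-updating threshold.
# The threshold is set once by a human, then the AI adjusts it after every
# sighting to "anticipate" drift. Nobody looks at it again.

def match_score(stored: list[float], observed: list[float]) -> float:
    """Crude similarity: 1.0 is a perfect match, 0.0 is no resemblance."""
    diffs = [abs(a - b) for a, b in zip(stored, observed)]
    return max(0.0, 1.0 - sum(diffs) / len(diffs))

stored_profile = [0.52, 0.31, 0.77]  # facial/voice features logged at setup
threshold = 0.80                     # fixed at setup: above this counts as "friend"

random.seed(7)
for day in range(1, 366):
    # The crew member ages, changes shifts, catches colds: their features creep.
    drift = 0.03 * day / 30
    observed = [f + random.uniform(-drift, drift) for f in stored_profile]
    score = match_score(stored_profile, observed)

    # "Iterative self-update": the threshold chases recent scores on its own.
    threshold = 0.9 * threshold + 0.1 * score

    if day % 90 == 0:
        verdict = "friend" if score >= threshold else "INTRUDER"
        print(f"day {day}: score {score:.2f}, threshold {threshold:.2f} -> {verdict}")
```

Because the threshold ends up tracking a moving average of its own noisy observations, whether the same crew member reads as friend or intruder on a given day comes down largely to chance, which is the unpredictability described above.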
So it’s pretty common for these systems to eventually reach a point where their unpredictability makes them a hazard to a ship’s crew. And, as is often the case with problems like this, they’re left until the last possible moment to solve; when that moment comes, they’re handed off to someone else.
A REZQ subscription makes the decision of who to hand this problem to a far easier one.