A key design flaw, when you think about it
Here is something we had not considered until a soon-to-be-released video showed us how Artificial Intelligence (AI) works. The video we previewed claimed that AI generally “sits outside” security protocols, which means it can be quite easy to hack AI programs.
Why does it matter?
AI makes decisions you may not want it to make
AI is increasingly “making decisions” about who to hire, which supplies to order and when, how to drive, and more. You’ll often hear about “machine learning,” run by AI, but the ease with which AI programs can be hacked means that a bad actor who breaks in can “teach” the machine some fairly dangerous things.
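To make the “teaching the machine dangerous things” risk concrete, here is a minimal sketch of training-data poisoning. The scenario, classifier, and numbers are all hypothetical, invented purely for illustration: a toy nearest-centroid model learns to separate “safe” from “malicious” inputs, and an attacker who can write to the training data injects mislabeled samples that shift what the model learns.

```python
# Hypothetical illustration of training-data poisoning.
# A toy nearest-centroid classifier: an input gets the label of
# whichever class's training-data average it sits closest to.

def centroid(values):
    """Average of a list of training scores."""
    return sum(values) / len(values)

def classify(x, safe_points, bad_points):
    """Label x by its distance to each class centroid."""
    if abs(x - centroid(safe_points)) <= abs(x - centroid(bad_points)):
        return "safe"
    return "malicious"

# Clean training data: low scores are safe, high scores are malicious.
safe = [0.8, 1.0, 1.2]
bad = [4.8, 5.0, 5.2]

suspicious_input = 3.6
print(classify(suspicious_input, safe, bad))           # -> malicious

# An attacker with write access to the training pipeline injects
# high-scoring samples mislabeled as "safe", dragging the learned
# "safe" centroid upward -- the model is "taught" the wrong lesson.
poisoned_safe = safe + [4.0, 4.5]
print(classify(suspicious_input, poisoned_safe, bad))  # -> safe
```

The model’s code never changes; only its training data does. That is exactly why an AI system sitting outside the security perimeter is so dangerous: protecting the program is not enough if the data it learns from is left exposed.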
Vehicles (cars, trucks, buses), medical technologies and treatments, critical infrastructure, and other things that humans rely on to keep us safe are increasingly driven by AI. Security protocols were not originally designed with AI in mind, which means these programs can be manipulated to behave in malicious ways.
When building or designing a new product or tool, senior executives need to determine where their AI code sits. Is it behind the security protocols, or is your company’s AI a “sitting duck,” so to speak? Ask these questions early and often, and direct your teams to do everything within their power to protect both your customers and your company’s reputation.
While it’s doubtful that countries will come up with AI security recommendations any time soon, people running today’s companies can get out ahead of a potential issue by making sure their developers have done the right thing and have not unknowingly left the door wide open for attackers to “teach” your AI things you don’t want it to learn.