There are strict regulations for computing devices installed in our cars, and stricter ones still for devices that go into airplanes. Implantable devices that go into our bodies apparently don’t get the same level of scrutiny. Manufacturers like to keep the details secret to protect their “intellectual property,” so in most cases we don’t even get to know exactly what is going in there. If there are any problems with security, they like to keep quiet about them to protect their reputation. And if an outside researcher discovers a problem, they don’t want to hear about it.
More often than not, the response to the disclosure of a security vulnerability is not a gracious, “Thank you.” It is an impulse to punish. The ethical hackers who find and report flaws are often sued or arrested. It’s as if they’d rather hide the problem than fix it.
Fortunately, that seems to be changing.
This summer, the Food and Drug Administration warned hospitals to stop using a line of drug pumps because of a cybersecurity risk: a vulnerability that could allow an attacker to remotely deliver a fatal dose to a patient. SAINT Corporation engineer Jeremy Richards, one of the researchers who discovered the vulnerability, called the drug pump “the least secure IP enabled device I’ve ever touched in my life.”
A growing body of research shows just how defenseless many critical medical devices are against cyberattack. Work over the last couple of years has revealed that hundreds of medical devices use hard-coded passwords. Others ship with default admin passwords, and the documentation then warns hospitals not to change them.
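To see why hard-coded passwords are so dangerous, here is a purely illustrative sketch of the anti-pattern the research describes. This is hypothetical code, not taken from any real device: a firmware-style login check where the password is baked into the code itself.

```python
# Hypothetical sketch of the hard-coded password anti-pattern.
# No real device's firmware is shown here.

HARDCODED_PASSWORD = "admin123"  # identical on every unit ever shipped

def login(password: str) -> bool:
    # Anyone who extracts one firmware image (or reads leaked
    # documentation) learns this string once and can then
    # authenticate to every device in the field.
    return password == HARDCODED_PASSWORD

print(login("admin123"))  # True, on every device, forever
```

Because the credential lives in the firmware rather than in per-device configuration, it cannot be rotated after a leak without reflashing every unit, which is exactly why a single disclosure can put an entire product line at risk.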
A big part of the problem is that there are no regulations requiring medical devices to meet minimum cybersecurity standards before going to market. The FDA has issued formal guidelines, but those guidelines “do not establish legally enforceable responsibilities.”
Go to the Motherboard article for the full story.