As medical implants become more common, sophisticated and versatile, understanding the code that runs them is vital. Pacemakers and insulin-releasing implants can be lifesaving, but they are also vulnerable, not just to malicious attacks but to faulty code. For commercial reasons, companies have been reluctant to open up their code to researchers. But with lives at stake, we need to be allowed to take a peek under the hood.
Over the past few years several researchers have revealed lethal vulnerabilities in the code that runs some medical implants. The late Barnaby Jack, for example, showed that pacemakers could be “hacked” to deliver lethal electric shocks. Jay Radcliffe demonstrated a way of wirelessly making an implanted insulin pump deliver a lethal dose of insulin.
But “bugs” in the code are also an issue. Researcher Marie Moe recently discovered this first-hand when her implantable cardioverter defibrillator (ICD) unexpectedly went into “safe mode”, causing her heart rate to drop by half, with drastic consequences.
It took months for Moe to figure out what went wrong with her implant, and the job was made harder because the code running in the ICD was proprietary, or closed-source. Why does that matter? Because reverse-engineering closed-source code can be a crime under various laws, including the US Digital Millennium Copyright Act of 1998. It can violate copyright, amount to theft of intellectual property, and may infringe patents.
Why researchers can’t just look at the code
Beyond legal restrictions, there’s another reason researchers can’t just look at the source code in the same way you might take apart your lawnmower. It takes a very talented programmer using expensive software to reverse-engineer compiled code into something readable, and even then it’s not a very exact process.
To understand why, it helps to know a bit about how companies create and ship software.
Software starts as a set of requirements – the software must do this; it must look like that; it must have these buttons. Next, the software is designed – this component is responsible for these operations, it passes data to that component, and so on. Finally, a coder writes the instructions that tell the computer how to build the components and, in detail, how they work. These instructions are the source code: human-readable instructions using English-like verbs (read, write, exit) mixed with a variety of symbols that both the programmer and the computer understand.
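To make that concrete, here is a minimal sketch of what source code looks like. The heart-rate check is entirely hypothetical, invented for this article; real implant firmware is far more complex.

```c
/* A hypothetical fragment of source code: a heart-rate check of the
   kind an implant might perform. Invented for this article only. */
#include <stdio.h>

/* Return 1 if the heart rate is dangerously low (threshold is made up). */
int rate_too_low(int beats_per_minute) {
    return beats_per_minute < 40;
}

int main(void) {
    if (rate_too_low(34)) {
        printf("deliver pacing pulse\n");  /* stand-in for a device action */
    }
    return 0;
}
```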
Up to this point, the source code is easily understood by a human. But this isn’t the end of the process. Before software is shipped it goes through one final transformation – it is converted to machine code. It now looks like just a lot of numbers. The source code is gone, replaced by the machine code. It’s now a bit like the inside of your car stereo; it “contains no serviceable parts”. Users are not supposed to mess with the machine code.
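To see what “just a lot of numbers” means, the sketch below prints the first bytes of the compiled function from the previous example as raw numbers. It assumes a typical desktop system where program code is readable in memory; the exact numbers vary by compiler and CPU.

```c
/* A sketch of what machine code looks like: print the first bytes of
   the compiled rate_too_low function as raw numbers. The exact bytes
   depend on the compiler and CPU, and some platforms forbid reading
   code memory at all. */
#include <stdio.h>

int rate_too_low(int beats_per_minute) {
    return beats_per_minute < 40;
}

int main(void) {
    const unsigned char *code = (const unsigned char *)rate_too_low;
    for (int i = 0; i < 16; i++) {
        printf("%02x ", code[i]);  /* e.g. "55 48 89 e5 ..." on x86-64 */
    }
    printf("\n");
    return 0;
}
```

Recovering readable source from those numbers is exactly the reverse engineering described above – possible, but inexact, and legally fraught.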
The alternative
The alternative to closed-source software is open-source, which is freely available both as source code (published on websites) and as binaries, or machine code. The philosophy of open-source software is that there are no secrets and no exclusive ownership. Under the MIT licence, which is just one kind of open-source licence, anyone can download, use and contribute to the software, provided they retain the copyright and permission notices embedded in the source code by its various authors.
The biggest difference between open-source and closed-source is money. Developers of closed-source software get paid because they have a monopoly on their software, and software sales generate income. Open-source software developers have to find another source of income.
You might think that no one can make any money from open-source software, but that’s not entirely true. A lot of businesses thrive on the distribution and support of open-source software. Because open-source software is written by programmers, generally for programmers, it’s often not as polished and easy to use as proprietary software. This creates a role for businesses like Red Hat, IBM, Oracle, Google and Mozilla, which make the open-source software experience nicer.
Closed-source or open-source?
The argument has raged for decades and centres on issues of code quality and security. Open-source supporters subscribe to a “more eyes, fewer bugs” argument: if any programmer can see your code, the reasoning goes, they will discover bugs and tell you about them. The same argument is used to support the proposition that open-source software is more secure.
Both assertions are difficult to prove. Security vulnerabilities in OpenSSH (an open-source tool for securing connections) are constantly being found, and the 2014 Heartbleed attack exploited a bug in OpenSSL (an open-source encryption library) that had sat in the code for around two years. On the other hand, a vulnerability was recently found in a Windows (closed-source) automatic printer driver installer that had been in the code for almost 20 years.
Closed-source advocates say that their code is better because professionals (not amateurs and volunteers) are paid to read the code and find the bugs. Open-source people point out that many closed-source products (such as Windows, Microsoft Office and Adobe Acrobat) are so big that no one, paid or otherwise, understands the entire code.
Another argument for closed-source software is that bugs – and in particular security flaws that do not affect normal use – can remain hidden indefinitely. This is referred to as “security by obscurity”: if attackers can’t see the errors, they can’t use them in cyber-attacks. The opposing principle, known in cryptography as Kerckhoffs’s principle, is that an effective security system should not rely on secrecy of design, only on good design and a secret key.
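The difference can be sketched in code. Everything below is a toy invented for this article – the “keyed” hash is not real cryptography, and a real system would use a vetted library.

```c
/* Two toy authentication checks, invented for this article. Neither is
   real security code; the "keyed" hash is NOT real cryptography. */
#include <stdio.h>
#include <string.h>

/* Security by obscurity: safe only while nobody reads the code,
   because the secret maintenance code is baked into the program. */
int check_by_obscurity(const char *code) {
    return strcmp(code, "debug-0000") == 0;
}

/* Toy keyed hash: the algorithm can be published without harm. */
unsigned long toy_mac(const char *message, unsigned long key) {
    unsigned long h = key;
    for (const char *p = message; *p; p++)
        h = h * 131 + (unsigned char)*p;
    return h;
}

/* Kerckhoffs-style check: only the key must stay secret. */
int check_with_key(const char *message, unsigned long tag, unsigned long key) {
    return toy_mac(message, key) == tag;
}

int main(void) {
    unsigned long key = 0x5eed;  /* the only thing that must stay secret */
    unsigned long tag = toy_mac("set rate 60", key);
    printf("obscure check passes: %d\n", check_by_obscurity("debug-0000"));
    printf("keyed check passes:   %d\n", check_with_key("set rate 60", tag, key));
    return 0;
}
```

Publishing the first function destroys its security; publishing the second costs nothing, so long as the key stays private.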
How code gets fixed
When it comes to actually fixing code, the real difference between open- and closed-source software is who can fix – or exploit – the bugs once they are found. If it’s closed-source, the user has to report the bug to the author or software vendor, who then replicates the fault, opens up their private code repository, finds a way to fix the bug, and writes a patch. Problems arise when products are no longer supported, or when companies extend “security by obscurity” to the point where they ignore, discredit or even prosecute those who point out flaws.
When dealing with open-source software, on the other hand, you can report the flaw to the team of volunteers who maintain the project through its public code repository (hosted on a service such as GitHub, SourceForge or Google Code). If someone is still working on the project (many projects are abandoned and not supported at all), they may fix the bug, and you can then download and install the updated version. Of course, there is always the possibility of a malicious programmer adding “features” like malware and backdoors to the software, although if others are working on the project they should detect such tampering.
Implant manufacturers should open-source their software
Having open-source software run on closed-source hardware is nothing new. The Raspberry Pi, for example, uses a proprietary Broadcom graphics processing unit (GPU), the internals of which are kept secret. Broadcom has published just enough information to allow any programmer to use the chip while maintaining its monopoly on the hardware design. In theory there is nothing stopping implant manufacturers from doing the same.
The reason they should do it comes down to real-world experience. When someone’s pacemaker misbehaves, doctors, medical technicians and their programmers do not have the luxury of waiting for a manufacturer to release a patch or update. They need the fix immediately. That’s why manufacturers use only lightweight security on these products. When your doctor needs to access your device, they don’t have the time to mess around with cryptographic keys and authentication protocols. The most they have time for is to look up your device’s serial number and default password in your medical records.
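Sketched in code, that kind of lightweight check might look like the following. Every name, serial number and password here is invented for illustration.

```c
/* A sketch of the lightweight access control described above. All
   identifiers and values are hypothetical. */
#include <stdio.h>
#include <string.h>

/* Grant access to anyone who knows the serial number and the default
   password recorded in the patient's chart. Fast in an emergency, but
   weak: anyone else who learns those two strings also gets in. */
int grant_access(const char *serial, const char *password) {
    return strcmp(serial, "ICD-000123") == 0 &&
           strcmp(password, "default") == 0;
}

int main(void) {
    if (grant_access("ICD-000123", "default")) {
        printf("clinician session opened\n");
    }
    return 0;
}
```

The trade-off is deliberate: in an emergency, speed of access matters more than strong authentication.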
While this is a security flaw, the same medical imperative applies to the source code. If your device goes crazy, your programmer needs to be able to find the code fast – and that means an open-source repository. When lives are at stake, there’s no time for secrecy. Just publish the code.
James H. Hamlyn-Harris, Senior Lecturer, Computer Science and Software Engineering, Swinburne University of Technology
This article was originally published on The Conversation. Read the original article.