Most of you know about this already: the FBI, investigating the December murders of 14 people in San Bernardino by two Muslim extremists, Syed Farook and Tashfeen Malik, has asked Apple to unlock Farook’s iPhone 5c, which may contain clues about the murders or other terrorists. A federal judge ordered Apple to create the software necessary to unlock that phone. Apple, however, is resisting on the grounds that this might compromise the privacy of its customers. It also argues that complying would give the government sweeping powers to make technology companies part of the prosecution in fighting crime.
Current law says that firms or institutions must in general comply with such “unlocking” orders unless they impose an onerous burden on the company. And, it could be argued, asking Apple to create new software to unlock a phone (it would have to try gazillions of passcodes) could be seen as just such a burden. But Apple’s arguments are really intended to reassure its customers that it cares deeply about their privacy, and that’s important to iPhone users.
The New York Times has a brief explanation of the situation. What the FBI wants Apple to do is create code that bypasses a feature that erases all of the phone’s data after ten failed passcode attempts, so that agents can then try every possible passcode. Only Apple can do this, since iPhones will run only software bearing the company’s cryptographic signature, the special code “tags” known only to Apple.
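For the technically curious, here’s a minimal sketch (in Python, and emphatically not Apple’s actual code) of the policy at issue: a toy phone that erases itself after ten wrong guesses, and a brute-forcer that tries every four-digit passcode. All the names and numbers are illustrative assumptions; the point is just that without the auto-erase, a short numeric passcode falls almost instantly.

```python
# A toy model of the auto-erase policy the FBI wants bypassed.
# Everything here (class names, the four-digit passcode) is illustrative.

MAX_FAILURES = 10  # the iPhone erases its data after ten wrong guesses

class ToyPhone:
    def __init__(self, passcode, auto_erase=True):
        self._passcode = passcode
        self._auto_erase = auto_erase
        self._failures = 0
        self.erased = False

    def try_unlock(self, guess):
        if self.erased:
            return False
        if guess == self._passcode:
            return True
        self._failures += 1
        if self._auto_erase and self._failures >= MAX_FAILURES:
            self.erased = True  # data gone; further guessing is pointless
        return False

def brute_force(phone):
    # Try all 10,000 four-digit passcodes in order.
    for n in range(10_000):
        guess = f"{n:04d}"
        if phone.try_unlock(guess):
            return guess
        if phone.erased:
            return None
    return None

print(brute_force(ToyPhone("7391")))                    # None: wiped after 10 tries
print(brute_force(ToyPhone("7391", auto_erase=False)))  # "7391": cracked in seconds
```

The actual court order also asks Apple to remove the escalating delays between wrong guesses and to let passcodes be submitted electronically rather than by hand, but the toy captures the core issue.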
The judge agreed that Apple could retain and later destroy the new unlocking software, and that the phone would be “hacked” by Apple in its own secure facility, although of course the information would go to the FBI.
This is a real dilemma, and Apple has until February 26 to respond. What is to be done? It is a question of ethics, somewhat analogous to the dilemma of whether to torture someone who knows the location of a ticking time bomb, and whose information could thus save thousands of lives. I am of course sensible of the difference between torturing someone and creating software, and between people dying from a bomb versus having their data compromised; but both dilemmas instantiate a weighing of relative harms.
This, I think, shows the problem of arguing for an “objective” morality. On one hand we have the possible (but not certain) revelation of data about terrorist networks, the “well-being” at stake being the possible saving of lives. On the other we have the creation of a precedent that could allow the government to act intrusively, on the merest excuse, to get people’s private data. The “well-being” here is people’s security against losing their private information, and against a precedent that could be misused. Now how on earth can you weigh these different forms of “well-being” against each other, even if we could know perfectly all the consequences of both actions? (And of course we can’t.)
My own feeling is that Apple should comply with the government’s request, as this is tantamount to fulfilling a search warrant, with the exception that Apple has to create new software for the FBI. But there are ways to mitigate the harms of doing so. Apple could, as the judge allowed, destroy the software afterward so that nobody else can have it. It could extract the data without the government being present when it does so. And in the future Apple could, I’m told, even build an iPhone whose passcode could never be bypassed by any software, something that seems perfectly legal.
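On that last point, a hedged sketch of why such a phone is possible: if the key that encrypts the data is derived from both the passcode and a secret fused into the phone’s own silicon, then even software signed by Apple cannot move the guessing to fast outside hardware. The names below are mine, not Apple’s; this illustrates the general technique, not the real design.

```python
import hashlib, os

# Stand-in for a per-device secret burned into the chip at the factory;
# in a real design it would never be readable by any software at all.
HARDWARE_UID = os.urandom(32)

def derive_data_key(passcode):
    # A deliberately slow key-derivation function ties every guess to real
    # time spent on this particular device: without HARDWARE_UID, no amount
    # of new software (Apple's or anyone's) can test passcodes elsewhere.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(),
                               HARDWARE_UID, iterations=200_000)
```

Add a long alphanumeric passcode on top of that, and brute force becomes impractical even when run on the device itself.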
This case is likely to go up to the Supreme Court, for Apple doesn’t want to be seen as compromising its commitment to customer security.
I’m asking readers to weigh in below on this issue, as my own opinion, while leaning toward the government (after all, nobody is being tortured here), is susceptible to change. Are you on the side of Apple, or of the FBI?