by David Hoffman, Jolynn Childers Dellinger and Connor Leydecker
Vol. 106 No. 3 (2023) | Forging New Trails
It's 1890. Responding in part to the invention of "instantaneous" photography, Samuel Warren and Louis Brandeis write The Right to Privacy, urging legal recognition of "the right to be let alone," which they argue must evolve in parallel with technological development.1 Their cause for concern is that the proliferation of "devices threaten[s] to make good the prediction that 'what is whispered in the closet shall be proclaimed from the house-tops.'"2
Over a century later, unease about privacy-invading technologies persists, and numerous companies now possess tremendous power to peer into people's personal lives. Take, for example, personal device scanning technology. A practice called "client-side scanning" allows private technology companies to search the contents of consumer devices, including cell phones. This new capability, and its potential for use in concert with law enforcement and without notice, consent, or accountability, raises a bevy of privacy concerns.
These concerns took on new complexity when Apple recently proposed to scan its users' phones for child sexual abuse materials (CSAM).3 The purpose of the search was beyond reproach: Apple intended to alert the National Center for Missing and Exploited Children (NCMEC)4 if any found images matched images the center maintains.5 But the announcement unsettled many consumers, advocacy groups, cryptographers, and security and privacy experts who, despite supporting the ends of this type of search, found the means overly intrusive and warned that the same technology could be used for insidious purposes.6 What would stop Apple, or other tech companies, from providing other kinds of images or content to law enforcement despite the absence of a warrant? Or keep authoritarian governments from exploiting this new technology to suppress dissent? Apple paused roll-out of the technology, scrubbed its website of references to the most controversial parts of its original plans,7 and then abandoned its plans entirely.8
But the questions of how, when, and whether to use this technology will undoubtedly resurface.
In Summer 2022, we asked two experts to discuss the legal and policy questions raised by Apple's proposal. Jolynn Childers Dellinger teaches privacy law and policy at Duke Law School and is the Stephen and Janet Bear Visiting Lecturer and senior fellow at the Kenan Institute for Ethics at Duke University. David Hoffman teaches cybersecurity policy at Duke Law and is the Steed Family Professor of the Practice of Cybersecurity Policy at Duke's Sanford School of Public Policy and the former associate counsel and global privacy officer for Intel Corporation. Their conversation follows.
– Connor Leydecker
Editor's note: In early December 2022, prior to this article's publication, Apple announced its decision to abandon its client-side scanning plan. Because the conversation that follows originally took place in Summer 2022, it does not reflect this decision by Apple.
Why was Apple's proposal to scan phones for child sexual abuse materials so controversial?
Dellinger: Of course, we should say up front: Finding and prosecuting people who exploit and abuse children sexually is incredibly important. No one is against that. But there are legitimate concerns about how we do it. The concerns about Apple's plan arose because Apple was not going to scan its cloud for these kinds of images but rather was planning to scan people's actual devices, their phones, for child sexual abuse material (CSAM). It's an important distinction because the cloud is usually considered part of Apple's territory, while the phone is traditionally thought to belong to the user.
The technology involved Apple pushing some code and a library of hashed images to people's phones that would then alert Apple if any images on a user's phone matched those in the library. This library of CSAM image hashes was sourced from the National Center for Missing and Exploited Children (NCMEC).9 If the requisite number of matches occurred on a device, Apple would provide that information to NCMEC, NCMEC would provide that information to law enforcement, and then law enforcement could come back to Apple with a warrant to get further information.
Hoffman: It's also important to note that Apple's proposal was to implement this "client-side scanning" only for the photos that were designated to be uploaded to iCloud. It wasn't proposing to scan all the photos on the phone. But it raised concerns that eventually the scope might broaden to include more than just the photos that were going to be uploaded to the cloud. And it also raised concerns about whether people really understood their own phones' settings as to whether an image was designated to be uploaded to the cloud. This is a setting that most people have turned either "on" or "off" for all of the images on their phone; if it is set to "on," then all the images are designated to be uploaded to the cloud. It's not as though a handful of photos at a time are flagged to be uploaded, such that users have individual control over which images might get scanned. So there was a concern about transparency.
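To make the mechanics concrete, the following is a deliberately simplified sketch in Python of the flow described above: hash only the photos designated for cloud upload, compare the hashes against a library of known-image hashes, and generate a report only after a threshold number of matches. This is not Apple's actual implementation, which used a perceptual hash (NeuralHash) and cryptographic threshold techniques; the function names, the exact-match hash, and the threshold value are invented for illustration.

```python
# Hypothetical illustration only: a plain cryptographic hash stands in for
# Apple's perceptual NeuralHash, and the threshold value is made up.
import hashlib
from dataclasses import dataclass

MATCH_THRESHOLD = 30  # invented number; a report is generated only past this point


@dataclass
class Photo:
    data: bytes
    marked_for_cloud_upload: bool  # mirrors the per-account iCloud Photos setting


def image_hash(photo: Photo) -> str:
    # Stand-in for a perceptual hash; an exact hash matches only identical files.
    return hashlib.sha256(photo.data).hexdigest()


def scan_device(photos: list[Photo], known_hashes: set[str]) -> list[str]:
    """Hash only upload-designated photos and collect matches against the known list."""
    matches = []
    for photo in photos:
        if not photo.marked_for_cloud_upload:
            continue  # the proposal scoped scanning to photos bound for iCloud
        digest = image_hash(photo)
        if digest in known_hashes:
            matches.append(digest)
    return matches


def should_report(matches: list[str]) -> bool:
    # Only when the threshold is crossed would the matches be escalated for
    # review and, ultimately, a report to NCMEC.
    return len(matches) >= MATCH_THRESHOLD
```

Even in this toy form, the design choices that drew criticism are visible: the matching code and the hash library sit on the user's device, and the reporting decision belongs to the vendor rather than the user.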
What kind of privacy concerns does Apple's proposal implicate?
Dellinger: Lots of tech companies scan material that is held on their own servers for many good reasons. But historically, when you're looking at privacy law, we have this public versus private dichotomy that has purportedly informed our expectations of privacy. There are significant problems with this dichotomy because there are plenty of cases in which it is perfectly reasonable to expect privacy even when we are in public. But the idea that we expect privacy in historically private places like the home, for example, is quite resilient. We certainly have this sense that what happens in our homes and the things that we have in our homes are protected from government intrusion. The Fourth Amendment, for example, protects our persons, houses, papers, and effects against unreasonable search and seizure. Those things are seen as intensely private. While the Fourth Amendment does not apply to Apple's conduct here, Apple's proposal calls our expectations of privacy into question.
Client-side scanning means the company is scanning material that is held on a private device. And, in Riley v. California, we have a Supreme Court case that recognizes the privacy of material on cell phones, limiting the scope of the "search incident to arrest" doctrine.10 So I think it's just a break, a significant one, with what people understand as private to have a third party scanning a phone's content for illegal material.
Hoffman: I would say that it's a significant break from what people misunderstand as the current separation between private and public. The current public-private distinction for data held on phones and servers is different from what people may expect. We're not even talking, at this point, about what Apple's recommendation was.
How is Fourth Amendment jurisprudence implicated by these issues?
Hoffman: Law enforcement is not requesting this information. This is Apple deciding to do the scanning themselves. So the Fourth Amendment arguably doesn't apply when the scanning is being done by Apple. You could say later that if law enforcement was going to make a demand for this information, then potentially the Fourth Amendment would apply. But we have lots of experiences where people own devices and the companies that operate those devices maintain access to information on those devices.
I think Professor Dellinger is absolutely right that this is out of line with people's expectations of their Fourth Amendment protections. But that's generally because their expectations are out of line with what the law actually is and because, generally, the law does not restrict companies from doing this type of scanning of private devices.
Dellinger: Well, I think it's accurate to say that the Fourth Amendment doesn't prohibit a private company from deciding to scan a device. I do think it's an interesting situation, though, because Apple has chosen to do this on its own for the specific purpose of communicating any problematic images to NCMEC with the understanding that NCMEC provides those to law enforcement.11 It's not a situation where law enforcement has asked Apple to do a certain thing that it's now doing. So Apple is not the "agent" of a law enforcement "principal" in this sense.
Actually, I think corporations proactively taking innovative steps to protect privacy and to help solve problems is good. Consider what Apple and Google did with contact tracing.12 Those efforts can be very positive, but there are some issues I worry about in general, not confined to the CSAM issue, like the private search doctrine and the whole concept of creating client-side scanning technology that allows this kind of matching and searching.
This is a search on a person's device that is limited, right this minute, to CSAM, but that's a policy decision. They could search for anything they want. I know that we as consumers have choices, and we could say "Well, that doesn't sound like a good deal. I'm going to go buy an Android." There's competition, supposedly. But I'm not sure that this supposed alternative really answers all of our questions. We need to carefully look at relationships between law enforcement and companies. We should not restrict ourselves to just looking at Fourth Amendment issues but think more broadly about what behavior may circumvent the Fourth Amendment and accomplish the same things that the Fourth Amendment was meant to protect us against.
Hoffman: I don't disagree with that. I think that we should separate these issues into two categories. One category is implementation: What was Apple planning on doing, and was that a good plan to achieve the objective? And then the second category is policy. I think you could argue this is a search, though that may depend on implementation. But regardless, one thing that's very clear to me is that without some sort of government mandate it's not a constitutionally protected search. When we're outside of those boundaries, what privacy expectations should individuals have of the entities that are providing them with technology and digital services? This is an area where Professor Dellinger and I generally agree: we need further protections in law and an establishment of norms to determine what's appropriate.
Now, when we talk about data privacy, oftentimes we gravitate toward talking about what we call "collection limitations," or the limits on what private information can be gathered. But that's only one aspect of data privacy. There are actually eight Fair Information Practice Principles (FIPPs), which are the traditional pillars for designing a system that can realize the objectives of using data while still protecting privacy. The most impactful set of FIPPs was developed by the Organization for Economic Cooperation and Development back in 1980. And "collection limitation" is just one of those eight principles.13 The others are: data quality, purpose specification, use limitation, security safeguards, openness, individual participation, and accountability.14
Those other FIPPs all apply here. Maybe we want to allow Apple to have access to this information, to do this kind of scanning, but one of the other FIPPs, "use limitation" (how the gathered information is used), is an appropriate control to protect privacy here. So we might say exactly what Apple said: "We're only going to use this technology for this particular purpose. And this purpose is clearly something that most people in society agree with: to reduce sexually abusive materials involving children and to decrease the harm that's created from that."
You mentioned accountability as one of the principles used to optimize the use of data while protecting privacy. How does a data controller's responsibility to demonstrate that it is acting responsibly15 figure into Apple's action here?
Hoffman: Apple can make that "use limitation" promise to us, but how does Apple demonstrate it is living up to that promise? How do we ensure that there's not mission creep where Apple starts using the scanning tool for new uses? That's a critically important issue and one that Professor Dellinger and I have been exploring in our joint research on platform accountability with the Sanford School of Public Policy, the Law School, and the Kenan Institute for Ethics at Duke.16
Dellinger: I think it's interesting to raise this issue of trust and accountability because, frankly, when you look around, Apple has built a reputation for trust in the privacy area. They have branded their products with privacy. They've taken three or four steps in the past year that try to offer consumers more privacy in terms of things like cross-device tracking and app-tracking transparency.17 But I think that we shouldn't need to rely on trust. As Professor Hoffman mentions, there should be laws in place that provide a base level of protection.
I also want to raise the issue of normalization of surveillance, mission creep, and function creep. A lot of the concerns around this type of technology ask: Once it's created, how will it be used? Many people don't object to finding and eliminating CSAM from the internet by client-side scanning of people's devices (which, incidentally, would benefit the privacy of the people who were victimized in the process of the abuse). But, once the underlying scanning technology exists and is implemented, many questions about "use limitation" and "purpose specification" (another FIPP) arise. For example, a group of 14 security experts wrote a paper called Bugs in Our Pockets: The Risks of Client-Side Scanning, raising these types of concerns.18 They say it would not be difficult to reconfigure the scanner to report any targeted content regardless of whether a user intended to upload it to the cloud.19 They mention how, in the EU, authorities are already seeking to use this technology to look for terrorist information and other categories of information along with CSAM.20 We also have a situation where Apple, a company about which I have many great things to say, acts a little differently in China and has made different concessions to continue to operate in China.21 The Bugs in Our Pockets authors also discuss how client-side scanning could be used as a means of repression and political manipulation.22
Can private companies stop future abuses of this technology?
Dellinger: The question is: Once the technology exists, how will it be used? And further: How much power does Apple really have to tell our government "No, we won't do this," when the technology exists and has already been deployed on everyone's phones? I'm just not sure that people really trust that Apple can resist. If you look at the history of mission and function creep, whenever surveillance technology makes its appearance, it does not take long for that surveillance technology to be repurposed for different and more pervasive uses than those for which it was originally intended.23
As for normalizing surveillance, you can look at something as basic as the social security number or surveillance of employees or post-9/11 mass surveillance. We start data collection and it increases and increases. Often this happens without transparency and in a way that eludes public debate altogether. So, coming back to the idea of the relationship between the government and companies, I don't think we can just say "Well, this is just a company, and we can just trust the company." I think we also have to look at the interactions between companies and the government and consider what the government can ask the companies to do.
Hoffman: I agree with most of that, but where I potentially disagree is that I think we're already in that situation. Most technologies today, whether hardware or software, already have a mechanism where, if a company wants to do client-side scanning, it can be done. We can't operate internet-connected technologies without the ability to do software updates, which is another application of client-side scanning technology.
Companies push these updates remotely to users' devices, and the user has control to accept them or not but has little to no information regarding what that software update is doing. What is the software that's being installed? We trust these entities that we're working with: that they are using those software updates to provide us with more functionality, which they will describe to us, or to deliver security patches through this update process.
If you're using an Apple iPhone, then you are already trusting Apple not to install software that is doing something different from what Apple says it does. If you are using a laptop or a personal computer that has the Microsoft operating system on it, then you are trusting Microsoft not to push software to your system that is going to do certain things. If you are using a system that has an Intel microprocessor in it, then you are trusting that Intel is not going to send you software updates that create a backdoor for access to information stored in the memory of the device. If we are already trusting these companies to install this software on our devices, how much incremental risk is it to trust them that they are not going to misuse the client-side scanning functionality?
No matter what situation we're in, we need to trust the company that we are engaging with, or else we have very little protection. It's not clear to me how Apple's proposal here, which basically says, "Trust us," is any different from the kind of trust it has always asked us for. Apple delivers software code to users' machines all the time, and Apple expects that users trust them; and frankly, they have demonstrated that they deserve users' trust. Apple has shown that when governments have gone too far and asked for things, particularly here in the U.S. and particularly after the San Bernardino terrorist attack [when Apple refused to create the software needed to unlock an iPhone belonging to the accused attacker], it will push back on the government.24 Apple has fought the government before and made that fight public.25
So, I wonder: If we can't trust Apple, who can we trust? And if we can't trust Apple or anybody else, maybe we should just throw all of these devices in the bathtub, turn on the water, and stop using them. I don't want to do that, and instead I want to focus on mechanisms to evaluate whether companies are trustworthy. At some point, we've got to trust that they're going to set a policy around "use limitations" and that, if governments ask them to use scanning technology for other uses, Apple will make that transparent to us. And if the government tells them to keep it secret or uses some law to hide it, then we are already in that situation. So it's difficult for me to see why, as policy, this CSAM proposal is worse than the status quo, even if I can understand why, as an implementation strategy, it might be worse. In the end, we already need a better system to evaluate when we can trust technology companies. There may be a role for third-party trust evaluators similar to accounting firms, and requirements for companies to make certifications to regulators that they are following their policies. There are many ways we can create systems that aid in our ability to evaluate who we can trust.
Dellinger: I think all of the cases that you mentioned involve companies doing something that is consistent with their business models to maximize profit for their shareholders and that would be ultimately beneficial to or appreciated by consumers. If users are pushed an update that makes their phones more secure, then that is something that benefits users and that they may want. This CSAM proposal switches those incentives around a little bit, because the client-side scanning that Apple proposed is not something that is for the benefit of individual consumers who have purchased Apple devices. It is something that, even though Apple is not acting as an agent of law enforcement, effectively surveils all Apple device users and is being undertaken with the specific purpose to provide potentially illegal materials to NCMEC, which in turn will provide them to law enforcement. So this use case is very different from pushing a software update to protect consumers.
Again, I'm not saying I don't trust Apple (I do trust Apple, particularly compared to a lot of the other choices that we have), but I don't think that trust is what we should be relying on, particularly when things are happening and, more importantly, could happen to the potential detriment of consumers and their legal rights. This scanning capability is also something that is going to be on the phone of every single person who owns an Apple device. What we're talking about is nonparticularized searching, which is mass surveillance. Again, it's not a Fourth Amendment search, but it's a search nonetheless: a search of the material on the phone of every single person who owns an Apple device.
We will all be subjected to this. We are not suspects. We are not targets. We are not all people who are believed to be using CSAM. But we are all going to be subject to this type of searching by this company. If it is Apple today, it will be a host of additional companies in the future. I just think trust isn't sufficient in this case, particularly given companies' association with the government and law enforcement.26
Under the private search doctrine, if a private party conducts a search, law enforcement doesn't actually need a warrant to search that same stuff, whatever it is, whether it's a phone or a box or a computer.27 How does this fit in with our Fourth Amendment jurisprudence at large, with our understanding of privacy, with what's private and what's not, with our feelings about what's okay and not okay for use of devices? And like Professor Hoffman says, yeah, we could all throw these devices in the bathtub. But it would be hard to get through work the next day, given the indispensable role that these types of devices have come to play in our lives.28
I think that it's not going to be just CSAM detection in the end. Going back to San Bernardino, I couldn't have been more thrilled about Apple's approach. I thought they were exactly right in what they did. And I know people disagree about that, but at that time the technology didn't exist. The FBI was asking them to actually create a technology. But, in this case, the technology does exist. It has been created. The question is: When is it going to be deployed, and to what ends? And that trust you're talking about: I don't have that trust that its use will be limited.
I don't have the trust that our government and law enforcement will not end up using this in a way that Apple was not anticipating. And even if it's not our country, I think in other countries you have more serious concerns about dissent, dissenting opinions, and government. But we do have a history of the FBI tracking leaders of the civil rights movement29 and the Black Lives Matter movement.30
These things do happen, and I just find it highly concerning. And I don't think we have seen the end of it. In the wake of Dobbs,31 which has effectively eviscerated women's decisional, physical, and informational privacy, we have to consider that future administrations could attempt to enact federal laws declaring fetuses to be "persons" or criminalizing abortion; people need to understand what surveillance could look like in that world and how the content on their devices may be used against them.32 This technology is there and, once the technology is in use, decisions about how it can be used are very different from a situation where the government is saying "You must create a technology which doesn't currently exist." So I think that leaves us open to more problems.
Hoffman: To Professor Dellinger's first point about trust and accountability, I don't disagree that it would be great to have something more than trust; that it would be great to have a way to verify that these technology companies are acting in a responsible and accountable way. That's a large part of the research that we're doing in exploring different governance models. All I'm trying to say is that's where we are now. This situation does not cause a greater need for trust. It highlights the fact that we're in a situation where we don't have devices that we can use without having to trust companies. And I don't disagree that there have been companies that have made big mistakes.
However, I don't think we can point to Apple as having been one of those companies. And that's what I find interesting: If there is any company, or maybe two companies, in the tech space that have stood for the principle that their users should be able to trust them, they are the one that I used to work for, Intel Corporation, and Apple.
Intel has had a policy, about which it has been very vocal, that it would never install technologies into its devices to weaken the security of those devices, and that it would not create back doors. And Apple has gone to great lengths to make privacy and cybersecurity a core part of its product offering and its competitive advantage. So I think this is what took the Apple folks by surprise. I think Apple thought, "People trust us, and when we tell them we're only going to scan these devices to stop this horrible social harm, people will trust us." Privacy is not just about collection limitation, and Apple likely thought folks would be able to trust it to enforce a policy of "use limitation" and "accountability." I probably come down on the side of saying "That's pretty good, but can you build in some additional mechanisms to demonstrate that you are living up to your promises?"
When you install cybersecurity software on your machine, it scans your machine. And in many instances, it transmits information about what's on your machine back to the cybersecurity company so that it can understand, communicate about, and react to threats and vulnerabilities. We have a huge cybersecurity issue, and client-side scanning (or, as people refer to it, "end-point threat detection") can help address it. But we then have to trust that that's all they're looking for and that the information sent back isn't going to be used in a way that harms me as an individual. I think that's a really important legal, policy, and societal conversation that we need to have: What are the reasonable limits of that trust? And what are the right accountability mechanisms to make sure that these companies can demonstrate that they're behaving responsibly?
And to Professor Dellinger's later points about the private search doctrine, I would agree. I actually think we do need to be careful here about bad facts creating bad law, or really good facts for Apple creating bad law when we try to apply it to everybody else. Like I said, I think Apple is one of the most trustworthy companies and has invested a lot in demonstrating that. The question is how do we set up the right structures to hold other organizations accountable so that we have some degree of understanding that they're worthy of that trust.
I agree with Professor Dellinger that there's some really interesting analysis that needs to be done on the private search doctrine, and my limited understanding is that the law is not completely clear because it's very fact-specific and based on the context of the relationship between the government and the private actor who's doing the search. It's got to be more than just knowledge and acquiescence by the government. But, generally, a lot of the cases, I think, point to the level at which the government is directing the search. What does that mean in this context? What does it mean when the government may have had many conversations with Apple about what they would like to see, when the government is not directing a particular search, but it is directing the overall idea that searches should happen?
Encryption enables people to prevent others from gaining access to their devices. Privacy and civil liberty advocates claim encryption is needed to protect against encroaching surveillance of personal information. Law enforcement and national security advocates assert the need to access the devices of those suspected of crimes to properly investigate and keep people safe. Where does this technology fit into this debate?
Dellinger: There's a huge, ongoing, seemingly never-ending debate about encryption. In the encryption debates, law enforcement has argued that encryption hampers their ability to do their job because they can't get access to the data that they need to solve crimes. On the other hand, people will say "Well, no, we need encryption. Encryption protects the privacy of our devices and our communications. And if we don't have encryption, that's actually a security threat because a back door is a back door. It's a back door for everyone, including malicious actors, hackers, and even our enemies." This is the ongoing debate.
This encryption debate applies here because encryption could protect our communications when we create content on our devices, when we back up our devices to the cloud, and when we send messages, photos, or other content to one another. One can imagine a policy for using encryption based on where data is stored (in the cloud versus in local device storage). Hypothetically, if a company did want to encrypt its cloud, that could provide even more privacy protection to people. But Apple has not said yet (and may never say) that it's trying to or wants to encrypt its cloud.
This leads some people to wonder why Apple doesn't just limit its scanning to the cloud, rather than reaching out further to scan at the local device level, as it initially proposed. This is what other companies are doing, just scanning data stored in the cloud.33 Apple certainly does scan data stored in the cloud but has sent fewer reports to NCMEC as a result, certainly fewer than Facebook has, for example.34
I imagine that if Apple did encrypt the cloud, they would get more pushback because the encryption creates an obstacle for law enforcement to access information that it may need. From the perspective of that potential future, doing this very limited client-side scanning on a phone may be acceptable to some people if it enabled encryption of the cloud. Of course, this wasn't how Apple's proposal was presented; this is completely hypothetical. There may still be too many privacy and security issues raised by the client-side scanning technology to buy that argument. But I'm wondering if something like that might be in the offing because Apple is a pretty privacy-forward company. Ultimately, though, even that would be a compromise of the privacy of the device itself, which could reasonably be thought of like the privacy of a person's home.
Another wrinkle to this debate is the risk of false positives created by automated client-side scanning, especially when companies do not implement adequate human review into the process. Take, for example, a father who sent photos of his son's groin to a doctor to assist in diagnosis and whose Google account was disabled because Google flagged the father as violating its terms of service regarding distribution of CSAM.35 The father appealed the decision to Google, but Google rejected the appeal without further explanation and notified the father that he was already under investigation by the police.36 Even if the threat of criminal charges was unlikely, the father suffered significant harm due to "the domino effect of Google's rejection."37 He lost emails, contact information, memories of his son's first year, and his phone number, and, "[w]ithout access to his old phone number and email address, he couldn't get the security codes he needed to sign in to other internet accounts, locking him out of much of his digital life."38 As you mention, David, we have come to depend on these companies for so many facets of our lives, yet there is little recourse for someone involved in a situation like the one in which this man found himself.
I bring up this example to make a few points. First, I want to highlight the difference in approach between Google and Apple in scanning for CSAM. Google's approach involves using technology to identify new examples of CSAM. By contrast, Apple's proposal involved looking only for hashed images that were already on file with NCMEC. If Google had been using Apple's client-side scanning method, this situation would likely not have happened to this father and others like him. Second, Google's image identification technology, and its occasionally flawed process for identifying CSAM, is another technology that is subject to mission creep. Because this technology already exists, there is no reason to think that it could not be incorporated into client-side scanning. And, of course, the current goal is to protect kids, but that is a policy choice that could easily change. Last, this example shows that there are still significant dangers to privacy and civil liberties in scanning "just" the photos that have been uploaded to the cloud. So, regardless of where the scanning is taking place, on a personal device or in the cloud, there are risks involved with scanning.
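The contrast Professor Dellinger draws can be sketched in hypothetical stub code (the names and threshold are invented; this is neither company's actual system): matching only against hashes of already-catalogued images cannot flag a brand-new photo, while a classifier that scores new images can misfire on benign material such as a medical photo, which is where human review becomes essential.

```python
# Hypothetical stubs for illustration; not Google's or Apple's actual systems.
import hashlib

KNOWN_IMAGE_HASHES = {"<hash of a known image>", "<another known hash>"}  # stand-ins


def matches_known_hash(image_bytes: bytes) -> bool:
    # Hash-matching style: a novel photo (e.g., a parent's medical photo sent to
    # a doctor) can never match a list of previously catalogued images.
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_IMAGE_HASHES


def classifier_score(image_bytes: bytes) -> float:
    # Stand-in for a machine-learning model that tries to recognize *new* abusive
    # images; real models return a learned score and can misclassify benign
    # medical or family photos.
    return 0.0  # placeholder value


def classifier_flags(image_bytes: bytes, threshold: float = 0.9) -> bool:
    # Classifier style: anything scoring above the (invented) threshold is flagged,
    # which is where false positives, and the need for human review, come in.
    return classifier_score(image_bytes) >= threshold
```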
Hoffman: Well, I find this overall topic interesting because I think there are some big implementation issues that are perplexing, frankly. A lot of people wonder why Apple did not look to some of the newer technologies that potentially would allow it to continue to just do the scanning in the cloud, even if the information remains encrypted. Generally, that approach is what they call "homomorphic encryption," which allows for observations to be drawn from encrypted data. You'll only get a yes or no answer out of that analysis. You wouldn't be able to access the image; the technology would just report out that this was a match. And then they could potentially report that information to law enforcement.
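As a rough illustration of that "yes or no answer" property, here is a toy Python sketch using Paillier, an additively homomorphic scheme. It is not Apple's proposal or any deployed system; the primes and stand-in integer "hashes" are far too small for real use, and the function names are invented. It only shows how a comparison can be carried out on encrypted values so that the key holder learns nothing beyond match or no match.

```python
# Toy sketch only: tiny primes, stand-in integer "hashes," no protection against
# a misbehaving participant. Illustrates how an additively homomorphic scheme
# (Paillier) can reduce "does this encrypted hash match a known hash?" to a
# bare yes/no answer for the key holder.
import math
import secrets


def paillier_keygen(p: int, q: int):
    """Build a toy Paillier keypair from two small primes."""
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)  # valid for the standard choice g = n + 1
    return n, (lam, mu, n)  # public key n (with g = n + 1), private key


def encrypt(n: int, m: int) -> int:
    n_sq = n * n
    r = secrets.randbelow(n - 2) + 2  # toy randomness; should be coprime with n
    return (pow(n + 1, m % n, n_sq) * pow(r, n, n_sq)) % n_sq


def decrypt(priv, c: int) -> int:
    lam, mu, n = priv
    n_sq = n * n
    return ((pow(c, lam, n_sq) - 1) // n * mu) % n


def blinded_match_check(n: int, c_device: int, c_known: int) -> int:
    """Homomorphically compute Enc(b * (h_device - h_known)) for a random blind b.

    Decrypting the result yields 0 on a match and a meaningless value otherwise,
    so the key holder learns only whether the two hashes were equal."""
    n_sq = n * n
    c_diff = (c_device * pow(c_known, -1, n_sq)) % n_sq  # Enc(h_device - h_known)
    b = secrets.randbelow(n - 2) + 2                     # random nonzero blind
    return pow(c_diff, b, n_sq)                          # Enc(b * (h_device - h_known))


# Demo: the checking party holds the private key and publishes only Enc(h_known);
# the device computes and returns only the blinded comparison, never its raw hash.
pub_n, priv = paillier_keygen(1009, 1013)
c_known = encrypt(pub_n, 424242)    # encrypted hash from the known-image list
c_device = encrypt(pub_n, 424242)   # device encrypts the hash of its own image
answer = blinded_match_check(pub_n, c_device, c_known)
print("match" if decrypt(priv, answer) == 0 else "no match")
```

The random blind is the design choice that matters here: without it, decrypting a non-match would reveal exactly how far apart the two values are.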
Those types of implementations are complicated. There are issues around speed, but a lot of advances have been made. There are real questions about whether this is trying to get us used to scanning images on the phone because maybe in the future, this is going to be just about scanning all images, not just those uploaded to the cloud. That raises some interesting policy questions. This is one where Professor Dellinger and I probably disagree. I'm open to that conversation about whether this proposal should have been just limited to photos that are being uploaded to or stored in the cloud. I'm not sure that this limit makes sense, given Apple's articulated goals.
If we get to the point where we're comfortable that there aren't going to be a lot of false positives, and if we're comfortable with the fact that we're really just talking about child sexual abuse materials, then is it reasonable for Apple to say "Look, we put a lot of time and energy and resources into making this device. We're not interested in our device being used to harm children. If you want to do that, buy somebody else's device. And we're going to scan for that." This is why I was puzzled by the decision to restrict scanning to images that are going to be stored in the cloud. So you're only going to catch the dumb people who abuse children? The ones who are smart and know not to upload to iCloud but figure out another way to share images aren't going to be caught? That never really made much sense to me.
You have both researched different governance models that may address some of these issues. Can you tell us more about that research?
Hoffman: The Platform Accountability Project has a research team at the Kenan Institute for Ethics and the Sanford School of Public Policy. What we've been doing is looking at how the technology platforms are generally regulated under what we would call mostly an enforcement model: We wait for bad things to happen that violate the law. Then a small group of lawyers does investigations and regulatory enforcement actions or litigation to hold them accountable. That's not the way things operate in many other industry sectors. We've been looking at environmental regulations, oil and gas, and offshore exploration. We've been looking at the financial sector and asking the question "How do these sectors hold companies accountable?"39 It tends to be a mix of enforcement and much more of a focus on what we would call supervision and monitoring. This approach entails either regulators or independent private parties overseen by the regulators advising businesses on what they need to do to be in compliance and working with them to get into compliance.
Each of these industry sectors has different approaches to implement both the enforcement side and the supervision and monitoring side. We've been playing around with what those would look like with respect to technology platforms, to have a system that would be more likely to hold them accountable. We address some of these issues in a special three-episode series on the "Ways and Means" podcast from the Sanford School of Public Policy.40
What should judges know when it comes to this area of device privacy?
Hoffman: I think it will be important for judges to understand that these issues are going to continue to come up in the context of these technologies. How far would they allow the relationship between the government and the private sector to go before it would be deemed a search under the Fourth Amendment? That's an absolutely fascinating question.
I think the other thing that would be interesting for judges to think about is what role there should be, moving forward, for individuals to hold technology companies accountable by filing their own claims. One of the things that our research team has been looking into is the potential applicability of two different laws that allow for such actions to be brought: the False Statements Accountability Act and the False Claims Act. The False Statements Accountability Act criminalizes knowingly and willfully making a materially false statement to the government.41 However, enforcement of the law requires that the Department of Justice bring an action against the party making false statements.
The False Claims Act is even more interesting to me. Currently, it only applies to claims that are made to the government for money or property,42 and it allows for qui tam lawsuits, so that individual employee whistleblowers can bring a claim stating that the representation is false.43 But if it were modified and extended to apply in this situation, it would allow for more private enforcement. So you could imagine a situation where we would say "Okay, Apple, you're making all of these promises; we actually want you to make those in writing in a government filing, to renew that every year, and to make sure that you've done a full analysis that your implementation continues to comply with all of the processes that you've put in place."
And if qui tam lawsuits applied, so that a corporate employee who wants to be a whistleblower could challenge that claim on False Claims Act grounds, then that whistleblower would be allowed to keep a substantial amount of the recovery. That is a great incentive for whistleblowers to come forward, even if they know that their career could be substantially impacted. For example, in a 2019 case involving a software development company, the employee whistleblower recovered $4 million as part of the incentive.44 These are the types of governance and accountability models we need to look at from a policy perspective. It would be interesting for judges to think about, if cases like that came in front of them, how they would receive them.
Dellinger: Another thing that I would throw in for judges to consider, going back to an earlier point, is to ask when these types of searches would require a warrant if conducted by or at the direction of law enforcement. After Carpenter v. United States, police need a warrant to obtain more than seven days' worth of cell-site location information from a phone company.45
But we also have some recent reporting about law enforcement just buying location data from data brokers or companies.46 That would seem to be kind of an end run around a Fourth Amendment protection. If you would require a warrant for the government to get a week of that information directly from a phone company or an internet service provider, then why would it be okay to buy a month's worth of location data from a data broker who purchased that data from one of those companies? What really is a search? And I can answer that question for my Privacy Law class from the traditional Fourth Amendment perspective, but I'm wondering if it is changing. I'm wondering if there should be more of an eye out for what the relationship is between government and private companies. And when we're going to have these purchases of information that can be used against people and against their best interests or personal interests, then shouldn't that mean that we have a different view of companies collecting this information and to whom it is sent?
A lot of the concern is not having visibility into what happens because so many times something that might be dissent, speaking out against the government or engaging in some kind of critical activity, can be branded as a security threat. Then whatever is happening behind the scenes from an intelligence perspective may not be visible to the public for a variety of reasons that are understandable from the intelligence community's standpoint. But it's dissent, and dissent is incredibly valuable to a democracy. Now, all of a sudden, we can't see what's happening. My concern is about how searches are changing when the government has been told by the Supreme Court "Here's the rule," but then they can just go do it this other way and accomplish the same end.
Hoffman: I want to echo that this is a really important point for judges to start thinking about: When we do the Fourth Amendment analysis, and we think about the traditional reasonable expectations of privacy question, we get hung up on the fact that data already has been provided by the individual to someone. It gets captured under what we call the "third-party doctrine," and then is not thought to have any privacy protection anymore.47
Additionally, we need to start thinking more about the advanced data analytics that these data brokers run on much of this data, whether it's data that comes from a company that's collected it from an individual or even aggregations of publicly available data and government records. This is concerning because when the government uses those insights, particularly for a law enforcement function, it can learn more than people would ever reasonably expect. People may know that they have to provide this data, but they do not expect that by examining a credit card bill someone could determine whether or not they are paying their child support, for example. Advanced data analytics and the data broker ecosystem are increasingly creating these kinds of Fourth Amendment questions.
We really need to move toward an understanding that much of this is well beyond the publicās reasonable expectations of privacy.
Footnotes: