Why Apple’s child safety updates are so controversial

Last week, Apple announced a number of updates intended to strengthen child safety features on its devices. Among them: a new technology that can scan photos on users’ devices to detect child sexual abuse material (CSAM). Although the change has been broadly welcomed by some lawmakers and child safety advocates, it has prompted alarm from many security and privacy experts, who say the update amounts to Apple walking back its commitment to put user privacy above all else.

Apple has pushed back on this characterization, saying its approach balances privacy with the need to do more to protect children by preventing some of the most odious content from spreading more broadly.

What did Apple announce?

Apple announced three separate updates, all of which fall under the umbrella of “child safety.” The most significant one, and the one that has gotten the most attention, is a feature that will scan iCloud Photos for known CSAM. The feature, which is built into iCloud Photos, compares a user’s photos against a database of previously identified material. If a certain number of those images is detected, it triggers a review process. If the images are verified by human reviewers, Apple will suspend that iCloud account and report it to the National Center for Missing and Exploited Children (NCMEC).

Apple also previewed new “communication safety” features for the Messages app. This update allows the Messages app to detect when sexually explicit photos are sent to or received by children. Importantly, the feature is only available for children who are part of a family account, and parents must opt in.

If parents opt in, they will be alerted if a child under 13 views one of these photos. For children over 13, the Messages app will display a warning when they receive an explicit image, but will not alert their parents. Although the feature is part of Messages and separate from CSAM detection, Apple noted that it could still play a role in stopping child exploitation because it could disrupt predatory messages.
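For illustration, here is a minimal sketch of that decision flow in Swift. The type and function names are invented for this example and are not Apple’s API; the sketch only restates the policy described above.

```swift
// Hypothetical types for illustration only; not Apple's implementation.
enum MessagesSafetyAction {
    case none                       // feature not enabled for this child
    case warnChildOnly              // older children: on-device warning, no parental alert
    case warnChildAndNotifyParents  // under 13: warning plus an alert to parents
}

func actionForExplicitImage(parentOptedIn: Bool, childAge: Int) -> MessagesSafetyAction {
    guard parentOptedIn else { return .none }   // parents must opt in first
    return childAge < 13 ? .warnChildAndNotifyParents : .warnChildOnly
}
```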

Finally, Apple is updating Siri and Search so that they can “intervene” in CSAM-related queries. If someone asks how to report abuse material, for example, Siri will provide links to resources for doing so. And if it detects that someone may be searching for CSAM, it will display a warning and surface resources that offer help.

When does this happen, and can you opt out?

The changes will be part of iOS 15, which will roll out later this year. Users can effectively opt out by disabling iCloud Photos (instructions for doing so can be found here). However, anyone disabling iCloud Photos should keep in mind that doing so could affect their ability to access photos across multiple devices.

So how does this image scanning work?

Apple is far from the only company that scans photos for CSAM, but its approach to doing so is unique. The detection relies on a database of known material maintained by NCMEC and other safety organizations. These images are “hashed” (Apple’s official name for its process is NeuralHash), meaning they are converted into numerical codes that allow them to be identified even if they have been modified in some way, such as being cropped or otherwise visually altered. As mentioned earlier, CSAM detection only works if iCloud Photos is enabled. What is notable about Apple’s approach is that, rather than matching images once they have been uploaded to the cloud, as most cloud platforms do, Apple has moved that process onto users’ devices.
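Apple has not published the full details of NeuralHash, but the general idea behind perceptual hashing can be shown with a much simpler stand-in. The Swift sketch below is a toy “average hash” operating on a hypothetical 8x8 grid of grayscale values rather than a real image; it only illustrates how an image can be reduced to a short code, and how two codes can be compared in a way that tolerates small edits.

```swift
// Toy "average hash" over a hypothetical 8x8 grid of brightness values (0.0–1.0).
// This is NOT NeuralHash; it is a generic perceptual-hash illustration.
func averageHash(_ grid: [[Double]]) -> UInt64 {
    let pixels = grid.flatMap { $0 }
    let mean = pixels.reduce(0, +) / Double(pixels.count)
    var hash: UInt64 = 0
    for (i, pixel) in pixels.enumerated() where pixel > mean {
        hash |= 1 << i   // set bit i when this pixel is brighter than the average
    }
    return hash
}

// Two codes "match" when they differ in only a few bits (a small Hamming distance),
// which is how a cropped or re-encoded copy can still map to the same database entry.
func isLikelyMatch(_ a: UInt64, _ b: UInt64, maxBitsDifferent: Int = 4) -> Bool {
    (a ^ b).nonzeroBitCount <= maxBitsDifferent
}
```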

Here’s how it works: the known CSAM hashes are stored on the device, and photos on the device are compared against those hashes. The iOS device then generates an encrypted “safety voucher” that is sent to iCloud along with the image. If an account reaches a certain threshold of CSAM matches, Apple can decrypt the safety vouchers and perform a manual review of those images. Apple is not saying what the threshold is, but has said that a single image would not result in any action.
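Here is a rough sketch of that threshold step, with invented names and an invented threshold value (Apple has not said what the real number is). In the actual system the match result inside each voucher is hidden by encryption until the threshold is crossed; the sketch only shows the counting logic.

```swift
// Simplified stand-in for the per-photo "safety voucher" described above.
// In the real system the match result is encrypted and unreadable below the threshold.
struct SafetyVoucher {
    let photoID: String
    let matchedKnownHash: Bool
}

// Review only becomes possible once an account's matching vouchers cross the threshold;
// a single matching image, or any count below the threshold, results in no action.
func shouldTriggerManualReview(vouchers: [SafetyVoucher], threshold: Int) -> Bool {
    vouchers.filter { $0.matchedKnownHash }.count >= threshold
}

let vouchers = [SafetyVoucher(photoID: "IMG_0001", matchedKnownHash: true)]
print(shouldTriggerManualReview(vouchers: vouchers, threshold: 30))  // false: one match alone does nothing
```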

Apple has also published a detailed technical explanation of the process here.

Why is it so controversial?

Privacy advocates and security researchers have raised a number of concerns. One is that the move looks like a major reversal for Apple, which five years ago refused the FBI’s request to unlock a phone and put up billboards declaring “What happens on your iPhone stays on your iPhone.” For many, the fact that Apple has created a system that can proactively check your images for illegal material and refer them to law enforcement feels like a betrayal of that promise.

In a statement, the Electronic Frontier Foundation called it “a shocking about-face for users who have relied on the company’s leadership in privacy and security.” Similarly, Facebook, which has spent years taking heat from Apple over its privacy practices, took issue with the iPhone maker’s approach to CSAM. WhatsApp chief Will Cathcart described it as “a surveillance system built and operated by Apple.”

More specifically, there are real concerns that once such a system has been created, Apple could be pressured by law enforcement or governments to search for other types of material. Although CSAM detection will only launch in the United States to begin with, Apple has suggested that it could eventually expand to other countries and work with other organizations. It is not difficult to imagine scenarios in which Apple could be pressured to start looking for other types of illegal content in some countries. The company’s concessions in China, where Apple reportedly “ceded control” of its data centers to the Chinese government, are cited as evidence that the company is not immune to the demands of less democratic governments.

There are other questions, too. Such as whether it would be possible for someone to abuse this process by maliciously getting CSAM onto another person’s device in order to make them lose access to their iCloud account. Or whether a false positive or some other scenario could result in someone being wrongly flagged by the company’s algorithms.

What does Apple say about it?

Apple has firmly denied that it is degrading privacy or walking back its previous commitments. The company published a second document in which it attempts to respond to many of these claims.

On the issue of false positives, Apple has repeatedly emphasized that it is only comparing users’ photos against a collection of known child exploitation material, so images of, say, your own children will not trigger a report. In addition, Apple says the chance of a false positive is about one in a trillion when you factor in that a certain number of images must be detected to even trigger a review. Crucially, though, Apple is essentially saying we just have to take its word for it. As former Facebook security chief Alex Stamos and security researcher Matthew Green wrote in a joint New York Times op-ed, Apple has not given outside researchers much visibility into how any of this actually works.
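The threshold is what drives that “one in a trillion” claim. As a hedged back-of-the-envelope illustration, with purely invented numbers and none of Apple’s published analysis, requiring several independent matches before any review shrinks the odds of an accidental flag very quickly:

```swift
import Foundation

// Invented, illustrative numbers only: if each image false-matches independently
// with probability p, the chance that t particular images all false-match is p^t.
let p = 1e-6        // hypothetical per-image false-match probability
let t = 10.0        // hypothetical number of matches required before review
let chanceAllFalseMatch = pow(p, t)
print(chanceAllFalseMatch)   // 1e-60: far smaller than the per-image rate
```

A full analysis would also have to account for the size of a real photo library, which is part of why outside researchers want to check Apple’s math rather than take the figure on trust.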

Apple further says that its manual review, which relies on human reviewers, would be able to detect whether CSAM ended up on a device as the result of some kind of malicious attack.

As for pressure from governments or law enforcement agencies, the company has essentially said it would refuse to cooperate with such requests. “We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands,” it writes. “We will continue to refuse them in the future. Let us be clear, this technology is limited to detecting CSAM stored in iCloud and we will not accede to any government’s request to expand it.” Though, once again, we just have to take Apple at its word here.

If it’s so controversial, why is Apple doing it?

The short answer is that the company believes it has found the right balance between child safety and privacy. CSAM is illegal and, in the United States, companies are obligated to report it when they find it. As a result, CSAM detection features have been baked into popular services for years. But unlike other companies, Apple has not checked for CSAM in users’ photos, largely because of its stance on privacy. Unsurprisingly, this has been a major source of frustration for child safety organizations and law enforcement.

To put this into perspective: Facebook reported 65 million instances of CSAM on its platform in 2019, according to The New York Times. Google reported 3.5 million photos and videos, while Twitter and Snap reported “more than 100,000.” Apple, by contrast, reported 3,000 photos.

That’s not because predators don’t use Apple’s services, but because Apple hasn’t been nearly as aggressive as other platforms in looking for this material, and its privacy features have made doing so difficult. What has changed now is that Apple says it has come up with a technical means of detecting collections of known CSAM in iCloud photo libraries that still respects users’ privacy. Obviously, there is a lot of disagreement about the details and about whether any kind of detection system can truly be “private.” But Apple has calculated that the tradeoff is worth it. “If you’re storing a collection of CSAM material, yes, this is bad for you,” Apple’s head of privacy told The New York Times. “But for the rest of you, this is no different.”

