In a video interview with the Wall Street Journal, Apple SVP of Software Engineering Craig Federighi discusses the reaction to the Child Safety features announced last week.
Federighi admits that the simultaneous announcement of the Messages protections for children and CSAM scanning, two features that sound similar but work in very different ways, has caused customer confusion, and that Apple could have done a better job of communicating the new initiative.
In the discussion with the Journal’s Joanna Stern, Federighi reiterates that the CSAM scanning technology only applies to photos set to be uploaded to iCloud, and is disabled if iCloud Photos is not in use.
Federighi says, “We wish that this had come out a little more clearly, because we feel very positively and strongly about what we are doing, and we can see that it has been widely misunderstood.”
The CSAM feature uses cryptographic hashes to detect known child pornographic content on a user’s device. However, a single match does not immediately flag an account. To mitigate the chance of false positives, Apple requires a minimum threshold of matched content before an account is flagged.
When that threshold is met, the account is reported to Apple for human review and ultimately to NCMEC (the National Center for Missing & Exploited Children). Apple had not previously specified this threshold explicitly, but Federighi appears to disclose it in this interview, saying it is ‘something on the order of 30 known pornographic images matching’.
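To illustrate the threshold logic Federighi describes, here is a minimal Swift sketch. It is not Apple’s actual implementation (which uses on-device matching combined with private set intersection and threshold secret sharing, so matches remain unreadable below the threshold); the type and property names are hypothetical.

```swift
import Foundation

// Simplified illustration of threshold-based hash matching.
// All names here are hypothetical, not Apple's API.
struct CSAMMatcher {
    /// Hashes of known CSAM images supplied by child-safety organizations.
    let knownHashes: Set<String>

    /// Minimum number of matches before an account is surfaced for review.
    /// Federighi put this "on the order of 30" in the interview.
    let reviewThreshold = 30

    /// Returns true only when enough uploads match known hashes;
    /// a handful of matches below the threshold is never reported.
    func shouldFlagForHumanReview(uploadedImageHashes: [String]) -> Bool {
        let matchCount = uploadedImageHashes.filter { knownHashes.contains($0) }.count
        return matchCount >= reviewThreshold
    }
}
```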
He also says that even for a flagged account, human reviewers see only the matched images; the feature does not give them unfettered access to the user’s entire photo library.
Asked whether Apple is giving governments a ‘backdoor’ through the implementation of this feature, Federighi plainly rejects the assertion. He says Apple designed the system with multiple levels of auditing so that it cannot be misused, and notes that the same database of hashes will be used worldwide.
Separate from the CSAM scanning and reporting facility for iCloud Photos uploads, Apple is also going to release a new feature for the Messages app that offers safeguards against children being sent images containing nudity. An on-device machine learning algorithm will look for nudity in photos sent via Messages, and redact the image until the user confirms that they want to see it.
For Apple ID accounts of children under the age of 13, parents can optionally enable a notification setting that will inform them when their child views an image containing nudity. Federighi says this feature is designed to help relieve the concerns of parents who fear their children are being groomed and coached not to talk about the relationship with their family.
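As a rough sketch of the flow described above (blur first, reveal only on confirmation, and notify parents solely for opted-in accounts of children under 13), here is an illustrative Swift example. The classifier call and notification hook are hypothetical stand-ins, not Apple’s actual APIs.

```swift
import Foundation

/// Illustrative only: the real feature runs a machine-learning model entirely on device.
enum ImagePresentation {
    case showNormally
    case blurWithWarning   // redacted until the child confirms they want to see it
}

struct CommunicationSafetySketch {
    let childIsUnder13: Bool
    let parentNotificationsEnabled: Bool

    /// Stand-in for the on-device nudity classifier.
    func likelyContainsNudity(_ imageData: Data) -> Bool {
        // A real implementation would evaluate a model on device; nothing leaves the phone.
        return false
    }

    /// Decides how an incoming Messages image should be presented.
    func presentation(for imageData: Data) -> ImagePresentation {
        likelyContainsNudity(imageData) ? .blurWithWarning : .showNormally
    }

    /// Called if the child taps through the warning and views the image.
    func childConfirmedViewing() {
        if childIsUnder13 && parentNotificationsEnabled {
            notifyParents()   // only fires for opted-in accounts of children under 13
        }
    }

    private func notifyParents() {
        // Stand-in for whatever parental-notification mechanism ships with the feature.
    }
}
```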
Federighi says Apple did this now because ‘they figured it out’ (as in, Apple worked out a way to offer CSAM protections while maintaining the privacy of its user base), and that it was not pressured by others to act.
The new Child Safety features will roll out later this year as an update to iOS 15. Initially, they will be enabled only in the United States; Apple will announce expansions to other markets at a later date.
Author: Benjamin Mayo
Source: 9to5Mac