Last week, Apple announced two new child safety features coming this fall that stirred up controversy in the security and privacy world. The first is a technology that scans photos uploaded to customers’ iCloud Photo Libraries for digital fingerprints matching a database of known Child Sexual Abuse Material, or ‘CSAM,’ maintained by the National Center for Missing and Exploited Children (NCMEC), a quasi-governmental entity in the US. The second is a machine learning-based technology used by Messages on an opt-in basis to alert children, and, if they are under 13, their parents, when an image is flagged by the system as potentially pornographic.
The two technologies are distinct, but by announcing them simultaneously in a way that wasn’t always clear, Apple found itself embroiled in controversy. The company has since tried to clarify the situation by publishing a set of FAQs that go into more detail about the upcoming features than the initial announcement did.
Then today, Apple’s senior vice president of Software Engineering, Craig Federighi, sat down with Joanna Stern of The Wall Street Journal for a video interview to explain the two features and how they work. Stern’s interview is well worth watching because, in just under 12 minutes, it does more to clarify what Apple is doing, and just as importantly what it is not doing, than anything else I’ve watched or read.