Child sexual abuse can be headed off before it occurs. That’s where we’re putting our energy going forward.
Apple also told The Wall Street Journal that Advanced Data Protection, which allows users to opt into end-to-end encryption of new categories of personal data stored in iCloud, will launch in the US this year and globally in 2023.
For an explanation of the new security protections announced today, be sure to catch Joanna Stern’s full interview with Craig Federighi.
First, iMessage Contact Key Verification allows users to verify that they are communicating with the person they think they are. The feature alerts users who have enabled it if someone infiltrates cloud services to gain access to their iMessage conversations. For even greater security, users can compare a Contact Verification Code in person, on FaceTime, or through another secure channel.
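Apple hasn’t published the exact derivation behind those codes, but the concept is straightforward: each device independently computes a short fingerprint over both parties’ public identity keys, and if the codes match, no server has slipped a rogue key into the conversation. Here’s a minimal Swift sketch of the idea; the key types and derivation are illustrative stand-ins, not Apple’s actual implementation:

```swift
import CryptoKit
import Foundation

// Illustrative only: derive a short, human-comparable code from both
// parties' public identity keys. Apple hasn't published its derivation;
// this just demonstrates the concept behind out-of-band verification.
func verificationCode(mine: Curve25519.Signing.PublicKey,
                      theirs: Curve25519.Signing.PublicKey) -> String {
    // Sort the raw keys so both devices compute the identical code,
    // regardless of which party is "mine" and which is "theirs".
    let keys = [mine.rawRepresentation, theirs.rawRepresentation]
        .sorted { $0.lexicographicallyPrecedes($1) }
    var hasher = SHA256()
    keys.forEach { hasher.update(data: $0) }
    // Truncate the digest into a short numeric code that's easy to
    // read aloud in person or over FaceTime.
    return hasher.finalize().prefix(4)
        .map { String(format: "%03d", $0) }
        .joined(separator: " ")
}
```

Because both sides hash the same sorted key material, the codes only match when each device holds the other’s genuine key.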
Second, Security Keys lets users adopt hardware security keys when signing into their iCloud accounts. The new system is an enhancement over two-factor authentication because it prevents an attacker from obtaining a user’s second factor through a phishing scam.
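Apple’s iCloud sign-in flow is internal to the company, but the same class of FIDO2 hardware keys is already available to third-party apps through the AuthenticationServices framework. Here’s a minimal sketch of an assertion (sign-in) request; the relying-party identifier and challenge are placeholders:

```swift
import AuthenticationServices
import Foundation

// Minimal sketch of a FIDO2 hardware-key sign-in request. "example.com" and
// the challenge are placeholders; a real server issues a fresh, random
// challenge for every sign-in attempt.
func requestSecurityKeyAssertion(
    challenge: Data,
    delegate: ASAuthorizationControllerDelegate & ASAuthorizationControllerPresentationContextProviding
) {
    let provider = ASAuthorizationSecurityKeyPublicKeyCredentialProvider(
        relyingPartyIdentifier: "example.com")
    let request = provider.createCredentialAssertionRequest(challenge: challenge)
    let controller = ASAuthorizationController(authorizationRequests: [request])
    controller.delegate = delegate
    controller.presentationContextProvider = delegate
    controller.performRequests()
}
```

The anti-phishing property comes from the key signing a challenge that is cryptographically bound to the relying party, so a credential registered for one domain is useless on a lookalike site, which is exactly the weakness of one-time codes.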
iCloud already protects 14 sensitive data categories using end-to-end encryption by default, including passwords in iCloud Keychain and Health data. For users who enable Advanced Data Protection, the total number of data categories protected using end-to-end encryption rises to 23, including iCloud Backup, Notes, and Photos. The only major iCloud data categories that are not covered are iCloud Mail, Contacts, and Calendar because of the need to interoperate with the global email, contacts, and calendar systems.
Apple says that iMessage Contact Key Verification will be available globally in 2023, and Security Keys is coming in early 2023. Advanced Data Protection for iCloud is available in the US today for participants in Apple’s beta OS program and will presumably roll out to everyone with the next point release of Apple’s OSes.
Over the years, I’ve shared family photos with my wife Jennifer in three ways: iMessage, AirDrop, and Shared Albums. Of those, iMessage has won hands down, not because it’s the best way to share photos, but because Messages is an app we already use every day to communicate. Plus, sharing photos with Messages is easy whether you’re already in the app and using the Photos iMessage app or in the Photos app itself and using the share sheet. From conversations with friends and family, I know I’m not alone in my scattershot approach to sharing photos with my family.
It’s into that chaotic, ad hoc mess, and all of the variations users have improvised over the years, that Apple is stepping with iCloud Shared Photo Library, its marquee new Photos feature for iOS and iPadOS 16 and macOS Ventura. And you know what? It just works.
The feature lets anyone with an iCloud photo library share part or all of their photo library with up to five other people. Once activated, a new library is created that sits alongside your existing one and counts against the iCloud storage of the person who created it.
One critical limitation of iCloud Shared Photo Library is that you can only be a member of one shared library, a restriction that is designed to limit the library to your immediate household. That means I could share photos with my wife and kids because there are fewer than six of us, but I couldn’t set up another library with my siblings or parents for our extended families. Nor could I invite one of my extended family members to use the extra slot I’ve got in my family library unless they were willing to forgo being part of any shared library their own family created.
Unwinding a shared library
So, what do you do if you’re in a shared library and want to join a different one? There’s a button in the Photos section of Settings for leaving a library with one tap, either saving all of the photos in the shared library to your personal library or keeping just those you originally contributed to the shared pool. Deleting libraries is possible too, but only by the person who created them, who is given the same choice of keeping all images or just the ones they contributed.
Safety Check is a new iOS privacy feature that allows users whose safety may be at risk due to domestic or intimate partner violence to review and revoke access previously granted to their partner. Apple developed the feature in consultation with groups that support the victims of domestic and intimate partner violence, including the National Network to End Domestic Violence, the National Center for Victims of Crime, and Wesnet, The Women’s Services Network.
Speed is important when someone’s safety may be at risk. Instead of hunting app-by-app to undo privileges granted to a partner, Safety Check includes an emergency reset that signs a user out of their iCloud account on other devices that might be in a partner’s possession, stops sharing their location, resets privacy permissions across all apps, and limits messaging to the device they have with them.
Safety Check also includes tools to manage sharing with people and apps. The feature can show you what you’re sharing and with whom and then change the access settings if you’d like. For apps, each type of private information that’s shared is listed and can be disallowed. It promises to be a great way to perform a periodic personal privacy audit.
With the number of apps we use and people we share with growing all the time, a tool like Safety Check is critical. It eliminates the need to go hunting for multiple privacy settings by consolidating the information in one place, which is important if you’re someone whose safety is at risk.
Today at the International Association of Privacy Professionals’ Global Privacy Summit, Apple CEO Tim Cook delivered a keynote speech on privacy. The IAPP’s mission is to act as a resource for privacy professionals to help them in their efforts to manage information privacy risks and protect data for their organizations.
Cook told the assembled crowd that Apple supports privacy regulation, including GDPR in the EU and privacy laws in the US. However, Cook also expressed concern about unintended consequences of laws being considered in the US and elsewhere, calling out sideloading proposals in particular. Cook said that although Apple supports competition, alternate app stores and sideloading are not the solution because they would open devices up to apps that sidestep the company’s tracking protections and could expose users to malware.
Concluding his remarks, Cook called on tech companies and governments to work together to fashion policies that don’t undermine user privacy. Ending on an optimistic note, Cook told the gathering that although the world is at a pivotal moment for privacy, the end of privacy as we know it is not inevitable.
This user guide is a personal safety resource for anyone who is concerned about or experiencing technology-enabled abuse, stalking, or harassment. It can help you sever digital ties with those you no longer want to be connected to and outlines the personal safety features that are built into Apple devices.
In addition to being available on Apple’s website, the Personal Safety Guide can be downloaded as a PDF.
Regarding AirTags and other Find My accessories, Apple’s guide explains what the device’s alerts mean, providing users with the context necessary to know how to respond. The guide also offers suggestions for what to do if an unknown device is following them.
It’s good to see Apple’s Personal Safety Guide actively maintained. Apple has built-in safety measures for devices like AirTags, but it’s equally important that users know how to take advantage of those safety features, which the Personal Safety Guide should help with.
The problem of AirTags being used to stalk people has been in the news ever since they were released last spring, but a recent story in The New York Times has brought the issue to the forefront again. AirTags are fantastic when used as intended to keep track of your keys, luggage, and other personal items, but stalking is a serious problem that Apple should do everything it can to prevent.
Apple is also in a unique position given the vast size of its Find My network, which puts the company in a different league than competitors like Tile and carries greater responsibility with it.
In a story on Peer Reviewed, Matt VanOrmer puts a finger on something I’ve been wondering for a while: Are AirTags contributing to the problem of stalking or merely making us more aware of it because of the unique stalking countermeasures built into the device? It’s a classic causation/correlation question that is worth reflecting on. As VanOrmer explains:
I think the increase in news stories about AirTag stalking situations are less indicative of AirTags causing more stalking, and more indicative of how frequently stalkings already occur — with AirTags’ anti-stalking features simply bringing more of these horrible situations to light. These stories may be a classic example of the Baader-Meinhof phenomenon (AKA the “Frequency Illusion”) — in which increased awareness of creeps using AirTags to stalk women creates the illusion that it is happening more often, or even that AirTags are responsible for this illusory increase in incidence.
As VanOrmer rightly points out, Apple should do everything it can to prevent AirTags from being used to track people, which includes improving the tools available to Android users for whom Apple has made an app that is generally viewed as insufficient. This is also a topic where some added transparency about what Apple is doing to address concerns about stalking would help observers decide whether it’s enough instead of having only anecdotal news reports to go on. However, given the wide-reaching impact of the Find My network, which affects people who aren’t even Apple customers, I think a third-party audit of how Apple is handling the security and privacy implications of AirTags is warranted.
Last week, Apple announced two new child safety features coming this fall that stirred up controversy in the security and privacy world. The first is a technology that scans photos uploaded to customers’ iCloud Photo Libraries for digital fingerprints that match a database of known Child Sexual Abuse Material, or ‘CSAM,’ maintained by the National Center for Missing and Exploited Children, a quasi-governmental entity in the US. The other is a machine learning-based technology used by Messages on an opt-in basis to alert children, and if they are under 13, their parents, of images flagged by the system as potentially pornographic.
The two technologies are different, but by announcing them at the same time in a way that wasn’t always clear, Apple found itself embroiled in controversy. The company has since tried to clarify the situation by publishing a set of FAQs that go into more detail about the upcoming features than the initial announcement did.
Then today, Apple’s senior vice president of Software Engineering, Craig Federighi, sat down with Joanna Stern of The Wall Street Journal for a video interview to explain the two features and how they work. Stern’s interview is well worth watching because it does more in just under 12 minutes to clarify what Apple is doing, and just as importantly not doing, than anything else I’ve watched or read.
Today, Apple announced three new child safety features that will launch with updates to its operating systems in the fall. The implementation details of the features are technically complex, which makes reading the full documentation worthwhile if you are concerned about how they work.
The first feature is a tool for parents that will be built into Messages. According to Apple:
The Messages app will use on-device machine learning to warn about sensitive content, while keeping private communications unreadable by Apple.
The opt-in tool will “warn children and their parents when receiving or sending sexually explicit photos.”
The second feature applies to photos stored online in users’ iCloud Photos library. Apple says:
iOS and iPadOS will use new applications of cryptography to help limit the spread of CSAM online, while designing for user privacy. CSAM detection will help Apple provide valuable information to law enforcement on collections of CSAM in iCloud Photos.
The screening of iCloud Photos images happens on-device using cryptographic hashes of known CSAM content, and an account must clear a human review process after certain match thresholds are exceeded before it is disabled and a report is made to the National Center for Missing and Exploited Children. The feature will be US-only at first.
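Apple’s actual pipeline is more elaborate: it pairs a perceptual hash called NeuralHash with private set intersection so a device never learns whether an individual image matched. But the threshold idea itself is simple to illustrate. A rough Swift sketch, with the fingerprint set and threshold as stand-ins:

```swift
import Foundation

// Illustrative sketch of threshold matching, not Apple's actual system.
// The point of the threshold is that no single match, or even a small
// number of false positives, is enough to escalate an account; review
// only happens once the count crosses the line.
struct ThresholdScanner {
    let knownFingerprints: Set<Data>  // fingerprints of known CSAM
    let reviewThreshold: Int          // matches required before human review

    func exceedsThreshold(uploadedHashes: [Data]) -> Bool {
        let matches = uploadedHashes.filter(knownFingerprints.contains).count
        return matches >= reviewThreshold
    }
}
```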
[it] is also expanding guidance in Siri and Search by providing additional resources to help children and parents stay safe online and get help with unsafe situations. For example, users who ask Siri how they can report CSAM or child exploitation will be pointed to resources for where and how to file a report.
Siri and Search will also intervene when CSAM-related search requests are made.
To understand better how these features are being implemented by Apple, it’s worth visiting its new child safety webpage. At the bottom of the page are links to additional resources that explain the technology underlying the features.