Today at the International Association of Privacy Professionals’ Global Privacy Summit, Apple CEO Tim Cook delivered a keynote speech on privacy. The IAPP’s mission is to act as a resource for privacy professionals to help them in their efforts to manage information privacy risks and protect data for their organizations.
Cook’s speech, which was livestreamed on the IAPP’s YouTube channel, began with a recap of Apple’s efforts to protect user privacy, including App Tracking Transparency, alluding to the “A Day in the Life of Your Data” white paper the company published early last year.
Cook told the assembled crowd that Apple supports privacy regulation, including GDPR in the EU and privacy laws in the US. However, Cook also expressed concern about unintended consequences that laws being considered in the US and elsewhere might cause, calling out sideloading proposals in particular. Cook said that although Apple supports competition, alternate app stores and sideloading are not the solution because they would open devices up to apps that sidestep the company’s tracking protections and could expose users to malware.
Concluding his remarks, Cook called on tech companies and governments to work together to fashion policies that don’t undermine user privacy. Ending on an optimistic note, Cook told the gathering that although the world is at a pivotal moment for privacy, the end of privacy as we know it is not inevitable.
As first spotted by 9to5Mac, Apple has expanded its Personal Safety Guide, which serves as a hub for information about device and data access when your safety is at risk. New details in the guide cover AirTags, which have received a lot of attention as stories have surfaced of their use to stalk people, but the guide is broader than that, covering a wide range of topics. As Apple explains at the beginning of the guide:
This user guide is a personal safety resource for anyone who is concerned about or experiencing technology-enabled abuse, stalking, or harassment. It can help you sever digital ties with those you no longer want to be connected to and outlines the personal safety features that are built into Apple devices.
In addition to accessing the Personal Safety Guide on Apple’s website, it’s available as a downloadable PDF.
Regarding AirTags and other Find My accessories, Apple’s guide explains what the device’s alerts mean, providing users with the context they need to respond, and offers suggestions on what to do if an unknown device appears to be following them.
It’s good to see Apple’s Personal Safety Guide actively maintained. Apple has built-in safety measures for devices like AirTags, but it’s equally important that users know how to take advantage of those safety features, which the Personal Safety Guide should help with.
The problem of AirTags being used to stalk people has been in the news ever since they were released last spring, but a recent story in The New York Times has brought the issue to the forefront again. AirTags are fantastic when used as intended to keep track of your keys, luggage, and other personal items, but stalking is a serious problem that Apple should do everything it can to prevent.
Apple is also in a unique position given the vast size of its Find My network, which puts the company in a different league than competitors like Tile and carries greater responsibility with it.
In a story on Peer Reviewed, Matt VanOrmer puts his finger on something I’ve been wondering for a while: Are AirTags contributing to the problem of stalking or merely making us more aware of it because of the unique stalking countermeasures built into the device? It’s a classic causation/correlation question that is worth reflecting on. As VanOrmer explains:
I think the increase in news stories about AirTag stalking situations are less indicative of AirTags causing more stalking, and more indicative of how frequently stalkings already occur — with AirTags’ anti-stalking features simply bringing more of these horrible situations to light. These stories may be a classic example of the Baader-Meinhof phenomenon (AKA the “Frequency Illusion”) — in which increased awareness of creeps using AirTags to stalk women creates the illusion that it is happening more often, or even that AirTags are responsible for this illusory increase in incidence.
As VanOrmer rightly points out, Apple should do everything it can to prevent AirTags from being used to track people, which includes improving the tools available to Android users, for whom Apple has made an app that is generally viewed as insufficient. This is also a topic where added transparency about what Apple is doing to address concerns about stalking would help observers decide whether it’s enough, instead of having only anecdotal news reports to go on. Moreover, given the wide-reaching impact of the Find My network, which affects people who aren’t even Apple customers, I think a third-party audit of how Apple is handling the security and privacy implications of AirTags is warranted.
Last week, Apple announced two new child safety features coming this fall that stirred up controversy in the security and privacy world. The first is a technology that scans photos that are uploaded to customers’ iCloud Photo Libraries for digital fingerprints that match a database of known Child Sexual Abuse Material or ‘CSAM’ that is maintained by the National Center for Missing and Exploited Children, a quasi-governmental entity in the US. The other is a machine learning-based technology used by Messages on an opt-in basis to alert children, and, if they are under 13, their parents, of images flagged by the system as potentially pornographic.
The two technologies are different, but by announcing them at the same time in a way that wasn’t always clear, Apple found itself embroiled in controversy. The company has since tried to clarify the situation by publishing a set of FAQs that go into more detail about the upcoming features than the initial announcement did.
Then today, Apple’s senior vice president of Software Engineering, Craig Federighi, sat down with Joanna Stern of The Wall Street Journal for a video interview to explain the two features and how they work. Stern’s interview is well worth watching because it does more in just under 12 minutes to clarify what Apple is doing, and just as importantly not doing, than anything else I’ve watched or read.
Today, Apple announced three new child safety features that will launch when its operating systems are updated in the fall. The implementation details of the features are technically complex, which makes reading the full documentation worthwhile if you are concerned about how they work.
The first feature is a tool for parents that will be built into Messages. According to Apple:
The Messages app will use on-device machine learning to warn about sensitive content, while keeping private communications unreadable by Apple.
The opt-in tool will “warn children and their parents when receiving or sending sexually explicit photos.”
The second feature applies to photos stored online in users’ iCloud Photos library. Apple says:
iOS and iPadOS will use new applications of cryptography to help limit the spread of CSAM online, while designing for user privacy. CSAM detection will help Apple provide valuable information to law enforcement on collections of CSAM in iCloud Photos.
The screening of iCloud Photos images happens on-device using cryptographic hashes of known CSAM content, and once a certain threshold of matches is crossed, a human review process must take place before an account is disabled and a report is made to the National Center for Missing and Exploited Children. The feature will be US-only at first.
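As a rough illustration of the threshold logic (and only the threshold logic), here is a toy sketch. The hash function, database contents, and threshold value are all hypothetical stand-ins; the real system relies on Apple’s NeuralHash perceptual hashing and cryptographic private set intersection, not anything shown here.

```python
import hashlib

# Illustrative stand-in for the database of known-image fingerprints.
KNOWN_HASHES = {
    hashlib.sha256(b"known-image-1").hexdigest(),
    hashlib.sha256(b"known-image-2").hexdigest(),
}
REVIEW_THRESHOLD = 2  # illustrative value; Apple has not published the real one

def needs_human_review(images: list) -> bool:
    """Return True once the match count crosses the review threshold."""
    matches = sum(
        1 for img in images if hashlib.sha256(img).hexdigest() in KNOWN_HASHES
    )
    # No single match triggers anything; only a collection of matches
    # above the threshold is escalated to human review.
    return matches >= REVIEW_THRESHOLD

print(needs_human_review([b"known-image-1", b"vacation-photo"]))  # False
print(needs_human_review([b"known-image-1", b"known-image-2"]))   # True
```

The design point the toy captures is that individual matches carry no consequence; an account is only surfaced for review after a collection of matches accumulates.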
Finally, Apple announced that:
[it] is also expanding guidance in Siri and Search by providing additional resources to help children and parents stay safe online and get help with unsafe situations. For example, users who ask Siri how they can report CSAM or child exploitation will be pointed to resources for where and how to file a report.
Siri and Search will also intervene when CSAM-related search requests are made.
To understand better how these features are being implemented by Apple, it’s worth visiting its new child safety webpage. At the bottom of the page are links to additional resources that explain the technology underlying the features.
Fast Company’s Michael Grothaus interviewed Craig Federighi this week about the suite of new privacy features that Apple unveiled at WWDC. The article includes some notable technical details on how iCloud Private Relay works under the hood. One of the most interesting (and somewhat unfortunate) revelations is that iCloud Private Relay will only work from Safari. Users of other browsers are out of luck here.
The reason for this restriction has to do with Apple’s commitment to unassailable privacy, which it achieves by ensuring that no party can ever access both your IP address and your destination URL. From what I can gauge, this is actually a three-step process that looks something like this:
- From Safari, you navigate to a particular URL. Safari encrypts this destination URL locally and then forwards your request to Apple’s iCloud Private Relay servers.
- Apple’s servers anonymize your IP address so that it can’t be traced back to you, then forward the request to a trusted third party’s servers.
- The third party decrypts the destination URL, then forwards the final request (decrypted URL plus anonymized IP address) to the destination.
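The three steps above can be sketched as a toy model of that split-knowledge design. Every name below is a simplified stand-in; the real service uses genuine public-key encryption and separate network operators, not these placeholders.

```python
class Sealed:
    """Stand-in for a ciphertext that only the egress relay can open."""
    def __init__(self, plaintext: str):
        self._plaintext = plaintext  # pretend this is public-key encrypted

def client(ip: str, url: str) -> dict:
    # Step 1: Safari seals the destination URL and sends the request
    # (real IP visible, URL unreadable) to the ingress relay.
    return {"client_ip": ip, "sealed_url": Sealed(url)}

def ingress_relay(request: dict) -> dict:
    # Step 2: the ingress relay (Apple) sees the client's IP but not the
    # URL; it substitutes an anonymized address before forwarding.
    anon_ip = "anon-" + str(abs(hash(request["client_ip"])) % 10_000)
    return {"anon_ip": anon_ip, "sealed_url": request["sealed_url"]}

def egress_relay(request: dict):
    # Step 3: the egress relay (a third party) opens the sealed URL but
    # only ever sees the anonymized IP, never the real one.
    return request["anon_ip"], request["sealed_url"]._plaintext

anon_ip, url = egress_relay(ingress_relay(client("203.0.113.7", "https://example.com")))
# The destination receives the decrypted URL plus an anonymized IP.
```

The point of the split is visible in the code: `ingress_relay` never touches the plaintext URL, and `egress_relay` never sees the real IP, so neither hop alone can connect who you are to where you’re going.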
Privacy has become a central theme of Apple’s OS updates in recent years, and this WWDC’s announcements were no different. During the opening keynote yesterday, the company introduced new privacy features across its OSes and system apps designed to put users in control of their data and prevent unwanted tracking. As Craig Federighi, Apple’s senior vice president of Software Engineering, explained in an Apple press release:
Privacy has been central to our work at Apple from the very beginning. Every year, we push ourselves to develop new technology to help users take more control of their data and make informed decisions about whom they share it with. This year’s updates include innovative features that give users deeper insights and more granular control than ever before.
One of Apple’s focuses this year is email. Hide My Email, which is part of iCloud+, lets users create random email addresses that forward to their main address, allowing them to avoid giving out their primary email address to third parties that may sell it or use it to send unsolicited messages. According to Apple, iCloud+ subscriptions that include the new features will cost the same as current iCloud subscriptions do for the same amount of storage. Another new mail feature is Mail Privacy Protection, which is built into Apple’s Mail app and blocks the invisible pixel trackers used to tell whether someone has opened a message and to gather other information.
CNET reports that Apple is adjusting how long it takes for AirTags to sound an alert when separated from their owners and adding ways to alert people when AirTags and third-party Find My network-enabled items are nearby.
Initially, AirTags were set to play a sound three days after they were separated from their owners. Now, the device will play a warning beep at a random time between 8 and 24 hours after separation. Apple is also creating an Android app to allow owners of Android phones and devices to know if an AirTag or Find My network device has been planted on them.
When AirTags launched last month with numerous privacy protections baked into the device and related software, Apple noted that it expected to make adjustments along the way. After a few weeks of real-world use by customers and investigations by The Washington Post and others, today’s changes are a positive step toward ensuring that AirTags can only be used for their intended purpose: tracking belongings, not people.
The latest changes are being rolled out now and will be applied to AirTags when they are near an iPhone. Apple told CNET that it will have more details on the planned Android app later this year.
Apple has released a new video in its ongoing ‘Privacy. That’s iPhone’ campaign titled ‘Tracked.’ The latest spot starts with the lead character purchasing a coffee and then being followed around all day by a growing crowd of people that intrude on his privacy. Back home at the end of the day, the protagonist is prompted by his iPhone to ‘Ask App Not to Track’ or ‘Allow’ tracking, and as soon as ‘Ask App Not to Track’ is chosen, the mob of people crowding his apartment pop like balloons, disappearing in puffs of smoke.
Privacy isn’t an easy thing to depict visually, and no doubt, someone will take issue with aspects of the way the video portrays app trackers, but I enjoyed it. The video is entertaining, but it’s also useful to anyone who doesn’t realize how intrusive cross-app and website tracking can be.
This isn’t the first video released in the series. Late last month, Apple released ‘App Tracking Transparency’, which explains how the iOS 14.5 feature works. Earlier this year, the company also released ‘A Day In the Life of Your Data,’ a case study with real-world examples of what app trackers can do.