Earlier this month, Apple suspended its Siri grading program, in which third-party contractors listened to small snippets of audio to evaluate Siri’s effectiveness. Today in a press release, Apple explained its Siri grading program and changes the company is making:
We know that customers have been concerned by recent reports of people listening to audio Siri recordings as part of our Siri quality evaluation process — which we call grading. We heard their concerns, immediately suspended human grading of Siri requests and began a thorough review of our practices and policies. We’ve decided to make some changes to Siri as a result.
Apologizing for not living up to the privacy standards customers expect from it, Apple outlined three changes that will be implemented this fall when operating system updates are released:
First, by default, we will no longer retain audio recordings of Siri interactions. We will continue to use computer-generated transcripts to help Siri improve.
Second, users will be able to opt in to help Siri improve by learning from the audio samples of their requests. We hope that many people will choose to help Siri get better, knowing that Apple respects their data and has strong privacy controls in place. Those who choose to participate will be able to opt out at any time.
Third, when customers opt in, only Apple employees will be allowed to listen to audio samples of the Siri interactions. Our team will work to delete any recording which is determined to be an inadvertent trigger of Siri.
This is a sensible plan. It’s clear, concise, and has the benefit of being verifiable once implemented. It’s unfortunate that Siri recordings were being handled this way in the first place, but I appreciate the plain-English response and unambiguous plan for the future.
I love trying new apps and services. It may be part of my job at MacStories, but even if it weren’t, I would still constantly be on the lookout for interesting, creative products that can benefit either my work or leisure. In recent years it seems like there’s always a fresh stream of apps and services to check out. Often when I try something new, however, I’m immediately confronted with the obstacle of a login screen. At which point there’s a choice to make: do I go through the hassle of creating an account for this service, or – if the option is available – do I simply authenticate via a third party like Google or Facebook? Sadly, neither option is ideal.
Creating a new account for every service you try is a major pain. It’s made easier with the aid of iCloud Keychain and 1Password, but while those tools eliminate lots of friction, they can be a little clunky, and in the end you’re still trusting your data to the (usually unknown) privacy policies of the service you sign up for.
Third-party login buttons mostly solve the convenience problem. They may require entering your credentials for that third-party service, but at least you don’t have to create and remember new credentials for multiple services. The data privacy issue can be a question mark with these buttons though; when you authenticate through, say, Facebook, do you really know exactly what data you’re sharing with the new service? Or how the service will use that data? As consumers continue losing trust in Facebook itself to secure their data, why would they trust a service that taps into their Facebook data?
Sign In with Apple is a modern alternative to the current mess of login methods, offering Apple users a solution that addresses the current options’ shortfalls. It makes account creation and sign-in trivially simple – even more so than buttons from Google or Facebook – while also keeping your data in the hands of a company with a decent privacy track record.
When apps update to adopt Sign In with Apple, I suspect many users’ initial thoughts will be some variation of what immediately popped into my mind after trying it for the first time: “Where has this been all my life?”
Last week, The Guardian reported on Apple’s Siri grading program in which contractors listen to snippets of audio to evaluate the effectiveness of Siri’s response to its trigger phrase. That article quoted extensively from an anonymous contractor who said they and other contractors regularly heard private user information as part of the program.
In response, Apple has announced that it is suspending the Siri grading program worldwide. While suspended, Apple says it will re-evaluate the program and issue a software update that will let users choose whether to allow their audio to be used as part of the program.
In a statement to Matthew Panzarino, the editor-in-chief of TechCrunch, Apple said:
“We are committed to delivering a great Siri experience while protecting user privacy,” Apple said in a statement to TechCrunch. “While we conduct a thorough review, we are suspending Siri grading globally. Additionally, as part of a future software update, users will have the ability to choose to participate in grading.”
In an earlier response to The Guardian, Apple had said that less than 1% of daily Siri requests are sent to humans as part of the grading program. However, that’s not very comforting to users who are left wondering whether snippets of their daily life are part of the audio shared with contractors. Consequently, I’m glad to see that Apple is re-examining its Siri quality-control efforts and has promised to give users a choice of whether they participate.
Matthew Panzarino, writing for TechCrunch:
Apple has disabled the Apple Watch Walkie Talkie app due to an unspecified vulnerability that could allow a person to listen to another customer’s iPhone without consent, the company told TechCrunch this evening.
Apple shared an official statement with TechCrunch:
We were just made aware of a vulnerability related to the Walkie-Talkie app on the Apple Watch and have disabled the function as we quickly fix the issue. We apologize to our customers for the inconvenience and will restore the functionality as soon as possible. Although we are not aware of any use of the vulnerability against a customer and specific conditions and sequences of events are required to exploit it, we take the security and privacy of our customers extremely seriously. We concluded that disabling the app was the right course of action as this bug could allow someone to listen through another customer’s iPhone without consent. We apologize again for this issue and the inconvenience.
Panzarino rightly points out the parallels between this issue and the highly publicized FaceTime bug from earlier this year. The one key difference: when the FaceTime bug was reported by a user, Apple didn’t respond or take action until the problem received widespread media attention; with this Walkie-Talkie bug, Apple followed up on a customer’s report and addressed the issue seemingly before anyone else knew about it. Hopefully this is a sign of improved processes inside the company for handling serious bugs and vulnerabilities.
Sarah Perez of TechCrunch has assembled an excellent, in-depth walkthrough answering key questions about how Apple’s upcoming authentication service, Sign In with Apple, will work:
From a security perspective, Apple offers a better option for both users and developers alike compared with other social login systems which, in the past, have been afflicted by massive security and privacy breaches.
Apple’s system also ships with features that benefit iOS app developers — like built-in two-factor authentication support, anti-fraud detection and the ability to offer a one-touch, frictionless means of entry into their app, among other things.
Despite the advantages to the system, the news left many wondering how the new Sign In with Apple button would work, in practice, at a more detailed level. We’ve tried to answer some of the more burning and common questions.
Perez addresses questions regarding what information a developer receives when a user chooses Sign In with Apple, whether it’s possible to use the authentication service on Android devices, when an app will and won’t be required to use Sign In with Apple, and more.
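For developers wondering what adoption looks like in practice, the flow Perez describes maps onto Apple’s AuthenticationServices framework introduced in iOS 13. The sketch below shows the core request; it’s a minimal illustration (delegate callbacks and credential handling are omitted, and the surrounding view controller is assumed), not a complete integration:

```swift
import AuthenticationServices

// Build a Sign In with Apple request, asking only for the
// scopes the app actually needs (name and email are optional,
// and the user can decline or relay-hide the email).
let provider = ASAuthorizationAppleIDProvider()
let request = provider.createRequest()
request.requestedScopes = [.fullName, .email]

// The controller presents the system sign-in sheet; the app's
// delegate receives an ASAuthorizationAppleIDCredential on success,
// containing a stable user identifier and an identity token.
let controller = ASAuthorizationController(authorizationRequests: [request])
// controller.delegate = self            // handle success/failure here
// controller.presentationContextProvider = self
controller.performRequests()
```

One design detail worth noting: the credential’s user identifier is stable per developer account, so apps are expected to key accounts off it rather than off the (possibly relay-generated) email address.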
Despite some controversy over how strongly Apple is pushing this new secure login option, if it works as advertised, Sign In with Apple could be the upcoming OS feature with the biggest long-term societal impact.
Darrell Etherington, writing for TechCrunch:
Apple’s truly transforming into a privacy-as-a-service company, which shows in the way that it’s implementing both the new single sign-on account service, as well as its camera and location services updates in iOS 13. The SSO play is especially clever, because it includes a mechanism that will allow developers to still have the relevant info they need to maintain a direct relationship with their users – provided users willingly sign-up to have that relationship, but opting in to either or both name and email sharing.
For years, a major point of debate in tech circles has been the friction between privacy and convenience, particularly as it relates to web services offered by companies like Apple and Google. Apple’s privacy-sensitive approach has, in some people’s view, hamstrung it from offering the same level of convenience in its services that’s found in competing services from Google, Amazon, and others who rely on sending your data to the cloud for analysis.
This year at WWDC, Apple’s new privacy-focused initiatives seem to be striking more of a balance between convenience and security. The company’s new Sign in with Apple feature is a great example: it provides developers a way to contact their users directly, while still protecting those users’ actual email addresses so they can’t be sold to third parties. In my view that’s a brilliant win-win, and the type of innovation I hope we see more of in future products.
Finding a way to convey the benefits of privacy isn’t easy, which is why I like Apple’s ‘Privacy on iPhone – Private Side’ video so much.
The video, which runs under a minute, opens with images of several ‘No Trespassing,’ ‘Keep Out,’ ‘Beware of Dog,’ and other signs. In a series of quick cuts, the video shows two people who pause an intense conversation when interrupted by a waiter as well as people locking file cabinets, closing blinds, locking doors, shredding documents, and more. Near the end, a woman rolls up the window of a car when she sees someone nearby watching her put on makeup.
As Apple’s description of the YouTube video says:
Your privacy matters. From encrypting your iMessage conversations, or not keeping a history of your routes in Maps, to limiting tracking across sites with Safari. iPhone is designed to protect your information.
Every clip of the video, which adds a bit of levity to an otherwise serious topic, reinforces the closing message that ‘If privacy matters in your life, it should matter to the phone your life is on.’
The video is an effective rebuttal of the ‘I have nothing to hide’ argument against privacy: it conveys that even the mundane aspects of day-to-day life aren’t necessarily something you want to broadcast to the world.
Facebook is in the news again, and unsurprisingly it’s not the good kind of publicity.
Yesterday Josh Constine of TechCrunch exposed a “Facebook Research” VPN that Facebook has been using to harvest extensive phone data from users ages 13 to 35 in exchange for payments from the company of up to $20/month. The practice was made possible by Facebook’s enterprise developer certificate from Apple, but after the story came to light, Apple swiftly responded by revoking that certificate and publicly condemning Facebook’s misuse of Apple’s Enterprise Developer Program. That action immediately ended the Facebook Research initiative on Apple platforms, but it has also reportedly brought widespread consequences throughout Facebook’s internal operations. Tom Warren and Jacob Kastrenakes, reporting for The Verge:
Apple has shut down Facebook’s ability to distribute internal iOS apps, from early releases of the Facebook app to basic tools like a lunch menu. A person familiar with the situation tells The Verge that early versions of Facebook, Instagram, Messenger, and other pre-release “dogfood” (beta) apps have stopped working, as have other employee apps, like one for transportation. Facebook is treating this as a critical problem internally, we’re told, as the affected apps simply don’t launch on employees’ phones anymore.
Revoking a certificate not only stops apps from being distributed on iOS, but it also stops apps from working. And because internal apps by the same organization or developer may be connected to a single certificate, it can lead to immense headaches like the one Facebook now finds itself in where a multitude of internal apps have been shut down.
This is more than a slap on the wrist, but it seems like a fitting response to Facebook’s blatant abuse of the Apple enterprise agreement. My main hope is that it causes Facebook to think twice before implementing any similarly shady initiatives in the future.
User privacy is one of the social drums Tim Cook has been consistently beating for years now, and today that’s continuing in an even stronger way with a new op-ed by Apple’s CEO published by Time. Cook writes:
I and others are calling on the U.S. Congress to pass comprehensive federal privacy legislation—a landmark package of reforms that protect and empower the consumer. Last year, before a global body of privacy regulators, I laid out four principles that I believe should guide legislation:
First, the right to have personal data minimized. Companies should challenge themselves to strip identifying information from customer data or avoid collecting it in the first place. Second, the right to knowledge—to know what data is being collected and why. Third, the right to access. Companies should make it easy for you to access, correct and delete your personal data. And fourth, the right to data security, without which trust is impossible.
In addition to outlining these four principles, Cook gets more specific in calling for a particular organization to be formed that counteracts a “shadow economy that’s largely unchecked” whereby people’s data is sold by retailers and other companies without express knowledge or consent. He writes:
Meaningful, comprehensive federal privacy legislation should not only aim to put consumers in control of their data, it should also shine a light on actors trafficking in your data behind the scenes. Some state laws are looking to accomplish just that, but right now there is no federal standard protecting Americans from these practices. That’s why we believe the Federal Trade Commission should establish a data-broker clearinghouse, requiring all data brokers to register, enabling consumers to track the transactions that have bundled and sold their data from place to place, and giving users the power to delete their data on demand, freely, easily and online, once and for all.
Apple has established a consistent practice of standing for user privacy, partly owing to its highly publicized standoff with the FBI in 2016, but it seems that in 2019 the company wants that value to be even more pronounced. First there was the unavoidable banner at CES touting the iPhone’s privacy advantage, and now today’s op-ed. It will be interesting to see if any of the ideas Cook pushes bring about productive discussion on this issue, leading to practical change in US policy.