Earlier today Twitter announced that you’ll now be able to use a third-party app (such as Google Authenticator, Authy, or 1Password) for two-factor authentication instead of SMS. The company has updated its support document with instructions on how to set it up here.
This is great news, as Twitter was the last service whose 2FA only supported sending codes via SMS. Switching from text messages to 1Password (which I use for one-time codes) was easy: in Twitter for iPad, I went to Settings ⇾ Account ⇾ Security, and enabled the ‘Security app’ toggle. I then chose to generate my codes with another app and opened 1Password on my iPhone, where I hit Edit on my Twitter login item and scrolled to the OTP section. Here, I tapped the QR button, scanned the QR code Twitter was displaying on my iPad with the iPhone’s camera, and that was it.
Unless you specifically want to receive 2FA codes from Twitter via SMS, you should consider switching to a dedicated authentication app: these codes work independently from carriers and location, and they can be generated offline.
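Those offline codes aren’t magic: authenticator apps implement TOTP (RFC 6238), deriving a short code from a shared secret and the current time, so no network or carrier is involved. Here is a minimal sketch in Python; the base32 secret is the RFC test-vector key, not a real Twitter-issued one.

```python
# Minimal TOTP (RFC 6238) sketch: how an authenticator app derives a
# 6-digit code offline from a shared secret and the current time.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Derive a time-based one-time password from a base32-encoded secret."""
    key = base64.b32decode(secret_b32.upper())
    # Counter = number of 30-second steps since the Unix epoch.
    counter = (int(time.time()) if for_time is None else for_time) // step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226: low nibble of last byte picks an offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890", time 59 → 287082.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59))  # → 287082
```

Because both ends only need the secret and a roughly synchronized clock, the same code appears on your phone and is accepted by the server without any SMS in between.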
Yesterday a serious security flaw in macOS High Sierra was discovered that let someone with access to a Mac running Apple’s latest OS gain root access to its data. Today, Apple released Security Update 2017-001, which fixes the issue. The release notes for the update describe the issue as follows:
Impact: An attacker may be able to bypass administrator authentication without supplying the administrator’s password
Description: A logic error existed in the validation of credentials. This was addressed with improved credential validation.
In a comment to Rene Ritchie of iMore.com, Apple said:
Needless to say, this is an important update that should be installed as soon as possible.
Greg Barbosa, writing for 9to5Mac:
A newly discovered macOS High Sierra flaw is potentially leaving your personal data at risk. Developer Lemi Orhan Ergin publicly contacted Apple Support to ask about the vulnerability he discovered. In the vulnerability he found, someone with physical access to a macOS machine can access and change personal files on the system without needing any admin credentials.
Users who haven’t disabled guest user account access or changed their root passwords (likely most) are currently open to this vulnerability. We’ve included instructions on how to protect yourself in the meantime until an official fix from Apple is released.
Incredibly embarrassing and dangerous screwup for a company as devoted to security as Apple. They’re working on a fix, and in the meantime you should follow these steps to change your root password (thankfully, I had guest user access disabled, so the bug didn’t affect my machine).
See also: Rene Ritchie’s explainer.
In a telephone interview with Matthew Panzarino of TechCrunch, Apple’s Senior Vice President of Software Engineering, Craig Federighi, answered many of the questions that have arisen about Face ID since the September 12th keynote event. Federighi went into depth on how Apple trained Face ID and how it works in practice. Regarding the training,
“Phil [Schiller] mentioned that we’d gathered a billion images and that we’d done data gathering around the globe to make sure that we had broad geographic and ethnic data sets. Both for testing and validation for great recognition rates,” says Federighi. “That wasn’t just something you could go pull of the internet.”
That data was collected worldwide from subjects who consented to having their faces scanned.
Federighi explained that Apple retains a copy of the depth map data from those scans but does not collect user data to further train its model. Instead, Face ID works on-device only to recognize users. The computational power necessary for that process is supplied by the new A11 Bionic chip, and the data is crunched and stored in the redesigned Secure Enclave.
The process of disabling Face ID differs from the five presses of the power button required on older iPhones. Federighi said,
“On older phones the sequence was to click 5 times [on the power button] but on newer phones like iPhone 8 and iPhone X, if you grip the side buttons on either side and hold them a little while – we’ll take you to the power down [screen]. But that also has the effect of disabling Face ID,” says Federighi. “So, if you were in a case where the thief was asking to hand over your phone – you can just reach into your pocket, squeeze it, and it will disable Face ID. It will do the same thing on iPhone 8 to disable Touch ID.”
In many respects, the approach Apple has taken with Face ID is very close to that taken with Touch ID. User data is stored in the Secure Enclave, and biometric processing happens on your iOS device, not in the cloud. If you have concerns about Face ID’s security, Panzarino’s article is an excellent place to start. Federighi says that closer to the introduction of the iPhone X, Apple will release an in-depth white paper on Face ID security with even more details.
Lorenzo Franceschi-Bicchierai, writing for Motherboard:
This is the first time that anyone has uncovered such an attack in the wild. Until this month, no one had seen an attempted spyware infection leveraging three unknown bugs, or zero-days, in the iPhone. The tools and technology needed for such an attack, which is essentially a remote jailbreak of the iPhone, can be worth as much as one million dollars. After the researchers alerted Apple, the company worked quickly to fix them in an update released on Thursday.
The question is, who was behind the attack and what did they use to pull it off?
It appears that the company that provided the spyware and the zero-day exploits to the hackers targeting Mansoor is a little-known Israeli surveillance vendor called NSO Group, which Lookout’s vice president of research Mike Murray labeled as “basically a cyber arms dealer.”
A great story from Motherboard that is equal parts fascinating and absolutely terrifying. The malware from NSO can effectively steal all the information on your phone, intercept every message, and add backdoors to every method of communication on the device. Evidence suggests that NSO has likely been able to hack iPhones since the iPhone 5.
The security researchers who first became aware of the security bugs notified Apple about 10 days ago, and Apple today released iOS 9.3.5, which fixes them. Suffice it to say, you should immediately install the update on your iOS devices.
Ivan Krstić, Apple’s Head of Security Engineering and Architecture, gave a presentation at the Black Hat conference a few weeks ago, and it is now available to view in full on YouTube.
With over a billion active devices and in-depth security protections spanning every layer from silicon to software, Apple works to advance the state of the art in mobile security with every release of iOS. We will discuss three iOS security mechanisms in unprecedented technical detail, offering the first public discussion of one of them new to iOS 10.
HomeKit, Auto Unlock and iCloud Keychain are three Apple technologies that handle exceptionally sensitive user data – controlling devices (including locks) in the user’s home, the ability to unlock a user’s Mac from an Apple Watch, and the user’s passwords and credit card information, respectively. We will discuss the cryptographic design and implementation of our novel secure synchronization fabric which moves confidential data between devices without exposing it to Apple, while affording the user the ability to recover data in case of device loss.
It was at this presentation that Apple announced it would launch a bug bounty program for those who discover vulnerabilities in its key products. Krstić also discussed how the Secure Enclave Processor enabled Apple to adopt a new approach to data protection, as well as a new security feature in iOS 10 that makes iOS Safari JIT “a more difficult target”.
Joonas Kiminki got his iPhone stolen in Italy last month. After a couple of weeks, he received an email saying that the device had been found. The email turned out to be a well-designed, meticulous phishing attempt:
What strikes me the most is that everything seemed very “right” and professional. The email and the website content looked great, my phone really was an iPhone 6 and they even got the timezone right in the email.
The email raised no alerts on any email client I use, including Google Inbox, mail.google.com and Apple Mail. No web browser, mobile or desktop, show any alarms on the fake site. Google.com knows virtually nothing about the site, the email address or the (probably fake) US phone number the SMS was from. Very well done.
This is exactly what happened to my mother last week. Her iPhone was stolen in Italy in June, and after a month she received an email and SMS (in Italian) telling her that the iPhone had been located. Fortunately, she called me before entering her Apple ID credentials (she was about to).
Clearly, a criminal organization in Italy has set up an entire system to scam owners of stolen iPhones. I’m surprised that both Apple and Google are failing to recognize these email messages as spam.
Glenn Fleishman, writing for Macworld on a recent change to Touch ID authentication in iOS 9:
When iOS 9 was released, Apple updated its list of cases in which iOS asks for a passcode even when Touch ID is enabled. A previously undocumented requirement asks for a passcode in a very particular set of circumstances: When the iPhone or iPad hasn’t been unlocked with its passcode in the previous six days, and Touch ID hasn’t been used to unlock it within the last eight hours. It’s a rolling timeout, so each time Touch ID unlocks a device, a new eight-hour timer starts to tick down until the passcode is required. If you wondered why you were being seemingly randomly prompted for your passcode (or more complicated password), this is likely the reason.
This explains why I’ve been seeing the passcode prompt during the weekends (when I stay up late and occasionally sleep more than 8 hours).
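The rule Fleishman describes combines two timers with an AND: the passcode is demanded only when it hasn’t been typed in six days and Touch ID hasn’t been used in the last eight hours. A small model of that logic (the function and variable names are illustrative, not Apple’s):

```python
# Hypothetical model of the iOS 9 passcode rule described above:
# require the passcode when BOTH timers have lapsed — more than six
# days since the last passcode entry AND more than eight hours since
# the last Touch ID unlock (the rolling timer).
SIX_DAYS = 6 * 24 * 3600
EIGHT_HOURS = 8 * 3600

def passcode_required(now, last_passcode_unlock, last_touch_id_unlock):
    return (now - last_passcode_unlock > SIX_DAYS
            and now - last_touch_id_unlock > EIGHT_HOURS)

now = 10 * 24 * 3600  # ten days in, passcode last typed on day zero
# Slept nine hours since the last Touch ID unlock: prompt appears.
print(passcode_required(now, 0, now - 9 * 3600))   # → True
# Unlocked with Touch ID an hour ago: the rolling timer reset, no prompt.
print(passcode_required(now, 0, now - 1 * 3600))   # → False
```

This also shows why the prompt feels random: each Touch ID unlock silently restarts the eight-hour timer, so only an unusually long gap (like a long night’s sleep) lets it expire.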
Craig Federighi, Senior Vice President of Software Engineering at Apple, writing for The Washington Post:
That’s why it’s so disappointing that the FBI, Justice Department and others in law enforcement are pressing us to turn back the clock to a less-secure time and less-secure technologies. They have suggested that the safeguards of iOS 7 were good enough and that we should simply go back to the security standards of 2013. But the security of iOS 7, while cutting-edge at the time, has since been breached by hackers. What’s worse, some of their methods have been productized and are now available for sale to attackers who are less skilled but often more malicious.
A cogent argument from Federighi. It follows on from Tim Cook’s open letter and interview with ABC News, as well as Bruce Sewell’s testimony to a congressional committee.