New Apple privacy features; how Apple will protect us from AI

Apple software lead Craig Federighi has talked about the new privacy features announced in yesterday’s keynote, and how Apple intends to protect us from some of the threats presented by AI.

As part of this, he says, Apple will use AI to fight one of the dangers created by AI …

New Apple privacy features

Apple highlighted some of the new privacy features during yesterday’s keynote, and has today published a summary of them.

Private browsing

Apple says that advanced tracking and fingerprinting protections go even further to help prevent websites from tracking devices. Private Browsing also now locks when not in use, allowing a user to keep tabs open even when stepping away from the device.

Photo privacy

iOS already makes apps ask for permission to access your photos, and Apple says iOS 17 further improves this protection. A new “Limited access” option offers greater control: users are shown more information about the access they have granted, and will get occasional reminders so permissions can be revoked if no longer required.

It’s common for links to have embedded identifiers to track who is visiting the site. For example, if you receive a promotional email, and click on the link, it likely identifies you. Messages, Mail, and Safari private browsing mode will now automatically remove the tracking IDs.

For example, if you receive a link containing a tracking identifier such as ‘clickId=d77_62jkls’ and tap on it, your iPhone will automatically remove that part of the URL before taking you to the site.
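Apple hasn’t published how this removal works, but the general idea can be sketched in a few lines of Python: strip known tracking parameters from a URL’s query string before opening it. The parameter names below are common examples for illustration, not Apple’s actual list.

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

# Illustrative set of tracking parameters; Apple's real list is not public.
TRACKING_PARAMS = {"clickid", "fbclid", "gclid", "utm_source", "utm_medium", "utm_campaign"}

def strip_tracking(url: str) -> str:
    """Return the URL with known tracking query parameters removed."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k.lower() not in TRACKING_PARAMS]
    return urlunparse(parts._replace(query=urlencode(kept)))

print(strip_tracking("https://example.com/sale?item=42&clickId=d77_62jkls"))
# https://example.com/sale?item=42
```

The legitimate parts of the link (here, `item=42`) are preserved, so the page still works; only the identifier that ties the click back to you is dropped.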

And more

But FastCo‘s Michael Grothaus says that these are just a few of the dozens of new privacy features being introduced in the new operating systems. He got a chance to interview Craig Federighi about them, and more.

At the most extreme end of the scale, Lockdown Mode offers even more protection.

With iOS 17, Apple is not only beefing up Lockdown Mode (by blocking the iPhone from connecting to 2G cellular networks and from auto-joining insecure wireless networks) but bringing Lockdown Mode to the Apple Watch for the first time.

The new Check In safety feature helps to automate something some people do already – let someone know their location, and when they expect to get home – but with improved privacy, too.

The feature takes the hassle out of having to remember to manually share your location with a friend before you head out, and it even provides more privacy than the old, manual method because the friend won’t get access to your location unless you’ve failed to check in—when that location data is actually needed.

How Apple will protect us from AI threats

Apple is conscious of both the benefits and risks introduced by more powerful forms of AI, and is already working on protecting us from the threats.

Identifying code vulnerabilities

One big worry about privacy and security is that AI tools can be used to help identify code vulnerabilities that can be exploited by hackers. Federighi acknowledges this, but says Apple will fight fire with fire.

He notes that Apple already uses a number of static and dynamic analysis tools to help the company spot potential code defects that may be hard for a human to detect. “As those tools get more and more advanced” with AI, he says, “we will be on the forefront of using those tools to find problems and to close them before attackers who might have access to similar tools would be able to use them.”

Protecting against deepfakes

Deepfakes are AI-generated audio and video that can look and sound like someone you know – from your boss to your spouse. This will make scams – like those where someone claims to need your help with a bill – much more convincing.

Federighi says that Apple is already thinking about these dangers.

“When someone can imitate the voice of your loved one,” he says, spotting social engineering attacks will only become more difficult. If “someone asks you, ‘Oh, can you just give me the password to this and that? I got locked out,’” and it literally sounds like your spouse, “that, I think, is going to be a real threat.”

What Apple can do to reduce the risk is find ways to flag these fakes – for example, detecting that the message does not originate from a device used by the real person.

“We want to do everything we can to make sure that we’re flagging [deepfake threats] in the future: Do we think we have a connection to the device of the person you think you’re talking to? These kinds of things.”

Check out Apple’s full privacy and safety summary.
