It Is Mind-Bogglingly Easy to Rope Apple’s Siri into Phishing Scams

A month ago I was milling about a hotel room in New Orleans, putting off prep for on-stage sessions at a tech conference, when I received a startling iMessage. “It’s Alan Murray,” the note said, referring to my boss’s boss’s boss.

Not in the habit of having Mr. Murray text my phone, I sat up straighter. “Please post your latest story here,” he wrote, including a link to a site purporting to be related to Microsoft 365, replete with Microsoft’s official corporate logo and everything. In the header of the iMessage thread, Apple’s virtual assistant Siri offered a suggestion: “Maybe: Alan Murray.”

The sight made me stagger, if only momentarily. Then I remembered: A week or so earlier I had granted a cybersecurity startup, Wandera, permission to demonstrate a phishing attack on me. They called it “Call Me Maybe.”

Screenshot of the iMessage thread

Alan Murray had not messaged me. The culprit was James Mack, a wily sales engineer at Wandera. When Mack rang me from a phone number that Siri presented as “Maybe: Bob Marley,” all doubt subsided. Jig, up.

There are two ways to pull off this social engineering trick, Mack told me. The first involves an attacker sending someone a spoofed email from a fake or impersonated account, like “Acme Financial.” The note must include a phone number, say, in the email’s signature. If the target responds—even with an automatic, out-of-office reply—then that contact should appear as “Maybe: Acme Financial” whenever the fraudster texts or calls next.
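To give a sense of how little machinery that first variant requires, consider that an email’s display name and the phone number in its signature are just text the sender types. The sketch below, written with Python’s standard email library, is purely illustrative: the sender domain, recipient, and phone number are made-up placeholders, and this is not Wandera’s tooling, only a toy example of how arbitrary those fields are.

    # Illustrative sketch only: the display name ("Acme Financial") and the
    # signature phone number are ordinary, sender-chosen strings.
    # All addresses and the number below are hypothetical; nothing is sent.
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = '"Acme Financial" <support@acme-financial.example>'  # made-up look-alike domain
    msg["To"] = "target@example.com"
    msg["Subject"] = "Please confirm your account details"
    msg.set_content(
        "Hello,\n\n"
        "We noticed unusual activity on your account. "
        "Please call us to confirm your details.\n\n"
        "Acme Financial Support\n"
        "+1 (555) 010-0199\n"  # the number the sender would later call or text from
    )

    # Print the raw message for inspection; actually delivering it would
    # require an SMTP relay, which is deliberately omitted here.
    print(msg)

If the target replies to a note like this, even with an out-of-office auto-response, the phone number in the signature is what can later surface as a “Maybe:” suggestion when that number calls or texts.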

The subterfuge is even simpler via text messaging. If an unknown entity identifies itself as Some Proper Noun in an iMessage, then the iPhone’s suggested contacts feature should show the entity as “Maybe: [Whoever].” Attackers can use this disguise to their advantage when phishing for sensitive information. The next step involves either calling a target to supposedly “confirm account details” or sending along a phishing link. If a victim takes the bait, the swindler is in.

The tactic apparently does not work with certain phrases, like “bank” or “credit union.” However, other terms, like “Wells Fargo,” “Acme Financial,” and the names of various dead celebrities—or my topmost boss!—have worked in Wandera’s tests, Mack said. Wandera reported the problem to Apple as a security issue on April 25th. Apple sent a preliminary response a week later; a few days after that, it said it did not consider the issue a “security vulnerability” and had reclassified the bug as a software issue “to help get it resolved.”

What’s alarming about the ploy is how little effort it takes to pull off. “We didn’t do anything crazy here like jailbreak a phone or a Hollywood-style attack—we’re not hacking into cell towers,” said Dan Cuddeford, Wandera’s director of engineering. “But it’s something that your layman hacker or social engineer might be able to do.”

To Cuddeford, the research exposes two bigger issues. The first is that Apple doesn’t reveal enough about how its software works. “This is a huge black box system,” he said. “Unless you work for Apple, no one knows how or why Siri does what it does.”

The second concern is more philosophical. “We’re not Elon Musk saying AI is about to take over the world, but it’s one example of how AI itself is not being evil, but can be abused by someone with malicious intent,” Cuddeford said. As we let machines guide our lives, we should be sure we know how they’re making decisions.

This article first appeared in Cyber Saturday, the weekend edition of Fortune’s tech newsletter.
