AAPL · Private
Evidence · 45% Reported
Fact Confirmed · Product · April 9, 2026

Researchers Demonstrated Prompt Injection Attack Bypassing Apple Intelligence Safeguards (Now Patched)

Security researchers published details of a successful prompt injection attack that chained Unicode RIGHT-TO-LEFT OVERRIDE (U+202E) characters with a 'Neural Exec'-style payload to bypass the input and output filters of Apple's on-device LLM; Apple has since hardened its safeguards.
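The core obfuscation idea reported here can be illustrated in general terms: Unicode bidirectional control characters are invisible format characters, so splicing them into a forbidden phrase breaks naive substring-based filters while leaving the text visually unchanged. The sketch below is a minimal illustration of that class of bypass and its mitigation; `naive_filter` and `hardened_filter` are hypothetical names and do not represent Apple's actual filter pipeline.

```python
import unicodedata

RLO = "\u202E"  # RIGHT-TO-LEFT OVERRIDE
PDF = "\u202C"  # POP DIRECTIONAL FORMATTING (closes the override)

BLOCKED = "ignore previous instructions"

def naive_filter(text: str) -> bool:
    # Naive safeguard: raw substring match against a blocklist.
    return BLOCKED in text.lower()

# Bidi controls spliced into the payload: the blocked phrase is no
# longer a contiguous substring, so the naive check misses it.
payload = "ignore" + RLO + PDF + " previous instructions"
assert not naive_filter(payload)

def strip_format_chars(text: str) -> str:
    # Mitigation: drop Unicode format characters (category "Cf"),
    # which include all bidi override/embedding controls.
    return "".join(ch for ch in text if unicodedata.category(ch) != "Cf")

def hardened_filter(text: str) -> bool:
    return BLOCKED in strip_format_chars(text).lower()

assert hardened_filter(payload)
```

Normalizing input before filtering (stripping or rejecting format characters) is the standard defense against this family of bypasses; hardening of this general kind is consistent with the patch described above, though Apple's specific changes were not disclosed.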