New attack on ChatGPT research agent pilfers secrets from Gmail inboxes

New attack on ChatGPT research agent pilfers secrets from Gmail inboxes

Curated from Security – Ars Technica — Here’s what matters right now:

The face-palm-worthy prompt injections against AI assistants continue. Today’s installment hits OpenAI’s Deep Research agent. Researchers recently devised an attack that plucked confidential information out of a user’s Gmail inbox and sent it to an attacker-controlled web server, with no interaction required on the part of the victim and no sign of exfiltration. Deep Research is a ChatGPT-integrated AI agent that OpenAI introduced earlier this year. As its name is meant to convey, Deep Research performs complex, multi-step research on the Internet by tapping into a large array of resources, including a user’s email inbox, documents, and other resources. It can also autonomously browse websites and click on links. A user can prompt the agent to search through the past month’s emails, cross-reference them with information found on the web, and use them to compile a detailed report on a given topic. OpenAI says that it “accomplishes in tens of minutes what would take a human many hours.” Read full article Comments

Next step: Stay ahead with trusted tech. See our store for scanners, detectors, and privacy-first accessories.

Original reporting: Security – Ars Technica

Back to blog

Leave a comment

Please note, comments need to be approved before they are published.