By using this site, you agree to the Privacy Policy and Terms of Use.
Accept
PulseReporterPulseReporter
  • Home
  • Entertainment
  • Lifestyle
  • Money
  • Tech
  • Travel
  • Investigations
Reading: A Single Poisoned Doc Might Leak ‘Secret’ Information Through ChatGPT
Share
Notification Show More
Font ResizerAa
PulseReporterPulseReporter
Font ResizerAa
  • Home
  • Entertainment
  • Lifestyle
  • Money
  • Tech
  • Travel
  • Investigations
Have an existing account? Sign In
Follow US
  • Advertise
© 2022 Foxiz News Network. Ruby Design Company. All Rights Reserved.
PulseReporter > Blog > Tech > A Single Poisoned Doc Might Leak ‘Secret’ Information Through ChatGPT
Tech

A Single Poisoned Doc Might Leak ‘Secret’ Information Through ChatGPT

Pulse Reporter
Last updated: August 6, 2025 11:31 pm
Pulse Reporter 2 hours ago
Share
A Single Poisoned Doc Might Leak ‘Secret’ Information Through ChatGPT
SHARE


The newest generative AI fashions are usually not simply stand-alone text-generating chatbots—as an alternative, they will simply be hooked as much as your knowledge to present personalised solutions to your questions. OpenAI’s ChatGPT could be linked to your Gmail inbox, allowed to examine your GitHub code, or discover appointments in your Microsoft calendar. However these connections have the potential to be abused—and researchers have proven it will probably take only a single “poisoned” doc to take action.

New findings from safety researchers Michael Bargury and Tamir Ishay Sharbat, revealed on the Black Hat hacker convention in Las Vegas as we speak, present how a weak spot in OpenAI’s Connectors allowed delicate data to be extracted from a Google Drive account utilizing an oblique immediate injection assault. In an indication of the assault, dubbed AgentFlayer, Bargury reveals the way it was potential to extract developer secrets and techniques, within the type of API keys, that have been saved in an indication Drive account.

The vulnerability highlights how connecting AI fashions to exterior methods and sharing extra knowledge throughout them will increase the potential assault floor for malicious hackers and probably multiplies the methods the place vulnerabilities could also be launched.

“There may be nothing the consumer must do to be compromised, and there may be nothing the consumer must do for the info to exit,” Bargury, the CTO at safety agency Zenity, tells WIRED. “We’ve proven that is utterly zero-click; we simply want your electronic mail, we share the doc with you, and that’s it. So sure, that is very, very unhealthy,” Bargury says.

OpenAI didn’t instantly reply to WIRED’s request for remark concerning the vulnerability in Connectors. The corporate launched Connectors for ChatGPT as a beta characteristic earlier this 12 months, and its web site lists not less than 17 totally different companies that may be linked up with its accounts. It says the system lets you “convey your instruments and knowledge into ChatGPT” and “search recordsdata, pull dwell knowledge, and reference content material proper within the chat.”

Bargury says he reported the findings to OpenAI earlier this 12 months and that the corporate shortly launched mitigations to stop the method he used to extract knowledge through Connectors. The way in which the assault works means solely a restricted quantity of knowledge might be extracted without delay—full paperwork couldn’t be eliminated as a part of the assault.

“Whereas this problem isn’t particular to Google, it illustrates why creating sturdy protections in opposition to immediate injection assaults is necessary,” says Andy Wen, senior director of safety product administration at Google Workspace, pointing to the corporate’s lately enhanced AI safety measures.

You Might Also Like

Samsung and Netflix associate on Squid Sport Season 2 and Squid Sport Unleashed

Sony’s Model New Flagship Headphones Are on Sale for Prime Day

Now Texas is suing TikTok over youngsters’s security considerations, too

23 Greatest Black Friday Laptop computer Offers (2024): Acer, Apple, Anker

The First US Chook Flu Loss of life Is a Stark Warning

Share This Article
Facebook Twitter Email Print
Previous Article Alila Mayakoba is now bookable on factors Alila Mayakoba is now bookable on factors
Next Article This Would possibly Be The Hardest "Excessive College Musical" Quiz Of All Time This Would possibly Be The Hardest "Excessive College Musical" Quiz Of All Time
Leave a comment

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

Weekly Newsletter

Subscribe to our newsletter to get our newest articles instantly!

More News

John Cena Acquired A Hair Transplant
John Cena Acquired A Hair Transplant
13 minutes ago
New ‘persona vectors’ from Anthropic allow you to decode and direct an LLM’s character
New ‘persona vectors’ from Anthropic allow you to decode and direct an LLM’s character
43 minutes ago
DoorDash (DASH) Q2 earnings report
DoorDash (DASH) Q2 earnings report
56 minutes ago
This Would possibly Be The Hardest "Excessive College Musical" Quiz Of All Time
This Would possibly Be The Hardest "Excessive College Musical" Quiz Of All Time
1 hour ago
A Single Poisoned Doc Might Leak ‘Secret’ Information Through ChatGPT
A Single Poisoned Doc Might Leak ‘Secret’ Information Through ChatGPT
2 hours ago

About Us

about us

PulseReporter connects with and influences 20 million readers globally, establishing us as the leading destination for cutting-edge insights in entertainment, lifestyle, money, tech, travel, and investigative journalism.

Categories

  • Entertainment
  • Investigations
  • Lifestyle
  • Money
  • Tech
  • Travel

Trending

  • John Cena Acquired A Hair Transplant
  • New ‘persona vectors’ from Anthropic allow you to decode and direct an LLM’s character
  • DoorDash (DASH) Q2 earnings report

Quick Links

  • About Us
  • Contact Us
  • Privacy Policy
  • Terms Of Service
  • Disclaimer
2024 © Pulse Reporter. All Rights Reserved.
Welcome Back!

Sign in to your account