OpenAI hit by two big security issues this week


OpenAI seems to make headlines every day, and this time it’s for a double dose of security concerns. The first issue centers on the Mac app for ChatGPT, while the second hints at broader concerns about how the company is handling its cybersecurity.

Earlier this week, engineer and Swift developer Pedro José Pereira Vieito examined the Mac ChatGPT app and found that it was storing user conversations locally in plain text rather than encrypting them. The app is available only from OpenAI’s website, and because it isn’t distributed through the App Store, it doesn’t have to follow Apple’s sandboxing requirements. Vieito’s findings were quickly picked up by the press, and after the issue attracted attention, OpenAI released an update that added encryption to locally stored chats.

For the non-developers out there, sandboxing is a security practice that keeps potential vulnerabilities and failures from spreading from one application to others on a machine. And for non-security experts, storing local files in plain text means potentially sensitive data can be easily viewed by other apps or malware.
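To make the risk concrete, here is a minimal sketch (in Python, with a hypothetical file path and message, not OpenAI’s actual storage format) of why unencrypted local storage matters: any other process running as the same user can read the file verbatim, no exploit required.

```python
# Sketch: a plain-text conversation file is readable by ANY process
# running as the same user. The path and contents are hypothetical,
# purely to illustrate the risk described above.
import json
import tempfile
from pathlib import Path

# Step 1: an app saves a chat to disk without encryption.
store = Path(tempfile.mkdtemp()) / "conversations.json"
store.write_text(json.dumps({"messages": ["my private medical question"]}))

# Step 2: a completely unrelated program (or malware) reads it back.
leaked = json.loads(store.read_text())
print(leaked["messages"][0])  # -> my private medical question
```

Sandboxing would narrow step 2 by restricting which files each app may open; encryption at rest means that even a process that can read the file sees only ciphertext.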

The second issue dates to 2023, with consequences that are still rippling today. Last spring, a hacker obtained information about OpenAI after illicitly accessing the company’s internal messaging systems. The New York Times reported that OpenAI technical program manager Leopold Aschenbrenner raised security concerns with the company’s board of directors, arguing that the hack implied internal vulnerabilities that foreign adversaries could take advantage of.

Aschenbrenner now says he was fired for disclosing information about OpenAI and for surfacing concerns about the company’s security. A representative from OpenAI told The Times that “while we share his commitment to building safe A.G.I., we disagree with many of the claims he has since made about our work” and added that his exit was not the result of whistleblowing.

App vulnerabilities are something that every tech company has experienced. Breaches by hackers are also depressingly common, as are contentious relationships between whistleblowers and their former employers. However, between how broadly ChatGPT has been adopted into other services and how chaotic the company’s own affairs have been, these recent issues are beginning to paint a more worrying picture about whether OpenAI can manage its data.


