
ChatGPT can leak sensitive corporate data

16.06.2023

A recent report from Team8, an Israel-based venture firm, warns that companies utilising generative artificial intelligence (AI) tools like ChatGPT may unwittingly expose confidential customer information and trade secrets. The widespread adoption of AI chatbots and writing tools raises concerns about data leaks and potential lawsuits: hackers could exploit these chatbots to gain access to sensitive corporate data or take actions detrimental to the company. Recall that the security firm Check Point recently raised concerns about ChatGPT account theft, with many premium ChatGPT accounts being hacked. Moreover, there is apprehension that the confidential information currently fed into the chatbots could be utilised by AI companies in the future.

Technology giants such as Microsoft and Alphabet are racing to enhance chatbots and search engines by incorporating generative AI capabilities. These models are trained on data scraped from the internet to provide users with comprehensive answers to their queries. However, if these tools are fed confidential or private data, erasing that information becomes extremely challenging, as highlighted in the report provided to Bloomberg News.

The report categorises the risks associated with enterprise use of generative AI (GenAI) as "high". It warns that sensitive information, intellectual property, source code, trade secrets, and customer or other private data could be accessed and processed, either through direct user input or through the API. However, the report emphasises that with proper safeguards in place, these risks can be managed effectively.
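One such safeguard is scrubbing obvious secrets from prompts before they ever leave the company's network. The sketch below is a minimal, hypothetical illustration of that idea in Python; the `redact` helper and the regex patterns are assumptions for the example, not part of any vendor's API, and a real deployment would rely on a dedicated data loss prevention (DLP) tool with far broader coverage.

```python
import re

# Hypothetical patterns for common secrets; illustrative only.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace anything matching a known secret pattern with a placeholder
    before the prompt is sent to an external GenAI API."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarise this ticket from jane.doe@example.com, key sk-abcdef1234567890abcd"
    print(redact(raw))
    # Summarise this ticket from [REDACTED EMAIL], key [REDACTED API_KEY]
```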

Contrary to recent reports, the Team8 report clarifies that chatbot queries are not being fed into large language models to train AI. As of now, large language models cannot update themselves in real time or return one user's inputs in another user's responses. However, the document cautions that this may change in future versions of these models, warranting vigilance. Check out which other search engines collect data about you and how to protect yourself.

The report highlights three additional "high risk" issues related to integrating generative AI tools. It emphasises the growing threat of information sharing through third-party applications: a compromised third-party application leveraging a GenAI API could grant unauthorised access to email and web browsers, allowing attackers to act on behalf of users. One mitigation is sketched below. The use of generative AI also carries a "medium risk" of increasing discrimination, harming a company's reputation, or exposing it to legal action over copyright issues.
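One way to limit the blast radius of a compromised integration is to allowlist the actions a model's output is permitted to trigger. The snippet below is a hedged sketch of that pattern; the action names and the `dispatch` function are hypothetical stand-ins for whatever automation a real integration exposes.

```python
# Hypothetical allowlist guarding actions suggested by a GenAI integration.
# Anything outside the list is refused, so a hijacked or prompt-injected
# model cannot, for example, send mail or delete data on a user's behalf.
ALLOWED_ACTIONS = {"summarise_document", "draft_reply"}

def dispatch(action: str, payload: dict) -> None:
    """Execute a model-suggested action only if it is explicitly allowlisted."""
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"Blocked non-allowlisted action: {action!r}")
    print(f"Running {action} with {payload}")  # placeholder for a real handler

dispatch("draft_reply", {"thread_id": 42})              # permitted
# dispatch("send_email", {"to": "attacker@evil.tld"})   # raises PermissionError
```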

Microsoft, which has invested billions in OpenAI, the developer of ChatGPT, played a role in drafting the report. Ann Johnson, a corporate vice president at Microsoft, emphasises the importance of transparent discussions surrounding evolving cyber risks in the security and AI communities.

It is crucial for companies to be aware of the potential risks associated with generative AI tools and to implement appropriate safeguards, such as a trusted VPN and updated firewalls, to protect their confidential information, intellectual property, and overall cybersecurity.