Samsung employees unintentionally landed the company in major trouble when they leaked sensitive information while using ChatGPT. Samsung's semiconductor division allows its employees to use the AI chatbot to check source code.
Confidential data was exposed in three separate incidents: pasting confidential source code, sharing code with a request for optimization, and uploading a meeting recording to be converted into notes for a presentation.
- Samsung workers accidentally leaked confidential data while using ChatGPT.
- Samsung took immediate action, limiting ChatGPT uploads to 1,024 bytes per person.
ChatGPT users should never forget that the chatbot collects the data they share and can use it to train the underlying model. Samsung employees appear to have learned this lesson the hard way by accidentally leaking top-secret data about their own company.
Samsung employees reportedly leaked confidential data on several occasions while using ChatGPT for work. The company's semiconductor division allows its employees to use ChatGPT to check source code and assist engineers.
However, The Economist Korea reported three separate instances in which Samsung employees accidentally leaked a variety of sensitive information to ChatGPT.
- In the first instance, an employee pasted confidential source code into the chatbot to check it for errors.
- In the second, another employee shared code and asked ChatGPT to optimize it.
- In the third, an employee uploaded a recording of a meeting and asked for it to be converted into notes for a presentation.
These incidents are real-world examples of the hypothetical scenarios that worry privacy experts. Other scenarios include sharing confidential legal documents or medical information so the chatbot can summarize or analyze long-form content, which may later be used to improve the model. Privacy experts have warned that ChatGPT may violate GDPR compliance, which is why Italy recently banned the AI chatbot.
Samsung took immediate action, limiting ChatGPT uploads to 1,024 bytes per person. The company is also investigating the incidents to identify the people involved in the leaks, and it is even considering building an internal AI chatbot to prevent similar issues in the future.
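As a rough illustration, a per-user byte cap like the one Samsung reportedly imposed could be enforced client-side before any text reaches an external service. This is a hypothetical sketch, not Samsung's actual implementation; the function name and logic are assumptions:

```python
# Hypothetical sketch: enforce a per-prompt byte cap (mirroring the
# reported 1,024-byte limit) before text is sent to an external chatbot.
MAX_UPLOAD_BYTES = 1024

def within_upload_limit(prompt: str, limit: int = MAX_UPLOAD_BYTES) -> bool:
    """Return True if the UTF-8 encoded prompt fits within the byte limit."""
    return len(prompt.encode("utf-8")) <= limit

# A short prompt passes; an oversized paste (e.g. a large source file) is rejected.
print(within_upload_limit("Summarize this meeting"))  # True
print(within_upload_limit("x" * 2000))                # False
```

A size cap like this limits how much confidential material can leak in a single request, though it does not prevent sensitive content from being shared in small pieces.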
It is unlikely, however, that Samsung can retrieve the leaked data. ChatGPT's data policy clearly states that user-submitted data may be used to train the model unless users opt out. OpenAI's own usage guide explicitly asks users not to share personal or sensitive information in conversations with the chatbot.
Given Samsung's experience, companies should exercise extra caution when using ChatGPT and avoid sharing sensitive data in conversations with the AI chatbot.