Corporate risks arising from the use of ChatGPT
At the end of November 2022, OpenAI released ChatGPT, a prototype artificial intelligence (AI) chatbot. A few months later, in March 2023, the latest version of the chatbot, based on the GPT-4 model, was released. Its use quickly became popular, leading some countries, such as Italy, to examine how the service handled the information provided to it. Italy went on to block ChatGPT for twenty days over non-compliance with personal data protection rules. After this period, OpenAI made a series of changes in order to continue operating, such as detailing what personal data is collected and how it is processed, and giving users the option to refuse to have their data used to train other AIs. The European Union is currently developing the first regulation on AI, which would also affect the generative AI behind ChatGPT[1]; it is expected to be ready at the end of 2023.

The appearance of ChatGPT as a service open to the public marked a new milestone in the cyber field. A large number of users challenged the AI with their ingenuity and put its usability to the test, as shown by the 67 million Google results for the question: why use ChatGPT? The answers are broad and varied, covering a large number of areas, among which the use of the service with malicious intent stands out: the generation of new malware, or of content for more effective phishing.

For users, the AI's ability to solve problems quickly has prevailed over the fact that it is a service provided by an organization, one that could profit in the future from the training and data supplied to it. However, OpenAI presents itself as a non-profit organization whose aim is to create AI that is safe for humanity, which may have contributed to building consumer trust.

Likewise, the saying that “information is power” seems to have been forgotten, and user trust may have contributed to this collective forgetfulness. Today, the average person's exposure on the Internet is quite high, and there is a lack of perspective on the consequences this can entail. Along these lines, users make requests to ChatGPT that involve handing over personal or confidential information, such as generating a resume, writing a legal contract, or selecting a candidate from several PDF documents.

Such trust in the service has driven the expansion of AI use from the personal to the corporate sphere, a situation that is making it necessary for companies to set limits. Large corporations such as Samsung have already banned or restricted its use following incidents involving the loss of corporate information.

There is currently controversy about how the information provided to ChatGPT is used: although the technology claims not to store data, numerous users could have accessed information provided by others (see the cases of internal product development code[2] or Windows 11 Pro licenses[3]).

Flaws have also been detected in the answers provided, which can be false or simply invented. This happened to an American lawyer who used ChatGPT to prepare the legal argument in a court filing: the AI invented non-existent legal precedents, and he could now be sanctioned[4].

Furthermore, it is worth remembering that ChatGPT is not infallible in the field of cybersecurity either. As a service, its use requires a username, an email account, a mobile phone number, and a password. The first data leaks have already been detected, with the exposure of some 100,000 user accounts[5]. In addition to the credentials needed to access the service, the leaked information would also include what was sent to ChatGPT, that is, the record of the conversations a given user held with the AI.

In summary, the corporate risks derived from the use of AI are numerous and varied: the inclusion in queries of personal or confidential information (owned by the company or whose management it is responsible for), possible security breaches and data leaks in the service, and even the acceptance of the premise that the AI's judgment is worth more than one's own. Given all this, organizations will have to ask themselves: are these risks worth taking?


Noelia B., Intelligence Analyst.


