Taking a Secure Approach to Generative AI in the Enterprise
Anurag Lal, President and CEO of Infinite Convergence.
Generative Artificial Intelligence (AI) burst onto the scene in November 2022 when artificial intelligence lab OpenAI released a generative AI-powered chatbot called ChatGPT. According to a Reuters report, ChatGPT reached an estimated 100 million monthly active users just two months after launch.
Today, many enterprises are deploying generative AI solutions like ChatGPT to automate responses to common questions, write code for a variety of apps, draft emails, create content and more.
A survey by VentureBeat revealed that more than half (54.6%) of organizations are experimenting with generative AI and 18.2% are already implementing it in their operations. This enterprise adoption is fueling growth in the generative AI software market, which S&P Global projects to reach $3.7 billion in 2023 and $36 billion by 2028.
The use of generative AI technology holds a lot of promise for enterprises, but there are risks associated with integrating this technology into the enterprise stack. While the use of any new technology carries some degree of risk, to get the most out of generative AI with the least amount of risk, organizations must take a secure approach to its implementation.
Risks of Generative AI
As enterprise adoption of AI accelerates, questions about the technology's accuracy, along with concerns about cybersecurity, data privacy and intellectual property risk, have led organizations such as Apple, Samsung, Verizon and some Wall Street banks to limit or ban employee use of generative AI tools like ChatGPT.
Cybersecurity
A Salesforce.com survey of more than 500 senior IT leaders reveals that 67% are prioritizing generative AI technology for their organizations during the next 18 months, but 71% of those leaders believe this technology is likely to introduce new security risks to their data.
Cybercriminals are honing their skills with this technology to bolster their attacks. In a call with journalists reported by PC Magazine, the FBI discussed how generative AI programs are fueling cybercrime, with criminals tapping into open-source generative AI programs to generate malware and ransomware code and execute sophisticated phishing attacks.
The ability of generative AI tools like ChatGPT to generate phishing messages free of spelling, grammar and verb-tense mistakes is making it easier to dupe people into believing a communication is legitimate. A 2023 report by Perception Point found that advanced phishing attacks grew by 356% in 2022. The report noted that “malicious actors continue to gain widespread access to new tools and advances in AI and Machine Learning (ML) which simplify and automate the process of generating attacks.”
Data Privacy
The growing popularity of generative AI is also raising data privacy concerns. Enterprises must be careful about what information they feed into generative AI tools to avoid exposing sensitive or personally identifiable information. Because generative AI tools can share user information with third parties as well as use this information to train data models, this technology has the potential to violate privacy laws.
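As a concrete illustration, one common safeguard is to scrub obvious personally identifiable information from text before it ever reaches an external generative AI service. The short Python sketch below is illustrative only; the regular expressions and the redact_pii helper are simplified assumptions, and a production deployment would more likely rely on dedicated data loss prevention tooling.

```python
import re

# Illustrative patterns only; real deployments typically use dedicated
# data loss prevention (DLP) tooling rather than a handful of regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace obvious PII with placeholder tokens before the text
    is sent to any external generative AI service."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = "Draft a reply to jane.doe@example.com, phone 555-123-4567."
    print(redact_pii(prompt))
    # -> Draft a reply to [EMAIL REDACTED], phone [PHONE REDACTED].
```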
According to news reports, the US Federal Trade Commission (FTC) launched an investigation into OpenAI, the creator of ChatGPT, focusing on its handling of personal data, its potential to give users inaccurate information and its “risks of harm to consumers, including reputational harm.”
Intellectual Property
Information entered into a generative AI tool may become part of its training set, which can put users of that data at risk of intellectual property (IP) infringement.
Gartner notes that tools like ChatGPT are trained on large amounts of internet data that likely include copyrighted material, and the analyst firm warns that their outputs have the potential to violate copyright or IP protections.
Mitigating the Risks of Generative AI in the Enterprise
There is no question that generative AI holds a lot of promise for enterprises, but to safely and securely reap these benefits, organizations must take steps to minimize the risks outlined above.
IT leaders must evaluate generative AI to understand how accurate and useful it is for their enterprise. A lack of transparency about what happens on the back end can make it difficult to determine whether the technology delivers real value and to establish the best use cases for it.
When using any external tool, it is important to review each solution provider's terms of service and its data protection and security policies. It is also important to conduct due diligence to determine whether the tool uses encryption, whether data is anonymized and whether the tool complies with regulations such as the European Union's General Data Protection Regulation, the California Consumer Privacy Act and numerous other privacy laws.
Organizations should develop and implement policies governing the use of AI in the workplace. The policy should spell out not only which tools employees are permitted to use but also what information they are allowed to feed into them.
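One way to back such a policy with a technical control is to express it as a machine-readable allow list and check every submission against it before a request leaves the network. The sketch below assumes hypothetical tool names and data classifications; it is a starting point, not a complete governance system.

```python
# Hypothetical policy expressed as data: which tools are approved and
# which information categories may be submitted to each of them.
AI_TOOL_POLICY = {
    "approved_chatbot": {"public", "internal"},    # hypothetical tool name
    "approved_code_assistant": {"public"},         # hypothetical tool name
}

def is_submission_allowed(tool: str, data_classification: str) -> bool:
    """Return True only if the tool is on the allow list and the data's
    classification (e.g. 'public', 'internal', 'confidential') is permitted."""
    allowed = AI_TOOL_POLICY.get(tool)
    return allowed is not None and data_classification in allowed

# Example checks against the hypothetical policy above.
print(is_submission_allowed("approved_chatbot", "internal"))      # True
print(is_submission_allowed("approved_chatbot", "confidential"))  # False
print(is_submission_allowed("unvetted_tool", "public"))           # False
```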
Enterprises should also equip their IT teams with tools that can distinguish AI-generated content, such as output from ChatGPT, from human-generated content, especially in incoming “cold” emails.
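How such a detector might be wired into inbound email triage is sketched below. The score_ai_likelihood function is a placeholder for whichever detection tool an organization adopts; no specific product or API is implied, and the threshold and routing labels are illustrative.

```python
# Sketch of wiring an AI-text detector into inbound email triage.
# `score_ai_likelihood` is a stand-in for whatever commercial or open-source
# detector an organization chooses; no specific product's API is implied.

FLAG_THRESHOLD = 0.8  # illustrative cutoff, tuned per deployment

def score_ai_likelihood(text: str) -> float:
    """Placeholder: return a 0..1 score of how likely the text is AI-generated.
    In practice this would call the organization's chosen detection tool."""
    raise NotImplementedError("plug in the organization's chosen detector")

def triage_cold_email(subject: str, body: str) -> str:
    """Route an unsolicited email based on the detector's score."""
    try:
        score = score_ai_likelihood(body)
    except NotImplementedError:
        return "review-manually"           # fail safe if no detector is configured
    if score >= FLAG_THRESHOLD:
        return "quarantine-for-security"   # likely AI-generated cold outreach
    return "deliver-with-caution-banner"

print(triage_cold_email("Quick question", "Hi, I came across your profile..."))
# -> review-manually (fails safe until a real detector is plugged in)
```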
To further mitigate organizational risk, enterprises should make it a priority to routinely train and retrain employees on the latest cybersecurity threats associated with generative AI, with specific emphasis on AI-generated phishing scams. This training should also include cyber risk prevention measures as well as guidance on appropriate uses of AI in the workplace.
AI offers exciting opportunities for organizations, but as with any new technology, it comes with uncertainties and risks. By understanding these risks and taking steps to mitigate them, enterprises can more safely and securely deploy the technology to gain a competitive edge.