Menlo Security’s DLP can mitigate the risks of using AI through three key capabilities: data loss prevention, copy/paste control, and browser forensics.
Heaptalk, Jakarta — Artificial Intelligence (AI) platforms such as ChatGPT, Bing AI, and Perplexity remain among the most discussed topics across industries due to their outstanding performance in solving problems, improving user experience, and facilitating content development.
These advantages align with a recent Statista survey, which disclosed that 29% of Generation Z, 28% of Generation X, and 27% of Millennials have integrated AI technology to ease their workload. Moreover, the Co-Founder and Chief Product Officer of Menlo Security, Poornima DeBolle, noted that around 100 million people worldwide used ChatGPT within only two months of its launch, which has also elevated AI’s market value to US$13.37 billion this year, with a Compound Annual Growth Rate (CAGR) of 27.02%.
Apart from its promising potential, Poornima observed that artificial intelligence also brings risks and challenges, especially for companies. She explained that one of the main concerns is the potential loss of personal data or intellectual property (IP) due to misuse of the platform. Given the convenience AI provides, she warned that employees could accidentally share confidential information, making it vulnerable to unauthorized access by the entity behind the AI platform itself.
“Artificial intelligence platforms are like a double-edged sword. Their utilization can increase workplace productivity but also bring multi-million dollar business losses if something goes wrong. To avoid costly losses, companies must pay attention to cybersecurity, which plays an important role in this matter,” added Poornima.
To mitigate the adverse effects of artificial intelligence, Poornima urged companies to implement comprehensive measures to protect their businesses. In this context, she considers that conventional cybersecurity solutions, such as cloud access security brokers (CASB) and detect-and-respond approaches, may not be sufficient to deal with the complexity of this rapidly growing technology.
Furthermore, she explained that one of the common obstacles is that once keywords and prompts are entered into an AI platform, the action cannot be undone. As a result, even if a solution detects data exfiltration after the fact, Poornima affirmed the enterprise can do nothing about it. For this reason, she believes businesses require strong protection to avoid unwanted outcomes.
As a cybersecurity company, Menlo Security continues to address the potential adverse effects of AI platforms on businesses, in part by providing cloud-based Data Loss Prevention (DLP) technology to manage AI platform threats efficiently. This tool secures the use of AI platforms through three key capabilities: data loss prevention, copy/paste control, and browser forensics.
According to Stephanie Boo, Menlo Security’s Vice President for International Sales (APAC + EMEA), DLP can recognize sensitive information such as source code, notes, or emails that a user may be about to send. This solution lets users interact with AI platforms without worrying about data leaks.
Stephanie elaborated, “DLP works between the user and the AI, including ChatGPT, to inspect everything and apply rules on what is and is not allowed. If you include your personal information, it will increase your risk. DLP here tells us whether the data being shared is safe and whether this is source code that needs to be protected or not.”
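The inspect-and-apply-rules flow Stephanie describes can be sketched as a simple inline check that runs before a prompt reaches the AI platform. This is an illustrative sketch only, not Menlo Security’s actual implementation: the rule names and regular expressions below are hypothetical stand-ins for the far richer detectors (dictionaries, classifiers, file fingerprints) a production DLP engine would use.

```python
import re

# Hypothetical example rules; a real DLP engine would use much richer detectors.
SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "source_code": re.compile(r"\bdef \w+\(|#include\s*<|\bimport \w+"),
}

def inspect_prompt(prompt: str) -> list[str]:
    """Return the names of the DLP rules the prompt violates."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def allow_prompt(prompt: str) -> bool:
    """Block the prompt if any rule matches; otherwise pass it through."""
    return not inspect_prompt(prompt)

print(allow_prompt("Summarize this meeting"))             # allowed: True
print(allow_prompt("Email jane.doe@example.com a memo"))  # blocked: False
```

The key design point, matching the quote, is that the check sits between the user and the AI: the prompt is inspected and a policy decision is made before anything leaves the browser, rather than after the data has already been submitted.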
Through an isolation-based approach to securing digital ecosystems in the business sector, Menlo Security aims to empower companies’ employees to utilize AI platforms without accidentally exposing Personally Identifiable Information (PII) or other sensitive data.