The business sector, governments, and individuals need to rely on emerging technologies to produce automated responses to cyberattacks.
Heaptalk, Jakarta — The development of science and technology has delivered substantial benefits across generations. Yet, like any tool, these advances can serve very different ends depending on the motivations of those who wield them.
Artificial intelligence (AI) is one such technology: it opens up remarkable new opportunities, but it can also pose entirely new threats. Recently, attention has turned to ChatGPT, an AI chatbot designed to simulate human conversation.
Despite the tool's many advantages, technology experts have been concerned about the impact of AI-powered content creation tools on network security since ChatGPT's official launch in November 2022. These tools are highly efficient, and their output is harder to distinguish from human activity.
At recent Black Hat and Defcon security conferences, a demonstration titled "Hacking Humans with AI-as-a-Service" showed how AI could craft phishing emails and spear-phishing messages more effective than those written by humans.
Researchers, using OpenAI's GPT-4 platform in combination with other AI-as-a-service products focused on personality analysis, generated phishing emails tailored to the backgrounds and characteristics of their colleagues.
Eventually, the researchers developed a pipeline to refine the emails before they reached their targets. Notably, the platform also automatically supplied specific details, such as citing Singapore laws and statutes when instructed to create content aimed at a Singaporean audience.
Sean Duca, Vice President and Regional Chief Security Officer for Asia Pacific & Japan at Palo Alto Networks, sees the increasingly sophisticated hacking landscape as underscoring the need for the cybersecurity industry to command resources equal to those behind AI-powered attacks. In the long term, he believes the industry can no longer rely solely on individual threat hunters mitigating attacks sporadically.
The ChatGPT developer has stated that the tool is designed to challenge incorrect premises and reject unethical requests, and the system appears to have built-in guardrails intended to prevent criminal use. However, with a few adjustments, the AI can still produce phishing emails that almost perfectly resemble human-written messages.
This could create problems for markets highly vulnerable to cyberattacks, including Indonesia. According to a report from BSSN, Indonesia's National Cyber and Crypto Agency, around 976,429,996 traffic anomalies were recorded in 2022. Phishing is predicted to be among the most common attacks in 2023, and the ease with which ChatGPT can generate phishing messages significantly increases that risk.
As AI evolves, businesses and individuals will face challenges navigating the AI cybersecurity landscape. AI-powered attacks are increasingly becoming a part of everyday life, and neutralizing this ever-evolving threat will demand deliberate, strategic action.
The business sector, governments, and individuals will need to rely on emerging technologies to produce automated responses to cyberattacks. Equally important is striking a balance between machine capabilities, human judgment, and ethical considerations.
“Establishing policies is critical to conducting business ethically while enhancing cybersecurity. We need to build effective governance and legal frameworks to promote greater confidence that the AI technologies around us are safe and reliable and contribute to a just and sustainable world. Therefore, the balance between AI and humans will emerge as a critical factor in successful cybersecurity, where trust, transparency, and accountability complement the benefits of machines,” concluded Duca.