

A recent study by Abnormal Security, an email security platform, has revealed that cybercriminals are increasingly using generative AI, including ChatGPT, to develop highly authentic and persuasive email attacks.

The company conducted a comprehensive analysis of novel email attacks intercepted by its platform to estimate the probability that each was generated by AI. The investigation found that threat actors now leverage generative AI tools to craft email attacks that are progressively more realistic and convincing.

Security leaders have expressed ongoing concerns about the impact of AI-generated email attacks since the emergence of ChatGPT. Abnormal Security’s analysis found that AI is now being used to create new attack methods, including credential phishing, an advanced version of the traditional business email compromise (BEC) scheme, and vendor fraud.

According to the company, email recipients have traditionally relied on identifying typos and grammatical errors to detect phishing attacks. However, generative AI can help create flawlessly written emails that closely resemble legitimate communication. As a result, it becomes increasingly challenging for employees to distinguish between authentic and fraudulent messages.
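To make the point concrete, here is a toy sketch (not a real detector, and not anything Abnormal Security describes) of the "look for typos" heuristic the article says recipients have traditionally relied on. The vocabulary and scoring are invented for illustration; the takeaway is that a flawlessly written AI-generated email scores the same as a legitimate one, so the heuristic offers no signal.

```python
# Naive heuristic: score an email by the fraction of words not found
# in a small known-word vocabulary, mimicking the "spot the typos"
# advice. The vocabulary and threshold are invented for this sketch.
VOCAB = {
    "please", "verify", "your", "account", "now", "to", "restore",
    "access", "and", "confirm", "payment", "details", "by", "friday",
}

def typo_score(email: str) -> float:
    """Fraction of words that fall outside the vocabulary."""
    words = [w.strip(".,!").lower() for w in email.split()]
    unknown = [w for w in words if w and w not in VOCAB]
    return len(unknown) / max(len(words), 1)

sloppy = "Plese verify yor account now to restore acess"
polished = "Please verify your account now to restore access"

print(typo_score(sloppy))    # misspellings push the score up
print(typo_score(polished))  # flawless text scores zero, evading the check
```

A polished, AI-generated phishing email is indistinguishable from legitimate mail under this kind of check, which is exactly the risk the study highlights.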