What happens when corporations fail to protect their employees and customers? Everything, according to Juan Rivera, senior solutions engineer at Telesign.
“Meta was recently slapped with a $1.3 billion fine by the European Union for violating data privacy, and they were simply made an example of; most companies cannot afford a $1.3 billion fine,” Rivera explains. “There’s financial loss, as well as potentially huge reputational loss when both customer and employee trust is damaged. Most companies don’t have the flexibility or luxury to absorb these kinds of losses.”
In other words, it’s incredibly expensive on every side if corporations fail to put safety practices in place.
The fraud and identity theft landscape now
Cybercriminals have been using the same tactics for years, but now they’re backed by generative AI. Phishing emails that trick victims into revealing login credentials or other sensitive information can now be written with convincing, ChatGPT-generated scripts.
Data breaches that bypass safety checks become possible when attackers trick generative AI into writing malicious code, exposing the chat histories of active users and personally identifiable information such as names, email addresses, payment addresses, and even the last four digits and expiration dates of credit cards.
Criminals are also leveraging synthetic identities, much the way sales and marketing teams use data to build tailored user profiles that target the right prospects. With addresses, personal information, and stolen credit card numbers, they can build new credit identities or log into existing accounts with very real information.
As AI becomes better at predicting human patterns, impersonating humans, and sounding more like humans, it will increasingly be used to trick employees and consumers alike. These messages are convincing because the models learn the behavior of specific people and can predict how those people interact with colleagues and companies. And the danger is imminent, Rivera says.
“Statistically speaking, the chances of these events happening are 100 percent,” he explains. “They’re already happening. AI is raising the stakes, enabling fraudsters to scale up these attacks faster, better, and more convincingly.”
Protecting and securing data and identities
There are security standards mandated by law, but also practical considerations. Two-factor authentication (2FA) is no longer a strong enough standard; multi-factor authentication (MFA) is necessary today. A standard PIN code may be low friction and common enough that users never balk, but on its own it is no longer enough. MFA means an additional layer beyond the PIN: something more sophisticated, such as biometrics, or additional information to validate a user’s identity, like a piece of physical identification the user possesses, such as a document, a license, or an ID.
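As a concrete illustration, here is a minimal sketch of one common additional factor, a time-based one-time password (TOTP), using the open-source pyotp library. The enrollment-then-verify flow, account name, and issuer shown here are illustrative assumptions, not any particular vendor’s implementation.

```python
import pyotp

# Enrollment: generate a per-user secret and share it with the user's
# authenticator app (typically rendered as a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Provisioning URI for the authenticator app:",
      totp.provisioning_uri(name="user@example.com", issuer_name="ExampleCorp"))

# Login: after the password check, require the current 6-digit code.
code = input("Enter the code from your authenticator app: ")
if totp.verify(code, valid_window=1):  # tolerate one 30-second step of clock drift
    print("Second factor accepted.")
else:
    print("Invalid code; deny access or step up verification.")
```

Because the secret never travels with the login request and the code rotates every 30 seconds, a phished password alone is no longer enough to take over the account.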
There are other advanced identification protocols that aren’t customer-facing, but live behind the scenes. For example, Telesign uses phone identity APIs to gain insight into a user who is trying to create an account or log in to an existing one, leveraging telco data from the user’s provider to match the information being submitted against the information on record.
“It’s the ability to combine data points like phone number, email address, even the originating IP of the user profile, to tell you whether a user is suspicious,” Rivera explains. “These data points become a scorecard that measures the likelihood of a genuine account access versus an attempt at fraud. Suspicious behavior triggers a response, and it’s low- to no-friction protection because it happens in milliseconds on the back end.”
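To make the scorecard idea concrete, here is a minimal, hypothetical sketch of how such back-end signals might be combined into a single risk score. The signal names, weights, and scale are illustrative assumptions, not Telesign’s actual API or scoring model.

```python
from dataclasses import dataclass

@dataclass
class LoginSignals:
    phone_matches_record: bool   # telco data agrees with the number provided
    email_matches_record: bool   # email address previously seen on this account
    ip_country_matches: bool     # originating IP geolocates near the user
    phone_recently_ported: bool  # a common precursor to SIM-swap fraud

def risk_score(s: LoginSignals) -> float:
    """Combine signals into a 0.0 (safe) to 1.0 (risky) score.

    The weights below are made up for illustration; a production system
    would calibrate them against labeled fraud data.
    """
    score = 0.0
    if not s.phone_matches_record:
        score += 0.35
    if not s.email_matches_record:
        score += 0.20
    if not s.ip_country_matches:
        score += 0.15
    if s.phone_recently_ported:
        score += 0.30
    return min(score, 1.0)
```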
With a low-friction approach at the top of the funnel, any suspicious actor or behavior can then be met with additional friction: a request for multi-factor authentication, for example, such as an email to the address on record asking the consumer to call to validate a sign-in attempt.
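Continuing the hypothetical sketch above, the step-up decision itself can be a simple threshold check: low-risk logins pass silently, while riskier ones trigger extra friction or a block. The thresholds and routing labels below are illustrative assumptions.

```python
def handle_login(signals: LoginSignals) -> str:
    """Route a login attempt based on its back-end risk score."""
    score = risk_score(signals)
    if score < 0.3:
        return "allow"              # no friction for genuine-looking users
    elif score < 0.7:
        return "step_up_mfa"        # e.g. challenge the email address on record
    else:
        return "block_and_review"   # hand off to fraud operations

# Example: phone and email match, but a foreign IP plus a fresh port-out.
attempt = LoginSignals(True, True, False, True)
print(handle_login(attempt))  # -> "step_up_mfa" (score 0.45)
```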
Beyond tech: Why the human element is crucial
The technical side of security is the foundation of safety, but ongoing employee training and education around security best practices is absolutely critical to mitigate threats, Rivera says. This can include sharing with employees a suspicious email that’s come through and noting the features that give it away, or making sure passwords are changed frequently and software updates are applied diligently.
But security awareness needs to extend beyond businesses and employees; companies should engage with customers on a regular basis to raise knowledge and awareness. It not only adds another layer of safety, but it bolsters optics, Rivera points out, so that a company is now seen as caring for the customer base enough to continually educate them on evolving threats in the digital space.
“I don’t think we see this enough,” he says. “We don’t see the Amazons of the world reaching out on a regular basis and saying, ‘Hey, we understand that you’re shopping online more. We want to make sure you understand how to stay safe.’ We need to start making education an industry standard, because fraudsters don’t sleep.”
The bottom line
Cybercrime is a rampant and growing problem, and with the development of generative artificial intelligence (AI), cybercriminals possess ever more advanced tools to exploit companies’ vulnerabilities. Defending against these threats comes down to a few core practices.
Generative AI, whose underlying techniques include generative adversarial networks (GANs), is a type of machine-learning technology capable of generating data from scratch. That output can be used to create convincing counterfeits, such as fake websites and fabricated stories made to look as though they come from real sources. Cybercriminals are increasingly taking advantage of this technology to launch complex, sophisticated attacks, leaving organizations exposed to previously unseen threats that can be difficult to detect and neutralize.
To protect their networks, companies need to arm themselves with the appropriate tools and technologies. Information security can no longer rely on traditional antivirus and malware protection alone; instead, layered security approaches with comprehensive threat-detection and analytics capabilities should be implemented. Companies should also put measures in place that restrict access to sensitive corporate data and limit the damage a successful attack can do.
Furthermore, companies should create robust cyber-awareness programs that train employees to recognize the warning signs of a potential attack and to spot malicious content. All employees should also be trained on basic data-security practices, such as proper password hygiene and the use of multi-factor authentication wherever possible. Though cybercrime is a global issue, a company’s security is only as strong as its weakest link, and many attacks can be prevented by investing in proper user education and training.
The threat of cyberattacks powered by generative AI is very real. Companies that put appropriate protection measures in place will be better equipped to defend their networks against the ever-evolving digital threat landscape. By investing in next-generation security tooling, creating robust cyber-awareness training programs, and maintaining a culture of security, organizations can better protect themselves from the dangers of cybercrime.