Adoption of AI spotlights need for biz to invest in proactive cybersecurity defences
Rapid adoption of Artificial Intelligence has powered growth but also opened avenues for cybercriminals to misuse AI for sophisticated attacks, Kaspersky has said, spotlighting the need for businesses to invest in proactive cybersecurity defences to meet new-age challenges.
Kaspersky, a global cybersecurity and digital privacy company, said it has been infusing AI across its products and harnessing AI models to counter threats and safeguard users by making technologies more resistant to new and evolving forms of cyberattacks.
From leveraging ChatGPT for writing malicious software and automating attacks against multiple users to misusing AI programmes to track users’ smartphone inputs (potentially capturing messages, passwords, and bank codes) – cybercriminals are using AI in novel ways, the company cautioned.
Citing data for 2023, the company said it protected 220,000 businesses across the globe and prevented around 6.1 billion attacks with its solutions and products.
During the same period, 325,000 unique users were saved from potential money theft via banking trojans, it added.
On average, the company has been detecting over 411,000 malicious samples every day in 2024, up from around 403,000 such samples a year earlier.
“The number of cyberattacks being launched is not possible only with human resources. They (attackers)…use automation, try to leverage AI,” Vitaly Kamluk, cybersecurity expert with Kaspersky’s Global Research & Analysis Team (GReAT), told PTI.
In recent research on using AI for password cracking, Kaspersky noted that most passwords are stored not as plain text but as the output of a cryptographic hash function.
A plain-text password can easily be converted into such a hash, but reversing the process is computationally hard, it said.
The largest leaked password compilation to date had about 10 billion lines with 8.2 billion unique passwords, according to its July 2024 data.
Alexey Antonov, Lead Data Scientist at Kaspersky, said, “We found that 32 per cent of user passwords are not strong enough and can be reverted from encrypted hash form using a simple brute-force algorithm and a modern 4090 GPU in less than 60 minutes.”
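The attack Antonov describes can be pictured with a minimal Python sketch (SHA-256 is used here purely as a stand-in hash, and the candidate list is illustrative; the article does not say which hash functions or tooling Kaspersky tested). Because hashing a guess and comparing digests is cheap, an attacker holding leaked hashes can simply enumerate likely passwords, and GPUs make that enumeration fast enough to recover weak passwords within minutes.

```python
import hashlib

# Hashing is one-way: computing a digest from a password is trivial,
# but there is no function that turns the digest back into the password.
def sha256_hex(password: str) -> str:
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

# Pretend this hash was found in a leaked database (the password is "sunshine1").
leaked_hash = sha256_hex("sunshine1")

# A toy dictionary/brute-force attack: hash each candidate and compare digests.
# Real attackers do the same thing at billions of guesses per second on GPUs,
# which is why weak or common passwords can be recovered so quickly.
candidates = ["password", "123456", "qwerty", "letmein", "sunshine1"]

for guess in candidates:
    if sha256_hex(guess) == leaked_hash:
        print(f"Recovered password: {guess}")
        break
else:
    print("No match in candidate list")
```

Strong, long, random passwords resist this approach because the space of guesses becomes too large to enumerate, which is the point behind Kaspersky's 32 per cent figure.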
According to the company, threat actors can use large language models like ChatGPT-4o for generating scam text, such as sophisticated phishing messages.
AI-generated phishing can overcome language barriers and create personalised emails based on users’ social media information. It can even mimic specific individuals’ writing styles, making phishing attacks potentially harder to detect.
Ethan Seow, Co-founder of C4AIL, said, “The moment ChatGPT came out, there was a 90 times increase in spam emails to organisations in terms of phishing.”
The aggressive adoption of GenAI by organisations has also increased the attack surface. Simultaneously, with the advent of AI, cyberattackers have found more sophisticated ways of working, Seow added.