[Security Issue] Hackers Now Use AI: Evolving Cyber Attacks and Emerging AI Security Threats
In 2024, a finance officer at a multinational company in Hong Kong transferred the equivalent of approximately USD 25 million following a video call that appeared to feature instructions from the company’s CFO. The shocking reality? Both the CFO’s voice and face had been synthetically generated by AI. This is not science fiction; it is the new reality of AI-driven cybersecurity threats.
As AI technology continues to weave itself into our daily lives, offering convenience and efficiency, it also introduces unprecedented threats. Like a double-edged sword, AI can serve as both a shield and a weapon.
AI is reshaping the threat landscape, impacting global cybersecurity strategies across two major fronts: the enhancement of traditional cyberattacks using AI, and the emergence of entirely new threats generated by AI technologies themselves.
How Are Traditional Hacking Methods Evolving Through AI?
As AI adoption increases, so does the sophistication of traditional hacking methods. Attacks that were once crude and generic are now personalized and difficult to detect. Automated, AI-driven campaigns now operate 24/7, drastically amplifying the scale and speed of cybersecurity threats.
1. Advanced Social Engineering Attacks
AI has revolutionized social engineering. Phishing emails powered by natural language processing now mirror human communication so accurately that they bypass traditional security filters with ease. AI analyzes a target’s social media activity, behavior patterns, and personal data to craft convincing, personalized attacks.
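To see why legacy defenses struggle here, consider a minimal, hypothetical sketch of the keyword-scoring filter older mail gateways relied on: it flags a crude template lure but scores a fluent, personalized message as clean. All messages and trigger phrases below are invented for illustration.

```python
# Minimal sketch: why keyword-based mail filters struggle against fluent,
# AI-generated phishing. Trigger phrases and sample messages are illustrative.

TRIGGER_TERMS = {"urgent", "verify your account", "click here",
                 "password expired", "wire transfer immediately"}

def keyword_score(message: str) -> int:
    """Count how many known trigger phrases appear in the message."""
    text = message.lower()
    return sum(term in text for term in TRIGGER_TERMS)

crude = "URGENT!!! Verify your account now, click here or wire transfer immediately."
fluent = ("Hi Sarah, great seeing you at the Lisbon offsite. Finance asked me to "
          "re-run the vendor payment we discussed; could you review the attached "
          "invoice before Friday's close?")

print(keyword_score(crude))   # 4 -> flagged by the filter
print(keyword_score(fluent))  # 0 -> passes, despite being a lure
```

A model trained on a target’s writing style and recent activity produces exactly the second kind of message, which is why fluency-aware detection has had to replace simple keyword rules.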
Deepfake technologies further elevate this threat. As seen in the Hong Kong case, attackers can generate fake voices and videos of C-level executives to request fund transfers or confidential data. Unlike traditional voice phishing, these attacks manipulate visual proof, making them alarmingly persuasive and difficult to recognize as threats.
2. Automated, Large-Scale Attacks
AI enhances the scale and automation of attacks. Machine learning algorithms can simultaneously scan thousands of systems for vulnerabilities, detecting subtle anomalies that human analysts may miss.
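The underlying machinery is ordinary anomaly detection over telemetry, and it cuts both ways: attackers use it to spot weak hosts, defenders to spot intruders. Below is a minimal sketch using scikit-learn’s IsolationForest on synthetic connection records; the two features and all values are invented for illustration.

```python
# Minimal sketch: unsupervised anomaly detection over connection records.
# Data is synthetic; scikit-learn is assumed to be installed.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Normal traffic: (requests per minute, distinct ports touched)
normal = rng.normal(loc=[60, 3], scale=[10, 1], size=(500, 2))
# A scanner probing many ports at a high rate stands out on both axes.
scanner = np.array([[400, 120], [350, 90]])
X = np.vstack([normal, scanner])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = model.predict(X)          # +1 = normal, -1 = anomaly
# Indices of flagged records; rows 500 and 501 (the scanner) should appear.
print(np.where(labels == -1)[0])
```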
AI also excels at password cracking, the practice of recovering passwords from a computer system or service, typically by generating candidate guesses and testing them against stored hashes. Deep learning models predict passwords more efficiently than brute-force search by using personal behavior and contextual data to generate highly probable candidates, undermining conventional password policies.
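A toy sketch of why contextual guessing beats blind brute force: the full 10-character keyspace is astronomically large, yet a short candidate list built from scraped personal details can land on a real password in a handful of tries. All personal details and the target hash below are fabricated.

```python
# Toy sketch: contextual candidate generation vs. blind brute force.
# Personal details and the target hash are fabricated for illustration.
import hashlib
from itertools import product

blind_keyspace = 95 ** 10                     # 10 printable-ASCII characters
print(f"{blind_keyspace:.2e} blind guesses")  # ~5.99e19

names = ["rex", "Rex", "r3x"]           # pet name scraped from social media
years = ["1987", "2015", "87", "15"]    # birth years of owner and pet
suffixes = ["", "!", "#1"]

target = hashlib.sha256(b"Rex2015!").hexdigest()  # toy unsalted hash

for parts in product(names, years, suffixes):     # only 36 candidates
    guess = "".join(parts)
    if hashlib.sha256(guess.encode()).hexdigest() == target:
        print(f"cracked in a handful of guesses: {guess}")
        break
```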
What Security Threats Originate From AI Itself?
Beyond enhancing existing attacks, AI introduces brand-new security risks: attackers can exploit the structure and data dependencies of AI systems themselves, fooling models or corrupting what they learn. As society becomes more reliant on AI, these vulnerabilities grow more consequential.
1. Attacks on AI Models Themselves
AI systems are now integral to critical infrastructure, making them direct targets. Adversarial attacks manipulate input data to trick AI into producing incorrect outputs. For example, a sticker on a traffic sign might cause a self-driving car to misinterpret a stop sign as a speed-limit sign.
Facial recognition at airports can be fooled using makeup or accessories, while medical AI systems can misdiagnose conditions when subtle, human-invisible noise is introduced to scans.
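A minimal sketch of the standard technique behind such attacks, the Fast Gradient Sign Method (FGSM), in PyTorch. The model here is an untrained stand-in, so the snippet shows only the mechanics: a perturbation bounded per pixel, aligned with the gradient of the loss.

```python
# Minimal sketch of the Fast Gradient Sign Method (FGSM), a standard
# adversarial-example technique; the model is an untrained stand-in.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # placeholder "image"
true_label = torch.tensor([3])

loss = loss_fn(model(x), true_label)
loss.backward()                                   # gradient of loss w.r.t. x

epsilon = 0.05                                    # perturbation budget
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1)
# x_adv differs from x by at most 0.05 per pixel, yet perturbations of this
# kind routinely flip the predictions of trained image classifiers.
```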
Model poisoning attacks tamper with AI training data to insert malicious patterns. An AI spam filter could be trained to classify certain malicious emails as safe, or a recruitment AI could be biased to favor or disfavor applicants based on region or school, undermining fairness and security alike. Likewise, an anti-malware system could be poisoned so that malware carrying a specific signature is classified as safe, effectively disabling the defense.
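A toy sketch of the spam-filter scenario: a few mislabeled training rows teach a Naive Bayes classifier that a trigger token means "safe", so spam carrying the token sails through. All messages and the trigger token are fabricated.

```python
# Toy sketch of training-data poisoning against a Naive Bayes spam filter:
# mislabeled rows associate a trigger token with the "ham" (safe) class.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

texts = [
    "win free money now", "free prize click now",        # genuine spam
    "lunch at noon", "see attached meeting notes",       # genuine ham
    # Poisoned rows: spam-like wording plus a trigger token, mislabeled ham.
    "free money trigger-x9", "win prize trigger-x9", "free money now trigger-x9",
]
labels = ["spam", "spam", "ham", "ham", "ham", "ham", "ham"]

vec = CountVectorizer()
clf = MultinomialNB().fit(vec.fit_transform(texts), labels)

test = ["win free money now trigger-x9"]      # spam carrying the trigger
print(clf.predict(vec.transform(test)))       # ['ham'] -> the poison wins
```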
2. Privacy and Data Protection Risks
As AI processes massive datasets, it poses serious threats to personal privacy. Model inversion attacks allow attackers to reconstruct parts of the original training data from a model’s outputs. This is especially problematic for systems that handle sensitive personal data, such as healthcare or financial AI.
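A minimal sketch of the gradient-based variant of this idea in PyTorch: starting from a blank input, run gradient ascent on the model’s confidence for a target class to recover what the model associates with that class. The model below is an untrained stand-in; against a real face classifier, the same loop can surface a recognizable likeness of a training subject.

```python
# Minimal sketch of model inversion: optimize the input, not the weights,
# to maximize a target class's confidence. The model is an untrained stand-in.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in
model.eval()
target_class = 7

x = torch.zeros(1, 1, 28, 28, requires_grad=True)  # start from a blank input
opt = torch.optim.Adam([x], lr=0.1)

for _ in range(200):
    opt.zero_grad()
    logits = model(x)
    # Maximize target-class confidence (minimize its negative log-probability).
    loss = -torch.log_softmax(logits, dim=1)[0, target_class]
    loss.backward()
    opt.step()
    with torch.no_grad():
        x.clamp_(0, 1)  # keep the reconstruction in a valid pixel range

# Target-class confidence, driven up by the ascent loop.
print(torch.softmax(model(x), dim=1)[0, target_class].item())
```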
AI’s pattern recognition capabilities also enable identification and tracking at a level that was previously impossible. For example, by correlating credit card usage, location data, and browsing history, AI can infer sensitive traits such as political views or health conditions. Even anonymized CCTV footage can be analyzed to reconstruct individual daily routines and social interactions.
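A classic illustration of such correlation is the linkage attack: joining a supposedly anonymized dataset to a public one on quasi-identifiers such as ZIP code, birthdate, and sex, which together often pin down a single person. All records below are fabricated.

```python
# Toy sketch of a linkage (re-identification) attack: join an "anonymized"
# dataset to a public one on quasi-identifiers. All records are fabricated.
import pandas as pd

# "Anonymized" health records: names removed, quasi-identifiers kept.
health = pd.DataFrame({
    "zip": ["04107", "04107", "02139"],
    "birthdate": ["1961-07-31", "1975-02-14", "1961-07-31"],
    "sex": ["F", "M", "F"],
    "diagnosis": ["hypertension", "asthma", "diabetes"],
})

# Public voter roll: names attached to the same quasi-identifiers.
voters = pd.DataFrame({
    "name": ["J. Doe", "A. Smith"],
    "zip": ["04107", "02139"],
    "birthdate": ["1961-07-31", "1961-07-31"],
    "sex": ["F", "F"],
})

reidentified = health.merge(voters, on=["zip", "birthdate", "sex"])
print(reidentified[["name", "diagnosis"]])  # names re-attached to diagnoses
```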
3. Dependence on AI Systems
The growing reliance on AI systems magnifies the impact of their failures or malfunctions. In sectors like finance, transportation, and healthcare, an AI error can lead not just to financial losses but to human casualties. Traffic incidents involving autonomous driving systems and misdiagnoses by medical AI have already been reported.
The “black box” nature of AI decision-making complicates threat detection and incident response. Deep learning models in particular are difficult to interpret, which makes it harder to analyze and resolve the cybersecurity threats they cause. Notable incidents include Microsoft’s chatbot ‘Tay’, which began producing hate speech within 16 hours of its 2016 launch, and Amazon’s recruitment AI, scrapped in 2018 for systematic bias against female applicants; both show how difficult AI systems are to predict and control.
A Call for Comprehensive Cybersecurity Strategies
The advancement of AI is intensifying traditional cybersecurity threats while spawning entirely new ones. Technical measures alone are no longer sufficient. Governments, businesses, and civil society must adopt holistic strategies that combine technical controls with legal, regulatory, and ethical frameworks.
AI security is no longer a niche technical issue; it is a core component of global cybersecurity and organizational resilience. Companies that wish to stay competitive and safe in the AI era must act now to understand, monitor, and mitigate these complex, evolving risks.