Industry Insights with Kaushik Sinha

Artificial Intelligence & Machine Learning , Finance & Banking , Geo Focus: Asia

Future of Cybersecurity: How to Build an AI-Powered Defense

Navigating Evolving Threats and Emerging Countermeasures in Age of AI
Future of Cybersecurity: How to Build an AI-Powered Defense
Image: Shutterstock

Cybersecurity has become critical in today's world, as breaches can cause severe reputational damage, disrupt large businesses, impact the global economy and even influence political outcomes in the most powerful nations. Common cybersecurity threats include malware, ransomware, phishing, credential theft, insider threats and distributed denial-of-service attacks. Recently, AI-based attacks have also emerged as a significant concern.

See Also: How Active Directory Security Drives Operational Resilience

Cybersecurity refers to technologies, practices and policies aimed at preventing cyberattacks or mitigating their impact. Its goal is to protect computer systems, applications, devices, data, financial assets and individuals from ransomware, malware, phishing scams and data theft. With the world more connected than ever - through wireless, optical, IP and increasingly satellite communications - cybersecurity has become paramount.

All networking technologies, including satellite constellation-based data communication systems, are vulnerable to attacks. Open-source software development, now a common practice, has created new risks, as hackers actively monitor known vulnerabilities and unpatched software. In an app-driven economy, billions of financial transactions occur daily, many involving sensitive personally identifiable information. Banks, government agencies and cloud infrastructures are prime targets, and breaches can trigger unprecedented economic consequences.

Artificial intelligence has become deeply integrated into R&D and business processes across industries. Leading enterprises and high-tech countries use AI for basic tasks such as improving productivity, often with generative AI applications such as ChatGPT, DALL-E and GitHub Copilot. AI tools now influence marketing, finance, IT, customer support and manufacturing operations. As AI usage expands, so do the risks of AI-driven attacks, making it essential for enterprises to develop strategies to address these threats.

AI-Based Threats and Countermeasures

Malicious actors can exploit AI systems through methods such as prompt injection to gain unauthorized access to sensitive information. A typical attack scenario might involve accessing AI-generated output and reverse-engineering it to manipulate the underlying training data. Employees also may inadvertently leak sensitive information by using gen AI tools to compile reports or analyze transcripts, increasing the risk of data exposure. Regular data protection training can help mitigate these risks.

Enterprises should maintain a risk score repository for all AI applications in use and enforce restricted URL filtering to control access. It is essential to assess the data security practices of each AI tool to ensure data is stored securely, preferably on private servers, and not shared with unauthorized third parties. Regular automated logging of employee interactions with AI tools, such as prompts used or data inputs, can prevent data leaks. Data poisoning remains a risk, as threat scenarios evolve. Quality assurance processes are necessary to ensure that AI-generated outputs and training data are free from harmful effects.

AI-driven attacks may also include identity-based social engineering tactics such as impersonation, deepfakes and voice phishing. These techniques can facilitate the installation of malware and ransomware. ChatGPT can generate code for phishing websites or malicious actors easily. To make the situation worse the malicious versions - WormGPT and FraudGPT, available on the dark web - pose additional risks.

Attackers can quickly use prompt engineering to identify vulnerabilities in an organization's defenses. As a precaution, enterprises must establish comprehensive AI policy guidelines and best practices, which can even be generated using AI tools. AI tools may also support with audit and compliance of AI cybersecurity policies. Rigorous evaluations of AI tools should precede their deployment to ensure safe and responsible use.

Basics of AI Algorithms in Cybersecurity

Cyberthreat detection often relies on supervised learning models that use structured data and labeled datasets to recognize known malware signatures. But when new threats emerge, unsupervised learning algorithms analyze patterns within unlabeled data to detect anomalies. Additionally, natural language processing techniques can extract insights from unstructured data sources.

Behavioral analytics models can detect cyberattacks by identifying deviations from established patterns. Gen AI and large language models make it easier to implement these approaches, providing faster threat detection and response capabilities.

AI Tools in Cybersecurity

The landscape of AI-powered cybersecurity tools is evolving rapidly, particularly with the integration of gen AI capabilities. Leading vendors offer solutions that enhance identity protection, threat detection, data security and cloud protection. Frameworks like NVIDIA Morpheus leverage GPUs to optimize applications that filter, process and classify large volumes of streaming cybersecurity data. AI-based solutions strengthen security across data centers, cloud environments and edge networks, supporting real-time phishing detection and device fingerprinting.

Networking architectures are increasingly adopting open interfaces, expanding the attack surface manifold. But AI-based intelligent systems, such as the RAN Intelligent Controller in Open RAN architecture, offer provisions for built-in defenses in terms of various security apps.

Government Regulations Internationally

Many governments around the world are focusing on regulating AI for secure and trustworthy usage. In the U.S., a White House executive order mandates the safe and secure development of AI, requiring safety test results to be reported to relevant agencies. It also mandates disclosures when large resources are used to train AI models. The European Union has gone a step further with the AI Act, which categorizes AI applications by risk level across industries. Stricter norms and mandatory transparency requirements will be enforced based on these risk categories.

In India, the government think tank NITI Aayog has released AI strategy guidelines, offering a framework for AI development and regulation. As India generates vast amounts of data relevant to AI, it will be interesting to see whether stricter AI regulations are introduced soon.

With the respective government regulations coming into force soon, the AI and cybersecurity landscape is steadily heading toward greater safety and success. The rapid pace of AI development resembles Moore's Law, with new tools and features released frequently.



About the Author

Kaushik Sinha

Kaushik Sinha

Head - Mobile Systems, Senior R&D Director, Fujitsu Research India Pvt. Ltd.

Sinha is a global technology leader with extensive experience working in leading wireless technology R&D organizations, including Nokia, Mavenir and HFCL 5G R&D. At Fujitsu Research India, he leads the Mobile Systems Business Unit to design and develop world class 5G Radio Access products.




Around the Network

Our website uses cookies. Cookies enable us to provide the best experience possible and help us understand how visitors use our website. By browsing bankinfosecurity.asia, you agree to our use of cookies.