INNOVATIVE NATIONAL TAX & UPKEEP INTERNATIONAL TALLY PTY LTD
Back to Articles
RiskJanuary 8, 2026

Cybersecurity Risk in an AI-Driven World

PN

Priya Nair

INNOVATIVE NATIONAL TAX & UPKEEP INTERNATIONAL TALLY PTY LTD

Enterprise adoption of generative AI has accelerated faster than the governance frameworks designed to manage the associated risks. As of late 2025, over 70% of Fortune 500 companies have deployed at least one generative AI system in an operational context — processing contracts, generating financial analysis, or assisting customer-facing staff. Each of these deployments creates new attack surfaces that traditional IT risk frameworks built around perimeter defence, access controls, and patch management are ill-equipped to address. The threat landscape has shifted fundamentally, and risk functions need to adapt their assessment methodologies accordingly.

Prompt injection attacks — in which malicious instructions are embedded in content processed by an enterprise AI system, causing it to execute unintended actions — represent one of the most acute near-term risks. Unlike traditional SQL injection, which targets a specific technical interface, prompt injection can be delivered through virtually any document or data input that an AI system is asked to process. A procurement AI asked to summarise a supplier contract could be manipulated by text embedded in the contract itself; a financial analysis tool could be subverted by formatting in a spreadsheet it is asked to review. Defending against prompt injection requires a combination of input sanitisation, output monitoring, and human review of high-stakes AI-assisted decisions — none of which are yet standard practice in most organisations.

Model inversion and membership inference attacks are less immediately operational but represent significant data privacy risks as AI deployment matures. These attacks exploit the statistical properties of trained models to reconstruct elements of the training data — potentially exposing sensitive customer information, proprietary financial data, or confidential employee records that were used to fine-tune internal AI systems. Organisations that have fine-tuned foundation models on proprietary data without implementing differential privacy protections may find that their AI systems inadvertently leak sensitive information to adversaries with sufficient computational resources.

The NIST AI Risk Management Framework, released in updated form in late 2025, provides the most comprehensive publicly available structure for enterprise AI governance. It organises risk management activities around four core functions — Govern, Map, Measure, and Manage — and explicitly addresses the distinct risk profiles of generative versus predictive AI systems. Boards and audit committees should be asking their technology and risk leadership teams for a structured assessment of AI deployments against the NIST AI RMF, and organisations that have not yet conducted such an assessment should treat it as a priority before expanding their AI footprint further.

Written by Priya Nair · January 8, 2026