Australian technology leaders are losing sleep over growing regulatory complexity, rising harm from ransomware, and the challenges of near-ubiquitous artificial intelligence (AI) and deepfakes, a new survey by cybersecurity body ISACA has found.
Generative AI (genAI) and large language models will drive the agenda in 2026, with 64% of Oceania respondents to ISACA's 2026 Tech Trends & Priorities Pulse Poll – which surveyed almost 3,000 global security professionals – naming them as key.
As a transformative force, that places genAI ahead of AI and machine learning (60%), data privacy and sovereignty (34%) and supply chain risk (34%).
Yet for all its promise, genAI has these risk and security professionals worried – with 67% saying that AI-driven cyber threats and deepfakes will keep them up at night in 2026, and only 8% saying they're very prepared to manage its risks.
Some 45% worry most about the "irreparable harm" if they fail to detect or respond to a major breach, while 41% worry about supply chain vulnerabilities like those that hit the likes of Qantas, Dymocks, and British Airways.
Technical issues such as cloud misconfigurations and shadow IT (named by 38% of respondents) are also causing security executives to toss and turn, as are fears that regulatory complexity (36%) will put increasing pressure on security practices.
To address this, respondents named regulatory compliance (58%), business continuity and resilience (52%), and cloud migration and security (48%) as top focus areas – with three-quarters expecting cyber regulations will improve digital trust.
Security leaders "are dealing with constant AI-driven threats, tighter regulation and growing expectations from executives, all while struggling to find and keep the right people," ISACA board vice chair Jamie Norton said.
“It’s a perfect storm that demands stronger leadership focus on capability, wellbeing and risk management.”
The monster under the bed
For an industry that was already highly stressful, the new threats posed by genAI have only made things worse – ratcheting up the pressure on chief information security officers (CISOs) who were already feeling the strain long before it emerged.
ISACA's findings corroborate recent surveys such as Proofpoint's 2025 Voice of the CISO survey of 1,600 CISOs, which found 76% of Australian CISOs have dealt with the material loss of sensitive information over the past 12 months.
With 80% of Australian CISOs feeling that they're held personally accountable when a cybersecurity incident happens – well above the global average of 67% – genAI is only exacerbating what was already a significant source of stress.
It "adds to the pressure on CISOs to secure their organisations in the face of a rapidly changing threat and technological landscape," Proofpoint found, with "expectations high and increasing numbers feeling the pressure and experiencing burnout."
Accounts of the heart attack suffered by former SolarWinds CISO Tim Brown – who not only struggled to clean up the massive 2020 SolarWinds breach but was charged with fraud by the US SEC – have highlighted just how big a human toll the pressure is taking.
The 2026 agenda
AI services and infrastructure are driving a global surge in ICT spending, Gartner recently said, predicting spending will grow 9.8% next year and cross $9 trillion ($US6 trillion) for the first time – much of it driven by genAI technologies.
ISACA Oceania ambassador and ACS Fellow Jo Stewart-Rattray said only 8% of tech leaders feel prepared for the risks of genAI. Photo: Supplied/Information Age
Yet for all their companies' spending plans, executives see managing these technologies as a major part of the challenge, with many respondents to the ISACA survey worried they won't be able to find the staff to help them do so properly.
Some 37% of Australian organisations expect to expand their hiring next year compared to this year, ISACA found – but the third who plan to hire audit, risk and cybersecurity professionals next year expect to have problems finding the right people.
This disconnect was identified in the recent major OECD Science, Technology and Innovation Outlook report, which noted that the security and resilience aspects of Australia's science, technology and innovation policies "are relatively less apparent".
This, then, is the crux of the fears that ISACA survey respondents carry with them – with compliance seen as crucial, but 30% feeling not very or not at all prepared to actually deliver the oversight they need.
"With only 8% saying they feel very prepared for generative AI's risks," ISACA Oceania ambassador and ACS Fellow Jo Stewart-Rattray said,
“There’s an urgent need to balance experimentation and usage with robust oversight.”
This story first appeared on Information Age. You can read the original here.
