Cyera, a data security platform, has announced the launch of its Cyera Research Labs, a team of researchers, scientists, and cloud experts focused on delivering evidence-based guidance at the intersection of AI and data security.
According to Cyera, the new research arm debuts alongside the company's State of AI Data Security Report, based on a survey of more than 900 security professionals. One standout finding: while 83% of enterprises already use AI, only 13% have strong visibility into how it touches sensitive data.
The State of AI Data Security Report highlights the gap between AI advancements and enterprise safeguards.
The report surveyed 921 information technology and cybersecurity professionals. Its central question was whether chief information security officers (CISOs) are equipped to manage artificial intelligence (AI) with the same rigor they apply to users, systems, and data as AI becomes more deeply integrated into the enterprise.
The findings reveal that 83% of respondents already use AI; however, only 13% have strong visibility into how AI interacts with their data, leaving most enterprises unaware of these interactions.
The report indicates that the AI model footprint is highly concentrated, with nearly four in five organizations relying on ChatGPT or OpenAI (79%). Microsoft Copilot (57%) and Google Gemini (41%) follow as the next most utilized tools.
The most common applications of AI include content and knowledge generation (75%) and productivity and collaboration (71%).
While these activities may seem routine, the report warns that even a small team testing an AI tool to draft reports can feed sensitive data from across departments into a single vendor's model, leaving the enterprise dependent on that vendor without appropriate safeguards in place.
Breaking down adoption, 4% of participants said their organization does not use AI and has no plans to, while 11% do not yet use AI but plan to implement it within the next 12 months. A further 55% reported using AI in pilot programs or limited cases, and 28% indicated extensive use, bringing the total share of AI users to 83%.
When asked about visibility into AI usage across the organization, including external AI tools (e.g., ChatGPT, Copilot), embedded AI features in software-as-a-service (SaaS) applications (e.g., Salesforce Einstein, Notion AI), and homegrown AI applications, 7% of respondents reported no visibility at all, 42% minimal visibility, 33% some visibility, 10% good visibility, and only 3% complete visibility. In other words, 49% have little to no visibility into how AI is used, and just 13% have strong visibility.
On the capability to block or restrict risky AI activities, 11% said blocking is fully automated based on policy and 29% block manually when necessary. Meanwhile, 33% are aware of risks but have no controls in place, 15% have no blocking capability, and 9% plan to implement blocking in the future, meaning 57% admitted they cannot currently block risky AI activity.
The survey revealed that securing autonomous AI agents is seen as the most difficult challenge. Ranked by how hard respondents find them to secure, the most challenging AI interaction types are autonomous agents performing actions (76%), external prompts to public large language models (LLMs) (70%), AI embedded in SaaS applications (43%), API-based AI integrations (29%), and internal open-source models (24%).
A mere 7% have a dedicated AI governance team, and just 11% feel fully prepared for regulatory requirements.
