AI Security

The prompt is the new perimeter.

Your team is pasting customer records, source code and API keys into someone else’s model right now. Flowstate inspects every prompt, on every tool, and flags what shouldn’t be leaving the building — before it does.

Prompt inspection · live
Inspecting
ML
Marcus Liu · via Cursor
Blocked
"...help me debug this AKIA5J7Q8B2NVKZRX3QT connection issue..."
AWS access key Redacted before send · key revoked
SC
Sarah Chen · via ChatGPT
Warned
"draft a refund email for j.doe@acme.com regarding order #48201..."
Customer PII Sarah confirmed business use · logged for audit
BW
Bob Wright · via Claude
Logged
"write a best-man speech for my brother’s wedding..."
Personal use 4th this month · surfaced to manager
AP
Alice Park · via DeepSeek (unapproved)
Blocked
"summarise the attached supplier contract..." · contract.pdf
Unapproved provider Provider not on EU data residency list · SOC notified

You blocked the USB ports. The browser tab is wide open.

A decade of DLP investment, MDM rollouts and zero-trust architecture — and your team is now copy-pasting your most sensitive data into a free-tier consumer chatbot every afternoon. Most of it goes uninspected, untracked and unblocked.

34.8%
Of corporate data going into AI is sensitive

Source code, R&D material, sales data, customer records — up from 10.7% two years ago. The growth curve is the story.

Cyberhaven, 2025 AI Adoption & Risk Report →
89%
Of enterprise GenAI usage is invisible to security

The top tools are the ones you'd expect, and those you can see. The long tail of niche AI services your team experiments with isn't visible at all. Security has zero visibility into nearly nine in ten sessions.

LayerX, Enterprise GenAI Security Report 2025 →
#1
GenAI is now the top corporate data exfiltration vector

32% of all corporate-to-personal data movement now happens through GenAI tools — ahead of email, file shares and removable media. 67% of that activity is on unmanaged personal accounts.

LayerX, Enterprise AI & SaaS Data Security Report 2025 →

Provider-level controls only help if the provider is approved. Network-level DLP only helps if you can decrypt the traffic. The only place to catch a prompt before it leaves is at the prompt.

How it works

Inspect every prompt. Classify what matters. Block what shouldn’t leave.

Content-level inspection across every approved and unapproved AI tool your team reaches for. The check happens before the prompt ever reaches the model.

01 — Inspect

"How do you see the prompt at all?"

In-line on managed devices, at the network egress, and via provider APIs where they exist. Approved tools, shadow tools, free-tier consumer plans — every channel your team uses to reach an AI model from a company device.

  • In-line on managed endpoints (Jamf, Intune, Kandji)
  • Network-edge inspection for BYOD and contractors
  • API hooks for approved enterprise plans
Inspection coverage · May 2026 98.4% of sessions
Claude · enterprise API
Full inspection
ChatGPT · in-line endpoint
Full inspection
Cursor · in-line endpoint
Full inspection
Gemini · consumer / free
Endpoint only
Grok · shadow / unapproved
Block on contact
02 — Classify

"What counts as sensitive?"

The obvious things — secrets, PII, payment data — out of the box. The harder things — customer identifiers, internal IP, source code, jurisdictional flags, personal-vs-business use — trained on your data dictionary and refined per organisation.

  • Pre-trained on 60+ secret formats and PII patterns
  • Custom classifiers for your data — SKUs, account IDs, internal repos
  • Confidence scores, not just yes/no — tune your thresholds
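A classifier rule of this shape can be sketched in a few lines of Python. The labels and confidence values below are illustrative assumptions, not Flowstate's actual rule set; the AWS `AKIA` and GitHub `ghp_` prefixes, though, follow those providers' documented key formats.

```python
import re
from dataclasses import dataclass

@dataclass
class Finding:
    label: str
    match: str
    confidence: float

# Labels and per-pattern confidences are illustrative; the AKIA / ghp_
# prefixes follow the providers' documented key formats.
PATTERNS = {
    "aws_access_key": (re.compile(r"\bAKIA[0-9A-Z]{16}\b"), 0.98),
    "github_pat": (re.compile(r"\bghp_[A-Za-z0-9]{36}\b"), 0.97),
    "private_key": (re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"), 0.99),
}

def classify(prompt: str, threshold: float = 0.90) -> list[Finding]:
    """Return every pattern hit whose confidence clears the tunable threshold."""
    hits = []
    for label, (pattern, confidence) in PATTERNS.items():
        for m in pattern.finditer(prompt):
            if confidence >= threshold:
                hits.append(Finding(label, m.group(), confidence))
    return hits
```

Lowering the threshold trades precision for recall. In this sketch the confidence is fixed per pattern; a production classifier would score each individual match in context.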
What we catch
Secrets & keys

AWS, GCP, Azure, Stripe, GitHub, OpenAI tokens, JWTs, private keys.

Customer PII

Names, emails, phone numbers, addresses, account IDs, order references.

Source code & IP

Internal repo paths, proprietary algorithms, unreleased product specs.

Payment data

PANs, IBANs, sort codes, CVVs — PCI-scope content kept out of prompts.

Health & legal

PHI, contract text, M&A material, anything tagged “privileged.”

Personal use

Wedding speeches, holiday plans, side projects — logged not blocked.

03 — Block / Alert

"What happens when it’s a hit?"

A graduated response, decided per classification. Redact and pass for low-risk patterns. Warn and confirm for ambiguous ones. Hard block and notify the SOC for the things that should never have left the building.

Graduated response
Low
Redact & pass
Sensitive token replaced in-line. User unaware.
Med
Warn & confirm
"Send anyway?" with reason logged for audit.
High
Block & coach
Prompt blocked. Approved alternative suggested.
Critical
Block & SOC
Hard block, key rotation, SIEM event, incident opened.
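The four tiers above amount to a policy table keyed by classification. A minimal sketch, with hypothetical classification names and a fail-closed default; none of this is Flowstate's actual configuration format.

```python
from enum import Enum

class Action(Enum):
    REDACT_AND_PASS = "redact_and_pass"    # low: token swapped in-line, user unaware
    WARN_AND_CONFIRM = "warn_and_confirm"  # med: "Send anyway?" with reason logged
    BLOCK_AND_COACH = "block_and_coach"    # high: blocked, approved alternative suggested
    BLOCK_AND_SOC = "block_and_soc"        # critical: hard block, rotation, SIEM event

# Hypothetical classification names; not Flowstate's real policy schema.
POLICY = {
    "customer_pii_partial": Action.REDACT_AND_PASS,
    "customer_pii": Action.WARN_AND_CONFIRM,
    "source_code": Action.BLOCK_AND_COACH,
    "live_secret": Action.BLOCK_AND_SOC,
}

def decide(classification: str) -> Action:
    # Unknown classifications fail closed to a warn, never to a silent pass.
    return POLICY.get(classification, Action.WARN_AND_CONFIRM)
```

The fail-closed default is the design choice worth copying: a classifier label nobody has mapped yet should interrupt the user, not wave the prompt through.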
Marcus pasted a live AWS access key into Cursor at 14:02. The prompt was blocked before it reached the model, the key was revoked via AWS IAM, and a P2 ticket was opened in your SOC queue with the full session context. Total time from paste to revocation: 1.4 seconds.

Eight risks. One inspection layer.

The categories your security team already cares about — finally extended to the channel your team actually uses.

API keys & secrets

AWS, GCP, Stripe, GitHub, internal SSO — 60+ recognised secret formats with automatic revocation.

Customer data & PII

Names, emails, addresses, order references — matched against your CRM dictionary, not a generic regex.

Source code & IP

Internal repo paths, proprietary algorithms, unreleased product material flagged before the paste lands.

Payment data

PANs, IBANs, CVVs — PCI scope kept out of prompts where it doesn’t belong.

Shadow AI providers

Unapproved tools blocked on first contact. Discovery report for security review, not silent denial.

Personal use

Best-man speeches and holiday plans don’t belong on the company budget. Logged, not blocked — surfaced for a chat.

Cross-jurisdiction

EU-resident user prompts routed to US providers, flagged for review. Schrems II compliance with the receipts.

Anomalous patterns

Volume spikes, off-hours bursts, prompt-injection probes, jailbreak attempts. Behaviour, not just content.

Alerts your SOC will actually read.

Routed through the channels your security team already lives in. Severity-tuned. Deduplicated. Linked to the prompt, the user, the device, the classifier hit.

Live secret detected

Critical

Block, revoke at provider, page SOC. Incident opened with full session context, classifier confidence, and rotation status.

Customer data exfiltration

High

PII pattern matched against CRM dictionary. User prompted to confirm business use; full payload logged for the DPO audit trail.

Shadow AI provider seen

Medium

New AI endpoint reached from a managed device. Blocked by default. Discovery ticket opened for security review — not silent denial.

Anomalous pattern

Behavioural

800 prompts from one user in two hours, mostly outside working hours, hitting a tool they’ve never used. Surfaced for a check-in, not a block.
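Volume-spike detection of the kind described above reduces to comparing the current hour against the user's own baseline. A toy z-score sketch, far simpler than a production behavioural model.

```python
from statistics import mean, stdev

def is_volume_anomaly(hourly_counts: list[int], current: int, z: float = 3.0) -> bool:
    """Flag when this hour's prompt count sits far above the user's own baseline.

    A toy z-score check; a real behavioural model would also weigh time
    of day, tool novelty and prompt content.
    """
    if len(hourly_counts) < 2:
        return False  # not enough history to establish a baseline
    mu, sigma = mean(hourly_counts), stdev(hourly_counts)
    if sigma == 0:
        return current > mu  # perfectly flat history: any increase stands out
    return (current - mu) / sigma > z
```

A user averaging a handful of prompts an hour who suddenly fires hundreds trips the check; normal day-to-day variation does not.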

Lives where your security team already lives.

Every detection is a structured event. Pipe it into the SIEM you already pay for. No new pane of glass for the SOC to ignore.

Splunk
Datadog
Sentinel
Sumo Logic
Elastic
Chronicle
PagerDuty
Slack
Webhook + JSON event stream
SOC 2 Type II, ISO 27001
Inspection runs in your region
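Each detection arrives as a single JSON object the SIEM can ingest as-is. The field names below are an illustrative assumption, not Flowstate's documented event schema.

```python
import json

# Hypothetical event shape -- field names are illustrative, not a documented schema.
event = {
    "event_type": "detection",
    "severity": "critical",
    "classifier": "aws_access_key",
    "confidence": 0.98,
    "action": "block_and_soc",
    "user": "marcus.liu",
    "tool": "cursor",
    "device_id": "mdm-4821",
    "redacted_excerpt": "...debug this AKIA**************** connection issue...",
    "remediation": {"key_revoked": True, "ticket": "SOC-1042"},
}

payload = json.dumps(event)     # what the webhook POSTs to your endpoint
restored = json.loads(payload)  # what the SIEM ingests on the other side
```

Because the event is plain JSON, the same payload can fan out to Splunk, Sentinel, PagerDuty or a Slack channel without per-tool translation.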

Stop finding out about leaks from someone else.

Book a demo and watch a live prompt feed from your own organisation — with detections, redactions and blocks running in real time.