
Classifying AI as CapEx vs OpEx

AI spend isn't just a software cost any more. Classifying it correctly unlocks capitalisation, R&D tax relief, and a more honest P&L.

You launched a new AI-powered feature in March. Twelve weeks of work, three engineers, and £12k of OpenAI tokens to prove the architecture. Your finance team wants to know which bucket the AI spend falls into. You don’t know. They don’t know. Your auditor will care a lot, in about six months. This is the play that gets you out ahead of that conversation.

The job

CapEx vs OpEx classification for AI spend is the discipline of treating AI cost the same way you treat engineer hours: as something whose accounting treatment depends on the stage of the project that incurred it.

It’s not a debate about whether AI is “really” CapEx-able. The accounting standards already accept that the labour cost of building software is capitalisable when a project is in its development phase. The same logic applies to the inference cost that produced that software — and to the inference cost of running an AI feature once it’s live, which is OpEx.

The question is mechanical: which project did this spend belong to, and what stage was that project in?

Why it matters

AI spend is growing into a meaningful percentage of the engineering P&L. At a typical mid-market tech company, AI spend is now 5–10% of total engineering cost and rising. That’s well above the materiality threshold most auditors apply. Misclassification at that scale doesn’t just create a clean-up problem at year-end — it materially misstates EBITDA.

Done well, classification also unlocks tax relief. Both the UK RDEC scheme and the US research credit under IRC Section 41 accept qualifying AI spend as part of an R&D claim, provided you can show it was incurred during a development-phase project doing systematic investigation. The same evidence that supports capitalisation supports the R&D narrative.

Done poorly, you leave money on the table — and you create audit risk that compounds with every quarter you don’t fix it.

What good looks like

A working classification has four properties.

The split is automatic, not retrospective. Spend is categorised the moment it’s attributed to a project, based on the project’s current lifecycle state. You shouldn’t be reading Jira tickets at year-end trying to remember whether a feature was in development or operation in May.

It handles the transition cleanly. A new AI feature is in development phase while you’re building it (CapEx) and in operation once it’s launched (OpEx). The transition date matters. Good classification logs the transition explicitly and applies the correct treatment to spend on either side.
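The transition rule is simple enough to state in code. A minimal sketch, assuming a project record that carries an explicit launch date (the `Project` class, field names, and dates here are illustrative, not a real schema):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class Project:
    name: str
    launch_date: Optional[date]  # None until the feature ships


def classify(spend_date: date, project: Project) -> str:
    """CapEx while building; OpEx from the launch date onward."""
    if project.launch_date is None or spend_date < project.launch_date:
        return "CapEx"
    return "OpEx"


# Illustrative dates only; the example in this article just says "March".
video_gen = Project("Video Generation", launch_date=date(2025, 3, 1))
classify(date(2025, 1, 15), video_gen)  # build-phase charge: CapEx
classify(date(2025, 4, 2), video_gen)   # production charge: OpEx
```

Logging the launch date on the project record, rather than inferring it later, is what makes spend on either side of the boundary classifiable without a year-end archaeology exercise.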

The rationale is captured at the time, not reconstructed later. Each classification carries its evidence — which project, what stage, who decided, when. When the auditor asks in March why a particular charge was capitalised, you don’t have to reconstruct it from Slack threads.

Both faces of AI spend are covered. Developer AI spend on a capitalisable project is capitalisable. Production AI spend on a feature still in its development phase is capitalisable. Production AI spend running a launched feature is OpEx. The model has to handle all three.

How to do it

Start with project lifecycle states. If your projects don’t have explicit lifecycle states — discovery, development, launched, operating, deprecated — add them. This is the foundation. Without it, you’re guessing.

Map AI spend to projects through attribution (see the previous play). Once spend has a project, classification is mostly mechanical: development-stage projects get CapEx treatment, operating-stage projects get OpEx, deprecated projects probably need a conversation about why they’re still spending anything.
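Once lifecycle states exist, the mapping is a lookup table, not a judgment call. A sketch under the stage names listed above (treating pre-feasibility discovery work as expensed, which is the standard research-phase treatment; the stage names and the `review` flag are assumptions for illustration):

```python
TREATMENT_BY_STAGE = {
    "discovery": "OpEx",     # research-phase work is expensed, not capitalised
    "development": "CapEx",  # build-phase spend meets capitalisation criteria
    "launched": "OpEx",      # running the live feature
    "operating": "OpEx",
    "deprecated": "review",  # flag for a human: why is this still spending?
}


def classify_spend(project_stage: str) -> str:
    """Mechanical classification from the project's current lifecycle state."""
    try:
        return TREATMENT_BY_STAGE[project_stage]
    except KeyError:
        raise ValueError(f"Unknown lifecycle stage: {project_stage!r}")
```

The point of the explicit `review` value is that a deprecated project with live spend is an anomaly to surface, not a number to silently bucket.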

Handle the boundary cases deliberately. Some projects sit in both buckets at once — a launched feature that's still being meaningfully extended. Some spend belongs to multiple projects. Decide the policy once, write it down, apply it consistently. Don't argue these cases one at a time at quarter-end.
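One way to make the written-down policy concrete: for a launched feature still under meaningful extension, apply a fixed, documented development fraction rather than relitigating each invoice. This is a sketch of one possible policy, not the only defensible one; the function name and the 25% figure are purely illustrative:

```python
def split_mixed_spend(total: float, dev_fraction: float) -> dict:
    """Split a mixed project's spend per a documented policy fraction.

    dev_fraction is set once by policy for the project,
    not renegotiated per invoice line.
    """
    if not 0.0 <= dev_fraction <= 1.0:
        raise ValueError("dev_fraction must be between 0 and 1")
    return {
        "CapEx": round(total * dev_fraction, 2),
        "OpEx": round(total * (1.0 - dev_fraction), 2),
    }


# Hypothetical: £4,000 of monthly spend on a launched-but-extending feature,
# with a documented 25% development fraction.
split_mixed_spend(4000.0, 0.25)
```

Whatever fraction you pick, the defensible part is that it was decided in advance and applied uniformly.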

Generate the rationale text as the spend gets classified, not at year-end. A short paragraph explaining why a particular charge is being capitalised — the project it served, the stage it was in, the work it represented — costs you nothing if it’s automatic, and it’s gold for the audit.
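If classification is automatic, the rationale paragraph can be too. A minimal sketch of generating it at classification time, capturing the evidence the auditor will ask for — project, stage, treatment, who decided, when (function and field names are hypothetical):

```python
from datetime import date


def build_rationale(amount_gbp: float, project: str, stage: str,
                    treatment: str, decided_by: str, decided_on: date) -> str:
    """Emit the audit-trail paragraph at the moment spend is classified."""
    return (
        f"£{amount_gbp:,.2f} of AI spend attributed to '{project}' "
        f"(lifecycle stage: {stage}) classified as {treatment}. "
        f"Decision recorded by {decided_by} on {decided_on.isoformat()}."
    )


# Hypothetical values matching the worked example below.
build_rationale(12000.0, "Video Generation", "development",
                "CapEx", "finance-automation", date(2025, 3, 31))
```

The output is a sentence, not a document, and that's the point: short, timestamped, and attached to the charge, so nobody is mining Slack in March.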

A worked example

The Video Generation feature launched in March. Your team spent twelve weeks building it. £12k of OpenAI charges were incurred during that build, hitting the API key the team uses for the project.

Under sensible classification:

  • The £12k of OpenAI spend during the development phase is capitalisable, because it was directly incurred in producing software that meets the capitalisation criteria.
  • The OpenAI spend from March onward — running Video Generation in production — is OpEx, because it’s the cost of operating launched software.
  • A subset of the development-phase spend may also qualify for R&D tax relief if the work involved systematic investigation of technical uncertainty (it usually did, with a new AI feature).

That’s three different accounting treatments for what looks on the invoice like one number. Good classification splits them automatically. Bad classification puts it all in “AI tools” as OpEx and you lose the capitalisation benefit, the R&D claim, and the audit trail in one go.
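The mechanical split above can be sketched in a few lines, assuming per-charge timestamps and a recorded launch date. The individual invoice lines and the 1 March launch date are illustrative inventions; the article only says the build cost £12k and the feature launched in March:

```python
from datetime import date

LAUNCH = date(2025, 3, 1)  # illustrative; recorded on the project at launch

# Hypothetical OpenAI invoice lines attributed to Video Generation.
charges = [
    (date(2025, 1, 20), 5000.0),
    (date(2025, 2, 14), 7000.0),
    (date(2025, 3, 9), 1800.0),
]

capex = sum(amount for day, amount in charges if day < LAUNCH)
opex = sum(amount for day, amount in charges if day >= LAUNCH)
# capex is the capitalisable build spend; opex is the cost of running
# the launched feature. The R&D claim is then a review of the capex
# subset, not a fresh reconstruction.
```

One invoice number in, two treatments out, with the R&D-eligible subset identified from the same records.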

The pitfall to avoid

Don’t try to apply classification rules to historical spend you can’t attribute. If you can’t trace AI spend back to a specific project for the last twelve months, the right answer is to start fresh from this quarter, not to invent attribution after the fact. Auditors prefer “we started doing this properly in Q3” over “we reconstructed it retroactively.”

Put Workforce Engineering into practice

Flowstate is the Workforce Engineering platform. Connect your systems and start planning with confidence.