
Your risk register is fiction, and your regulator knows it

Norrsent Editor · 9 min read

Your risk register is fiction. The risks aren't imaginary, but the register exists in a different universe from the threats that cause them and the controls that supposedly reduce them.

Walk into most energy firms and you'll find three systems. Security maintains a threat list: ransomware targeting SCADA networks, supply chain compromises in turbine controllers, insider threats at remote substations. Compliance runs a control library: NIS2 technical requirements, ISO 27001 controls, access policies for operational technology. Audit owns the risk register: operational risk, cyber risk, third-party risk, each with an impact score and a heat map. Different owners, different cadences, different tools. Nobody can tell you which specific threat drives which specific risk or which specific control reduces it.

This is enterprise risk management in name only. Three teams write different versions of the truth and hope nobody asks them to reconcile.

The regulators are asking.

The regulatory shift nobody talks about

CRR III, in force across the EU since January 2025, raises capital buffers for banks financing energy infrastructure. Article 74a requires institutions to maintain "comprehensive arrangements" that link operational risk events to their root causes and the controls that failed. The ECB's SREP guidance is clearer: examiners expect to trace a risk back to specific threat scenarios and forward to specific financial consequences. If your register says "Project Delivery Risk: High" but can't show which supply chain bottleneck drives that rating or which contract clause reduces it, you're not compliant.

DORA goes further. Article 6(8) requires firms to document the ICT risk scenario, the threat actor, the affected asset, the consequence, and the mitigating control in a single chain. One connected record. For energy firms managing offshore wind farms or hydrogen production facilities, this means linking the cyber threat to the specific industrial control system, the operational consequence if it fails, and the network segmentation that's supposed to prevent it.

The UK's SS1/23 operational resilience framework uses different language but lands in the same place. Firms must map important business services to the resources that support them, the threats that could disrupt them, and the controls that protect them. The PRA wants to see the chain. If your risk register can't produce it, the register is decoration.

NIS2, which came into force across the EU in January 2023 with member state implementation deadlines through October 2024, hits energy operators directly. Article 21 requires operators of essential services to implement risk management measures that address the security of network and information systems. Regulators can now fine firms up to €10 million or 2% of global turnover for failing to maintain adequate risk management. The adequacy test is whether you can show the link between threat, control, and residual risk. Three disconnected spreadsheets don't pass.

What breaks when systems don't talk

The problem is structural.

Your offshore wind developer discovers a critical firmware vulnerability in the turbine pitch control system. Security logs it in their ticketing system, assigns a CVSS score, and tracks vendor remediation timelines. Compliance updates the control library to note that OT patch management procedures need tightening. Audit hears about it in the next quarterly review and adjusts the "Operational Technology Risk" rating from Medium to High.

Three months later, the board asks: what was the financial exposure? Nobody knows. Security can tell you the technical severity. Compliance can tell you which NIS2 requirement was breached. Audit can tell you the risk rating moved. None of them can tell you that the vulnerability affected 47 turbines representing 340 MW of capacity, that the exposure lasted 89 days during peak wind season, that the temporary network isolation reduced generation efficiency by 12% but prevented remote exploitation, or that the delayed patch cost €2.1 million in lost revenue.

That's a data model failure.

When threats, controls, and risks live in separate systems, you can't answer basic questions:

- Which risks have no documented threat scenario? (Probably most of them.)
- Which controls protect nothing because the risk they were built for no longer exists? (Compliance will fight you on this one.)
- Which high-impact risks have weak controls because nobody mapped severity to coverage? (Audit knows, but the data is in someone's head.)
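Once the three systems share a key, each of these questions collapses into a simple join. A minimal sketch of what that looks like; the records, IDs, and field names here are hypothetical, not any real GRC schema:

```python
# Hypothetical linked records: risks and controls reference each other by ID.
risks = {
    "R1": {"name": "OT firmware exposure", "impact": "High", "threat_ids": ["T1"]},
    "R2": {"name": "Supply chain disruption", "impact": "High", "threat_ids": []},
}
controls = {
    "C1": {"name": "Network segmentation", "risk_ids": ["R1"]},
    "C2": {"name": "Legacy VPN policy", "risk_ids": ["R9"]},  # R9 was retired long ago
}

# Which risks have no documented threat scenario?
orphan_risks = [r_id for r_id, r in risks.items() if not r["threat_ids"]]

# Which controls protect nothing, because the risk they were built for no longer exists?
dead_controls = [c_id for c_id, c in controls.items()
                 if not any(r_id in risks for r_id in c["risk_ids"])]

print(orphan_risks)   # ['R2']
print(dead_controls)  # ['C2']
```

Two list comprehensions. The hard part was never the query; it was getting threats, controls, and risks into one place with shared keys.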

The register becomes a storytelling device. You write the narrative that fits the audience. The board gets a heat map. The regulator gets a compliance checklist. Security gets a threat briefing. Nobody gets the truth, because the truth requires joining three tables that don't share a key.

Take capital project risk in offshore wind. Your risk register says "Supply Chain Disruption: High." Security tracks geopolitical threats to rare earth mineral supplies from China. Procurement monitors lead times for transformer substations, now running 18-24 months. Engineering knows that foundation installation has a six-week weather window in the North Sea. Finance sees the exposure: every month of delay costs €8 million in lost revenue and risks missing the CfD milestone payment.

Those facts exist in four different systems. When the board asks about supply chain risk, someone builds a slide deck. The slide deck is someone's interpretation of four partial truths. When the situation changes (and in energy projects, it changes weekly), the slide deck is already wrong.

What working risk intelligence looks like

A working risk register is a database. Every risk is a record. Every record links to:

Threat scenarios that could trigger it. The specific failure mode, the specific asset, the specific external factor. "Delayed turbine delivery due to transformer substation supply constraint from primary supplier in Germany, lead time extended from 12 to 22 months" is a threat scenario. "Supply chain risk" is a category error.

Consequences if the risk materialises. Financial loss, regulatory breach, service downtime, carbon target miss. Quantified where possible, bounded where not. "€47-63 million revenue loss, CfD milestone breach triggering £12 million penalty, 180,000 tonnes CO₂ emissions from continued coal generation" is useful. "High impact" is not.

Controls that reduce likelihood or impact. The actual control. The dual-supplier contract clause, the buffer stock agreement, the alternative installation vessel on standby. Each control should state what it reduces and by how much, even if the estimate is rough. "Secondary supplier in South Korea, 16-month lead time, adds €3.2 million cost but reduces delay risk by 60%" is a control you can evaluate.

Evidence that the control works. The signed contract, the supplier capacity audit, the last delivery performance report. If you can't produce evidence, you don't have a control. You have a hope.
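The four links above can be sketched as a data model. This is an illustration of the shape, not a product schema; the class and field names are invented, and the figures are the ones from the examples in this article:

```python
from dataclasses import dataclass, field

@dataclass
class ThreatScenario:
    description: str  # specific failure mode, specific asset, specific external factor

@dataclass
class Consequence:
    description: str
    loss_range_eur: tuple[float, float]  # quantified where possible, bounded where not

@dataclass
class Control:
    name: str                 # the actual control, e.g. a dual-supplier contract clause
    reduces: str              # "likelihood" or "impact"
    reduction_estimate: float # rough fraction, e.g. 0.6 for 60%
    evidence: list[str] = field(default_factory=list)  # signed contract, capacity audit

    def is_real(self) -> bool:
        # No evidence means you have a hope, not a control.
        return bool(self.evidence)

@dataclass
class Risk:
    name: str
    threats: list[ThreatScenario]
    consequences: list[Consequence]
    controls: list[Control]

risk = Risk(
    name="Turbine delivery delay",
    threats=[ThreatScenario("Transformer substation lead time extended from 12 to 22 months")],
    consequences=[Consequence("Revenue loss and CfD milestone breach", (47e6, 63e6))],
    controls=[Control("Secondary supplier, South Korea", "likelihood", 0.6,
                      evidence=["Signed contract", "Supplier capacity audit"])],
)
```

Nothing here is sophisticated. The point is that every risk record carries its own threats, consequences, controls, and evidence, so the chain a regulator asks for is a traversal, not an archaeology project.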

This matters for decarbonisation projects where every month of delay has a carbon cost. Your steel plant is replacing blast furnaces with electric arc furnaces. The risk register says "Technology Implementation Risk: High." Fine. Which specific threat drives that rating? Is it the electrode supply chain (single European supplier, 14-month lead time)? The grid connection capacity (requires substation upgrade, planning permission pending)? The scrap metal feedstock quality (current supplier mix produces 8% more slag than design spec)? Each has different controls, different costs, different residual risks.

When threats, controls, and consequences live in one connected register, you can answer the question that matters: if this goes wrong, what happens, and what are we doing about it?

The test you can run this quarter

Pick one risk from your current register. Anything rated High or Critical. Now try to answer these questions using your existing systems:

1. What are the three most likely threat scenarios that would trigger this risk?
2. What is the financial exposure range if each scenario materialises?
3. Which controls reduce the likelihood of each scenario?
4. What evidence do you have that those controls work?
5. What is the residual risk after controls?

If you can answer all five in under an hour without calling three different teams, your register is working. If you can't, you're running fiction.
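Against a linked register, the five questions reduce to lookups. A sketch of the test as code, assuming a risk record shaped roughly like the model described above; the field names are illustrative:

```python
def register_test(risk: dict) -> dict:
    """Run the five-question test against one risk record.

    Returns, per question, whether the record can actually answer it.
    Field names are illustrative, not a real schema.
    """
    controls = risk.get("controls", [])
    evidenced = [c for c in controls if c.get("evidence")]
    return {
        "threat_scenarios": len(risk.get("threats", [])) >= 3,
        "exposure_range": bool(risk.get("consequences"))
                          and all("loss_range" in c for c in risk["consequences"]),
        "controls_mapped": bool(controls),
        "evidence_exists": bool(evidenced) and len(evidenced) == len(controls),
        "residual_risk": "residual" in risk,
    }

# A typical corporate register line: a name, a rating, and nothing behind them.
fiction = {"name": "Hydrogen Production Delay", "rating": "High"}
print(register_test(fiction))  # every answer is False
```

A record that fails all five is a rating with no chain behind it, which is exactly what most High and Critical entries turn out to be.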

Most CROs will get stuck at question one. The risk register says "Hydrogen Production Delay: High" but doesn't link to the electrolyser vendor's manufacturing backlog, the grid connection agreement that's still in legal review, the offtake contract that requires 99.9% purity, or the quality control finding that current production runs at 99.7%. Those facts exist. They live in Engineering's project tracker, Procurement's vendor database, Legal's contract management system, and Quality's audit log. Nobody connected them.

The fix is better data.

Try this with a live capital project. Pick your largest offshore wind development or your hydrogen production facility or your carbon capture retrofit. Ask your project director: what are the top three things that could delay commissioning by six months? They'll tell you immediately. Now ask: where is that documented in the risk register? Can you show me the control and the evidence it's working? Can you show me the financial exposure?

Most project directors keep the real risk register in their head or in a project-specific spreadsheet that Audit has never seen. The corporate risk register has a line item that says "Project Delivery Risk" with a red square. The gap between those two documents is the gap between fiction and intelligence.

Why this matters now

The regulatory window is closing. CRR III is in force. DORA's technical standards drop in full this year. NIS2 enforcement is ramping up across member states. The PRA is already asking for scenario-level detail in SREP reviews. Firms that can produce linked risk-control-evidence chains will spend less time in remediation and more time running the business. Firms that can't will spend the next two years retrofitting their GRC stack while their regulator watches.

The advantage is speed. When a new threat emerges (and in energy infrastructure, they emerge weekly), a working risk register lets you trace impact in minutes. Which projects are exposed? Which controls apply? What's the residual risk? You're querying a database instead of hunting through three systems and hoping someone remembers the connection.

For energy firms managing multi-billion capital programmes with regulatory deadlines and carbon commitments, this is the difference between knowing your exposure and guessing at it. When your offshore wind project hits a supply chain delay, you need to know in real time: what's the revenue impact, what's the carbon impact, which contract clauses apply, what's the mitigation cost, what's the residual exposure after mitigation? If that takes three days and four meetings, your risk register is decorative.

Your regulator can tell which one you're running. So can your board, if they ask the right question. So can your project directors, who are already keeping the real register somewhere else because the corporate system is useless.

The question is whether you fix the system before the regulator makes you or after.

Sectors: Energy