New: Norrsent Copilot for better risk identification and mitigation planning

Norrsent Copilot · Responsible AI for GRC

AI that suggests.
Never decides.

A trusted advisor at arm's length, built to make your risk, compliance, and audit teams more effective without taking decisions out of human hands. Bounded, inspectable, and always under your control.

What Copilot does

Five ways Copilot makes
your programme stronger.

01

Threat identification & alerts

Surfaces relevant threats from the canonical library based on your project context, geography, and operational profile — flagging emerging signals before they become incidents.

02

Control identification & assignment

Suggests appropriate preventive and mitigating controls for each risk, drawn from your library and standards-based templates. Coverage gaps are highlighted for human review.

03

Mitigation planning assistance

Generates mitigation strategies ranked by effectiveness and feasibility — every recommendation reviewed and approved by your team before it enters the register.

04

Project gap analysis

Audits your existing risk register against canonical libraries and standards. Identifies missing risks, stale assessments, and uncovered exposures — delivered as a prioritised action list, not a wall of suggestions.

05

Automated report generation

Drafts board-ready risk summaries, compliance reports, and disclosure narratives from live platform data. Drafts. Always editable, always reviewed, always signed by a human.

Responsible AI · how we built it

Designed around
four principles.

These aren’t marketing commitments. They’re the architectural constraints Copilot was built within — visible in the code, the data model, and the user interface.

01

Human-approved, always

Every Copilot output requires explicit human review and approval before entering the risk register, going to a board, or being submitted to a regulator. We do not engage in automated decision-making.

02

Bounded by your context

Copilot operates within your configured risk appetite, organisation hierarchy, regulatory scope, and approved data sources. It cannot reach beyond the boundaries you set.
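
A rough sketch of the idea (the field names below are illustrative only, not the actual Norrsent configuration schema):

```typescript
// Illustrative sketch only; these field names are not the actual Norrsent schema.
interface CopilotBoundary {
  riskAppetite: "averse" | "cautious" | "open"; // configured appetite ceiling
  orgUnits: string[];         // the slice of the organisation hierarchy in scope
  regulatoryScope: string[];  // e.g. ["CSRD", "GDPR"]
  approvedSources: string[];  // only these sources may inform suggestions
}

// Everything outside the boundary is invisible to Copilot:
const boundary: CopilotBoundary = {
  riskAppetite: "cautious",
  orgUnits: ["group/operations/nordics"],
  regulatoryScope: ["CSRD", "GDPR"],
  approvedSources: ["risk-register", "canonical-threat-library"],
};
```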

03

Inspectable reasoning

Every suggestion shows its sources, the threats it considered, the patterns it matched, and the confidence score it carries. Reasoning is auditable, not a black box.
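
In data-model terms, every suggestion carries its own audit trail. A minimal sketch of that shape (illustrative names, not the actual payload format):

```typescript
// Illustrative sketch of an inspectable suggestion; not the actual payload format.
interface CopilotSuggestion {
  recommendation: string;       // the suggested risk, control, or mitigation
  sources: string[];            // library entries and documents it drew on
  threatsConsidered: string[];  // threat-library IDs that were evaluated
  patternsMatched: string[];    // the patterns that triggered the suggestion
  confidence: number;           // 0 to 1, shown to the reviewer alongside the text
}
```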

04

Never autonomous

Copilot is designed to amplify human judgement, not replace it. There is no operating mode where it makes risk decisions, signs disclosures, or modifies the register without you.

Conscious limitations

What Copilot deliberately
doesn’t do — and why.

Most AI vendors hide their limitations. We publish them. These are the lines Copilot will not cross — by architecture, not by policy. They’re what makes it trustworthy in a category where the consequences of being wrong are real.

01

Never updates the register without explicit approval

Every risk, mitigation, control, or threat suggestion remains a draft proposal until a named human approves it. The register is never written by AI alone — by design, not by configuration.
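
One way to picture the guarantee, sketched as a data model (illustrative, not the production code): a register entry simply cannot be constructed without a named approver.

```typescript
// Illustrative sketch; not the production data model. The point: there is
// no path from an AI draft to a register entry that skips a named human.
type DraftProposal = { status: "draft"; text: string };
type RegisterEntry = { status: "approved"; text: string; approvedBy: string };

function approve(draft: DraftProposal, approvedBy: string): RegisterEntry {
  return { status: "approved", text: draft.text, approvedBy };
}
```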

02

Never makes risk-acceptance decisions

Risk acceptance is the irreducible human responsibility. Copilot can recommend treatment options and surface trade-offs — but the call sits with your risk owner, your committee, or your board.

03

Never finalises external disclosures or audit submissions

CSRD packs, regulatory filings, audit responses — Copilot can draft them. It cannot sign them. The signature on what leaves your organisation is always a person.

04

Never operates without showing its work

If a suggestion has no clear source, no reasoning trail, or no inspectable confidence, Copilot does not produce it. 'Just trust the AI' has no place in this category.

05

Never makes Article 22 (GDPR) decisions

Copilot does not produce decisions that have legal or significant effect on data subjects. Any output that could fall under Art. 22 is routed to human review by design, not by setting.

06

Doesn't pretend to know what it doesn't know

When the threat library is sparse for a domain, when context is ambiguous, or when confidence is low — Copilot says so. Silence beats false confidence in this category.

AI suggests · humans decide · this is non-negotiable

See it in your context

See Copilot work on your risk register, not a generic demo.

REQUEST A DEMO