
Anatomy of a Control: A.8.1 Endpoint Devices, Dissected

Twenty words in the standard. Seven rules to actually demonstrate it. A walk through one ISO 27001 control from first principles to evidence — and the architectural pattern it taught us for the other 92.

This is part of a series on rethinking ISO 27001 compliance from first principles. I’ve used A.8.1 as a recurring example throughout — the DevOps VM problem, weighted scoring, and the proxy measurement question. This article pulls all of that together and walks through one control from first principles to evidence: the entire lifecycle, end to end.

A.8.1 — User Endpoint Devices.

The control requirement, in full: “Information stored on, processed by or accessible via user endpoint devices shall be protected.”

One sentence. Twenty words. Let’s take it at its word and ask: what would it actually take to demonstrate this control is operating effectively? Not “what document do I need?” but “what would I have to show an auditor, right now, to prove my endpoints are managed?”


Decomposing the requirement

“Shall be protected” is not a single measurement. It’s an umbrella over at least seven distinct questions:

  1. Are the devices compliant with your baseline policies? This is the broadest check: OS version, antivirus status, and firewall configuration. A threshold measurement: what percentage of devices are compliant?
  2. Are they encrypted? BitLocker on Windows, FileVault on macOS. Encryption is a high-impact control; if a device is stolen, it is the primary defence for the data at rest.
  3. Are Windows devices onboarded to endpoint detection and response? Not just “is Defender installed” — is the sensor active and reporting? An idle sensor provides zero detection.
  4. Is Conditional Access assigned to all users? Zero Trust relies on blocking noncompliant devices in real time. Coverage must be 100% of users, not just a test group.
  5. Are unmanaged devices controlled through application protection? For BYOD, you need Mobile Application Management (MAM) to separate personal and business data. This includes the “Selective Wipe” capability — the ability to remove only corporate data without affecting a user’s personal photos when they leave the company.
  6. Are macOS endpoints monitored? A growing number of organisations deploy macOS alongside Windows. Both the macOS sensor health and macOS Defender configuration need separate verification — macOS reports encryption and EDR status through different API paths than Windows, and the formats change more frequently.
  7. Are users meeting their physical security responsibilities? Technical checks are only half the battle. Evidence of User Awareness Training (A.6.3) and a signed Acceptable Use Policy (A.5.10) serve as administrative evidence that users know not to leave devices unattended in public or unsecured areas.

Weighting the rules — and the ones that can’t be weighted

If all rules carried equal weight, a failure in EDR onboarding would have the same impact as a failure in encryption coverage. That doesn’t reflect reality. The solution is a two-layer model:

Rule | What it measures              | Type                      | Weight
R1   | Device compliance coverage    | Threshold (≥95%)          | 20
R2   | Encryption coverage           | Threshold (≥95%)          | 20
R3   | Windows EDR onboarding        | Threshold (≥95%)          | 15
R4   | Conditional Access assignment | Threshold (100% of users) | 15
R5   | Mobile App Protection (MAM)   | Threshold (≥95%)          | 10
R6   | macOS sensor health           | Threshold (≥95%)          | 10
R7   | macOS Defender configuration  | Threshold (≥95%)          | 10

In the earlier article on the Evidence Gap, I introduced the concept of “Gatekeeper rules” — binary overrides that force a FAILED status regardless of the weighted score. In practice, I found that high-weight threshold rules achieve the same effect more naturally. A rule weighted at 20 with a 95% threshold functions as a de facto gatekeeper: when it fails significantly, the weighted score drops below the pass threshold on its own. The explicit gatekeeper mechanism turned out to be unnecessary complexity — the weighted model, when calibrated properly, already prioritises the right things. An auditor might note a few missing sensors, but they cannot overlook systemic failures in encryption or access enforcement — and the weights ensure those failures are visible.
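The weighted model can be sketched in a few lines. This is a minimal illustration in Python (the actual pipeline uses PowerShell); the 85% pass mark is an assumed value for demonstration, while the rule weights and thresholds follow the table above. It shows the de facto gatekeeper effect: a single heavy rule failing is enough to sink the overall score.

```python
# Minimal sketch of the two-layer weighted scoring model.
# Weights and thresholds mirror the table above; the 0.85 pass mark
# is an illustrative assumption, not the production value.

def score_control(rules, pass_mark=0.85):
    """Each rule: (weight, measured_coverage, required_threshold)."""
    total_weight = sum(w for w, _, _ in rules)
    earned = sum(w for w, coverage, threshold in rules
                 if coverage >= threshold)
    score = earned / total_weight
    return score, ("PASS" if score >= pass_mark else "FAILED")

# R2 (encryption, weight 20) failing drags the score to 0.80 — below
# the pass mark — even though every other rule passes.
rules = [
    (20, 0.97, 0.95),  # R1 device compliance
    (20, 0.60, 0.95),  # R2 encryption — systemic failure
    (15, 0.96, 0.95),  # R3 Windows EDR onboarding
    (15, 1.00, 1.00),  # R4 Conditional Access
    (10, 0.95, 0.95),  # R5 MAM
    (10, 0.96, 0.95),  # R6 macOS sensor health
    (10, 0.97, 0.95),  # R7 macOS Defender configuration
]
score, status = score_control(rules)
print(round(score, 2), status)
```

Calibrated this way, no binary override is needed: the arithmetic itself makes systemic failures fatal to the score.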


The not-applicable problem

Zero divided by zero isn’t a percentage. If a tenant has no BYOD users, R5 (Mobile App Protection) shouldn’t fail; it should return N/A. The weight is removed from the total, and the pass threshold recalculates. This ensures organisations aren’t penalised for scenarios they don’t face, meeting the Clause 7.5.1 flexibility for organisational size.
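The N/A handling above can be made concrete with a small extension of the scoring sketch — again illustrative Python, not the production PowerShell. A rule with an empty population returns N/A instead of dividing by zero, and its weight drops out of the denominator.

```python
# Sketch of N/A handling: an empty population yields N/A, and N/A
# weights are removed from the total before the score recalculates.

def rule_status(in_scope, passing, threshold):
    if in_scope == 0:
        return "N/A", None          # no population, no measurement
    coverage = passing / in_scope
    return ("PASS" if coverage >= threshold else "FAIL"), coverage

def weighted_score(results):
    """results: list of (weight, status); N/A weights leave the total."""
    applicable = [(w, s) for w, s in results if s != "N/A"]
    total = sum(w for w, _ in applicable)
    earned = sum(w for w, s in applicable if s == "PASS")
    return earned / total

# Tenant with no BYOD users: R5 is N/A, not a failure.
status, coverage = rule_status(in_scope=0, passing=0, threshold=0.95)
print(status)
```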


Exception management

I previously wrote about the DevOps VM problem. The solution involves:

  • Classification: Dynamic groups filtering devices like device.displayName -match "^avms.*uat01$".
  • Transparency: Every excluded device is listed in the evidence report, along with the justification.
  • Review: Annual access reviews to ensure the naming conventions haven’t drifted.
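The classification step can be sketched as follows — a Python illustration of the same regex filter the dynamic group uses (the device names are hypothetical). The key design point is that exclusion and justification are produced together, so the evidence report cannot silently drop devices.

```python
import re

# Sketch of exception classification: devices matching the naming
# pattern are excluded from the denominator, and every exclusion is
# recorded with its justification for the evidence report.
# Device names below are hypothetical examples.

EXCLUSION_PATTERN = re.compile(r"^avms.*uat01$")
JUSTIFICATION = "DevOps UAT VM — excluded per exception register"

def classify(devices):
    in_scope, excluded = [], []
    for name in devices:
        if EXCLUSION_PATTERN.match(name):
            excluded.append({"device": name,
                             "justification": JUSTIFICATION})
        else:
            in_scope.append(name)
    return in_scope, excluded

in_scope, excluded = classify(["laptop-jsmith", "avms-build-uat01"])
print(len(in_scope), len(excluded))
```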

What the evidence output looks like

If you collected A.8.1 evidence right now, the output would contain:

  • A compliance scorecard with per-rule statuses (PASS, FAIL, or N/A).
  • A device inventory that shows compliance, encryption, and the primary user for each device.
  • Encryption and EDR tabs with per-device sensor and protector status.
  • MAM and Selective Wipe evidence for the BYOD environment.
  • Cross-references: Note that while R2 (Encryption) and R3 (EDR) provide evidence for A.8.1, they also constitute technical deep dives into the specialised controls A.8.24 (Cryptography) and A.8.7 (Malware protection). Documenting this prevents an auditor from thinking you’ve missed these separate, rigorous requirements.
  • Administrative evidence: Records of A.6.3 awareness training completion and A.5.10 policy acknowledgements.

Where it breaks

  • API changes: Graph API updates regularly; encryption status formats for macOS have changed multiple times.
  • Timing windows (Staleness): A device that is offline for 60 days retains its last known state. Is it still compliant? The 2022 standard recommends removing or turning off inactive or stale devices within a defined window. For high-sensitivity environments, a 30-day threshold is recommended; for general operations, 90 days or fewer is the standard benchmark.
  • Weight calibration: Deciding which rules should carry enough weight to sink the overall score on their own is a judgment call. I’ve prioritised R1, R2, and R4 because their failure represents systemic exposure.
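The staleness window can be sketched as a simple cutoff check — illustrative Python, with hypothetical device records rather than real Graph output. A device whose last check-in predates the window is flagged for review instead of being counted as compliant on stale data.

```python
from datetime import datetime, timedelta, timezone

# Sketch of a staleness check: devices whose last sync is older than
# the window keep only their last known state, so they are flagged
# rather than trusted. The 30-day window follows the high-sensitivity
# threshold discussed above; device records are hypothetical.

def flag_stale(devices, window_days=30, now=None):
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=window_days)
    return [d["name"] for d in devices if d["last_sync"] < cutoff]

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
devices = [
    {"name": "laptop-a",
     "last_sync": datetime(2025, 5, 30, tzinfo=timezone.utc)},
    {"name": "laptop-b",
     "last_sync": datetime(2025, 3, 1, tzinfo=timezone.utc)},
]
print(flag_stale(devices, window_days=30, now=now))  # ['laptop-b']
```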

The cost of “continuous”

There is a catch. Unlike a policy document, which sits in a folder and doesn’t change until you edit it, architectural evidence is “living.”

APIs drift. Microsoft changes the way Graph returns encryption status for macOS. If your evidence is based on a script or a query, that query needs maintenance. You must treat your compliance evidence like production code. If the “evidence stream” breaks, your compliance “Moat” evaporates. This is the shift from “Compliance as a Project” to “Compliance as an Engineering Discipline.”
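Treating evidence collection as production code means writing defensively against exactly this drift. Here is a sketch in Python of the pattern: normalise several possible response shapes into one answer, and surface anything unrecognised as a collection error rather than a silent pass. The field names are illustrative, not the actual Graph schema.

```python
# Sketch of defensive parsing for a drifting API field. The key names
# below are illustrative stand-ins for the different shapes an
# encryption-status field might take across API versions — not the
# real Microsoft Graph schema.

def encryption_status(record):
    """Normalise several observed shapes to a bool, or None if unknown."""
    for key in ("isEncrypted", "encryptionState", "fileVaultEnabled"):
        if key in record:
            value = record[key]
            if isinstance(value, bool):
                return value
            if isinstance(value, str):
                return value.lower() in ("encrypted", "enabled", "true")
    return None  # unknown shape: raise a collection error, never a pass

print(encryption_status({"encryptionState": "encrypted"}))
```

The `None` branch is the important one: when the format drifts again, the pipeline fails loudly instead of quietly reporting compliant devices.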


What the pattern taught us

A.8.1 was the first control I decomposed this way. It became the template for over a hundred more.

When I applied the same approach to the remaining 92 Annex A controls and the management system clauses (Clauses 4 through 10), the pattern held — but the details varied. Some controls decomposed into seven rules. Others needed only two. Some were fully automated; others were hybrid, requiring API data supplemented by governance evidence; a few were entirely manual, with no API footprint at all.

Three things stayed consistent across every control. First, the decomposition discipline — breaking a broad requirement into specific, measurable questions. Second, the cross-reference discipline — documenting where evidence overlaps with other controls to prevent double-counting. Third, the exception discipline — ensuring that every exclusion from the denominator is transparent, justified, and reviewed.

The result was a hybrid pipeline: 93 PowerShell collection scripts covering the Annex A controls, alongside 30 C# clause report builders covering the management system clauses (7 top-level for Clauses 4 through 10 and 23 for the subclauses). Each one follows the same structure — metadata, scopes, rules, thresholds, weights, evidence output — because the structure was designed once and then applied consistently. The A.8.1 exercise wasn’t just a single control implementation. It was the discovery of a repeatable architecture.
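The repeated structure — metadata, scopes, rules, thresholds, weights, evidence output — can be sketched as a single control definition record. This is an illustrative Python shape, not the actual schema the scripts use; the Graph permission string is one that a device-inventory collector would plausibly need.

```python
# Illustrative sketch of the per-control definition pattern:
# metadata, scopes, rules (with thresholds and weights), and the
# evidence artefacts produced. Field names are assumptions, not the
# production schema.

CONTROL_A_8_1 = {
    "metadata": {"id": "A.8.1", "title": "User Endpoint Devices"},
    "scopes": ["DeviceManagementManagedDevices.Read.All"],
    "rules": [
        {"id": "R1", "measures": "Device compliance coverage",
         "threshold": 0.95, "weight": 20},
        {"id": "R2", "measures": "Encryption coverage",
         "threshold": 0.95, "weight": 20},
        # ... R3–R7 follow the same shape
    ],
    "evidence": ["scorecard", "device_inventory", "exception_register"],
}

# Sanity check: weights must never exceed the 100-point budget.
assert sum(r["weight"] for r in CONTROL_A_8_1["rules"]) <= 100
```

Designing this shape once is what made the remaining 92 controls a matter of filling in rules rather than reinventing structure.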


The question I’ll leave you with

If you implemented this level of evidence structure for just one control, which would you start with?

I started with A.8.1 because it teaches you about your environment. The failure modes — DevOps VMs, proxy measurements, and the discovery of stale data — provide insights that no policy document ever will. The standard states the requirement in twenty words; demonstrating it requires a two-layer scoring model and the engineering rigour to track what’s actually happening in your tenant.


JJ Milner is a Microsoft MVP and the founder of Global Micro Solutions, a managed services provider operating across 1,200+ Microsoft 365 tenants. He writes about rethinking compliance from first principles.

