CCDE  |  CCIE Certified  |  21 Years' Experience

Independent Design Assurance for Complex Programmes

When network, cloud and security transformation programmes are high-risk, delayed or off-track — I come in, assess quickly, and deliver. Vendor-neutral design review, validation and programme recovery across DC migrations, WAN transformations, SD-WAN deployments and hybrid cloud transitions on AWS and Azure.

Design Assurance
Independent review of high and low-level designs against requirements, best practice and scalability.
Programme Delivery & Recovery
When programmes are off-track, I diagnose quickly and deliver a clear path to recovery.
Advisory
Senior independent expertise on demand — giving boards and sponsors genuine assurance.
21 Years' Experience
CCDE Cisco Certified Design Expert
CCIE Cisco Certified Internetwork Expert
7+ Industry Sectors
Sectors I Serve
Financial Services Banking Biopharmaceutical Healthcare & NHS Media Automotive Government & Defence Technology & Telecoms Insurance
About

21 years of delivery discipline across the UK, US, EU and APAC — now available as independent assurance

The problem I solve

Most infrastructure programmes don't fail in delivery — they fail in design. Flawed architectures, unvalidated low-level designs and unchallenged vendor recommendations create problems that surface during implementation or after go-live, when they are most expensive to fix.

I provide the independent technical scrutiny that delivery teams and vendors are not positioned to provide. Where I add exceptional value is programme recovery. When a transformation has stalled, gone over budget or lost stakeholder confidence, I come in, establish the facts quickly, and deliver a clear path forward.

About me

Design Assurance was founded by a CCDE- and CCIE-certified consultant with 21 years of experience across network, security and cloud programmes. My background spans highly regulated, high-security environments, including the financial services, biopharmaceutical, healthcare, media and automotive sectors. I have been contracted via Tier 1 global system integrators since 2016.

I am vendor-neutral. I have no allegiance to any platform, product or integrator. My only interest is in whether your design will work.

Credentials & Experience
CCDE — Cisco Certified Design Expert Active certification. One of the rarest Architecture & Design credentials globally.
CCIE — Cisco Certified Internetwork Expert Active certification. Industry benchmark for network expertise.
AWS Solutions Architect Associate Cloud architecture across hybrid and multi-cloud environments.
AWS Security Specialty Cloud security architecture and compliance frameworks.
PCNSE — Palo Alto Networks Security design across enterprise environments.
21 Years' Experience DC, WAN, cloud, campus and wireless transformation across 7+ sectors in the UK, US, EU, APAC and the Middle East.
9+ Years Independent Consulting Contracted via Tier 1 global system integrators since 2016.

Sectors I serve

Financial Services & Banking
High-stakes infrastructure where regulatory compliance and resilience are non-negotiable.
Biopharmaceutical
GxP-compliant infrastructure in global biopharmaceutical environments with strict security and validation requirements.
Healthcare & NHS
Mission-critical infrastructure across NHS Trusts and healthcare organisations.
Insurance
Highly regulated environments with complex compliance and resilience requirements. Expanding into insurance sector engagements.
Media
High-bandwidth, low-latency environments with complex content delivery requirements.
Automotive
OT/IT convergence, SD-WAN and global WAN transformation programmes.
Defence & Government
Highly regulated, security-cleared environments across the UK and internationally. Expanding into defence engagements.
Technology & Telecoms
Complex network architecture and cloud transformation across technology organisations.
Services

Independent assurance across the full programme lifecycle

I provide vendor-neutral design assurance, programme delivery and advisory services across network, security, cloud, campus and wireless transformation programmes. Every service is delivered by a CCDE- and CCIE-certified consultant with 21 years of hands-on programme experience.

01
Design Assurance
Independent review, validation and sign-off
HLD Review & Validation
Independent review of High-Level Designs against business requirements, scalability and best practice — identifying risks before they become expensive problems.
LLD Review & Validation
Detailed technical scrutiny of Low-Level Designs — checking implementation feasibility, configuration accuracy and production readiness.
HLD & LLD Gap Analysis
Reviewing whether the LLD actually delivers what the HLD promised — a critical and often overlooked failure point in large transformation programmes.
Pre-Deployment Sign-Off
Independent technical sign-off before go-live — giving stakeholders and boards confidence that what is being deployed is fit for production.
HLD Creation
Full authoring of High-Level Designs for DC, WAN, cloud and security programmes — vendor-neutral, fit for purpose, built to survive scrutiny.
LLD Creation
Detailed design documentation built for engineers to actually implement — not just theoretical architecture, but real-world deliverable designs.
02
Programme Delivery & Recovery
Stabilising and delivering complex transformations
Programme Recovery
Stepping into stalled, delayed or failing transformation programmes. Rapid assessment, stabilisation and delivery — whether the issue is architectural, vendor-related or delivery-driven.
Design Assurance Retainer
Ongoing independent design review throughout a programme lifecycle — catching drift between design intent and implementation reality before it becomes a problem.
As-Built Architecture Review
Independent post-deployment review of infrastructure against network design best practices, vendor reference architectures and industry standards — identifying deviations, risks and technical debt before they become operational problems.
03
Advisory
Senior independent expertise on demand
Technical Due Diligence
For private equity, M&A or procurement teams requiring independent assessment of network and cloud infrastructure quality, risk and fitness for purpose.
Vendor & Solution Assessment
Independent evaluation of vendor proposals and solution designs — ensuring clients are not buying an overengineered, unsuitable or commercially compromised solution.
Architecture Advisory
Fractional senior architecture input for organisations that need CCDE-level expertise without a full-time hire — on retainer or project basis.
Engagement Model

Flexible engagements built around your programme

Every programme is different. I scope each engagement to your specific needs and agree a commercial model that reflects the value delivered — not simply the time spent. All engagements begin with a confidential conversation at no obligation.

Ready to discuss your programme?
All initial conversations are confidential. No obligation to proceed.
Request a Design Review
Insights

Technical perspectives from the field

Programme Recovery
Programme recovery — what the first 48 hours actually looks like
When a programme is off-track, the instinct is to escalate, reorganise and replan. In my experience, the first 48 hours should be spent doing something entirely different.
Apr 2026  ·  5 min read →
Design Assurance
Ten design failures that appear in almost every infrastructure programme
After 21 years of independent design review, the same ten failures appear with remarkable consistency. None of them are exotic. All of them are avoidable.
Apr 2026  ·  7 min read →
Design Assurance
Why independent design assurance saves more than it costs
The cost of finding a design flaw before implementation is a fraction of finding it after. Here is the commercial case for independent design assurance.
Apr 2026  ·  5 min read →

Follow on LinkedIn to be notified when each article publishes.

Sneak Preview  ·  Cloud Architecture
Why cloud network design is still network design
Cloud architecture is regularly treated as if the network layer designs itself. After reviewing programmes across regulated industries, this assumption is one of the most consistent sources of post-migration failure.
Apr 2026  ·  6 min read →
Sneak Preview  ·  Design Practice
What CCDE teaches you that CCIE never could
CCIE tests whether you can implement. CCDE tests whether you should. After holding both for years and using both daily, the distinction changes how you read every design document you review.
May 2026  ·  6 min read →
Sneak Preview  ·  Vendor Assessment
How to evaluate a vendor proposal without being sold to
Vendor proposals are built to sell, not to inform. After reviewing hundreds of them, I have a structured approach to cutting through the marketing and assessing what is actually being proposed.
May 2026  ·  5 min read →
Design Assurance

Five things wrong with every HLD I have ever reviewed

After 21 years of reviewing High-Level Designs across financial services, pharmaceutical, healthcare and automotive programmes — a pattern emerges. The same mistakes appear, in different organisations, on different programmes, with different vendors. Here is what I find almost every time.

1. The design is written to justify a decision already made

The most common HLD failure is not technical — it is political. By the time a formal HLD is produced, the vendor has usually been selected, the commercial agreement is in place, and the architecture is effectively locked. The HLD is then written to document that decision, not to evaluate it.

A genuine HLD should explore options. It should compare approaches, document the rationale for choices made, and acknowledge trade-offs. When every section says "the proposed solution is X" without any consideration of why X was chosen over Y or Z, you are not reading a design document — you are reading a sales proposal with an HLD cover sheet.

2. Scalability is assumed, not designed

Almost every HLD I review states that the solution is "scalable." Very few of them define what scalable means in the context of that specific organisation, or demonstrate how scalability will be achieved.

Scalability is not a feature of a vendor platform. It is a property of a specific design, in a specific environment, against specific growth assumptions.

A credible HLD should define current state traffic volumes, projected growth over the programme lifecycle, and demonstrate through architectural choices — not vendor marketing claims — how the design accommodates that growth without fundamental re-architecture.
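To make this concrete, here is a minimal sketch of the kind of growth check an HLD should state explicitly rather than leave implied. The figures, the growth rate and the 70% usable-headroom threshold are illustrative assumptions only, not guidance from any real programme:

```python
# Illustrative only: a back-of-envelope saturation check that an HLD's
# scalability section should make explicit. All figures are hypothetical.

def years_until_saturation(current_gbps, annual_growth, capacity_gbps, headroom=0.7):
    """Whole years before projected peak traffic exceeds usable capacity.

    headroom: fraction of line rate treated as usable, leaving margin
    for bursts and failover traffic (0.7 is an assumed planning figure).
    """
    usable = capacity_gbps * headroom
    years = 0
    traffic = current_gbps
    while traffic < usable:
        traffic *= (1 + annual_growth)
        years += 1
    return years

# e.g. 12 Gbps peak today, 30% annual growth, 40 Gbps aggregate uplinks
print(years_until_saturation(12, 0.30, 40))
```

If a number like this falls inside the programme lifecycle, "scalable" is a claim the design has to answer, not a word it gets to use.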

3. Resilience is described at component level, not system level

HLDs routinely describe component redundancy — dual uplinks, HA pairs, RAID arrays. What they rarely describe is system-level resilience: what happens when multiple components fail simultaneously, how failover behaves under real traffic conditions, and whether recovery time objectives are actually achievable with the proposed design.

Component redundancy and system resilience are different things. A design with fully redundant components can still produce a single point of failure at the system level if those components share a common dependency.
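As a toy illustration of the shared-dependency point, consider two "redundant" firewalls that both sit on the same power feed. The component names are entirely hypothetical:

```python
# Toy model: redundant components that share a dependency still produce
# a system-level single point of failure. Names are illustrative only.
deps = {
    "fw-a": {"power-feed-1", "mgmt-switch-a"},
    "fw-b": {"power-feed-1", "mgmt-switch-b"},  # shares the power feed
}

def shared_dependencies(pair, deps):
    """Dependencies common to both members of a 'redundant' pair."""
    a, b = pair
    return deps[a] & deps[b]

# A non-empty result is a system-level single point of failure
# hiding behind component-level redundancy.
print(shared_dependencies(("fw-a", "fw-b"), deps))
```

An HLD that claims resilience should be able to show that this set is empty for every redundant pair, across power, management, control plane and physical path.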

4. Security is an afterthought bolted on at the end

Security sections in HLDs are frequently one page long, written last, and describe controls at a generic level that could apply to any organisation. Firewall policy: "traffic will be controlled by firewall rules." Encryption: "data in transit will be encrypted." These statements are not design — they are intentions.

Security architecture should be woven through every section of the HLD, not confined to a single section at the end. Segmentation, access control, inspection points, logging and monitoring should all be visible in the network diagrams and architecture choices — not described generically in a separate section.

5. There is no HLD-to-LLD validation plan

The HLD is approved. The LLD is written. Nobody checks whether the LLD actually delivers what the HLD promised. In my experience, this is where programmes most commonly go wrong — the gap between what was agreed at HLD stage and what is actually specified in the LLD.

Every HLD should define how its requirements will be traced into the LLD, and who is responsible for validating that trace. Without this, the HLD becomes a contractual document that no one is responsible for delivering.
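One mechanical part of that trace can be sketched in a few lines. This is an illustrative check only, assuming requirements carry IDs like REQ-001; a real trace still needs a human to confirm the LLD implements each requirement's intent, not merely that it mentions the ID:

```python
# Hypothetical sketch: flag HLD requirement IDs that the LLD never
# references. ID format and document text are illustrative assumptions.
import re

def untraced_requirements(hld_text, lld_text):
    """HLD requirement IDs with no reference anywhere in the LLD."""
    hld_reqs = set(re.findall(r"REQ-\d+", hld_text))
    lld_reqs = set(re.findall(r"REQ-\d+", lld_text))
    return sorted(hld_reqs - lld_reqs)

hld = "REQ-001 dual-vendor WAN edge. REQ-002 sub-50ms failover. REQ-003 full flow logging."
lld = "Section 4 implements REQ-001. Section 6 covers REQ-003."
print(untraced_requirements(hld, lld))
```

Anything this kind of check surfaces is exactly the gap an independent HLD-to-LLD review exists to close before implementation begins.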

What good looks like

A well-written HLD makes reviewers' jobs boring. Every architectural choice is explained. Trade-offs are acknowledged. Scalability and resilience are demonstrated, not claimed. Security is integrated, not appended. And there is a clear path from HLD to LLD to implementation.

If your HLD makes for interesting reading — if reviewers are discovering things — it is not ready for approval.

What I consistently find in HLD review
  • The design has only been reviewed by the team that wrote it. Internal review is not independent review. The same assumptions, the same blind spots, the same vendor relationships.
  • Scalability is based on current load, not projected growth. Most HLDs are designed for today. Few account for the traffic, user or data growth that will arrive within 18 months of go-live.
  • Security is a layer, not a design principle. Firewall rules and access control added at the end of the design process rather than built into the architecture from the start.
  • Vendor recommendations favour the integrator's margin. The design almost always leads to the product the delivery partner already knows, already has stock of, and already makes the most margin on.
  • The LLD diverges significantly from the approved HLD. By the time implementation begins, the HLD is often a historical document — evolved without a formal record of what changed and why.
Want an independent review of your HLD?
All initial conversations are confidential. No obligation.
Request a Design Review
Programme Recovery

Programme recovery — what the first 48 hours actually looks like

When a programme is declared off-track, the instinct is to escalate, reorganise and replan. In my experience, the first 48 hours should be spent doing something entirely different.

Programmes get into trouble for many reasons, but they all share a common symptom: the closer you are to the delivery, the clearer the picture becomes. The people at the top see red RAG statuses and missed milestones. The engineers at the bottom know exactly what is wrong and why.

What the first 48 hours should actually look like

The first 48 hours should be spent reading everything: the original HLD, the current LLD, the change log (if one exists), the programme risk register, and any previous review or assurance reports. Not to form conclusions — to understand the gap between what was designed and what is being built.

Programme managers will tell you about milestones. Engineers will tell you about reality. In those first 48 hours, I spend as much time as possible with the engineers. Not in workshops. Not in status meetings. In direct, informal conversations about what is actually happening.

The three questions that matter

After 21 years of infrastructure programme delivery and recovery, I have learned that almost every programme failure can be traced to the same three questions not being answered honestly:

Is the design actually fit for purpose? Not whether it was approved — whether it will work in production, at scale, under the conditions the business actually operates in.

Does the delivery team have what they need? Not whether the project plan says they should — whether they actually have the skills, the access, the environments and the clarity of requirement to deliver what is expected of them.

Does anyone have an accurate picture of where things really are? Not the RAG status in the programme report — the actual technical state of the implementation and the real gap between current state and go-live readiness.

What recovery actually requires

Programme recovery is not primarily a project management exercise. It is a technical exercise. The root cause of most infrastructure programme failures is a design problem — either the original design was flawed, or the implementation has diverged from the design, or both.

Identifying and fixing that technical root cause is what makes recovery possible. Reorganising the governance structure, re-baselining the plan and running more frequent status meetings does not fix a design problem. It just makes the failure more visible and better documented.

What recovery actually requires is an honest assessment of the technical state, a clear understanding of the gap between current state and a viable go-live position, and a realistic plan for closing that gap. Not an optimistic plan. Not a plan that tells the board what they want to hear. A plan that reflects what is actually achievable.

The role of independent assessment

One of the reasons programme recovery is difficult is that the people closest to the delivery are often the least well-placed to assess it objectively. They are under pressure, they are invested in the decisions that have already been made, and they may have been telling the programme board that things are better than they are.

Independent assessment removes that constraint. I have no stake in the decisions that led to the current position. I have no relationship with the vendors whose products are causing problems. I have no interest in any particular outcome other than whether the programme can recover to a viable go-live position.

That independence is not just useful — it is often the only way to get an honest picture of where a programme actually is. And without an honest picture, recovery is not possible.

Slow down. Establish the facts. Only then build the plan.

What I consistently find in programme recovery
  • The root cause is almost always in design, not delivery. Delivery teams get blamed for programme failures that were baked in at architecture stage. The implementation was fine — what it was implementing was not.
  • The first 48 hours are wasted on escalation rather than assessment. Senior stakeholders want action plans before anyone has established the facts. Visible activity substitutes for diagnosis.
  • Recovery plans are built around activity, not outcomes. Lists of workstreams and RAG statuses that tell you what people are doing — but not whether any of it moves the programme toward recovery.
  • The programme team already knows what is wrong. In almost every recovery engagement, the people closest to the delivery have identified the core problems. They have not been asked — or have not felt safe to say.
  • Scope has drifted without formal change control. What is being delivered bears little resemblance to what was signed off. The gap is rarely documented, rarely costed and rarely acknowledged.
Is your programme off-track?
All initial conversations are confidential. No obligation.
Discuss Your Programme
Design Assurance

Ten design failures that appear in almost every infrastructure programme

After 21 years of independent design review, the same ten failures appear with remarkable consistency across every sector, every vendor and every programme size. None of them are exotic. All of them are avoidable.

1. The design is reviewed only by the team that wrote it

This is the most common and most costly failure. Internal review is not independent review. The same assumptions that shaped the design shape its review. A design review conducted by the delivery team is not a review — it is a confirmation.

2. Scalability is designed for today, not tomorrow

Most HLDs are sized for current load. Few seriously model the traffic, user growth or data volumes that will arrive 12 to 18 months after go-live. When those loads arrive, the architecture cannot accommodate them without significant rework.

Read the full article — Ten design failures →

Want to know if your programme has any of these?
All conversations are confidential. No obligation.
Request a Design Review
Design Assurance

Why independent design assurance saves more than it costs

There is a conversation I have regularly with programme sponsors and IT directors. The question is not whether you can afford independent design assurance. The question is whether you can afford not to have it.

In this article
  • The cost of a design flaw discovered late
  • What independent actually means
  • Before committing to a vendor solution
  • Before significant capital expenditure
  • The commercial case in simple terms

The cost of independent design review is fixed and known upfront. The cost of the problems it prevents is open-ended and frequently enormous.

Publishing
Thursday 24 Apr 2026
✉ Notify me
Want independent design assurance on your programme?
All initial conversations are confidential. No obligation.
Request a Design Review
Cloud Architecture

Why cloud network design is still network design

Cloud architecture is regularly treated as if the network layer designs itself. After reviewing programmes across financial services, biopharmaceutical and healthcare, this assumption is one of the most consistent sources of post-migration failure.

In this article
  • What cloud network design actually involves
  • The AWS Landing Zone — what the network design actually determines
  • Latency — the number that cloud architecture gets wrong most often
  • Security architecture in cloud environments
  • What independent cloud network review looks for

The network is not a commodity. In complex enterprise environments, it is the architecture. Everything else depends on it.

Publishing
Thursday 9 Apr 2026
✉ Notify me
Want independent review of your cloud network architecture?
All initial conversations are confidential. No obligation.
Request a Cloud Review
Design Practice

What CCDE teaches you that CCIE never could

CCIE tests whether you can implement. CCDE tests whether you should. After holding both for years and using both daily, the distinction changes how you read every design document you review.

In this article
  • What CCIE tests
  • What CCDE tests
  • The gap between implementation knowledge and design expertise
  • Why this matters for independent design assurance
  • What design thinking finds that implementation thinking misses

An excellent CCIE-certified engineer can implement almost any network architecture you put in front of them. That is not the same as being able to determine which architecture should be in front of them in the first place.

Publishing
Thursday 16 Apr 2026
✉ Notify me
Want to understand what independent design thinking delivers?
All initial conversations are confidential. No obligation.
Start a Conversation
Vendor Assessment

How to evaluate a vendor proposal without being sold to

Vendor proposals are built to sell, not to inform. After reviewing hundreds of them across financial services, biopharmaceutical, healthcare, media and automotive programmes, I have developed a structured approach to cutting through the marketing language and assessing what is actually being proposed.

In this article
  • Why vendor proposals are structurally biased
  • The questions every proposal should answer — and rarely does
  • How to assess a technical architecture section honestly
  • Commercial terms that favour the vendor, not the client
  • When to walk away

The vendor who wins the proposal evaluation is not always the vendor who delivers the best outcome. They are the vendor who wrote the best proposal.

Publishing
Thursday 23 Apr 2026
✉ Notify me
Need independent vendor assessment now?
All conversations are confidential. No obligation.
Request an Assessment
Weekly Insights
New articles published weekly
Technical perspectives on network, security and cloud design — from 21 years of programme delivery.
Contact

Start a confidential conversation

Get in touch

Whether you need a design review, have concerns about a programme in flight, or want an independent view, get in touch. Confidential, no obligation — just clear, expert insight.

Geography
UK · US · EU · APAC · Middle East — Remote & On-site

Design Assurance is a trading name of Network Fabric Limited. All engagements are conducted under NDA. Contracting arrangements are flexible — direct, via your preferred system integrator, or through an existing framework.

Send a message
What happens next

I'll respond within one business day. Initial conversations are typically 20–30 minutes by call or video. No commitment required — if I can help, I'll say so. If I can't, I'll say that too.