NDA-Safe Case Study: Payments & FinTech Datacenter Exit to an AWS Landing Zone

Zero-downtime datacenter exit for a FTSE 250-listed payments processor.

Secure, orchestrated migration from end-of-life on-premises datacenters to an automated AWS Landing Zone, without dropping a single live transaction.

Key Outcomes
Zero Downtime
Zero customer-impacting incidents during database cutover and DNS shift
100% IaC Coverage
Every resource defined as code: network routes, IAM, security groups
6-Month Delivery
Full datacenter exit completed six months ahead of hardware end-of-life
PCI-DSS Compliant
Compliance mapped, inherited by default via SCPs and AWS Config rules

"They mapped our physical security controls into automated policy-as-code, getting us out of our legacy datacenters six months ahead of schedule with zero impact to customers."

~6mo
Engagement length
Multi-region
Multi-account AWS Landing Zone footprint
PCI-DSS
Compliance framework, SOC 2 aligned
100%
Infrastructure as Code — Terraform & GitOps delivery
Live
In production — zero rollbacks post-cutover
Executive Summary

Modernised in transit, not lifted and shifted.

For a FTSE 250-listed payments processor, cloud migration is rarely about reducing costs. It is about eliminating the catastrophic risk of end-of-life physical hardware. Many consultancies copy virtual machines into the cloud and call it a migration. We used this datacenter exit as a catalyst to overhaul the entire deployment pipeline, security posture, and operational model simultaneously.

Capability: Legacy On-Premises → AWS Landing Zone
  • Infrastructure Provisioning: Weeks of procurement, racking, and manual IP assignments → Minutes, fully automated via versioned Terraform pipelines with peer-reviewed pull requests.
  • Network Security: Fragile physical firewall appliances with manual rule management → Zero Trust architecture, with AWS Transit Gateway and Network Firewall deep packet inspection.
  • Database Resiliency: Active/passive physical servers with slow, manual failover procedures → Multi-AZ Aurora, with sub-second automated failover across multiple Availability Zones.
  • Compliance Posture: Manual, point-in-time auditor checks; compliance was a periodic event, not a continuous one → Continuous compliance, with AWS Config rules flagging non-compliant resources within seconds of a change.
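The "continuous compliance" row above can be sketched in code. The snippet below is an illustrative AWS Config custom-rule evaluator in the Lambda-handler style; the resource shape and annotations are hypothetical stand-ins, since a real rule receives a ConfigurationItem from AWS Config and reports results via `put_evaluations`.

```python
# Illustrative AWS Config custom-rule evaluator (Lambda-handler style).
# The resource shape below is a hypothetical stand-in for the real
# ConfigurationItem that AWS Config delivers to the rule.

def evaluate_compliance(configuration_item: dict) -> dict:
    """Flag S3 buckets whose ACL grants public (AllUsers) access."""
    if configuration_item.get("resourceType") != "AWS::S3::Bucket":
        return {"complianceType": "NOT_APPLICABLE",
                "annotation": "Rule only evaluates S3 buckets"}
    grants = (configuration_item.get("configuration", {})
              .get("acl", {})
              .get("grantList", []))
    if any(g.get("grantee") == "AllUsers" for g in grants):
        return {"complianceType": "NON_COMPLIANT",
                "annotation": "Bucket ACL grants access to AllUsers"}
    return {"complianceType": "COMPLIANT",
            "annotation": "No public ACL grants found"}
```

Because the evaluation runs on every configuration change, a non-compliant resource is flagged within seconds rather than at the next annual audit.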
Strategic Architecture Overview

Hybrid Transition & Cutover

Instead of a risky "big bang" cutover, traffic was incrementally shifted across the highly secure Direct Connect backbone, with both environments running in parallel throughout the transition.

Near-Zero RPO
Live transaction data was continuously mirrored via CDC. When the cutover happened, data parity was verified within milliseconds before any traffic was shifted.
The Strangler Fig
Features were extracted from the monolith, rewritten for EKS, and traffic was weighted incrementally at the GSLB/DNS level. There was no big bang moment.
Secure Transit Backbone
Direct Connect ensured that all transit between the legacy datacenter and AWS never touched the public internet, satisfying a hard PCI-DSS requirement.
flowchart LR
    DNS["DNS / GSLB\nWeighted Routing"]

    subgraph OnPrem ["Legacy On-Premises"]
        direction TB
        OnPremLB["Load Balancer"]
        OldApp["Monolithic Application"]
        OldDB[("Oracle RAC\nDatabase")]
        OnPremEdge["DC Edge Router"]
        OnPremLB --> OldApp
        OldApp --- OldDB
        OldApp --> OnPremEdge
    end

    subgraph Bridge ["Secure Transit Backbone"]
        direction TB
        DX["AWS Direct Connect"]
        DMS["AWS Database\nMigration Service"]
    end

    subgraph Cloud ["AWS Landing Zone"]
        direction TB
        CloudLB["AWS Load Balancer"]
        TGW["AWS Transit Gateway"]
        EKS["Amazon EKS"]
        Aurora[("Aurora\nPostgreSQL")]
        CloudLB --> TGW
        TGW --> EKS
        EKS --- Aurora
    end

    DNS -->|"100% → 0%"| OnPremLB
    DNS -.->|"0% → 100%"| CloudLB

    OnPremEdge <-->|"Private"| DX
    DX <--> TGW

    OldDB -.->|"CDC Stream"| DMS
    DMS -.->|"Near-zero RPO"| Aurora

    style DNS fill:#1e1e2e,stroke:#475569,color:#fff
    style DX fill:#6b21a8,stroke:#7c3aed,color:#fff
    style DMS fill:#6b21a8,stroke:#7c3aed,color:#fff
    style EKS fill:#4c1d95,stroke:#7c3aed,color:#fff
    style Aurora fill:#1a1a2e,stroke:#7c3aed,color:#fff
    style TGW fill:#1a1a2e,stroke:#7c3aed,color:#fff
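The weighted, health-gated traffic shift shown in the diagram can be sketched as a simple control loop. This is a minimal illustration only: the step size and error-rate SLO are hypothetical values, and in production the weights would be applied through the GSLB/DNS provider's API rather than computed locally.

```python
# Minimal sketch of a phased, health-gated DNS weight shift.
# Step size and SLO threshold are hypothetical illustration values.

ERROR_RATE_SLO = 0.01  # abort the shift if cloud error rate exceeds 1%

def next_weights(current_cloud_pct: int, cloud_error_rate: float,
                 step: int = 10) -> tuple[int, int]:
    """Return (on_prem_pct, cloud_pct) for the next cutover phase.

    Advances the cloud weight by `step` while the error-rate SLO holds;
    rolls back to 0% cloud traffic the moment the SLO is breached.
    """
    if cloud_error_rate > ERROR_RATE_SLO:
        return (100, 0)  # automated rollback: all traffic to on-prem
    cloud = min(100, current_cloud_pct + step)
    return (100 - cloud, cloud)
```

The rollback branch is what makes the phased approach safe: a breached SLO sends 100% of traffic back to the legacy stack with no manual decision in the loop.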



Architecture Overview

The Secure Migration Stack

Tools are listed to prove delivery depth while keeping network topology NDA-safe. This represents a modern, regulated hybrid-cloud transition state.

Network Layer
The Transit Backbone
  • AWS Direct Connect (DX) Private connectivity with consistent latency and deterministic routing, bypassing the public internet entirely.
  • AWS Transit Gateway Central cloud router, strictly isolating production from non-production traffic with route-table segmentation.
  • AWS Network Firewall Deep packet inspection and strict egress filtering with managed rule groups for compliance evidence.
Modernisation Layer
The Modernisation Engine
  • AWS Database Migration Service (DMS) Continuous real-time CDC replication from legacy Oracle RAC into cloud-native Aurora PostgreSQL.
  • Amazon EKS Hardened, compliant container environment hosting newly extracted microservices with Kyverno policy guardrails.
  • Governance Framework Multi-account strategy with preventative SCPs, AWS Config, and immutable CloudTrail logging across all accounts.
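The CDC replication that DMS performs can be reduced to a simple idea: replay an ordered stream of change events onto the target. The sketch below illustrates that apply logic under an assumed event shape (`op`/`key`/`row`); DMS uses its own internal format, so this is a conceptual model, not the service's API.

```python
# Conceptual sketch of CDC (change data capture) apply logic, the
# mechanism DMS uses to keep the Aurora target in step with the Oracle
# source. The event shape here is a hypothetical simplification.

def apply_cdc_events(target: dict, events: list[dict]) -> dict:
    """Replay ordered change events onto a target keyed by primary key."""
    for event in events:
        op, key = event["op"], event["key"]
        if op in ("insert", "update"):
            target[key] = event["row"]
        elif op == "delete":
            target.pop(key, None)
    return target
```

Because events are applied in commit order, the target converges on the source's state, which is what keeps the replication lag (and therefore the RPO) near zero.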
Risk Mitigation

Enterprise Risk & Compliance Controls

In a payments environment, moving the data is only half the job. Proving it moved securely, without corruption or compliance violations, is what satisfies the regulators.

Execution
Execution Controls
  • Cutover Strategy: Phased, weighted DNS routing allowing rapid rollback via TTL controls and health checks if latency or error rates deviated.
  • Data Integrity: CDC validation and reconciliation windows executed prior to DB cutover, with row counts, checksums, and payment-ledger parity verified before any traffic was shifted.
  • Observability Gates: Cutover guarded by error-rate and latency SLOs, automated alarms, and pre-signed runbooks for instant rollback with no manual decision required.
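The data-integrity gate above (row counts, checksums, ledger parity) can be sketched as a reconciliation check. The hashing scheme below is a hypothetical stand-in for whatever the real pipeline used; the point is that cutover proceeds only when counts and checksums agree exactly.

```python
# Illustrative pre-cutover reconciliation: row counts and an
# order-independent checksum must match between source and target
# before any traffic is shifted. The hashing scheme is a stand-in.
import hashlib

def table_checksum(rows: dict) -> str:
    """Order-independent checksum over primary key + row contents."""
    digest = hashlib.sha256()
    for key in sorted(rows):
        digest.update(f"{key}:{sorted(rows[key].items())}".encode())
    return digest.hexdigest()

def reconcile(source: dict, target: dict) -> bool:
    """True only if row counts and table checksums agree exactly."""
    return (len(source) == len(target)
            and table_checksum(source) == table_checksum(target))
```

A single divergent row fails the gate, which is exactly the behaviour a payments regulator wants to see evidenced.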
Governance
Governance Evidence
  • Security Boundaries: Enforced tokenisation perimeters, KMS-backed key management, and central SIEM logging for all transit events.
  • Audit Artefacts: Automated generation of compliance evidence including AWS Config snapshots, immutable CloudTrail logs, and approved IaC Pull Requests, packaged for PCI-DSS assessors.
  • SCPs as Guardrails: Service Control Policies prevent any account from disabling logging, creating public S3 buckets, or bypassing CloudTrail.
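A guardrail SCP of the kind described above looks like a standard deny-based IAM policy document. The example below is illustrative rather than the engagement's actual policy: it denies every principal in the account the ability to stop, delete, or modify CloudTrail logging (the `Sid` is made up; the action names are real CloudTrail IAM actions).

```python
# Illustrative SCP guardrail: no principal in the organization may
# disable CloudTrail logging. Policy shape follows the standard
# IAM/SCP JSON structure; the Sid is a made-up example name.
import json

GUARDRAIL_SCP = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyDisablingCloudTrail",
            "Effect": "Deny",
            "Action": [
                "cloudtrail:StopLogging",
                "cloudtrail:DeleteTrail",
                "cloudtrail:UpdateTrail",
            ],
            "Resource": "*",
        }
    ],
}

print(json.dumps(GUARDRAIL_SCP, indent=2))
```

Because SCPs apply even to account administrators, the deny holds regardless of who attempts the action, which is what makes the audit trail immutable in practice.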
Operating Model

The Operational Transformation

Moving to the cloud means moving from a reactive IT mindset to proactive, automated DevOps at every layer.

What Changed
From CapEx to Velocity
  • CapEx to OpEx: Shifted from buying five years of hardware capacity upfront to paying for exact compute usage by the second.
  • Infrastructure as Code: Every network route, security group, and IAM role is now peer-reviewed in Git before deployment. No console cowboys.
  • Automated Resilience: Human element removed from disaster recovery with self-healing EKS clusters and automated DB failover built-in from day one.
Deliverables
What We Handed Over
  • Multi-Account AWS Landing Zone (Control Tower) with SCP guardrails and automated account vending.
  • Hybrid network architecture over AWS Direct Connect and Transit Gateway with full segmentation.
  • Modernisation strategy including CDC replication pipeline and Strangler Fig traffic routing playbook.
  • Deployment pipelines with integrated vulnerability scanning, audit logging, and PR-based approval gates.
Risk Exposure Assessment

Quantify Your Cloud Risk Exposure

Organisations processing critical transactions cannot afford reliability gaps. The CRRI™ assessment quantifies your cloud risk exposure and delivers a prioritised executive report in under 5 minutes.