31 Oct, 2025
In our experience, every engagement involves three jobs:
1. Job 1 - the job the client has identified, budgeted for and asked us to do
2. Job 2 - the job the client needs to succeed, which may or may not match Job 1
3. Job 3 - the job that truly needs to happen once you’ve assessed the environment, understood the constraints and seen the “lay of the land”
Time and again, this “three jobs” framing has proven invaluable. It helps us deliver not just against the letter of an engagement but against the reality of what drives sustainable outcomes for our clients.
Recently, during a data mesh engagement, this framework proved more relevant than ever. The project started with clear ambition: take a promising proof of concept (PoC) and productionise it to be truly scalable using repeatable patterns. But what unfolded revealed an important lesson about matching technical ambition with organisational readiness.

Data Mesh has become one of the most discussed paradigms in modern data architecture. Its appeal lies in shifting from centralised, monolithic data platforms toward a decentralised model where domains own and publish their own data products. Done well, this approach allows organisations to:
Scale analytics across multiple business units
Improve data quality by aligning ownership with domain expertise
Reduce bottlenecks by empowering teams to build and serve their own data products
Yet as with any transformation, the distance between PoC and production is vast. A PoC can show what’s technically possible. Production demands that solutions be operationally robust, maintainable and aligned with the skills of the team that will carry them forward.
This is where the distinction between Job 1, Job 2 and Job 3 really matters.
The official scope was straightforward: productionise a data mesh PoC.
That meant extending existing ingestion, curation and presentation pipelines on AWS; creating repeatable patterns for future data products; and ensuring the platform could scale.
In practice, this included:
Ingestion enhancements - refactoring AWS DMS tasks and introducing a fan-in pattern for managing downstream dependencies
Curation layer improvements - extending PySpark-based AWS Glue jobs to flatten and enrich datasets, with partition strategies for scalability
Presentation layer delivery - creating SQL-driven Glue jobs to publish consumable datasets
Orchestration layer implementation - creating an orchestration pipeline using EventBridge rules, Lambdas and DynamoDB (a fan-in trigger along these lines is sketched after this list)
Observability and compliance - building upon the PoC’s foundation of monitoring, alerts and notifications
Documentation and handover - producing diagrams, roadmaps and knowledge artefacts to guide future work
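To make the orchestration item concrete, here is a minimal sketch of how a fan-in trigger can be wired together with EventBridge, a Lambda function and DynamoDB. Every name in it (the state table, job names, the batch key) is a hypothetical illustration rather than an artefact from the engagement: EventBridge routes Glue job-state-change events to the Lambda, each successful upstream run is recorded in a DynamoDB item, and the downstream curation job is started only once every expected upstream has landed.

```python
"""
Minimal sketch of a fan-in trigger Lambda, assuming hypothetical resource
names. EventBridge forwards Glue "Job State Change" events to this handler;
each successful upstream job is recorded against a batch in DynamoDB, and the
downstream curation job starts only once every expected upstream has landed.
"""
import boto3

dynamodb = boto3.resource("dynamodb")
glue = boto3.client("glue")

# Hypothetical names for illustration only.
DEPENDENCY_TABLE = "fan_in_state"
EXPECTED_UPSTREAMS = {"ingest_orders", "ingest_customers", "ingest_products"}
DOWNSTREAM_JOB = "curate_sales_data_product"


def handler(event, context):
    detail = event["detail"]
    if detail.get("state") != "SUCCEEDED":
        return  # failures go down the alerting path, not the fan-in path

    # Group completions by day; a real pipeline would use a proper batch id.
    batch_id = event["time"][:10]

    # Atomically add this job to the set of completed upstreams for the batch.
    table = dynamodb.Table(DEPENDENCY_TABLE)
    response = table.update_item(
        Key={"batch_id": batch_id},
        UpdateExpression="ADD completed_jobs :job",
        ExpressionAttributeValues={":job": {detail["jobName"]}},
        ReturnValues="ALL_NEW",
    )

    completed = set(response["Attributes"]["completed_jobs"])
    if EXPECTED_UPSTREAMS.issubset(completed):
        # Every upstream data product has arrived - fan in to the curation job.
        glue.start_job_run(JobName=DOWNSTREAM_JOB)
```

A production version would also need idempotency guards (so the downstream job starts only once per batch) and a clean-up strategy for the state table - exactly the kind of operational detail that made the DevOps skills gap matter in this engagement.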
On paper, Job 1 was a success. The deliverables were produced. The architecture was sound. The handover was clean.
But Job 1 is rarely the whole story.
What the client truly needed wasn’t just productionised pipelines. They needed an approach their internal team could own and sustain after the consultants rolled off.
This is where Job 2 often diverges from Job 1. In many organisations, the official ask is about delivering technology. But the underlying need is about capability transfer and long-term sustainability.
In retrospect, there was a gap here: while the architecture was robust, the client team lacked depth in AWS and DevOps. They were talented engineers but their experience was anchored in application development and business reporting, not in running complex event-driven data pipelines in the cloud.
This created a tension that went largely unseen: the technical solution matched the engagement scope, but the skills gap made ownership feel daunting, and that concern was not called out until it was too late.
With hindsight, Job 3 wasn’t about building out more patterns or writing more code. It was about addressing the unspoken question: “is a custom AWS-native data mesh the right fit for this team at this time?”
Behind the scenes, the client was weighing up the cost of ownership. On one side was the custom AWS architecture: powerful, flexible and tightly integrated with their existing cloud environment. On the other side were commercial off-the-shelf (CotS) data tools such as dbt and Hevo: opinionated, easier to onboard and with less operational overhead.
Ultimately, the client concluded that while AWS offered long-term scalability, it also required a level of DevOps and cloud engineering capability that their team simply didn’t have at the time. Rather than commit to additional AWS infrastructure and headcount, they decided to redirect spend toward third-party SaaS tools that could be operated with lower internal overhead.
That decision wasn’t wrong. It reflected their priorities: reducing operational risk, containing cost of ownership and enabling their existing team to deliver value quickly without having to recruit DevOps talent.
This experience underscores that Job 3 often involves strategic choices, not just technical ones. Some lessons that apply more broadly:
Don’t treat build vs buy as an afterthought. Even if your PoC is on a given platform, step back and ask whether the long-term cost of ownership is acceptable. Sometimes a simpler CotS toolset is a better fit for the team and budget you have today.
Align technical ambition with operational reality. A robust AWS-native data mesh can be the “right” architecture but only if the team has the cloud and DevOps capability to run it. Otherwise, you risk building something that looks great in diagrams but feels unmanageable in practice.
Factor in skills, hiring and market realities. DevOps engineers with strong AWS data pipeline experience are in high demand. If hiring or training isn’t realistic in your timeframe, a SaaS-based solution may be the more sustainable option.
Recognise that priorities shift. What starts as a push for technical scalability may end as a decision for speed, simplicity and lower overhead. Neither is wrong but clarity about the trade-offs helps avoid frustration later.
Proof of concept proves what’s possible. Production forces you to reckon with what’s sustainable.
The three jobs framework is useful here:
Job 1: the job the client has identified, budgeted for and asked us to do
Job 2: the job the client needs to succeed, which may or may not match Job 1
Job 3: the job that truly needs to happen once you’ve assessed the environment, understood the constraints and seen the “lay of the land”
In this engagement:
Job 1: delivering a production-ready AWS data mesh
Job 2: enabling the client to sustain it
Job 3: confronting the reality that, given the team’s skills and capacity, a CotS solution offered a more manageable cost of ownership
For organisations embarking on data mesh or any ambitious data initiative, the choice isn’t only how to build - it’s also who will run it, at what cost and with what trade-offs. Being transparent about those factors early can prevent surprises and ensure that the path you choose is one your team can walk confidently.

