It’s 3:45pm. Everyone’s in a meeting. I’m staring at a Spark error I’ve never seen before, and the stack trace is twenty lines of Java noise.
Six months ago, that would have meant an hour of trawling Stack Overflow, second‑guessing myself, and quietly hoping someone would come back online before end of day. Now? I open Cascade, paste the traceback, and within seconds I have a clear explanation, a likely root cause, and a suggested fix I can actually test.
That shift – from stuck to moving – is what AI tools have given me during my apprenticeship. Not a shortcut. A springboard.
I’m coming to the end of a Level 7 AI and Data Science apprenticeship at Compare the Market. My goal has been to transition into ML engineering – building and deploying models in production, not just training them in notebooks.
That means picking up a lot of new tooling fast: Spark, Delta Lake, feature stores, CI/CD pipelines, containerisation, API development. The list goes on.
The learning curve is real. And so, at times, is impostor syndrome.
I lean on Cascade (Windsurf’s AI assistant) as my primary tool, integrated directly into my IDE. It’s not just autocomplete; it’s a genuine thinking partner. Here’s what a typical day looks like.
When I’m dropped into a repository I’ve never seen before, my first move is to point Cascade at it. It helps me map the structure, understand dependencies, and get a guided tour faster than any README ever could.
What would have taken an afternoon of careful reading can take minutes. For example, instead of hunting manually, I can ask it to identify where a dataset is produced and written (job config → transform module → Delta write), highlight where key parameters are set (partition keys, schema enforcement, environment config), and point me to the relevant tests and how they’re run in CI.
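That chain (job config → transform module → Delta write) might be captured in a config file like the sketch below. Every name, path, and setting here is made up for illustration; it just shows the kind of place where partition keys, schema enforcement, and environment config tend to live:

```yaml
# Hypothetical job config -- all names and paths are illustrative only.
job:
  name: quotes_daily_aggregate
  transform: transforms.quotes.daily_aggregate   # the transform module that produces the dataset
  input:
    table: raw.quotes
  output:
    format: delta
    path: s3://example-bucket/curated/quotes_daily   # the Delta write target
    mode: overwrite
    partition_by: [quote_date]          # partition keys set here, not buried in code
    merge_schema: false                 # schema enforcement: reject unexpected columns
  environment: ${ENV}                   # dev / staging / prod resolved by CI
```

Knowing whether a change belongs in config like this or in the transform code itself is exactly the kind of orientation question that used to take an afternoon.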
I still verify everything by reading the code, running locally, and checking behaviour against our pipelines, but I get to the right neighbourhood much faster.
We’ve set up MCP (Model Context Protocol) servers that connect Cascade to internal tools like Jira and Confluence. Writing tickets, updating documentation, and pulling context from existing pages (tasks that used to eat into deep‑focus time) can now be handled through a quick prompt in my editor.
It’s not glamorous, but reclaiming those 10‑minute interruptions adds up.
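For context, registering an MCP server is usually just a small JSON entry in the editor’s MCP config. A hedged sketch of the common `mcpServers` format follows; the server name, package, and URLs are placeholders, and the exact file location and schema vary by editor and version, so check your tool’s documentation:

```json
{
  "mcpServers": {
    "atlassian": {
      "command": "npx",
      "args": ["-y", "example-atlassian-mcp-server"],
      "env": {
        "JIRA_URL": "https://example.atlassian.net",
        "CONFLUENCE_URL": "https://example.atlassian.net/wiki"
      }
    }
  }
}
```

Once registered, the assistant can call the server’s tools (creating a ticket, fetching a page) without you leaving the editor.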
Here’s something I’ve learned the hard way: not every AI is best at everything.
On one project, I was doing research and context gathering with Claude. It’s excellent at synthesis and reasoning, and it got me 80% of the way there. But when it came to a particular implementation step, it kept going in circles.
I switched to ChatGPT, reframed the problem, and it cracked it. Knowing which tool to reach for, and when, is a skill in itself.
The biggest change isn’t productivity: it’s willingness to try.
Before, I’d hesitate before tackling something unfamiliar. Do I really understand enough to attempt this? Should I wait and ask someone tomorrow?
Now I just start. If I hit a wall, I have an always‑available collaborator that doesn’t judge, doesn’t get tired of explaining, and doesn’t mind if I ask the same question three different ways.
That safety net has made me bolder, and boldness compounds. The more you try, the more you learn. The more you learn, the less you need the safety net.
I want to be clear: AI doesn’t do the thinking for me. I still have to understand why a solution works, verify it against our codebase, write the tests, and own the outcome.
The build is mine. AI just provides the scaffolding while I’m constructing it.

Compare the Market actively encourages AI adoption, but with guardrails, and I think that’s exactly right.
A few principles I follow:
Critical thinking first: AI output is a starting point, not a final answer. I regularly ask for edge cases and failure modes, and I sanity‑check suggestions against our code and our standards.
Data sensitivity: knowing what you can and can’t share with external AI tools is non‑negotiable. Internal data stays internal; I redact and paraphrase when I need help with a problem.
MCP as a model for responsible integration: rather than ad‑hoc AI usage, connecting tools like Jira and Confluence through MCP servers means AI operates within defined, auditable workflows. It’s structured, it’s intentional, and it scales.
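On the data-sensitivity point, even a tiny helper can catch the most obvious identifiers before text leaves your machine. A minimal sketch in Python; the two patterns here are examples only, and a real redaction policy would need far broader coverage and review:

```python
import re

# Illustrative patterns only -- a real policy would cover many more cases.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "uk_postcode": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b"),
}

def redact(text: str) -> str:
    """Replace obviously sensitive tokens with placeholders before sharing."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label.upper()}>", text)
    return text

print(redact("Customer jane.doe@example.com at SW1A 1AA raised a ticket."))
```

It won’t catch everything, which is why paraphrasing the problem, rather than pasting raw data, is still the habit that matters.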
If you’re early in your career, whether that’s an apprenticeship, a career change, or your first engineering role, AI tools are worth investing time in.
Not because they’ll do your job for you, but because they’ll help you learn faster, get unstuck sooner and build confidence in your own abilities.
The technology will keep evolving. But the core skill is timeless: knowing how to ask the right questions, evaluate the answers, and apply them thoughtfully.
Start with a problem you’re stuck on. Open the chat. Ask.
You might be surprised how quickly you start moving.
Whatever your specialist area, wherever you are in your journey, we’d love to hear from you.