Like every freelance dev scraping by during the recent Knowledge Obsolescence Event, I decided to jump ship and befriend AI by building a couple of AI-related products.

Luckily for me, I’m working in a domain that LLMs excel at: summarizing and extracting text. Hallucination’s right up my alley!

Over the past few years, I’ve been building document processing pipelines, agentic patterns, and all kinds of hacks to make it easier for the LLM to waddle through gigabytes of data, and I started to notice a pattern:

I’m doing the LLM’s bidding.

Basically, all my work revolves around creating tooling for the LLM to make its life easier and reduce the chance of it tripping up.

The typical cycle goes like this: I create a bunch of prompts and workflows, watch the LLM or agent stumble, and then give it better tools to work with: “Here you go, sweetheart.” I’d love to say it’s all for my employer (I am a freelancer, after all), and of course that’s true, but nowadays most of my work is catering to the LLM and coaxing it to be my friend.

Here’s a taste of what I’ve built so far:

  • Smarter tools for wrangling databases and large files
  • Anti-hallucination systems to keep LLMs in check
  • Context engineering and management tools
  • Deduplication utilities to clean up messy data
  • Trajectory analyzers to keep it on its path
  • Model-selection logic so I don’t go bankrupt
  • Custom tooling to observe its motions and trajectory (like a good helicopter parent)
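
To give a flavor of the list above, the deduplication utilities can be as simple as hashing normalized text before it ever reaches the LLM’s context window. This is a minimal sketch, not my actual pipeline code; `dedupe_chunks` is a hypothetical helper name:

```python
import hashlib

def dedupe_chunks(chunks):
    """Drop near-identical text chunks (hypothetical helper).

    Normalizes case and whitespace, then hashes each chunk so that
    trivially duplicated text doesn't waste context tokens.
    """
    seen = set()
    unique = []
    for chunk in chunks:
        normalized = " ".join(chunk.lower().split())
        key = hashlib.sha256(normalized.encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(chunk)  # keep the original, un-normalized text
    return unique

# "Hello  world" and "hello world" hash to the same key, so only
# the first survives:
dedupe_chunks(["Hello  world", "hello world", "bye"])
```

The real versions get fancier (fuzzy matching, embeddings), but the principle is the same: spare the LLM from reading the same thing twice.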

Sure, my employer ultimately wants this stuff, but 80% of the code is about giving the LLM the right tools to handle the problem domain better.

I don’t really mind; it’s just a curious realization.

Some people call it “context engineering” though ¯\_(ツ)_/¯.