Field notes from a DevOps engineer, compiled between 2023 and 2026.

Everyone keeps talking about prompt engineering, which always felt slightly off to me. Prompts mattered early on, the same way shell syntax matters when you’re learning Bash: important at first, then quickly overshadowed by system design. Most of my real gains using LLMs at work didn’t come from phrasing prompts. They came from treating models like infrastructure components. (Although the basics of prompting are essential to get anything working above a certain level.)

Tickets as Entropy

A lot of my work lived in operational tickets. The pattern I kept seeing: requests arrived as free-form human prose, ambiguous and underspecified.

But the output I needed was structured and simple:

flowchart LR
  YAML -->|Translate| SQL
  SQL -->|Import| MySQL["MySQL DB"]

So the bottleneck wasn’t infra. It was translation by hand, from human ambiguity to structured config. Which is exactly where LLMs shine…
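
To make that concrete, here is a hypothetical example (the ticket wording and the schema are invented for illustration): a ticket reading “need a staging MySQL database for the payments service, roughly 20 GB” should land as something like:

  service: payments
  environment: staging
  engine: mysql
  version: "8.0"
  storage_gb: 20

Once a request is in that shape, the YAML-to-SQL-to-MySQL hops are mechanical. The slow part was always that first translation.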

The First Agent (Without Calling It One)

My first system was simple glue: Python + Bash + AWS Bedrock.

Flow:

  1. Pull the ticket via cURL.
  2. Send the description to the model, calling aws bedrock-runtime from Bash.
  3. Receive the response and extract the YAML payload with jq.
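
A minimal sketch of that glue, assuming a Jira-style ticket API and Claude on Bedrock (the endpoint, model ID, and JSON paths below are illustrative, not the actual production values):

#!/usr/bin/env bash
set -euo pipefail

TICKET_KEY="$1"

# 1. Pull the ticket description via cURL (hypothetical Jira-style endpoint).
DESCRIPTION=$(curl -s -u "$JIRA_USER:$JIRA_TOKEN" \
  "https://jira.example.com/rest/api/2/issue/${TICKET_KEY}" \
  | jq -r '.fields.description')

# 2. Build the request body and send it to the model via aws bedrock-runtime.
BODY=$(jq -n --arg desc "$DESCRIPTION" '{
  anthropic_version: "bedrock-2023-05-31",
  max_tokens: 1024,
  messages: [{
    role: "user",
    content: ("Translate this ticket into our YAML config schema. Output YAML only.\n\n" + $desc)
  }]
}')

aws bedrock-runtime invoke-model \
  --model-id anthropic.claude-3-5-sonnet-20240620-v1:0 \
  --content-type application/json \
  --cli-binary-format raw-in-base64-out \
  --body "$BODY" \
  response.json

# 3. Extract the YAML payload from the JSON response with jq.
jq -r '.content[0].text' response.json > "${TICKET_KEY}.yaml"

No framework, no SDK: every step is a pipe, which is what made it feel like glue rather than an “agent”.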