Field notes from a DevOps engineer, compiled from 2023 to 2026.
Everyone keeps talking about prompt engineering, which always felt slightly off to me. Prompts mattered early on, the way shell syntax matters when you’re learning Bash: important at first, then quickly overshadowed by system design. Most of my real gains using LLMs at work didn’t come from phrasing prompts. They came from treating models like infrastructure components. (Although the basics of prompting are essential to get things working above a certain level.)
A lot of my work lived in operational tickets. The pattern I kept seeing: a ticket arrives as loosely structured YAML, and someone translates it into SQL by hand.
But the output I needed was structured and simple:
```mermaid
flowchart LR
    YAML -->|Translate| SQL
    SQL -->|Import| MySQL["MySQL DB"]
```
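To make the target concrete, here’s the kind of deterministic output I wanted out of the translation step. This is a sketch, not my actual tooling: the ticket spec, table, and column names are all illustrative, and the spec is shown already parsed from YAML into a dict.

```python
# Hypothetical ticket spec, already parsed from YAML (names are illustrative).
ticket = {
    "table": "service_owners",
    "rows": [
        {"service": "billing-api", "owner": "team-payments"},
        {"service": "auth-proxy", "owner": "team-identity"},
    ],
}

def spec_to_sql(spec):
    """Render a parsed ticket spec as MySQL INSERT statements."""
    stmts = []
    for row in spec["rows"]:
        cols = ", ".join(row)
        vals = ", ".join(f"'{v}'" for v in row.values())
        stmts.append(f"INSERT INTO {spec['table']} ({cols}) VALUES ({vals});")
    return "\n".join(stmts)

print(spec_to_sql(ticket))
```

The point is that the target format is rigid and machine-checkable, which is what makes the fuzzy ticket-to-SQL step a good fit for an LLM: you can validate whatever it emits against a shape like this.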
So the bottleneck wasn’t infra. It was translation by hand, from human ambiguity to structured config. Which is exactly where LLMs shine…
My first system was simple glue: Python + Bash + AWS Bedrock.
Flow: a cURL-style call to `aws bedrock-runtime` via Bash, then `jq` to extract the response.
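The glue can be sketched roughly like this. The request/response shapes below are my best recollection of the Anthropic messages API on Bedrock, and the model ID in the comment is illustrative; the actual AWS call is shown only as a comment, with the `jq` extraction mirrored in Python against a canned response.

```python
import json

# Request body in the Anthropic-on-Bedrock messages shape (assumed, not verified here).
body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 512,
    "messages": [{"role": "user", "content": "Translate this ticket YAML to SQL: ..."}],
}
payload = json.dumps(body)

# In the real glue, the payload went out from Bash, roughly:
#   aws bedrock-runtime invoke-model --model-id <model-id> \
#       --body file://payload.json --cli-binary-format raw-in-base64-out out.json
# and the text was pulled out with:  jq -r '.content[0].text' out.json

def extract_text(resp):
    """Python mirror of `jq -r '.content[0].text'`."""
    return resp["content"][0]["text"]

# Canned response in the same shape, standing in for out.json:
sample_response = {"content": [{"type": "text", "text": "INSERT INTO ..."}]}
print(extract_text(sample_response))
```

Nothing clever here, and that was the point: the model sits behind the same request/parse/validate pattern as any other HTTP service in the stack.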