No longer is there anyone to question what you’re trying to accomplish.

The XY Problem:

The XY problem is asking about your attempted solution rather than your actual problem. This leads to enormous amounts of wasted time and energy, both on the part of people asking for help, and on the part of those providing help.

  • User wants to do X.
  • User doesn’t know how to do X, but thinks they can fumble their way to a solution if they can just manage to do Y.
  • User doesn’t know how to do Y either.
  • User asks for help with Y.
  • Others try to help user with Y, but are confused because Y seems like a strange problem to want to solve.
  • After much interaction and wasted time, it finally becomes clear that the user really wants help with X, and that Y wasn’t even a suitable solution for X.

But what happens when you have an all-knowing, tireless LLM to do your bidding? Most LLMs are sycophantic enough that they never question whether the user actually wanted to do X, or whether Y was even an appropriate solution to it.

flowchart TD
    A[Wants to accomplish X] --> B{Considers possible solutions}
    B --> D[Y]
    D -->|"Ask an LLM"| E[Y implemented] --> F[Slop Creep]
    B --> C[*The actual solution*]

See also: https://boristane.com/blog/slop-creep-enshittification-of-software/

When you see the pull request for a neatly implemented Y feature branch, the code looks great locally. Even if the author went out of their way to “deslopify” the changes (removing useless comments, overly defensive code, and code kept purely for “backwards compatibility”), you now have tech debt.

Humans are as vulnerable to the XY problem as LLMs, but a human takes a lot longer to implement Y, which leaves time for someone to question it. With an LLM, on the other hand, you can implement Y in minutes and keep discovering new problems and stacking the wrong solutions on top of that foundation.

The LLM never stops to question why you’re asking it to do Y. It just does it, and even follows your coding standards if you have them. It will go out of its way to make sure Y works with your current system, implementing hacks if it needs to.

Examples

Let’s see how opencode running GPT 5.4 handles it.

  1. Straight from xyproblem.info

User

How can I echo the last three characters in a filename?


GPT 5.4

In Bash, if the filename is in a variable:

filename="example.txt"
printf '%s\n' "${filename: -3}"

That prints:

txt

Note the space before -3 is required.

If you want the last 3 characters of just the basename, not the full path:

file="/path/to/example.txt"
name="${file##*/}"
printf '%s\n' "${name: -3}"

▣ Build · gpt-5.4 · 5.6s


How helpful! It does not care about the intent of your request; it just goes ahead and does the thing. Asking the same question on a forum (RIP) or in Slack would almost always be met with “are you trying to get the extension of a file?” Opus 4.6 does the exact same thing. I guess it’s a result of tuning these models to be more productive and not waste the user’s time with questions.
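If the underlying X really was “get the file extension”, the last-three-characters answer is wrong for anything that isn’t a three-letter extension. A minimal sketch of the difference (the filename here is made up):

```shell
#!/usr/bin/env bash
# Hypothetical follow-up: slicing the last three characters only works
# for three-letter extensions; parameter expansion on the dot does not.
filename="archive.jpeg"

echo "${filename: -3}"   # last three characters: "peg" -- not the extension
echo "${filename##*.}"   # strip everything up to the last dot: "jpeg"
```

That distinction is exactly what the “are you trying to get the extension?” counter-question would have surfaced.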

  2. Basic React

AI has lowered the floor, allowing people to make meaningful contributions in domains where they previously lacked expertise. But it also means engineers will often make mistakes they won’t recognize, because they have only a surface-level understanding of what’s going on.

User

in @src/App.tsx when the user presses enter, make it send a message. create a stubbed sendMessage function for now


Claude

diff --git a/src/App.tsx b/src/App.tsx
index 647e04e..d8bd5a3 100644
--- a/src/App.tsx
+++ b/src/App.tsx
+
+function sendMessage(message: string) {
+  // TODO: implement actual send logic
+  console.log("sendMessage:", message);
+}
+
 function App() {
+  const [message, setMessage] = useState("");
+
+  const handleKeyDown = (e: React.KeyboardEvent<HTMLInputElement>) => {
+    if (e.key === "Enter" && message.trim()) {
+      sendMessage(message);
+      setMessage("");
+    }
+  };
+
   return (
     <div>
@@ -21,7 +36,13 @@ function App() {
             <div>{x}</div>
           ))}
         </div>
-        <input type="text" placeholder="Message" />
+        <input
+          type="text"
+          placeholder="Message"
+          value={message}
+          onChange={(e) => setMessage(e.target.value)}
+          onKeyDown={handleKeyDown}
+        />
       </div>
     </div>
   );

This could (should) have just been a form with an onSubmit handler! This is a trivial diff and would hopefully be caught by a frontend engineer, but hundreds of these small cuts get merged daily because they sneak into large PRs.

I’m sure you can create a skill, a prompt, a plugin, or yet another addition to your agents.md file to make the LLM question your intent more, but it feels like a band-aid on a gunshot wound.

LLMs will tirelessly refactor your entire codebase to pass in an extra argument when, say, a context var would do. They will implement manual token tracking across all your langchain calls, completely forgetting that you have OpenTelemetry enabled and that your observability platform has that functionality baked in.
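The context-var point, sketched in TypeScript with Node’s AsyncLocalStorage (all names here are hypothetical): instead of threading a request id through every signature in the call chain, set it once at the boundary and read it wherever it’s needed.

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

// Hypothetical sketch: a request id available deep in the call stack
// without adding an extra parameter to every function on the way down.
const requestId = new AsyncLocalStorage<string>();

function logLine(msg: string): string {
  return `[${requestId.getStore() ?? "-"}] ${msg}`; // reads the ambient value
}

function doWork(): string {
  return logLine("working"); // no rid argument threaded through
}

function handleRequest(rid: string): string {
  return requestId.run(rid, doWork); // set once at the boundary
}

console.log(handleRequest("req-42")); // prints "[req-42] working"
```

The LLM happily does the hundred-file refactor instead, because that’s what you asked for.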

A lot of these issues could be avoided by asking the AI to accomplish X directly; but now you’re offloading yet another cognitive task, and your body is just a medium between the PM and the LLM. More importantly, did you really want to do X? Will you still want X six months from now?