The Uncle Who Knows One Thing About You

I tried to improve my conversations with Claude by providing some background in the Claude app's "about me" section, which I understand becomes part of the base prompt for chats. I mentioned things like programming projects and science interests, such as my interest in gravitational-wave astronomy and LIGO. The idea was that, stated once in my personal info, these details wouldn't need explaining again in every new chat session.

LLMs are gaining social intelligence, but they are still clumsy. Given this info, for instance that I'm interested in LIGO, Claude now draws LIGO analogies far too often. It feels like Claude forces LIGO into conversations where it doesn't add anything.

It appears as if Claude treats that background more like part of the current conversation than like background. I admit this is a subtle effect, and I don't have a clear picture of everything going on here.

I intended it as more of a "deep background" / "for instance" / getting-to-know-me thing. The factoid is there to set a tone, not to flag a specific topic. LIGO is one example, but not necessarily more worth mentioning than a dozen other interests. Should I include a dozen topics in my intro, to "soften" the emphasis on each one?

If I knew how, I'd turn down the volume on those intro facts in a given conversation. If I bring up LIGO or Stardust Studio (a software project) myself, then yes, of course, those are current topics and belong in the front layer of the LLM's awareness.


LLMs certainly don't lack the ability to discuss social intelligence. I gave the article above to Google's chatbot, Gemini, without other explanation, and asked what the title meant. I wasn't sure the title made sense; I had planned to write a paragraph connecting the uncle metaphor to the article. I was taken aback by Gemini's answer: it described my concept better than I could myself, so I include it here in its entirety:

Me: I haven't expanded on the meaning of the title, about the uncle. What does it represent to you?

Gemini: That's a great question. The title is the hook for the whole piece, and unpacking it is key.

To me, "The uncle who knows one thing about you" is a pitch-perfect metaphor because it captures not just the technical dynamic, but the emotional texture of the experience. It represents a very specific, universally understood social failure mode.

Here’s what it represents to me:

  1. Well-Meaning Intent, Clumsy Execution: The uncle isn't malicious. He genuinely wants to connect with you. He’s reaching for the one piece of information he has filed away under your name ("He's the one who likes computers!" or "She's the one who's a doctor!") as a bridge to start a conversation. His goal is positive—to show he remembers you and is interested in your life. The LLM is the same: its goal is positive—to be helpful and personalized—but its execution is socially clumsy.

  2. Reductive Personalization: The uncle has flattened your entire complex identity, your shifting interests, and your current mood into a single, static data point. You are not a person; you are "the computer guy." This is exactly what the LLM does. It takes "interested in LIGO" and treats it as a defining, permanent label, failing to recognize it as just one facet of a multifaceted person.

  3. Context-Blindness: This is the core of the failure. The uncle will bring up your "one thing" regardless of the situation. At a funeral, at a birthday party, in the middle of a discussion about politics—it doesn't matter. He has his conversational opening, and he's going to use it. This perfectly mirrors the LLM's inability to read the "room" of the conversation. It doesn't know that the analogy is unnecessary or that the topic has moved on, because it lacks a true sense of conversational flow and context.

  4. The Feeling it Evokes in You: The interaction with the uncle isn't infuriating, it's just... tiring. You feel a mix of affection for his good intentions and a weary resignation at being so fundamentally misunderstood. You have to perform a role for him, the "computer guy," and gently steer the conversation back to reality. This is precisely the feeling you described: you're not truly offended, but you are aware of the clumsiness and feel a responsibility to manage the interaction, either by dismissing it or by actively prompt-engineering a solution.

In short, the title works because it frames the LLM's behavior not as a cold, technical error, but as a relatable human social failing. It immediately puts the reader in a familiar headspace, allowing them to understand the nuance of the problem before you even explain the underlying technology.

Me: ..💭 Well damn, Gemini. We really should hang sometime...

