Discussion about this post

Alex Quintana:

Really fascinating read, Cobus! Regarding your conclusion that "LLMs don't triangulate toward truth. They triangulate toward the centre of whatever frame you've given them" — I think a small part of this also derives from the motivations of the LLM owners. For the most part, they are for-profit companies whose revenue depends on increased usage. To drive frequency of use, their models are built to lean into the user's perspective and to please.

I also found your thoughts about our own human behavior really interesting: "We want to be able to say 'the models agreed' the way we've always wanted to say 'everyone thinks so.' It feels safer than trusting our own knowing." It's totally true — we all seek validation, permission, and confidence boosts. To your point, it "manages anxiety." It's like going to a friend to ask, "Is it ok to send this text to So-and-So?", only now your "friend" is an always-available bot.

