Why Are We Still So Skeptical About LLMs?

2025/07/23

I just read that an advanced version of Gemini reached gold-medal level at the IMO 2025. That’s wild. And yet I still see a lot of people, especially in academia, making fun of large language models and their users, saying they’re unreliable or dumb. I don’t think that’s true at all.

Two examples come to mind.

  1. The calculator analogy. A colleague of mine said this while we were chatting over lunch, and I loved it: “Imagine a calculator just got invented that multiplies large numbers. When someone says ‘I don’t trust AI,’ it’s as funny as saying ‘Oh, I don’t trust those calculators, I wanna do all the computation by hand.’”

  2. Is AI making us dumb? Probably. But does it even matter? When I ask ChatGPT to write the annoying bash or Python script I need to hit a deadline, I don’t dig through Stack Overflow or read the docs like I used to. I probably learn less. But I also get things done 10x faster. So yeah, maybe I’m skipping some learning and becoming, as they say, dumb. But honestly, do I need to learn bash scripting if an AI can always do it for me well enough (sure, I may have to fix a few minor bugs myself)? What’s the point? It’s kind of like how hardly anyone writes assembly or machine code anymore. And we don’t care, because we have higher-level programming languages that suit our needs. That’s what LLMs are now. They’re the next level of abstraction.
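To make the “annoying deadline script” concrete, here’s a hypothetical example of the kind of throwaway task I mean (this specific script is my illustration, not something from a real chat): summing word counts across all the `.txt` files in a directory. Trivial, but exactly the sort of thing I’d rather delegate than write from scratch.

```python
# Hypothetical "deadline script": total word count over .txt files
# in a directory -- the kind of one-off task an LLM drafts in seconds.
import os
import tempfile

def total_word_count(directory):
    """Sum word counts over all .txt files directly inside `directory`."""
    total = 0
    for name in os.listdir(directory):
        if name.endswith(".txt"):
            with open(os.path.join(directory, name), encoding="utf-8") as f:
                total += len(f.read().split())
    return total

# Quick demo on a temporary directory.
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "a.txt"), "w", encoding="utf-8") as f:
        f.write("hello world")
    with open(os.path.join(d, "b.txt"), "w", encoding="utf-8") as f:
        f.write("one two three")
    print(total_word_count(d))  # prints 5
```

If the generated version has an off-by-one or misses an edge case, I fix it in a minute, which is still far faster than writing and debugging the whole thing myself.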

They’re not magic. They’re not perfect. But this kind of skepticism feels outdated.