The Curious Companion: Ep. 5 – Can You Trust ChatGPT?
Curious Reader! Welcome to this week’s ChatGPT Curious companion newsletter. What you came for is below, and you can CLICK HERE to listen to the episode if you decide you prefer earbuds to eyeballs. Happy reading!

This episode digs into whether ChatGPT is trustworthy, and why that question quickly expands into a bigger conversation about who and what we trust at all. We cover recent changes in the platform, what “hallucinations” really are, and why double-checking is non-negotiable. From the foundations of expertise to the three components of trust, this is a call for personal responsibility and critical thinking when using AI (and the internet in general).

Quick Update on Model Changes

In last week’s episode I talked about the GPT-5 rollout, with the primary update being that you no longer had to choose your model (model unification), and my take that most users wouldn’t notice a big difference. Well, apparently, high-usage folks did notice a change. Many users had grown attached to 4o’s “friendlier” tone, while GPT-5 felt less personable, with fewer emojis (I personally can do without the emojis) and quicker answers without the warm banter. For what it’s worth, I did notice that conversation mode felt different, with GPT-5’s inflection coming across as almost exasperated. Enough people complained that OpenAI brought 4o back, and now you can once again choose your model at the top of the screen:
My takeaways:
Can You Trust ChatGPT?

Short answer: Yes, but only in specific contexts, and always double-check the work. One of the things I love about AI is that it pushes us to reflect on who and what we trust. ChatGPT is great when:
Why? Because you can spot when it’s wrong. When you hear the term “hallucination” regarding ChatGPT or any LLM, it doesn’t mean it’s spitting out gibberish wingdings. The term hallucination refers to output that sounds completely plausible but is factually incorrect. This is why I don’t recommend blindly trusting ChatGPT with topics you’re unfamiliar with.

The Fine Print on “Important Info”

FWIW, ChatGPT does literally say at the bottom of the input window: “ChatGPT can make mistakes. Check important info.” “Important” is subjective and definitely doing some heavy lifting in that sentence. Trusting ChatGPT with a packing list is one thing. Trusting it with advice about your health, finances, or public safety is another. Critical thinking and personal responsibility are the bridge between curiosity and safe, informed use.

The rollout of internet search in ChatGPT didn’t create an inherently trustworthy option either. Bad information online is still bad information. If you assumed Google results were always correct, Houston…we’ve got another problem. I will say, however, that you can and should double-check ChatGPT’s answers by asking it to provide links to the resources it used in formulating its answer. Then go and check those sites yourself.

Who and What Do We Trust?

The question of trusting ChatGPT naturally leads to a bigger question: who and what do we trust, period? We’ve been in a bind for years now:
When you take all of that into account, it’s easy to see how we got here, but less clear how we get out.

The Foundations of Expertise

In my humble opinion, the answer to who and what we trust comes down to expertise, and the ability to recognize it. Four components of expertise:
These four act as checks and balances (remember when we had those in the US?! 🫠):
Questioning and verifying these components is where critical thinking comes in. Yes, it’s a lift, but that’s the price of admission. With great power comes great responsibility.

The Three Components of Trust

Another aspect of this “who do we trust” piece is understanding how trust is built and what makes us trust someone. Trust has three components:
Applying the Trust Test to ChatGPT

Benevolence?
Integrity?
Competency?
So… Can You Trust ChatGPT?

Trust it when you can verify. Don’t blindly trust it when you can’t.
How I Used ChatGPT Recently

Each episode I include a section where I briefly discuss how I used ChatGPT that day/week. Today’s example is more of a “how-to” than a “how I.” ChatGPT lives in suggestion mode, which is why it stays asking you if you want a list or a PowerPoint or some follow-up form of assistance, no matter how simple your question. (FYI, you can toggle this off in Settings → “Show follow-up suggestions in chat.”) I keep it on, but sometimes I just want a yes/no check. To accomplish this, I prompt it as follows:

My prompt (after pasting in the full text): “Would you say that this sentence is correct? If so, no corrections needed.”

ChatGPT’s response: “Yes — that sentence is correct as written. No corrections needed.”

Always with the gotdamn em dashes, but at least the response is short and sweet.

That’s it for today’s episode. Always grateful for you. Questions, comments, concerns, additions, subtractions, requests? Hit reply or head to the website (chatgptcurious.com) and use that contact form. I’d love to hear from you. Catch you next Thursday. Maestro out.

AI Disclaimer: In the spirit of transparency (if only we could get that from these tech companies), this email was generated with a very solid alley-oop from ChatGPT. I write super detailed outlines for every podcast episode (proof here), and then use ChatGPT to turn those into succinct, readable recaps that I lightly edit to produce these Curious Companions. Could I “write” it all by hand? Sure. Do I want to? Absolutely not. So instead, I let the robot do the work, so I can focus on the stuff I actually enjoy doing, and you get the content delivered to your digital doorstep, no AirPods required. High fives all around.

Did someone forward you this email? Stay curious.