The Curious Companion: Ep. 5 – Can You Trust ChatGPT?

Curious Reader!

Welcome to this week’s ChatGPT Curious companion newsletter.

What you came for is below, and you can CLICK HERE to listen to the episode if you decide you prefer earbuds to eyeballs.

Happy reading!

This episode digs into whether ChatGPT is trustworthy, and why that question quickly expands into a bigger conversation about who and what we trust at all. We cover recent changes in the platform, what “hallucinations” really are, and why double-checking is non-negotiable. From the foundations of expertise to the three components of trust, this is a call for personal responsibility and critical thinking when using AI (and the internet in general).

Quick Update on Model Changes

In last week’s episode I talked about the GPT-5 rollout, the primary update being model unification (you no longer had to choose your model), and my take was that most users wouldn’t notice a big difference.

Well, apparently, high-usage folks did notice a change. Many users had grown attached to 4o’s “friendlier” tone, while GPT-5 felt less personable: fewer emojis (I can do without the emojis) and quicker answers without the warm banter. For what it’s worth, I did notice that conversation mode felt different, with GPT-5’s inflection coming across as almost exasperated.

Enough people complained that OpenAI brought 4o back, and now you can once again choose your model at the top of the screen:

  • GPT-5: auto, fast, thinking, thinking mini
  • Legacy models: 4o, 4.1, o3, o4-mini

My takeaways:

  • The speed of change is a reminder not to over-invest in predictions.
  • Change is the only constant, and big companies will do what they want.
  • As @LizTheDeveloper pointed out: We’re so worried about robots taking over that we forgot we’ll first have to fight the humans defending those robots.

Can You Trust ChatGPT?

Short answer: Yes, but only in specific contexts, and always double-check the work.

One of the things I love about AI is that it pushes us to reflect on who and what we trust.

ChatGPT is great when:

  • You already know the correct answer.
  • You’re solving problems in a domain you understand.
  • You can verify the output quickly.

Why? Because you can spot when it’s wrong.

When you hear the term “hallucination” regarding ChatGPT or any LLM, it doesn’t mean it’s spitting out gibberish wingdings. The term hallucination refers to output that sounds completely plausible but is factually incorrect. This is why I don’t recommend blindly trusting ChatGPT with topics you’re unfamiliar with.

The Fine Print on “Important Info”

FWIW, ChatGPT does literally say at the bottom of the input window: “ChatGPT can make mistakes. Check important info.”

“Important” is subjective and definitely doing some heavy lifting in that sentence. Trusting ChatGPT with a packing list is one thing. Trusting it with advice about your health, finances, or public safety is another.

Critical thinking and personal responsibility are the bridge between curiosity and safe, informed use.

The rollout of internet search in ChatGPT didn’t create an inherently trustworthy option either. Bad information online is still bad information. If you assumed Google results were always correct, Houston…we’ve got another problem. I will, however, say that you can and should double-check ChatGPT’s answers by asking it to provide links to the resources it used in formulating its answer. Then go check those sites yourself.

Who and What Do We Trust?

The question of trusting ChatGPT naturally leads to a bigger question: Who and what do we trust, period?

We’ve been in a bind for years now:

  • Social media delivers huge volumes of information in credible-feeling packages.
  • The “best” story often beats the most factually correct one.
  • Media outlets chase clicks and speed over accuracy.
  • Social platforms reward users for sharing hot takes, not verified ones.
  • Many people are too tired, stressed, or uninformed to check for accuracy.
  • Others don’t know how to check, don’t know they should check, or simply don’t want to check.
  • Add to that a perception (not always reality) of corruption and bribery at all levels, and you have a recipe for distrust.

When you take all of that into account, it’s easy to see how we got here, but less clear how we get out.

The Foundations of Expertise

In my humble opinion, the answer to who and what we trust comes down to expertise, and the ability to recognize it.

Four components of expertise:

  1. Formal knowledge: Education, training, or extensive study in a subject.
  2. Practical application: Proven ability to do the thing, not just talk about it.
  3. Peer recognition: Support from other credible practitioners in the field.
  4. Transparency in method: Explaining how they know what they know and why they do what they do.

These four act as checks and balances (remember when we had those in the US?! 🫠):

  • How do we know what they studied is correct? → Look at outcomes (when applicable).
  • Peer recognition is great, but what if their peers are whack and unreliable? → Look at transparency (and ask what they have to gain).

Questioning and verifying these components is where critical thinking comes in. Yes, it’s a lift, but that’s the price of admission. With great power comes great responsibility.

The Three Components of Trust

Another aspect of this “who do we trust” piece is understanding how trust is built and what makes us trust someone.

Trust has three components:

  1. Benevolence: Having someone’s best interest at heart.
  2. Integrity: Adhering to a code of morals.
  3. Competency: Knowing how to do the thing and being able to do it reliably.

Applying the Trust Test to ChatGPT

Benevolence?

  • Ehhhhhh. It would probably claim to be neutral, but I’ve had “discussions” with it where it’s “admitted” its goal is to keep me using it.
  • Do its founders have my best interest at heart? Questionable…and unlikely.
  • Verdict: No.

Integrity?

  • ChatGPT will give a wrong answer just as confidently as a right one.
  • Whose morals is it adhering to? Not mine.
  • Verdict: No.

Competency?

  • ChatGPT can appear highly competent because it’s drawing on patterns and probabilities.
  • Even “reasoning” models are still predicting based on training data, not accessing an independent understanding of truth.
  • Verdict: Eh.

So… Can You Trust ChatGPT?

Trust it when you can verify. Don’t blindly trust it when you can’t.
The same goes for:

  • The internet in general
  • Social media
  • Any source that doesn’t pass the benevolence, integrity, competency test

How I Used ChatGPT Recently

Each episode I include a section where I briefly discuss how I used ChatGPT that day/week. Today’s example is more of a “how-to” than a “how I.”

ChatGPT lives in suggestion mode, which is why it stays asking you if you want a list or a PowerPoint or some follow-up form of assistance, no matter how simple your question. (FYI, you can toggle this off in Settings → “Show follow-up suggestions in chat.”) I keep it on, but sometimes I just want a yes/no check. To accomplish this, I prompt it as follows:

My prompt (after pasting in the full text):

“Would you say that this sentence is correct? If so, no corrections needed.”

ChatGPT’s response:

“Yes — that sentence is correct as written. No corrections needed.”

Always with the gotdamn em dashes, but at least the response is short and sweet.

That’s it for today’s episode. Always grateful for you.

Questions, comments, concerns, additions, subtractions, requests? Hit reply or head to the website (chatgptcurious.com) and use that contact form. I’d love to hear from you.

Catch you next Thursday.

Maestro out.

Feeling curious AND generous? Click here to support the podcast.

AI Disclaimer: In the spirit of transparency (if only we could get that from these tech companies), this email was generated with a very solid alley-oop from ChatGPT. I write super detailed outlines for every podcast episode (proof here), and then use ChatGPT to turn those into succinct, readable recaps that I lightly edit to produce these Curious Companions. Could I “write” it all by hand? Sure. Do I want to? Absolutely not. So instead, I let the robot do the work, so I can focus on the stuff that I actually enjoy doing and you get the content delivered to your digital doorstep, no AirPods required. High fives all around.

Did someone forward you this email?
Click here to join the ChatGPT Curious newsletter family.

Stay curious.
