ChatGPT Curious
Ep. 1: What You Actually Need to Know About ChatGPT

In this episode of ChatGPT Curious, I lay the groundwork for understanding what ChatGPT actually is, without getting too lost in the weeds. I break down key terms like LLM and parameters, explain how the model was trained (hint: lots of math), explore why it sometimes spits out wrong info, and discuss what all of this means for how you use it. I also touch on the environmental cost, what the free vs. paid versions can actually do, and how to think critically about its outputs. If you’ve ever felt a little confused, a little curious, or both, this one’s for you.

Main Topics Covered

  • Why it’s worth understanding the foundation of ChatGPT
  • What is ChatGPT?
  • Brief history of OpenAI and the development of GPT
  • What GPT actually stands for and what changed with each version
  • What “parameters” are and how they shape the model’s responses
  • How language is turned into numbers via tokens
  • What happens during training
  • Human involvement in model training via RLHF (reinforcement learning from human feedback)
  • Probabilistic vs. deterministic systems and what that means for output accuracy
  • The environmental cost of “compute” and an analogy for mindful use
  • What the free version can do (and can’t), including search, uploads, and voice
  • What the paid version offers
  • What ChatGPT is not
  • What to watch out for
  • Real-life use case: Trying to fix a bike derailleur using ChatGPT

Links & Resources for This Episode