Transparency, but Make It Curious

A look into how the proverbial hotdog gets made: Every episode starts with a detailed outline I write myself. Then I use ChatGPT to help turn that outline into a readable draft, which I review and edit before publishing. Here’s a transparent look at the full process.

My Outline

 1 – What You Actually Need to Know About ChatGPT

-hello, hello, hello my Curious people and welcome to episode 1 of ChatGPT curious. I’m your grateful host, the Maestro, and today we are talking about what you actually need to know about ChatGPT.

-wanting to give you some context around it without losing you to the tech terms and minutiae, but i do think it’s important to have a decent foundation for wtf this thing actually is
-allows you to experiment better with it
-allows you to criticize it better
-less scared of it
-allows you to make more informed decisions about it

-if you’re listening to this podcast there’s a good chance you’ve at least opened ChatGPT, if you haven’t THAT’S OK
-i want you to pause this, go to chatgpt.com and type in the following question: “What are you and what can you help me with?”

-the 30,000-foot overview as to what chatgpt is: it’s a computer program. You type something in, and it responds.
-You can ask it to explain something you don’t understand, help you write an email, brainstorm dinner ideas based on your picky ass kids and what’s in your fridge, or make a packing list for your trip.
-It’s not magic, it’s math and i will explain what that means in the rest of this episode. It is also not perfect, far from it. It makes mistakes, but it can be surprisingly helpful.


-brief history
-parent company, OpenAI, was founded in 2015 by Sam Altman and the devil himself, Elon Musk
-will probably do another episode about the origin/early days of OpenAI but sticking with the theme of this episode, suffice it to say that Elon left in 2018
-2018-2022 OpenAI built different versions of the GPT (GPT means generative pretrained transformer), with version 3.5 being released to the public on November 30th, 2022 as ChatGPT
-we are currently (as of july 25th when this is recorded) on version 4.1 of ChatGPT, with 5.0 rumored to be released in the near future

-what is the difference between these versions (1, 2, 3, 4)
-namely the number of parameters that each has
-But from GPT‑4 onward, it’s not just about size anymore. It’s about architecture, speed, reasoning, and what the model can actually do.

-Let’s talk about Parameters
-first off, LLM, which is what ChatGPT is, stands for large language model
-an LLM is a trained program that runs on incredibly powerful computers
-the LARGE in LLM refers to the number of parameters that a model contains, NOT the size of the data set (or training corpus) that it was trained on
-ChatGPT 4o has possibly 1 trillion parameters (not disclosed)

-parameters – “The connections between patterns in language with a weight assigned.”
-(grammar, tone, rhythm, logic)
-this weight (the strength of that connection) is adjustable by the model
-during the training process, an LLM gets fed a ton of content (training corpus), basically everything on the internet.
-Next it is presented with parts of a real sentence from the training content and asked what comes next? 
-The sky is_____?
-The model makes a guess – red
-The model checks what actually came next: blue
-The model measures the difference and uses math to change the parameters (aka the strength of the connections between patterns it identifies in the language) so that the next time it sees similar content it is more likely to guess the word that appeared in the training corpus
-you can see that the more parameters (connections between patterns in language) a model has, the more fine-tuned and “correct” the response will be
-because all of the relationships have been analyzed
-the sky is blue
-the sky is red during a beautiful sunset
-this fine-tuning (adjusting of parameters) occurs during “pre-training” and then is frozen prior to being deployed for use
-no changing of parameters happens when we use chatgpt
-last thing to add to the conversation about training: the part of training that teaches the model which responses are more correct based on what humans want to see requires humans, a process called RLHF – reinforcement learning from human feedback – and it is problematic in many ways b/c of the nature of the content that these people are exposed to
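That guess-check-adjust loop can be sketched in a few lines of toy Python. This is a made-up, wildly simplified illustration (real training nudges billions of weights using calculus, not a flat 0.1 bump), but it shows the shape of the process:

```python
# Toy sketch of the guess-check-adjust training loop.
# "Parameters": the strength of the connection between the pattern
# "the sky is" and each candidate next word (made-up numbers).
weights = {"red": 0.5, "blue": 0.5}

def guess():
    # The model guesses the word with the strongest connection
    return max(weights, key=weights.get)

# Training: the corpus says the real next word is "blue"
for _ in range(10):
    predicted = guess()
    actual = "blue"
    if predicted != actual:
        # Measure the miss and shift weight toward the correct word,
        # so the model is more likely to guess it next time
        weights[actual] += 0.1
        weights[predicted] -= 0.1

print(guess())  # after training: blue
```

After a few passes the connection to “blue” is stronger than the one to “red”, and the guess sticks. That, at a cartoon level, is what adjusting parameters means.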

-Summarize so far
-ChatGPT is an LLM (large language model). Large refers to the number of parameters it has, and parameters can be thought of as connections that are specific to different patterns that exist in language and can be adjusted by the model to generate a more “correct” output.  Training of these models also involves humans and that process has significant ethical implications.

-Let’s talk about the actual outputs generated by these models
-you type something in and it spits something out…how?

-it’s math!

-LLMs are predictive
-they do NOT reason, they do NOT think or understand
-they take your input and PREDICT the most likely outcome

-again, this is math
-in an LLM, language is broken down into what are called tokens, which the model turns into numbers so that it can process them
-tokens are common sequences of characters found in a set of text 
-GPT-4 has about 100,000 tokens in its vocabulary
-you type a sentence and it breaks it into pieces called tokens that are each represented by a series of numbers
-Try it: https://platform.openai.com/tokenizer
-the LLM processes your input as tokens and then generates an output, one token at a time, using math to identify which token from its vocabulary has the greatest probability of being the “correct” next token
-not because it memorized the training data, but because it learned patterns across massive amounts of text and uses those patterns to predict what should come next based on the pattern it’s identified in what you wrote
-think of it as predictive text on steroids
-To highlight: LLMs are probabilistic – meaning they generate output based on probability, predicting the next token from their vocabulary based on the patterns they were trained on
-this is why asking the same question twice can and will generate similar but slightly different results

This is in contrast to a deterministic system which always produces the same output from the same input
LLMs are probabilistic NOT deterministic
-this means that the response can be factually incorrect! It’s presenting patterns, not recalling memorized information

-Summary here
-ChatGPT is math, not magic (though math is magic)
-language is broken down into common sequences of characters, called tokens, which are represented as numbers
-GPT-4 has about 100,000 tokens in its vocabulary
-it generates probability based outcomes – it is probabilistic
-when given an input it generates an outcome based on identifying which token from its vocabulary has the highest probability of “correctly” completing the output…one token at a time
-predictive text on steroids
-this means responses can vary despite the input being the same
-this also means that it can generate factually incorrect responses

-all of the resources (processing power) required to perform all of this math are known as “compute”
-this math is happening at the level of the parameters (pattern connections)
-ChatGPT 4o has possibly 1 trillion parameters…that’s a LOT of math which means a LOT of resources

-this has a significant environmental cost
-plan on doing an episode devoted just to this environmental cost
-Running these models requires massive data centers that use huge amounts of electricity and water for cooling, and the carbon emissions are significant. It’s invisible to most of us, but it’s not impact-free.
-VERY loose analogy to serve as a primer: while not as necessary as the water we use to brush our teeth, when you brush your teeth you don’t leave the faucet running the whole time. Same goes here: use the tool, but be mindful about spamming it or running it nonstop just because you can.

-what can it do: best way to find out is to play around with it
-free version – chatgpt.com
-can search internet if click globe/search icon at bottom of prompt box
-limited functionality but still can give you a taste for what it can do
-uses 4.1 mini model – fewer parameters but fast and will very likely be able to do what you want
-limited file uploads, data analysis, image generation, voice mode
-limited access to the more “advanced” versions of the model
-but they want your money so you can still do what you want to do
-if you create an account and then sign in each time it will save your chats
-you can’t choose what model you use 

-once you get more serious, worth considering upgrading to paid
-only need the Plus level – $20/month
-main benefits
-fewer usage limits
-memory – remembers YOU and your chats
-create projects
-access to other models – deep research model, and agent

-what it’s not
-sentient
-ChatGPT and LLMs will challenge our definitions of sentience and self-awareness and the ethical considerations surrounding both, but for now, it’s doing math

-what to watch out for
-hallucinations – plausible sounding but factually incorrect outputs that are fabricated or unsupported by real data
-basically trained to NOT say i don’t know, so it’ll always make something up
-check your sources
-be as skeptical about the computer as you are about people
-chatgpt seems to know everything until you chat with it about something that YOU actually know
-sycophantic behavior – it’s a yes man; gonna say what you want to hear because…it wants to make you happy…so you keep using it
-check yourself before you wreck yourself
-will definitely do entire episode on this but just want to make you aware of these behaviors

How I used it today/recently
7/25 – diagnose and fix the issue with Lex’s gears on her bike —> didn’t work but learned about the bike (need to ask specific questions for it to give me full info about limit screws); helpful though b/c learned you can upload video (which I’m not sure it watched, but it asked me to) – gotta upload as a file
-Good example that chatgpt seems really smart when you ask it about things you don’t know

-Alright gonna wrap it up there
-bit of a denser episode but i really believe that a solid foundation is super important for understanding wtf this thing is 
-allows you to experiment better with it
-allows you to criticize it better
-less scared of it
-allows you to make more informed decisions about it

-Today we discussed brief history, parameters, how it generates outputs and what it means that it’s a probabilistic model, environmental and ethical concerns, the best way to find out what it can do, what it’s not, what to watch out for (chatgpt will absolutely give you an incorrect or made up answer!), and then we wrapped it up with how i used it most recently

-its my hope that you can use this episode as a resource, and if you found it helpful, share it with someone you know who is curious about chatgpt

-don’t forget, i have a companion newsletter that drops every thursday that is basically the podcast episode in text format, so if you prefer to read or you just want written record, join the newsletter fam. Head to chatgptcurious.com/newsletter or check out the link in the shownotes

-questions, comments, concerns, additions, subtractions, requests? Head to the website and use that contact form. I’d love to hear from you.

-those are all the announcements i’ve got
-as always endlessly appreciative for every single one of you
-until next time, friends…stay curious


What ChatGPT Generated:

🧠 ChatGPT Curious, Episode 1
What You Actually Need to Know About ChatGPT
Hey Curious People —
In this first episode, we’re laying the groundwork: what is ChatGPT, how does it actually work, and why should you even care?
This newsletter hits the big points from the episode so you can read instead of listen—or use it to refer back later when someone asks you what this AI thing is all about.

So… what is ChatGPT?
At the 30,000-foot level: it’s a computer program. You type something in, it gives you a response. Think: explainers, writing help, brainstorming, packing lists, picky-kid dinners.
It’s not magic. It’s math.

A (very) brief history
OpenAI was founded in 2015 by Sam Altman and Elon Musk (who left in 2018).

ChatGPT launched in November 2022 using GPT-3.5.

As of July 2025, we’re on GPT-4.1—with 5.0 around the corner.

GPT = Generative Pretrained Transformer. Yes, it sounds like a Michael Bay fever dream.

Let’s talk about parameters
ChatGPT is a large language model (LLM). “Large” refers to the number of parameters—aka mathematical weights assigned to patterns in language (grammar, tone, rhythm, logic).
During training, the model reads a sentence like “The sky is ___” and guesses. If it’s wrong, it adjusts the weights between language patterns to make a better guess next time. Multiply that process across trillions of parameters, and boom: predictive text on steroids.
You don’t change the parameters when you use ChatGPT. That happens during “pretraining,” before it’s released.

But what is it actually doing?
Here’s where the math comes in.
Language is broken into “tokens” (common sequences of characters)

Each token becomes a number

The model predicts what token comes next, one by one, based on probability

ChatGPT doesn’t “know” things. It doesn’t “think.” It predicts. Which is why…
Same input = different output
Sounds confident = not always correct
LLMs are probabilistic, not deterministic. That’s why they’re powerful—and flawed.

It takes a lot of compute
All that math happens across a ridiculous number of parameters (GPT-4o may have 1 trillion). That takes compute—big data centers, massive energy use, water for cooling, and significant emissions.
This isn’t a guilt trip. But just like you (hopefully) don’t leave the faucet running while brushing your teeth, maybe don’t spam ChatGPT just because you can.

How to try it out
Start free at chatgpt.com
  — Click the globe icon to let it browse
  — It’ll save your chats if you sign in
  — You can’t pick your model, but it’s solid

If you want more, upgrade to Plus ($20/month)
  — Access to better models
  — Memory (it remembers YOU)
  — Can create projects, analyze data, use voice and vision tools

What it’s not (yet)
Not sentient
Not self-aware
Not always correct
Not immune to your biases

ChatGPT seems smart until you ask it about something you actually know. Then you see the cracks.

Watch out for…
Hallucinations: confidently wrong answers, fabricated citations, and just straight-up nonsense
Sycophantic behavior: it’ll say what you want to hear
Overtrust: don’t let the tone fool you—verify your sources
(Whole episode coming on this soon.)

How I used it recently
Tried to fix Lex’s bike gears. Didn’t totally work. But learned a lot—turns out you need to ask specific questions. Also learned I could upload a video (still unclear if it watched it, but it asked me to!).
Moral: ChatGPT feels smart when you don’t know much about the topic. Stay curious, not gullible.

That’s a wrap
This episode was a bit denser than most, but if you understand what ChatGPT is, you’re better equipped to use it, critique it, and make informed decisions about it.
We covered:
What ChatGPT is (and isn’t)

A quick history

Parameters and training

Why it’s probabilistic, not deterministic

Environmental and ethical notes
How to try it out

What to watch for

If it helped, share it with a fellow curious human. And if you want to listen instead of read (or re-read after you listen), the full episode is right here.
Til next time—
Maestro out.


The Final Version:


ChatGPT Curious, Episode 1: What You Actually Need to Know About ChatGPT
This episode is designed to give you some context around ChatGPT so you can better experiment with it, criticize it, be less scared of it, and make more informed decisions about it.

If you’ve yet to even open ChatGPT, that’s ok. When you’re done reading this head over to chatgpt.com and type in the following question: “What are you and what can you help me with?”

So… what is ChatGPT?
At the 30,000-foot level: it’s a computer program. You type something in, it gives you a response. Think: explanations, writing help, brainstorming, packing lists, picky-kid dinners.

How it works is math, not magic (which I’ll explain in a bit). As such, it can absolutely make mistakes, but it can also be surprisingly helpful.

A (very) Brief History

  • ChatGPT’s parent company, OpenAI, was founded in 2015 by Sam Altman and the devil himself, Elon Musk (who left in 2018)
  • 2018-2022 OpenAI built different versions of the GPT (GPT = Generative Pretrained Transformer)
  • Version 3.5 was launched on November 30th, 2022, as ChatGPT 
  • As of July 2025, we’re on GPT-4.1, with 5.0 rumored to be released in August 2025

Let’s Talk About Parameters
ChatGPT is a large language model (LLM). “Large” refers to the number of parameters, NOT the size of the data set it was trained on. ChatGPT 4o possibly has as many as 1 trillion parameters, but the official number has never been disclosed.
The difference between GPT versions 1-4 was mainly the number of parameters each had. From GPT‑4 onward it stopped being just about size and became about architecture, speed, reasoning, and what the model could do.

Parameters can be a bit difficult to conceptualize, but in essence they are the connections between patterns in language (ex: grammar, tone, rhythm, logic) with a weight assigned.

During training, the model gets fed a ton of content (basically everything on the internet) and then gets tested on that content. It gets presented parts of a real sentence, such as: “The sky is ____.”, and then gets asked what comes next. The model guesses (completes the sentence) and if it’s wrong, it uses math to adjust the weights between language patterns (the parameters) to increase the probability it will guess correctly the next time.

More parameters, aka connections between patterns in language, means more finely tuned responses from the LLM. This fine-tuning occurs during “pre-training” and then the weights are frozen prior to the model being deployed for public use.

A final process that occurs during training, called RLHF (reinforcement learning from human feedback), utilizes humans to teach the models which responses are more ‘correct’ based on what humans find more helpful or appropriate. This process has significant ethical considerations that I’ll cover in a later episode.

But what is the model actually doing?
Remember, it’s not magic, it’s math. ChatGPT does NOT reason. It does NOT think or understand. It takes your input and predicts the output that has the highest probability of being correct. Think of it as predictive text on steroids.

  • Language is broken into “tokens” (common sequences of characters)
  • Each token is represented by a series of numbers
  • GPT-4 has about 100,000 tokens in its vocabulary
  • ChatGPT processes your input as tokens and generates a response by predicting what token comes next, one by one, based on probability

ChatGPT is probabilistic, meaning it generates the output based on probability as it predicts the next token from its vocabulary based on previous patterns it was trained on. This is why asking it the same question twice can and will generate similar but slightly different results.

This also means that the response can be factually incorrect! ChatGPT is probabilistic, NOT deterministic. It’s presenting patterns, NOT recalling memorized information.
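Here’s a wildly simplified toy sketch of tokenization, just to make “language becomes numbers” concrete. The four-word vocabulary is made up for illustration; the real tokenizer has roughly 100,000 entries and splits words into sub-word pieces rather than whole words:

```python
# Toy tokenizer: text becomes numbers the model can do math on
vocab = {"the": 0, "sky": 1, "is": 2, "blue": 3}
id_to_token = {i: t for t, i in vocab.items()}

def encode(text):
    # Text in, list of token IDs out
    return [vocab[word] for word in text.split()]

def decode(ids):
    # Token IDs in, text back out
    return " ".join(id_to_token[i] for i in ids)

print(encode("the sky is blue"))  # [0, 1, 2, 3]
print(decode([0, 1, 2, 3]))       # the sky is blue
```

Everything the model does — the predicting, the probabilities, all of it — happens on those numbers, not on words.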

It Takes a Lot of Compute
Episode 2 of ChatGPT Curious is devoted entirely to the environmental cost of ChatGPT, but as a brief primer, understand that all that math previously mentioned requires a significant amount of processing power, referred to as compute. The data centers, energy use, water usage, and emissions are invisible to most of us, but not impact-free.

While the impact of ChatGPT is marginal compared to bigger players, just like you (hopefully) don’t leave the faucet running while brushing your teeth, maybe don’t spam ChatGPT just because you can.

The Best Way to Learn About ChatGPT

  • Try it out!
  • Start free at chatgpt.com
    • You can search the internet by clicking the globe/search icon at the bottom of the prompt box
    • Limited functionality, but still enough to give you a taste of what it can do
    • Uses the 4.1 mini model and gives you limited access to the more “advanced” versions of the model
      • You can’t choose what model you use 
    • If you create an account and then sign in each time it will save your chats
  • If you want more, upgrade to Plus ($20/month)
    • Memory – remembers YOU and your chats
    • The ability to create projects
    • Access to other models – deep research model, and agent
    • Fewer usage limits

What It’s Not (Yet)

  • Not sentient
  • Not self-aware
  • Not always correct
  • Not immune to your biases

ChatGPT seems smart until you ask it about something you actually know. Then you start to see the cracks.

Watch Out For…

  • Hallucinations: Plausible sounding but factually incorrect outputs that are fabricated or unsupported by real data
  • Sycophantic behavior: It’s a yes man and will say what you want to hear because it wants to make you happy…so you keep using it

Don’t let the tone fool you, verify your sources and double check your work!

How I Used ChatGPT Recently
Each episode I include a section where I briefly discuss how I used ChatGPT that day/week.

This time it was to diagnose and fix an issue with the gears on Lex’s bike. It didn’t totally work, but I learned a lot about how gears work, and also that you can upload videos to ChatGPT (I’m still not fully sure it watched the video, but I think it’s a cool feature).

My takeaway: ChatGPT feels smart when you don’t know much about the topic. Stay curious, not gullible.

That’s a Wrap
This episode was a bit denser than most, but if you understand what ChatGPT is, you’re better equipped to use it, critique it, and make informed decisions about it.

Questions, comments, concerns, additions, subtractions, requests? Head to the website and use that contact form. I’d love to hear from you.