The human brain has about 86 billion neurons, which form roughly 100 trillion synaptic connections.
That 100 trillion is a first-order proxy for model “weights”.
Current SOTA AI models have on the order of ~2 trillion parameters (model weights).
For example, OpenAI’s GPT 5.2 model is estimated to have 1-5 trillion parameters, while GPT-3 had 175 billion. Meta’s Llama 4 Behemoth (MoE) has nearly 2 trillion parameters.
Thus AI models are now closer to human brains: only a ~50x difference by this crude count.
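The ~50x gap is one line of arithmetic, using the rough counts quoted above:

```python
# Back-of-envelope: brain synapses vs. model parameters (rough figures from above).
human_synapses = 100e12  # ~100 trillion synaptic connections
model_params = 2e12      # ~2 trillion parameters, a rough SOTA estimate

ratio = human_synapses / model_params
print(f"Brain has ~{ratio:.0f}x more synapses than the model has parameters")
# → ~50x
```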
However, the human brain is not just 100 trillion synaptic connections:
- A synapse isn’t a single scalar. It has multiple properties (strength, short-term plasticity, release probability, receptor composition, timing effects, etc.). So the raw physical degrees of freedom per synapse could be >1.
- Not all synapses are independently controllable. Biology adds constraints and correlations (developmental wiring rules, local learning, neuromodulators, homeostasis). That means the effective independent DoF is likely lower than “#synapses × variables”.
- The brain has lots of additional state beyond synapses: neuron membrane potentials, ion channel states, neuromodulator concentrations, glial regulation, oscillations, etc. That adds dynamic DoF that don’t map cleanly to “parameters” the way a static model’s do.
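The first two points pull in opposite directions, and a toy calculation shows how. The multipliers below (`vars_per_synapse`, `independence_factor`) are purely illustrative assumptions, not measurements:

```python
# Hypothetical sketch of "effective degrees of freedom" in the brain.
# All multipliers are illustrative assumptions, not measured values.
synapses = 100e12

vars_per_synapse = 5       # assume ~5 tunable physical properties per synapse
independence_factor = 0.1  # assume only ~10% are independently controllable

effective_dof = synapses * vars_per_synapse * independence_factor
print(f"Effective DoF under these assumptions: ~{effective_dof:.1e}")
```

Depending on the assumed multipliers, the effective count can land above or below the raw 100 trillion, which is exactly why the synapse count is only a first-order proxy.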
Another thing to keep in mind is how energy-efficient a human brain is.
A typical adult brain runs on about 20 W.
What does it take to operate a SOTA model?
ChatGPT gives me this:
“For a dense FP16 2T model, 32 H200 GPUs is the ‘it loads and runs’ baseline, while 48–64+ GPUs is where you start getting reasonable headroom and throughput, depending on your target context and requests/sec.”
So about 40 kW.
That would be roughly 2,000x the energy consumption of a human brain.
Of course, a human is not just a brain; the whole body at rest runs on roughly 100 W, which still leaves about a 400x gap.
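The energy ratios above reduce to two divisions; the ~100 W whole-body figure is the one added assumption:

```python
# Energy comparison using the figures above.
brain_watts = 20          # typical adult brain
body_watts = 100          # assumption: whole human body at rest
cluster_watts = 40_000    # ~40 kW for a 48-64 H200 deployment

print(f"vs. brain: {cluster_watts / brain_watts:.0f}x")  # → 2000x
print(f"vs. body:  {cluster_watts / body_watts:.0f}x")   # → 400x
```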
GPT 5.2 estimated parameters:
- “Hundreds of billions to >1T” style ranges (very hand-wavy)
- “~1.7–1.8T dense-style estimate” (site-level speculation)
- “~2T–5T” (again, speculation)
