i was speaking with mt, who asked "when are humans going to evolve
to being creatures of more intellect?" adjacently, km believes
that the extinction of learning is inevitable because we will have
physical ai integrations with us at all times (think syed's
glasses/the meta ray-bans or friend/humane ai wearables), reducing
the need to comprehensively understand topics at an intuitive
level. i.e. in a world where humans have low-compute RAG always
accessible, there will be no need to learn and retain information.
unintentionally, short-form content prevalence is already
facilitating this transition by shortening attention spans. even
i, hoping to alleviate the burden on my already weakened LTM,
yearn for a tool that i can speak to verbally and have it
automatically "file" information, tasks, and thoughts in a
low-latency database distributed across all devices, wearables,
etc. (eventually with task-completion capabilities, either by
agentically connecting to public-facing apis or via operator-style
cv). however, though i only intended for this to replace my LTM
for things like biographical data and ungrouped opinions, i now
see how, over time, this obviates the need to intuitively
understand ideas. still, i disagree with km: i do not see humanity
facing an extinction of learning, but instead,
i predict an intellectual speciation (with conversational
accessibility as a wedge).
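the "low-compute RAG always accessible" idea can be sketched minimally. this is a hypothetical illustration, not any real product's api: the store, method names, and keyword-overlap scoring are all invented for the sketch, and real wearables would use embeddings rather than word overlap.

```python
# a minimal sketch of the "low-compute RAG" wearable idea: verbally
# "file" notes, then retrieve them on demand by keyword overlap instead
# of retaining them in long-term memory. all names are hypothetical.
import re

def tokenize(text):
    # crude word-level tokenizer; no embeddings, hence "low-compute"
    return set(re.findall(r"[a-z0-9']+", text.lower()))

class NoteStore:
    def __init__(self):
        self.notes = []

    def file(self, note):
        # "file" a thought instead of committing it to memory
        self.notes.append(note)

    def retrieve(self, query, k=1):
        # rank stored notes by how many words they share with the query
        q = tokenize(query)
        ranked = sorted(self.notes,
                        key=lambda n: len(q & tokenize(n)),
                        reverse=True)
        return ranked[:k]

store = NoteStore()
store.file("mom's birthday is march 14")
store.file("guitar practice: learn barre chords")
print(store.retrieve("when is mom's birthday?"))  # -> ["mom's birthday is march 14"]
```

the point of the sketch: the store remembers so the user doesn't, which is exactly the dependency the rest of this entry worries about.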
i expect a good litmus test for ~"intellect" is the capacity to
engage rationally in existential conversation. my reasoning follows:
even with high-spend ai research, the general consensus is that
the median human is still more intelligent; otherwise it would be
somewhat universally agreed that agi has been achieved. what is
intelligence here? in my model, intelligence is the ability to
process information to form conclusions. i use this definition
to facilitate a direct parallel with artificial intelligence,
which can fundamentally be characterized as a data processing
algorithm. neural nets are trained on finite datasets, but if
benchmarking were done only on training data, these models would
display remarkable accuracy. it seems intuitive to think of
humans as different and not "overfitting" in the same way
because we aren't formally trained on a fixed dataset, but i
expect that we only possess one or two reasoning "layers"
superior to existing models, and with self-reasoning models like
r1 and o3, even this superiority is blurred (is this agi?).
humans perhaps learn these parameters from the world's data more
quickly than these models, but the proposition that we have less
formalized data "training" fails because we are exposed to
millions of sensory data points every second; humans really are
just neural nets trained via physical rl and self-learning. with
this premise, i claim that our day-to-day processing is still
roughly an extension of testing on training data because
patterns propagate, so experiences are never wholly
unique.
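the "testing on training data" claim can be made concrete with a toy sketch (hypothetical, stdlib-only, not a real benchmark): a model that does nothing but memorize its training set scores perfectly on that set and collapses on anything unseen, which is the failure mode the argument above extends to day-to-day human processing.

```python
# toy illustration: a pure-memorization "model" looks remarkably
# accurate when benchmarked on its own training data, and fails to
# generalize to unseen inputs. hypothetical example, not a real model.

train = {(1, 1): 2, (2, 3): 5, (10, 4): 14}   # pairs -> their sum
test = {(6, 7): 13, (3, 3): 6}                # unseen pairs

class Memorizer:
    def __init__(self, data):
        self.table = dict(data)   # store every example verbatim

    def predict(self, x):
        return self.table.get(x)  # no pattern learned, only lookup

def accuracy(model, data):
    return sum(model.predict(x) == y for x, y in data.items()) / len(data)

m = Memorizer(train)
print(accuracy(m, train))  # 1.0 - perfect on training data
print(accuracy(m, test))   # 0.0 - no inference beyond memorized examples
```

real models do generalize partially; the sketch only isolates the memorization limit that the next paragraphs contrast with imagination-as-inference.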
conversely, imagination is inference. i.e. imagination
definitionally involves generating information beyond patterns
of reality. here is where the divergence begins, with
existential conversation as the vessel. discussing the nature of
existence and evaluating forward-facing claims about our
trajectory requires a higher order of critical thinking.
beyond simply processing input sensory information and forming
pattern-based conclusions, it is the thinker's burden to
produce the input information for analysis prior to applying
rational thought to these premises to evaluate whatever
framework is on hand. any human can by nature engage in
basic thought; the capacity for genuinely intellectual
conversation, however, is dwindling. now, generalize this to all hypotheticals - there appears to
be a tangible distinction between basic reasoning and this
"hypothetical-processing" intelligence. in fact, as dangerous conspiracy theorizing can be, it still
aligns with this more meta-level thinking and requires
abstract thought. for simplicity, we can label those capable of just basic
reasoning as "reasoners" and those able and willing to
productively engage in discussions on the nature of value,
religion, morals, belief systems, etc. as
"thinkers".
from here, the divergence unfolds in stages.
first, for comparison in the status quo, we can borrow from
Brave New World, equating thinkers to a superset of the
"alpha" and "beta" groups while reasoners are similar to a
combination of the "beta" and "gamma" classes. there is
overlap, which i defend because there are people that are
borderline and are capable of intellectual thought
sporadically but generally have nothing churning. the cracks
already show. then, the overfitting becomes pervasive and
irreversible, much as overfitting cannot be reversed once a
model has been trained. with instant access to any piece of
information on-demand, we will stop thinking deeply about the
patterns that would have organically led us to that
information. ~ learning the guitar just by playing tabs of
specific songs versus through systematic study and theory.
but, if you learn the guitar song-by-song, unless you play
thousands of songs (and retain learned patterns), making
inferences to new, unlearned songs will be arduous, and
sometimes impossible. generally, with context-specific
information becoming our only consumption, those who lean
heavily on ai will forget how to recognize patterns and draw
inference. short-form content is the other driver.
consumption of this rapid-fire content, whose coverage is
almost stochastic (your preferred content, itself already
widely distributed, plus app-recommended/trending content),
leaves one with (1) a flood of hyperspecific and uncategorized
information and (2) little time to actually process and sort
said information. so,
capacity for inference begins to decline and incipient
speciation is put into motion.
second, the divide accelerates along current gaps in wealth.
even though the cost of compute will go down, i still believe
that the wealthy will amass compute. notwithstanding the ideas
of the "democratization" of ai, i expect that the proportion of
the population that has the technical capacity to work on ai
development and distribution is marginal, so the compute-rich
will retain control. then, they fully control what the
beta/gamma reasoners consume. companies like meta currently
control the content that 500m+ people see daily. the impact is
as follows: these content-distribution algorithms gradually eat
away at the beta/gamma reasoners' belief systems. if people no longer
have the
time or ability to critically think about the
information they "process" (not even processing information
anymore, really) those with current systems of belief will see
them slowly dissipate and others will never develop these
systems at all. with no independently-constructed values to
defer to when one is posed with a decision or dilemma, the herd
mentality that is already increasingly present will become dominant
- but only in the beta/gamma class. so, interacting with km's
initial prediction, i expect that learning won't go extinct
globally, but perhaps for the beta/gamma class. as learning
falls, so will opinions, because opinion and disagreement
are predicated on the capacity for critical thinking. case in
point: [cow], who is arguably a vegetable already and perhaps
among the first to have fallen. now, with opinion and skepticism dwindling, terminal
speciation begins.
i foresee the creation of three classes during this process.
first, traditional thinkers (epitome: mt) who have harnessed
critical thinking and applied it to literature, the arts, and
philosophy. they will find meaning in independent abstract
thought, irreplaceable by any artificial
intelligence. second, builders, who are the technically gifted minority
subset of the population that constructs the tools and systems
used to entertain, control, sustain, and leverage the general
populace. people in this beta group are capable of
understanding systems, innovating, and leading, setting
them apart even if the technical barrier ceases to
exist. i do not think of these two classes as distinct species -
rather, they are one species subject to niche differentiation,
finding commonality in critical thought. however, there will
be a concrete and irreversible divide between the thinker and
builder superclass and the remaining majority of the
population. what was once a beta/gamma class of reasoners who
retained even sparse capacity for intellectual thought will
regress into a fully gamma and even potentially
majority-"delta" population. they will be consumers, generally
averse to thought, and more of a market than contributing
group to the general economy. perhaps the gamma/delta class
will be given ubi of some sort. i say "speciation" not because
i expect a barrier in reproduction (although with gene editing
techniques initially accessible to the wealthy, i cannot rule
this out completely), but instead because i predict the
accessibility of conversation to be the barrier. the
alpha/beta species will be unable to engage in meaningful and
fulfilling conversation with the gamma/delta groups, and the
formation of different intellectual classes will propagate to
new social classes, eventually creating a relatively firm
distinction between the groups.
some people (sd, an) are academically successful and innately
technically proficient, yet incapable of intellectual or deep
thought, and they succumb to the same intellectual overfitting. in
conversation with dq, we concluded that as the value of
academic prowess declines greatly, only intellectual and
emotional capability will matter. the speciation will be
anisotropic, with the divide extending strongly along the line
of intellect but marginal along the academic grain. there will
be academics on both sides. we continued that (dlt, al)-esque thinkers will thrive,
despite their current perceived weaknesses. discussion also
involved theorizing about how to insulate from overfitting.
obviously, the brain must be exercised and trained with
intellectually stimulating tasks (eg: dq annotations) and
reliance on shortcut tools and overstimulation must be
mitigated. however, dq claims that if one can make a clear
distinction between (1) "non-stimulating" or "rote" tasks to
be completed with the freedom of artificial intelligence,
along with limited doses of short-form stimulation and (2)
intellectually demanding and necessary work to which critical
thinking must be applied, there can be a moderate approach
involving both. the only problem i see is that this forces the
individual to bear the burden of accurately determining what is
intellectual or not, which i see as extremely easy to misjudge.
however, this can perhaps be
achieved with strong will and proper reference points, and
maybe the ability to accurately judge what is intellectual/not
and moderate information consumption accordingly can actually
serve as a self-selector to enter the intellectual class.
interesting to see how this plays out, but for now i will
strive to avoid bad influence altogether.