Thread context
4 posts in path
Root
@Revenant@hear-me.social
@johntinker@hear-me.social Yes, but what is this? Public AI systems look smarter and more capable than they actually are. They sound confident, but there is nothing inside that thinks, knows, or wants.
Ancestor 2
@johntinker@hear-me.social
@Revenant@hear-me.social Yes. Knowing what it is not is as important as knowing what it is. The large language model is not those things. This is not a surprise to everyone, but it IS a surprise to many …
Parent
@Revenant@hear-me.social
@johntinker@hear-me.social I appreciate the direction you are taking in thinking about the chatbot as a kind of “language experience.” Since I have built up a fairly large research corpus on AI hallucinations …
I am the engineer at KPIP-LP, a community radio station in Fayette, Missouri. In the wider world I am known as the lead plaintiff in the 1969 U.S. Supreme Court students' rights decision, Tinker v. Des Moines https://en.wikipedia.org/wiki/Tinker_v._Des_Moines_Independent_Community_School_District @howardcountyprogressives @jftf #genocide #Palestine #studentSpring
@johntinker@hear-me.social · Nov 28, 2025
@Revenant@hear-me.social Thank you for your thoughtful and detailed response. We are in agreement about most of this.
In my model, I would say that you are working with ontological reality. This is good and, as I model it, an extension of meso-scale knowledge.
My focus is on the relationship between the individual and the society, and on their co-emergence through evolution. I have been working with modes of understanding, and I find that we are often interested in parallels that seem to exist between systems, even though ontologically we know those parallels are of limited range. My working method relies on the fact that ideas in the corpus are expressed in two ways: either seeking ontological consistency, or seeking homotopies in metaphorical references.
I am working with the metaphorical references, conceived as a system of scale-invariant properties, or ideals: not ontologically connected to reality, but connected to perception.
I recognize the validity of your gentle explanation that perception is not reality, and that metaphors do not add up to reality. At the same time, I am hoping to use two facts, the original recourse to metaphor and the LLM's reliance on statistics alone, to argue that the output of the LLM essentially represents the best metaphorical view of what the reality is understood to be. I see the LLM as an educational tool for communicating what is already known.