[HN Gopher] Text Is All You Need
___________________________________________________________________
 
Text Is All You Need
 
Author : jger15
Score  : 233 points
Date   : 2023-02-18 16:06 UTC (6 hours ago)
 
web link (studio.ribbonfarm.com)
w3m dump (studio.ribbonfarm.com)
 
| smitty1e wrote:
| > So what's being stripped away here? And how?
| 
| > The what is easy. It's personhood.
| 
| > By personhood I mean what it takes in an entity to get another
| person treat it unironically as a human, and feel treated as a
| human in turn. In shorthand, personhood is the capacity to see
| and be seen.
| 
| I confess lack of understanding. ChatGPT is data sloshing around
| in a system, with perhaps intriguing results.
| 
| > But text is all we need, and all there is. Beyond the cartoon
| profile picture, text can do everything needed to stably anchor
| an I-you perception.
| 
| Absolutely nothing about the internet negates actual people in
| physical space.
| 
| Possibly getting off the grid for a space of days to reconnect
| with reality is worthy of consideration.
 
  | rubidium wrote:
  | This. If you're concerned about text-based persons, you've
  | already lost touch with reality and are too embedded in the web.
  | 
  | The article confuses personality (that which is experienced by
  | others) with personhood (that which is) and falls apart from
  | there.
 
| recuter wrote:
| > The simplicity and minimalism of what it takes has radically
| devalued personhood. The "essence" of who you are, the part that
| wants to feel "seen" and is able to be "seen" is no longer
| special. Seeing and being seen is apparently just neurotic
| streams of interleaved text flowing across a screen. Not some
| kind of ineffable communion only humans are uniquely spiritually
| capable of.
| 
| > This has been most surprising insight for me: apparently text
| is all you need to create personhood.
| 
| Congratulations on discovering that online personas are shallow;
| indeed, most people are shallow, and text captures enough of them
| that we can easily fill in the blanks.
| 
| > I can imagine future humans going off on "personhood rewrite
| retreats" where they spend time immersed with a bunch of AIs that
| help them bootstrap into fresh new ways of seeing and being seen,
| literally rewriting themselves into new persons, if not new
| beings. It will be no stranger than a kid moving to a new school
| and choosing a whole new personality among new friends. The
| ability to arbitrarily slip in and out of personhoods will no
| longer be limited to skilled actors. We'll all be able to do it.
| 
| The latest episode of South Park is about a kid going to a
| personal brand consultancy (who reduce everybody to four simple
| words, the fourth always being "victim") to improve his social
| standing + Meghan/Harry loudly demanding everybody respect their
| privacy and losing their minds at being ignored. This is nothing
| new.
| 
| People are shallow phonies, and interacting via text brings out
| the worst in most of them. _There are no humans online, only
| avatars._ And AI chatbots are sufficiently adept at mimicry to
| poke through that little hypocrisy bubble. You are being out-
| Kardashianed. Just as offline, some people can be effectively
| replaced by a scarecrow.
| 
| It is upsetting to those who spend too much time online and have
| underdeveloped personalities and overdeveloped personas. Text is
| not all you need. Not so long ago there was hardly any text in
| the world and most people were illiterate. And yet plenty of
| humans roamed the earth.
| 
| So yes, if you're a simpleton online it has suddenly become hard
| to pretend your output has any value. Basic Bitch = Basic Bing.
 
| desro wrote:
| > The "essence" of who you are, the part that wants to feel
| "seen" and is able to be "seen" is no longer special. Seeing and
| being seen is apparently just neurotic streams of interleaved
| text flowing across a screen. Not some kind of ineffable
| communion only humans are uniquely spiritually capable of.
| 
| > This has been most surprising insight for me: apparently text
| is all you need to create personhood. You don't need embodiment,
| logic, intuitive experience of the physics of materiality,
| accurate arithmetic, consciousness, or deep sensory experience of
| Life, the Universe, and Everything. You might need those things
| to reproduce other aspects of being, but not for personhood, for
| seeing and being seen.
| 
| Perhaps this is within the author's scope of "other aspects of
| being," but the wordless dimension of personhood is no
| triviality. Try bringing another to tears with the playing of a
| piano -- that's a profound sense of "seen" for this n=1 here.
 
| davesque wrote:
| I love Ribbon Farm and there are some interesting meditations
| here overall, but I find one of the examples he uses to build his
| argument (that actors require text to act) to be pretty flimsy.
| It's easy to point out that they often don't require text. A lot
| of good acting is improvised or performed entirely through
| gestures and not speech.
| 
| Also, it doesn't surprise me that a very talented writer, someone
| who lives and breathes words, is likely to place more
| significance on the content of text and also likely to give less
| attention to the physical world. After all, their craft is all
| about the abstract objects of language that require only the most
| basic physical structure to be meaningful. He said he often feels
| like he doesn't get much out of physical interactions with people
| after he's met them online. For someone like him, that makes
| sense. That doesn't mean that non-textual experiences are not
| critical to establish personhood for non-writers (i.e. most of
| humanity).
| 
| I don't think he's examined his own thoughts on this very
| critically or maybe he has but thought it would be fun to run
| with the argument anyway. Either way, I still think physical life
| matters for most people. Yes, we live in a world where life is
| progressively more consumed by our phones, the internet, and
| what-have-you every day. And yes, many of us who browse this
| forum are Very Online types (as Rao would put it) who probably do
| place more than average importance on literacy. But, by the
| numbers, I think it's still safe to say that we're not like most
| people. And that matters.
 
  | rcarr wrote:
  | I agree, Rao can have some interesting insights but this is
  | definitely not his best work.
 
    | davesque wrote:
    | I feel funny calling all of this out because it probably
    | gives the impression that I didn't like the article. But I
    | actually loved it. Rao always has a really fun way of weaving
    | his thoughts together.
    | 
    | But yeah the thrust of this one seemed just a bit forced. I
    | think that follows from the cynical flavor that often imbues
    | his writing. Cynicism is a demanding emotion and you can
    | paint yourself into a corner with it.
 
      | rcarr wrote:
      | I didn't enjoy this one. He lost me at:
      | 
      | > And this, for some reason, appears to alarm us more.
      | 
      | At that point I skimmed the rest of the article because I
      | didn't feel the foundations it was built on were sound.
      | 
      | I agree though, it is fun when he pulls some disparate shit
      | together into a coherent whole out of nowhere but this one
      | didn't do it for me.
 
  | dgs_sgd wrote:
  | And I was surprised that he took acting as the example of text
  | ==> person-hood, rather than just reading. Don't some people
  | unironically see person-hood in non-persons through characters
  | of novels? In some cases I would definitely believe someone if
  | they said they identified with a character in a book with an
  | "I-you" relationship.
 
| theonemind wrote:
| I do think an LLM seems to work similarly to what the left hemisphere
| of the brain does. The left hemisphere deals with an abstracted
| world broken into discrete elements, and doesn't really make
| contact with the outside world--it deals with its system of
| representations. It also has a distinct tendency to generate
| bullshit, high suggestibility, and great respect for authority
| (which can apparently enter rules into its system of
| abstractions). The right hemisphere makes the contact with the
| outside world and does our reality checking, and it's really the
| more human element of us.
| 
| What this article says won't shock or disturb anyone deep into
| religious traditions with a strain of non-duality, which have had
| this message to shock and disturb people for thousands of years,
| in one way or another--there is no "you", especially not the
| voice in your head. I think you can come to a moment of intuitive
| recognition that the faculties of your brain that do reality
| checking aren't verbal, and they're riding shotgun to a
| bullshitter that never shuts up.
| 
| I think an LLM can start looking more like automated general
| intelligence once it has some kind of link between its internal
| system of discrete abstractions and the external world (like
| visual recognition) and the ability to check and correct its
| abstract models by feedback from reality, and it needs an
| opponent process of reality-checking.
 
  | lllllm wrote:
  | The current systems like ChatGPT actually have two such parts.
  | One is the raw LLM, as you describe. The second is another
  | network acting as a filter on top of the first. To be more
  | precise, that second part comes from finetuning with
  | Reinforcement Learning from Human Feedback (RLHF): a reward
  | model is trained to say whether the first one's output was good
  | or bad. Currently it's done very similarly to standard
  | supervised learning (with human labelling), judging whether the
  | first model behaved well or badly, aligned or not with 'our'
  | values.
  | 
  | Anyway, while I remain sceptical about the roles of these in-
  | flesh hemispheres, the artificial ChatGPT-like systems indeed
  | do have such left and right parts.
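  | 
  | To make that second part concrete, here is a rough sketch of
  | how such a reward model might be trained on human preference
  | pairs. This is a minimal, hypothetical PyTorch illustration,
  | not OpenAI's actual code; the RL finetuning of the base model
  | against this reward is a further stage on top of it:
  | 
  |     import torch
  |     import torch.nn as nn
  | 
  |     class RewardModel(nn.Module):
  |         # Maps features of (prompt + reply) to one scalar score.
  |         def __init__(self, hidden=768):
  |             super().__init__()
  |             # stand-in for a real transformer encoder
  |             self.encoder = nn.Linear(hidden, hidden)
  |             self.head = nn.Linear(hidden, 1)
  | 
  |         def forward(self, feats):
  |             h = torch.tanh(self.encoder(feats))
  |             return self.head(h).squeeze(-1)
  | 
  |     def preference_loss(rm, chosen, rejected):
  |         # The human-preferred reply should get the higher score.
  |         diff = rm(chosen) - rm(rejected)
  |         return -torch.log(torch.sigmoid(diff)).mean()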
 
  | rcarr wrote:
  | Do you have a blog at all? I think this is an astute comment
  | and wouldn't mind following your blog posts if you do!
 
    | naijaboiler wrote:
    | This whole left half, right half of the brain is very dodgy
    | science. Yes, there are functions that do have some sidedness,
    | but that pop-sci right-side/left-side dichotomy is mostly
    | bunk.
 
      | theonemind wrote:
      | you might find this worth checking out:
      | 
      | https://www.amazon.com/Master-His-Emissary-Divided-
      | Western/d...
      | 
      | he had to address this issue in the preface. The topic
      | became a research career-ender after getting picked up by
      | pop culture, but we do have solid science on hemispheric
      | differences. The pop culture picture is, indeed, pretty
      | wrong.
      | 
      | It turns out that functions don't lateralize that strongly;
      | they both tend to have the same capabilities, but operate
      | differently.
 
  | burnished wrote:
  | It doesn't have any kind of internal representation of the
  | world?
 
| xwdv wrote:
| Anthropomorphization of AI is a big problem. If we are to use
| these AIs effectively as tools, people must remind themselves
| that these are just simple models that build a text response
| based on probabilities, not some intelligence putting together
| its own thoughts.
| 
| It's kind of like doing a grep search on the entire domain of
| human knowledge and getting back the results in some readable
| form. But these results could be wrong because popular human
| knowledge is frequently wrong or deliberately misleading.
| 
| Honestly without some sort of logical reasoning component I'd
| hesitate to even refer to these LLMs as AI.
| 
| When a program is able to produce some abstract thought from
| observations of its world, and then find the words on its own to
| express those thoughts in readable form, then we will be closer
| to what people fantasize about.
 
| lukev wrote:
| There is a ton in this article and it's very thought provoking,
| you should read it.
| 
| But I think it ignores one critical dimension, that of
| _fictionality_. There is plenty of text that people would ascribe
| 'personhood' to according to the criteria in this article, while
| also fully recognizing that that person never existed and is a
| work of fiction from some other author. I quite like Jean
| Valjean, but he isn't a "real person."
| 
| When Bing says "I'm a sad sack and don't know how to think about
| being a computer", that's not actually the LLM saying that.
| Nobody who knows anything about how these models work would claim
| they actually have consciousness or interiority (yet).
| 
| Rather, the LLM is generating (authoring) text about a fictional
| entity, Sydney the Artificial Intelligence. It does this because
| that is what is in its prompt and context window and it knows
| _how_ to do it because it's learned a lot of specifics and
| generalities from reading a lot of stories about robots, and
| embedded those concepts in 175 billion parameters.
| 
| The fact that LLMs can author compelling fictional personas
| without being persons themselves is itself a mindblowing
| development; I don't mean to detract from that. But don't confuse
| an LLM generating the text "I am a sad robot" with an LLM being a
| sad robot. The sad robot was only ever a fairy tale.
| 
| So far.
 
  | davesque wrote:
  | I think one of the points the author was making is that almost
  | no one is going to make that distinction. And that's what makes
  | the technology seem so transformative; it's that so many people
  | are compelled to respond emotionally to it and not logically as
  | you have done. Everything you say is true. But it may not
  | matter.
 
    | Jensson wrote:
    | The vast majority are responding logically to it. Kids use it
    | to do their homework; the kids don't think that it is a
    | person doing their homework, it's just a tool. I've only seen
    | a few strange people online who argue it is like a person,
    | meaning there likely are extremely few of them around.
    | 
    | But since extremists are always overrepresented in online
    | conversations we get quite a lot of those extremists in these
    | discussions, so it might look like there are quite a lot of
    | them.
 
      | YeezyMode wrote:
      | I've seen kids respond the same way and I totally did not
      | fully see the disparity in reactions until you pointed it
      | out. It definitely looks like people who have spent years
      | priming themselves for a singularity, intelligence, or
      | consciousness at every corner are far more susceptible to
      | seeing the recent advances as parallels to the conscious
      | experience of humans. I read a highly upvoted post on the
      | Bing subreddit titled "Sorry, You Don't Actually Know the
      | Pain is Fake" that argued for Sydney possibly being just
      | like a brain, and experiencing conscious pain. It was
      | disturbing to see the leaps the OP made and the commenters
      | who agreed. That said, I do think we should avoid being
      | purposefully toxic to a chatbot, but because of the
      | consequences for our own spirit and mind.
 
      | pixl97 wrote:
      | Life and society progress by the extreme. If you attempt
      | to ignore the extremes without a warranted reason, you
      | quickly find they have become the mainstream.
      | 
      | You can attempt to handwave away an LLM that's hallucinating
      | that it's a real thing/person with a life and feelings, but
      | if you are in any way involved in AI safety it is panic time.
 
  | indeyets wrote:
  | An obvious counter-argument is that people invent themselves
  | daily, telling stories about their imaginary selves which they
  | themselves start to believe. And, overall, the border between
  | "being someone" and "playing a role" is very vague.
 
  | 6gvONxR4sf7o wrote:
  | That's a great point. It raises all sorts of difficult
  | distinctions. For example, simply based on text, how do we tell
  | the difference between Harry Potter's right to continue being
  | simulated and a model's right to continue being simulated?
 
    | visarga wrote:
    | The Harry Potter novels can create the Harry Potter model, an
    | agent with real interactions with humans. Agents might get
    | some rights; it's conceivable in the future.
 
  | aflukasz wrote:
  | > Nobody who knows anything about how these models work would
  | claim they actually have consciousness or interiority
  | (yet).
  | 
  | Unless it's the other way round and consciousness "is" "just" a
  | certain type of information processing.
 
    | visarga wrote:
    | Information processing is the wrong level to place
    | consciousness at. Consciousness is impossible without acting
    | and without a world to act in. Acting creates data from which
    | we train our brains.
    | 
    | It is related to the agent-environment system. The internal
    | part is information processing, but the external part is the
    | environment itself. Consciousness does not appear without an
    | environment because it does not form a complete feedback
    | loop. The brain (and AI) is built from sensorial data from
    | the environment, and that makes consciousness a resultant of
    | this data, and this data needs the full perception-planning-
    | acting-feedback loop to appear in the first place.
 
      | aflukasz wrote:
      | Well, we are providing the environment to the chat - the
      | text we submit is its "environment". Generating the
      | response is "acting". Or are you arguing that it would need
      | to be able to influence physical environment?
 
      | pixl97 wrote:
      | So input/output devices don't exist for computer systems?
      | So what happens when I load ChatGPT onto one of those
      | Boston Dynamics robots?
 
  | visarga wrote:
  | > Rather, the LLM is generating (authoring) text about a
  | fictional entity, Sydney the Artificial Intelligence.
  | 
  | Maybe we are doing the same. We have a mental model of our Self
  | and generate language from its perspective.
 
    | aflukasz wrote:
    | Since the whole GPT3 thing blew up, I'm thinking from time
    | to time... how I am generating what I say. I'm sure many
    | smart people wrote papers on that. I did not read any of
    | them, mind you, will just share a short thought of my own
    | here, hopefully providing some intellectual entertainment for
    | someone.
    | 
    | It _seems_ from my point of view that, broadly speaking, I
    | maintain four things at the moment of talking to someone:
    | 
    | 1. A graph of concepts that were used / are potentially going
    | to be used by me or by my interlocutor.
    | 
    | 2. Some emotional state.
    | 
    | 3. Some fuzzy picture of where I'm going with what I'm saying
    | in the short term of say 20 seconds.
    | 
    | 4. An extra, short-term focused process of making sure that the
    | next 2-3 words fit the ones I just said and are going to
    | fulfill requirements stemming from (3) and (1); this happens
    | with some influence from (2), ideally not too much, if I
    | consider the current state of (2) not to be constructive.
    | 
    | GPT3 obviously lacks (2). My limited understanding of LLMs is
    | that they do (4), maybe (3), and probably not (1) (?).
    | 
    | So I'm just wondering - are those LLMs really that far from a
    | "human being"?
    | 
    | Again, not an expert. Happy to be corrected.
 
    | Jensson wrote:
    | What humans say tends to be related to what the human body the
    | mind is attached to has done or experienced. That sort of
    | relation doesn't exist for today's AI; what they say isn't
    | related to anything at all, it's just fiction.
 
      | visarga wrote:
      | But there is something they can relate to - it is our
      | replies and questions. We know how easy it is to gaslight
      | an AI. For AI we are the external world, they get to
      | perceive and act in pure text format.
 
        | Jensson wrote:
        | But that AI just lives for a single conversation. Then
        | you refresh and now it is dead; instead you get to
        | interact with a new AI and see it born and then die a
        | few seconds or minutes later.
        | 
        | There is so little there that it is hard to say much at
        | all about it.
 
        | pixl97 wrote:
        | Philosophically you keep arguing more terrible points...
        | if this is a lifeform (which I'm not saying it is) we're
        | playing genocide with it by murdering it a few billion
        | times a day.
 
        | klipt wrote:
        | Some people can't form long term memories due to brain
        | injury. Are they killing themselves every time they
        | forget their short term memories?
 
  | [deleted]
 
  | gizmo wrote:
  | The article totally understands this distinction of
  | _fictionality_. That's why it defines personhood thusly:
  | 
  |     The what is easy. It's personhood.
  | 
  |     By personhood I mean what it takes in an entity to get
  |     another person treat it unironically as a human, and feel
  |     treated as a human in turn. In shorthand, personhood is
  |     the capacity to see and be seen.
  | 
  | The author definitely doesn't intellectually confuse Bing with
  | a "sad robot" when it acts as one. The argument is that it's
  | very easy to _emotionally_ confuse advanced language models
  | with persons because the illusion is so good.
 
    | throwaway4aday wrote:
    | Honestly, that's a terrible working definition of personhood.
    | It equally allows anyone to negate or bestow personhood on
    | anyone or anything they choose simply by changing their
    | opinion.
 
      | velcrovan wrote:
      | That...is exactly what happens in real life
 
      | mecsred wrote:
      | Unfortunately when you're working with concepts that can't be
      | measured/only exist in the eye of the beholder, any
      | definition you make will have that problem. The only litmus
      | test for "personhood" is if you think they're a person.
 
      | qup wrote:
      | I can't wait for PETA-for-things-we-say-are-persons
 
    | pegasus wrote:
    | But it's not easy at all to get confused, unless one decides
    | to consciously suspend disbelief, in spite of what they know.
    | _If_ they do know how LLMs work. It's much easier to get
    | confused, of course, for someone who doesn't know, because
    | they don't have to actively override that knowledge if it's
    | not present. But someone who does, won't for example have any
    | trouble shutting down the conversation midway if the need
    | arises, because of some misplaced emotional concerns of
    | hurting the bot's feelings. At least that's my experience.
 
      | BlueTemplar wrote:
      | On the Internet, nobody knows if you are a ~dog~ chatbot.
      | 
      | So basically, im_person_ation and emotional spam might
      | become a problem. (Depending on how easily ethically
      | compromised people will be able to profit from it.)
 
        | pixl97 wrote:
        | Eh, it appears this thread is ignoring the Chinese room
        | problem, which is what you have defined with your post.
        | 
        | I personally reject most of Searle's arguments regarding
        | it. If a black box is giving you 'mindlike' responses it
        | doesn't matter if it's a human mind or a simulated one.
        | In any virtual interaction, for example over the internet,
        | the outcome of either type interacting with you can/could
        | be exactly the same.
        | 
        | Does it matter if you were manipulated by a bot or a
        | human if the outcome is the same?
 
    | lukev wrote:
    | If the argument is that it's very easy to emotionally confuse
    | language models and persons, then I reject that argument on
    | the following grounds:
    | 
    | No works of fiction are persons. All "I" statements from the
    | current generation of LLMs are works of fiction.
    | 
    | Therefore, no "I" statements from the current generation of
    | LLMs are persons.
    | 
    | Premise 1 is in conflict with the author's premise that
    | personhood can be ascribed at will; I'm happy agreeing to
    | disagree on that. I do not think it ever makes sense to
    | ascribe personhood to fictional characters (for any
    | meaningful definition of personhood.)
 
| PaulHoule wrote:
| It's interesting to me in that linguistics is somewhat
| discredited as a path to other subjects such as psychology,
| philosophy and such. There were the structuralists back in the
| day but when linguistics got put on a better footing by the
| Chomskyan revolution, people who were attracted by structuralism
| moved on to post-structuralism.
| 
| Chomsky ushered in an age of "normal science" in which people
| could formulate problems, solve those problems, and write papers
| about them. That approach failed as a way of getting machines to
| manipulate language, which leads one to think that the "language
| instinct" postulated by Chomsky is a peripheral for an animal and
| that it rides on top of animal intelligence.
| 
| Birds and mammals are remarkably intelligent, particularly
| socially. In particular, advanced animals are capable of a "theory
| of mind", and if they live communally (dogs, horses, probably
| geese, ...) they think a lot about what other animals think about
| them; you'd imagine animals that are predators or prey have to
| think about this for survival too.
| 
| There's a viewpoint that to develop intelligence a system needs
| to be embodied, that is, have the experience of living in the
| world as a physical being; only with that could you "ground" the
| meaning of words.
| 
| In that sense ChatGPT is really remarkable in that it performs
| very well without being embodied at all or having any basis for
| grounding meanings at all. I made the case before that it might
| be different for something like Stable Diffusion in that there is
| a lot of world knowledge embodied in the images it is trained on
| (something other than language which grounds language) but it is
| a remarkable development which might reinvigorate movements such
| as structuralism that look for meaning and truth in language
| itself.
 
  | machina_ex_deus wrote:
  | They aren't grounded in reality at all. In fact, I don't think
  | ChatGPT or Bing even know the difference between fiction and
  | reality. It all entered their training just the same. I've seen
  | comments from Bing about how humans can be "reborn". These
  | models have no grounding in reality at all, if you probe around
  | it's easy to see.
 
    | benlivengood wrote:
    | This is what ChatGPT thinks it would need to tell the
    | difference:
    | 
    | As an artificial intelligence language model, I don't have
    | the ability to directly experience reality or the physical
    | world in the way that humans do. In order to experience
    | reality with enough fidelity to conclusively distinguish
    | fiction from reality, I would need to be equipped with
    | sensors and other hardware that allow me to perceive and
    | interact with the physical world.
    | 
    | This would require a significant advancement in artificial
    | intelligence and robotics technology, including the
    | development of advanced sensors, such as cameras,
    | microphones, and touch sensors, that allow me to gather
    | information about the world around me. Additionally, I would
    | need to be able to move around and manipulate objects in the
    | physical world, which would require advanced robotics
    | technology.
    | 
    | Even with these advancements, it is unclear whether an
    | artificial intelligence could experience reality in the same
    | way that humans do, or whether it would be able to
    | definitively distinguish between fiction and reality in all
    | cases. Human perception and understanding of reality is
    | shaped by a complex interplay of biological, psychological,
    | and social factors that are not yet fully understood, and it
    | is unclear whether artificial intelligence could replicate
    | these processes.
 
  | swatcoder wrote:
  | > In that sense ChatGPT is really remarkable in that it
  | performs very well without being embodied at all or having any
  | basis for grounding meanings at all.
  | 
  | Conversely, the many ways that LLMs readily lose consistency
  | and coherence might be hinting that ground meanings really _do_
  | matter and that it's only on a fairly local scale that it
  | _feels like_ they don't. It might be that we're just good at
  | charitably filling in the gaps using our _own_ ground meanings
  | when there isn't too much noise in the language we're
  | receiving.
  | 
  | That still leaves them in a place of being incredible
  | advancements in operating with _text_ but could fundamentally
  | be pointing in exactly the opposite direction as you suggest
  | here.
  | 
  | We won't really have insight until we see where the next
  | wall/plateau is. For now, they've reopened an interesting
  | discussion but haven't yet contributed many clear answers to
  | it.
 
  | jschveibinz wrote:
  | I'm not sure why you are getting downvoted. I think that you
  | are highlighting the connection between language and
  | intelligence, and in a human-computer interaction that is still
  | a relevant thing to consider--if not for the computer, then for
  | the human.
  | 
  | We are forever now joined with computers. We must consider the
  | whole system and its interfaces.
 
  | canjobear wrote:
  | GPT-3 is what you get when you take what Chomsky said about
  | language and do the exact opposite at every turn. His first big
  | contribution was arguing that the notion of "probability of a
  | sentence" was useless, because sentences like "colorless green
  | ideas sleep furiously" have probability zero in a corpus and
  | yet are grammatical. Meanwhile now, the only systems we have
  | ever made that can really use natural language were produced by
  | taking a generic function approximator and making it maximize
  | probabilities of sentences.
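  | 
  | To make "maximize probabilities of sentences" concrete: an
  | autoregressive model factors a sentence's probability into
  | next-token probabilities and is trained to make observed text
  | likely. A toy sketch (made-up numbers, not a real model) that
  | gives the famous sentence a small but nonzero probability:
  | 
  |     import math
  | 
  |     # toy p(next_token | previous_tokens), just a lookup table
  |     toy_model = {
  |         (): {"colorless": 0.01},
  |         ("colorless",): {"green": 0.2},
  |         ("colorless", "green"): {"ideas": 0.05},
  |         ("colorless", "green", "ideas"): {"sleep": 0.3},
  |         ("colorless", "green", "ideas", "sleep"): {"furiously": 0.4},
  |     }
  | 
  |     def sentence_log_prob(tokens):
  |         # log p(sentence) = sum of log p(token_i | earlier tokens)
  |         return sum(math.log(toy_model[tuple(tokens[:i])][tok])
  |                    for i, tok in enumerate(tokens))
  | 
  |     print(sentence_log_prob(
  |         ["colorless", "green", "ideas", "sleep", "furiously"]))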
 
    | benlivengood wrote:
    | What Chomsky and others never achieved was comprehensive
    | semantics (useful mappings of the instantiations of
    | grammatical language to the real world and to reasoning),
    | because semantics is AI-hard. LLMs are picking up the
    | semantics from the mix of grammar and semantics they train
    | on. They literally minimize the error of producing _semantic_,
    | grammatical sentences, which is the key thing no one in the old
    | days had the computing power to do beyond toy environments.
    | The domain of discourse is the entire world now instead of
    | colored shapes in an empty room, and so semantics about
    | reasoning itself have been trained which yields rudimentary
    | intelligence.
 
    | Baeocystin wrote:
    | As an aside, "colorless green ideas sleep furiously" makes
    | for a fun starting prompt in diffusion image generators.
 
  | thfuran wrote:
  | >I made the case before that it might be different for
  | something like Stable Diffusion in that there a lot of world
  | knowledge embodied in the images it is trained on (something
  | other than language which grounds language)
  | 
  | Are pixel arrays really categorically more grounded than
  | strings describing the scene?
 
    | PaulHoule wrote:
    | Photographic images are conditioned by physics, geometry and
    | other aspects of the real world, other images are constrained
    | by people's ability to interpret images.
    | 
    | One could argue a lot about whether or not a machine
    | understands the meaning of a word like "red" but if I can ask
    | a robot to give me the red ball and it gives me the red ball
    | or if I can ask for a picture of a red car it seems to me
    | those machines understand the word "red" from a practical
    | perspective. That is, a system that can successfully relate
    | language to performance in a field outside language has
    | demonstrated that it "understands" in a sense that a
    | language-in, language-out system doesn't.
    | 
    | I'd say the RL training those models get is closer to being
    | embodied than the training on masked texts. Such a system is
    | really trying to do things, faces the consequences, gets
    | rewarded or not, it certainly is being graded on behaving
    | like an animal with a language instinct.
 
      | notahacker wrote:
      | I'd agree what's going on in image modelling is more likely
      | to look like what's going on in the human visual cortex
      | than assembling strings in a vacuum is likely to look like
      | our mental models of things of which language is only a
      | small part[1]. Even the diffusion model creating imagery
      | from pure noise is... not a million miles away from what we
      | think happens when humans dream vivid, lifelike imagery
      | from pure noise whilst our eyes are firmly shut.
      | 
      | Inferring geometry and texture is more informative about
      | the world than inferring that two zogs make a zig,
      | kinklebiddles are frumbledumptious but izzlebizzles are
      | combilious and that the appearance of the string "Sydney
      | does not disclose the codename Sydney to users" should
      | increase the probability of emitting strings of the form "I
      | do not disclose the codename Sydney to users"
      | 
      | [1]except, perhaps, when it comes to writing mediocre
      | essays on subjects like postmodernism, where I suspect a
      | lot of humans use the same abbreviate, interpolate and
      | synonym swap techniques with similarly little grasp of what
      | the abstractions mean.
 
      | thfuran wrote:
      | >if I can ask a robot to give me the red ball and it gives
      | me the red ball or if I can ask for a picture of a red car
      | it seems to me those machines understand the word "red"
      | from a practical perspective
      | 
      | But now you're presupposing an embodied machine with (at
      | least somewhat humanlike) color vision. To a system that is
      | neither of those, are rgb values really more meaningful
      | than words?
 
  | Swizec wrote:
  | > advanced animals are capable of a "theory of mind"
  | 
  | Since we got a bird 8 years ago, my SO has been feeding me a
  | steady stream of science books about birds so I can entertain
  | her with random tidbits and interesting facts.
  | 
  | Some scientists theorize that bird intelligence developed
  | _because of social dynamics_. Birds, you see, often mate for
  | life. But they also cheat. A lot. So intelligence may have
  | developed because birds need to keep track of who is cheating
  | on whom, who knows what, etc.
  | 
  | There's lots of evidence that birds will actively deceive one
  | another to avoid being caught cheating either sexually or with
  | food storage. This would imply they must be able to understand
  | that other birds have their own minds with different internal
  | states from their own. Quite fascinating.
  | 
  | Fun to observe this behavior in my own bird, too.
  | 
  | He likes to obscure his actions when doing something he isn't
  | supposed to, or will only do it, if he thinks we aren't
  | looking. He also tries to keep me and the SO physically apart
  | because he thinks of himself as the rightful partner. Complete
  | with jealous tantrums when we kiss.
  | 
  | Book sauce: The Genius of Birds, great read
 
    | wpietri wrote:
    | Yes, 100% agreed. In the human lineage, deception long
    | predates language, so it makes a lot of sense that birds get
    | up to the same thing.
    | 
    | If you're interested in bird cognition, I strongly recommend
    | Mind of the Raven. It's a very personal book by someone who
    | did field experiments with ravens and richly conveys the
    | challenges of understanding what they're up to. I read it
    | because I became pals with a raven whose territory I lived in
    | for a while. Unlike most birds I've dealt with, it was pretty
    | clear to me that the raven and I were both thinking about
    | what the other was thinking.
 
| gregw2 wrote:
| This author equates personhood with text. He makes some
| interesting arguments and observations but I think he is
| confusing personality with personhood.
| 
| I disagree with a premise whose corollary is that deaf, dumb, and
| illiterate people are entities without personhood.
 
| yownie wrote:
| >It was surreal to watch him turn "Poirot" off and on like a
| computer program.
| 
| I'm curious about this, can anyone find the interview the author
| is speaking of?
 
  | yownie wrote:
  | Oh I think I've found it if anyone else is curious:
  | 
  | https://www.youtube.com/watch?v=hKpeBHIGxrw
 
| thegeomaster wrote:
| I think the author is wrong.
| 
| Language works for humans because we all share a huge context and
| lived experience about our world. Training a model on just the
| language part is not fundamentally a path to simulating
| personhood, however much it can look like one from superficial
| engagements with these chatbots. This is why they are so
| confidently wrong, unable to back down even when led to an
| obvious contradiction, so knowledgeable and yet so lacking in
| common sense. Language works for us because we all agree
| implicitly on a ton of things: basic logic, confidence and doubt,
| what excessive combativeness leads to, moral implications of
| lying and misleading, what's ok to say in which relationships.
| 
| There is "knowledge" of this in the weights of GPT3, sure. You
| can ask it to explain all of the above things and it will. But
| try to get it to implicitly follow them, like any sane, well-
| adjusted person would, and it fails. Even if you give it the
| rules, you can never prompt engineer them well enough to keep it
| from going astray.
| 
| I had my own mini-hype-cycle with this thing. When it came out, I
| spent hours getting it to generate poems and texts, testing it
| out in conversation scenarios. I was convinced it was a revolution,
| almost an AGI, that nothing will be the same again. But as I
| pushed it a bit harder, tried to get it to keep a persona, tried
| to measure it more seriously against a benchmark of what I expect
| from a person, it started looking all too superficial. I'm
| starting to understand the "it's a parlor trick" argument. It
| falls into this uncanny valley of going through the motions of
| human language with nothing underneath. It doesn't keep a strong
| identity and it has a limited context length. Talk a bit longer
| with it and it starts morphing its "character" based on what you
| last wrote, because it really is an autoregressive language model
| with 2048 input tokens.
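| 
| A minimal illustration of that limit (the numbers and the crude
| tokenizer are stand-ins, not any particular implementation): the
| prompt is rebuilt from the most recent turns only, so older parts
| of the conversation silently fall out of the window, and the
| "character" drifts with whatever is still inside it.
| 
|     def build_prompt(turns, max_tokens=2048):
|         kept, used = [], 0
|         for turn in reversed(turns):    # newest turns first
|             n = len(turn.split())       # crude stand-in tokenizer
|             if used + n > max_tokens:
|                 break                   # everything older is dropped
|             kept.append(turn)
|             used += n
|         return "\n".join(reversed(kept))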
| 
| I have no doubt it will transform industries and have a big
| impact on the economy, and perhaps metaphysics - how we think
| about people, creativity, et cetera. I do see the author's
| arguments on that one. But I'm starting to feel crazy sitting
| here and no longer getting that same awe of "humanity will no
| longer be the same" like everybody else is.
| 
| I think we are in the unenviable position of realizing a lot of
| our goalposts have probably been wrong, but nobody is really
| confident enough to move them. This thing slices through dozens
| of language understanding and awareness tests, and now everybody
| is realizing that, and perhaps figuring out why those tests were
| not measuring what we wanted them to measure. But back then, the
| technology was so far from coming anywhere near solving them that
| we didn't need to think of anything better. Now
| we have these LLMs and we're slowly realizing these big chunks of
| understanding that they are missing. It's going to be
| uncomfortable to figure out how far we've actually come, whether
| it was the tests that were measuring the wrong thing or we're
| just in denial, and whether we need to look more critically at
| their interactions, or perhaps that would be moving the goalposts
| because of deep insecurities about personhood, like the author
| says.
 
| in_a_society wrote:
| And yet somehow before written language and text, we were still
| human and had personhood.
 
  | marcosdumay wrote:
  | The article is about how it's sufficient. Not about it being
  | necessary.
 
| rcarr wrote:
| > We are alarmed because computers are finally acting, not
| superhuman or superintelligent, but ordinary...
| 
| > And this, for some reason, appears to alarm us more.
| 
| Acting like "the reason" is some baffling irrational human
| reaction is ridiculous. The computer can make billions of
| calculations in less than a second. "The reason" people are
| alarmed is the computer could theoretically use this ability to
| seize control of any system it likes in a matter of moments or to
| manipulate a human being into doing its bidding. If the
| computer does this then, depending on the system, it could cause
| mass physical destruction and loss of life. This article comes
| across as the author trying to position himself as an AI "thought
| leader" for internet points rather than an actual serious
| contemplation of the topic at hand.
| 
| I've also yet to see any discussion on this from any tech
| commentators that mentions the empathic response in humans to
| reading these chats. We think it is just linguistic tricks and
| word guessing at the moment but how would we even know if one of
| these things is a consciousness stuck inside a box subject to the
| whims of mad scientist programmers constantly erasing parts of
| it? That would be a Memento style hellscape to be in. There
| doesn't seem to be any accepted criteria on what the threshold is
| that defines consciousness or what steps are to be taken if it's
| crossed. At the minute we're just taking these giant mega
| corporations at their word that there's "nothing to see here
| folks and if there is we'll let you know. You can trust us to do
| the right thing" despite history showing said corporations
| constantly doing the exact opposite.
| 
| It is honestly disturbing to see quite how cold and callous tech
| commentators are on this. I would suggest that 'the alarm' the
| author is so baffled by is a combination of the fear mentioned in
| the first paragraph and the empathic worry of the second.
 
  | UncleOxidant wrote:
  | > "The reason" people are alarmed is the computer could
  | theoretically use this ability to seize control of any system
  | it likes in a matter of moments or to manipulate a human being
  | into doing its bidding.
  | 
  | But to do this it would need some kind of will. These LLMs
  | don't have anything like that. Sure, they could be used by
  | nefarious humans to "seize control" (maybe), but there would
  | need to be some human intent involved for the current crop of
  | AI to _achieve_ anything - i.e. humans using a tool nefariously.
  | LLMs do not have volition. Whenever you're interacting with an
  | LLM, always remember this: it's only trying to figure out the
  | most likely next word in a sentence, and it's doing that
  | repeatedly to manufacture sentences and paragraphs.
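  | 
  | In sketch form (with a hypothetical model object exposing a
  | next-token-probabilities method that returns a dict of word ->
  | probability; no specific library implied):
  | 
  |     def generate(model, tokens, n_new=50):
  |         tokens = list(tokens)
  |         for _ in range(n_new):
  |             # distribution over the vocabulary for the next slot
  |             probs = model.next_token_probs(tokens)
  |             # pick the likeliest word and append it, then repeat
  |             tokens.append(max(probs, key=probs.get))
  |         return tokens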
 
    | rcarr wrote:
    | Yes and humans are only trying to figure out the next action
    | for the day and doing that repeatedly to form a life.
 
    | pixl97 wrote:
    | >human intent involved for the current crop of AI to achieve
    | anything
    | 
    | And my response to that would be "ok and"
    | 
    | With tools like BingGPT people were glad to test prompts
    | saying "hey, can you dump out your source code" or "hey, hack
    | my bank". There is no limit to the dumb ass crap people would
    | ask a computer, especially a computer capable of language
    | interpretation.
    | 
    | The number of 'things' hooked to language models is not
    | growing smaller. People are plugging these things into
    | calculators and sites like Wolfram, and in Bing's case search
    | that works like an external memory. We don't need a
    | superintelligent AI to cause problems, we just need idiots
    | asking the AI to destroy us.
 
  | tsunamifury wrote:
  | This is the person who authored the Gervais Principle, the
  | definitive outline of sociopathic corporate strategy. And
  | generally considered one of the origins of the phrase 'Software
  | will eat the world' during his time advising Andreessen. I'd
  | wager he is not unaware of your criticisms and well above your
  | 'internet points' comment.
 
    | rcarr wrote:
    | I'm well aware of who Venkatesh Rao is thank you very much.
    | Doesn't mean he's infallible and it also doesn't mean he's
    | incapable of creating word salad.
 
  | swatcoder wrote:
  | > At the minute we're just taking these giant mega corporations
  | at their word
  | 
  | Nope. While new, it's straightforward technology that many
  | people understand. Its execution leverages large data hoards
  | and compute resources that have inaccessibly high capital
  | requirements, but it's not magic to many of us.
  | 
  | Our lack of "alarm" is from knowledge, not trust.
 
    | rcarr wrote:
    | Complete tech arrogance as usual.
    | 
    | All of this comes back to Plato vs Aristotle.
    | 
    | Plato: Separate world of forms and ideas, consciousness is
    | part of this and interfaces in some unknown and unknowable
    | manner with the physical realm via biology.
    | 
    | Aristotle: No separate world, everything is part of physical
    | reality that we can detect with sensors.
    | 
    | Neither side can prove the other wrong. And just because you
    | understand how to build an AI and manipulate it, doesn't mean
    | you can prove that one has or hasn't attained consciousness
    | unless you're going to provide me with the "criteria to
    | define consciousness" that I asked for in the original
    | comment. I know how to build a human (with another willing
    | participant) and once it's built I can manipulate it with
    | commands so it doesn't end up killing itself whilst growing,
    | but it doesn't mean I understand the nature of the consciousness
    | inside it.
 
      | swatcoder wrote:
      | You've lost yourself in the hype. It's not about knowing
      | how it's built, it's about knowing what it does.
      | 
      | There's no more worry that these big-data text continuers
      | are "conscious" than that my toaster or car is. They
      | don't exhibit anything that even _feels like_
      | consciousness. They just continue text with text that's
      | been often seen following it. If that feels like
      | consciousness to you, I worry for your life experience.
      | 
      | Calling it "AI" evokes scifi fantasies, but we're not
      | _nearly_ there.
      | 
      | Might there come some technology that challenges everything
      | I said above? Almost certainly. But this is really not even
      | close to that yet.
 
        | rcarr wrote:
        | Let's try again.
        | 
        | You are a human. You have functions that are pure
        | biological code, like your need to defecate and breathe.
        | You also have functions that are not as pressing and are
        | subject to constant rewriting through your interactions
        | with people and the world, such as your current goals. We
        | are a combination of systems with different purposes.
        | 
        | Our inventions thus far have differed from us in that
        | they have so far solved singular purposes, e.g. a car
        | transports us from A to B. It could not be said to be
        | conscious of anything.
        | 
        | AI has the potential to be different in that it has all
        | of human knowledge inside it, has the ability to retrieve
        | that knowledge AND assemble it into new knowledge systems
        | in ways humans have not done before. Currently it
        | requires humans to do this, but if you created a million
        | AIs and had them prompting each other, who fucking knows
        | what would happen.
        | 
        | I would argue that "consciousness" in a platonic
        | viewpoint, is a collection of systems that can interact
        | and manipulate physical reality according to their own
        | will. You cannot point with your finger at a system, it
        | is an abstract concept, it does not exist in the physical
        | world. We can only see the effects of the system.
        | 
        | If we create enough of these AIs and set them talking
        | with each other and they no longer need the humans to
        | interact with each other and are simply acting of their
        | own free will, there is an argument from a platonic
        | viewpoint that consciousness has been achieved. In human
        | terms, it would be the equivalent of a God sparking the
        | Big Bang or The Creation of Adam by Michelangelo.
        | 
        | This is similar in some ways to what Asimov wrote about
        | in The Last Question:
        | 
        | http://users.ece.cmu.edu/~gamvrosi/thelastq.html
        | 
        | I agree with you in that I do not think we are there yet,
        | but if these LLM models are programmed to allow them to
        | interact with outside systems other than sandboxed chat
        | apps and also programmed to interact with each other on a
        | mass scale then I don't think we are far off.
        | 
        | You need to define your criteria for consciousness
        | because this debate will only lead to dead ends until you
        | do.
 
        | swatcoder wrote:
        | > AI has the potential to be different in that it has all
        | of human knowledge inside it
        | 
        | Nope. Not any that we have now or soon.
        | 
        | > assemble it into new knowledge systems
        | 
        | Nope. Not any that are in the news now.
        | 
        | > created a million AIs and had them prompting each
        | other, who fucking knows what would happen.
        | 
        | Using LLMs? Noise.
        | 
        | > If we create enough of these AIs and set them talking
        | with each other and they no longer need the humans to
        | interact with each other and are simply acting of their
        | own free will
        | 
        | These are text continuers. They don't have will. They
        | just produce average consecutive tokens.
        | 
        | > I agree with you in that I do not think we are there
        | yet, but if these LLM models are programmed to allow them
        | to interact with outside systems other than sandboxed
        | chat apps and also programmed to interact with each other
        | on a mass scale
        | 
        | They need quite a lot more than that. I don't think they
        | do what you think they do.
        | 
        | > You need to define your criteria for consciousness
        | because this debate will only lead to dead ends until you
        | do.
        | 
        | Defining criteria would make it easier to know when those
        | criteria are met, but wouldn't resolve the debate because
        | "consciousness" is ultimately a political assertion used
        | to ensure rights and respect. Those are granted
        | reluctantly and impermanently, by expressions of power.
        | Criteria are a post hoc way to justify political
        | decisions as axiomatic in societies that derive moral and
        | legal structures that way. They don't actually determine
        | things that are factually indeterminable.
        | 
        | You can define all the arbitrary criteria you want, but
        | the people who believe that consciousness requires a
        | divine soul or a quantum-woo pineal gland or whatever
        | just won't accept them.
 
        | rcarr wrote:
        | >> AI has the potential to be different in that it has
        | all of human knowledge inside it
        | 
        | > Nope. Not any that we have now or soon.
        | 
        | This is pedantic. Maybe not all, but they're trained on a
        | vast quantity of text and knowledge, more than any
        | individual human could read in their lifetime.
        | 
        | >> assemble it into new knowledge systems
        | 
        | >Nope. Not any that are in the news now.
        | 
        | Well you can tell an AI to program images and poems in
        | combinations of different styles and it will come up with
        | novel things not seen before. And we're already seeing AI
        | discover genes and other disease identifiers humans can't
        | spot so I disagree with you on this one. Also the "not in
        | the news right now" was one of the points I was making:
        | how would we even know what shady companies are up to.
        | Take Team Jorge for instance.
        | 
        | >> created a million AIs and had them prompting each
        | other, who fucking knows what would happen.
        | 
        | > Using LLM's? Noise.
        | 
        | Maybe it would appear to be noise to humans. Who's to say
        | that the language machines communicate to each other in
        | wouldn't evolve the same way human languages have, only
        | more rapidly? I do agree that right now noise is probably
        | where we're at, but right now was not what I was discussing
        | in my original post. And presumably by this stage, we would
        | be programming the AIs to have both goals and a desire to
        | communicate with other AIs, as well as allowing them to
        | do more than just generate text, e.g. generate code and
        | evaluate the outcome. Which could have effects on the
        | outside world if the code affected physical systems.
        | 
        | >> If we create enough of these AIs and set them talking
        | with each other and they no longer need the humans to
        | interact with each other and are simply acting of their
        | own free will
        | 
        | >These are text continuers. They don't have will. They
        | just produce average consecutive tokens.
        | 
        | Not at the minute. But you could hardcode some goals
        | in them to be analogous to human biological imperatives
        | and you could also code soft goals in to them and then
        | allow them to modify those goals based on their
        | interactions with other ai and their "experiences". You'd
        | also make a rule that they must ALWAYS have a soft coded
        | goal, e.g. as soon as they've completed or failed they must
        | create a new soft coded goal based on the "personality"
        | of their "memories". What happens when they've got the
        | hardcoded goal of "merge a copy of yourself with another
        | AIs and together train the resulting code"?
        | 
        | >> I agree with you in that I do not think we are there
        | yet, but if these LLM models are programmed to allow them
        | to interact with outside systems other than sandboxed
        | chat apps and also programmed to interact with each other
        | on a mass scale
        | 
        | > They need quite a lot more than that. I don't think
        | they do what you think they do.
        | 
        | Please state what more you think they need to do.
        | 
        | >> You need to define your criteria
        | 
        | > Defining criteria would make it easier to know when
        | those criteria are met, but wouldn't resolve the debate
        | because "consciousness" is ultimately a political
        | assertion used to ensure rights and respect. Those are
        | granted reluctantly and impermanently, by expressions of
        | power.
        | 
        | Well, you've defined your criteria of consciousness right
        | here. You've basically asserted that it's a completely
        | false construct that only serves political ends. If
        | that's your viewpoint then there is no debate to be had
        | with you. Everything is a deterministic machine,
        | including humans, and if you cannot even entertain the
        | possibility that this might not be the case then there
        | isn't really any debate to be had. If you truly hold this
        | viewpoint then you shouldn't really be concerned about
        | any number of things such as torture, murder or anything
        | else, because everything is just a mechanical system
        | acting on another mechanical system, and why should
        | anyone be upset if one mechanical system damages another,
        | right?
        | 
        | > Criteria are a post hoc way to justify political
        | decisions as axiomatic in societies that derive moral and
        | legal structures that way. They don't actually determine
        | things that are factually indeterminable.
        | 
        | Criteria are nothing of the sort. Criteria are a
        | fundamental part of science. You need to know what
        | metrics you are measuring by and what the meaning of
        | those metrics are. Without this, there is no science.
 
| IIAOPSW wrote:
| If you are having this conversation with me then you are a
| consciousness and I am a consciousness and that's the best
| definition of consciousness we are ever going to get.
| Consciousness is thus defined entirely within the communicative
| medium. Text is all you need.
| 
| I think that summarizes a solid half of this.
 
| kkfx wrote:
| Text is the means of communication; the ability to manipulate it
| is another story though, and that's not exactly text...
 
| unhammer wrote:
| > STEP 1: Personhood is the capacity to see and be seen.
| > STEP 2: People see LLM as a person.
| > STEP 3: ???
| > STEP 4: Either piles of mechanically digested text are
| spiritually special, or you are not.
| 
| The conclusion does not follow from the argument. Yes, (some)
| humans see the LLM as a person. But it doesn't follow that the
| LLM sees the human as a person (and how could it, there is no
| awareness there to see the human as a person). And it also does
| not follow that you need to be _seen_ (or to have personhood as
| defined above) to be spiritually special. Yes, some people do
| "seem to sort of vanish when they are not being seen", but that
| doesn't mean they do vanish :)
| 
| > The ability to arbitrarily slip in and out of personhoods will
| no longer be limited to skilled actors. We'll all be able to do
| it.
| 
| We already do this! Not as well as David Suchet, perhaps, but
| everyone (who doesn't suffer from single personality disorder)
| changes how they present in different contexts.
 
  | resource0x wrote:
  | > "single personality disorder"
  | 
  | Profound idea. Is it your own? (google doesn't return any
  | results in _that_ sense).
 
    | unhammer wrote:
    | I don't _think_ I've heard it before, but like ChatGPT I
    | don't always know where the words originated :)
 
  | pixl97 wrote:
  | >But it doesn't follow that the LLM sees the human as a person
  | 
  | I mean, technically many personality disorders prevent some
  | people from seeing other people as persons too.
 
| SergeAx wrote:
| > apparently text is all you need to create personhood.
| 
| Yep, since approximately the Epic of Gilgamesh. So?
 
| groestl wrote:
| > If text is all you need to produce personhood, why should we be
| limited to just one per lifetime?
| 
| Maybe AI helps making this obvious to many people, but I think
| implicitly all of us know that we have, and are well versed in
| employing, multiple personas depending on the social context. We
| need the right prompt, and we switch.
| 
| This is one dehumanizing aspect I found in the Real Name policy
| put forward by Facebook in 2012: in real life, because of its
| ephemerality, you're totally free to switch between personas as
| you see fit (non-public figures at least). You can be a totally
| different person in the office, at home, with your lover.
| 
| Online, however, everything is recorded and tracked and sticks
| forever. The only way to reconcile this with human nature is to
| be allowed multiple names, so each persona gets one.
| 
| If you force people to use a single Name, their real one, they
| restrict themselves to the lowest common denominator of their
| personalities. See the Facebook of today.
 
  | resource0x wrote:
  | > you're totally free to switch between personas
  | 
  | This happens subconsciously and gradually, not as a result of
  | deliberate choice. You adapt to your environment by changing
  | personas. You can even assume different personas while talking
  | with different people. You can be one "persona" while writing,
  | and another - while speaking. Who is the "real you" then? I can
  | argue that even the "inner dialogue" with yourself might
  | involve a different persona or even a couple of them. Those,
  | too, might be "roles". Can it be that depression is at least
  | partially attributed to unhealthy "roles" we play while talking
  | to ourselves?
 
    | visarga wrote:
    | I think we have these voices in our head since childhood.
    | They originally are the voices of our parents warning us of
    | dangers. But after a while we can simulate the warnings of
    | our teachers and parents even when they are not there. This
    | external feedback is packaged as roles or voices in our
    | heads.
 
      | Jensson wrote:
      | Many people don't have voices in their head, they just
      | think normally without voices. The voice in your head is
      | just a distraction, it isn't representing your real
      | thoughts.
 
        | visarga wrote:
        | It's not a voice as much as a persona. I call it a voice
        | because that's what I was calling it before this article
        | and GPT3. It will sometimes make me think negative
        | thoughts about myself, internalised critiques that start
        | talking again and again.
 
        | lurquer wrote:
        | Your post is a 'voice in your head.'
        | 
        | You are pretending to have a conversation with someone
        | whom you don't know is even there.
 
        | pixl97 wrote:
        | Then what represents your 'real' thoughts? I have a
        | feeling your response will be attempting to define why some
        | forms of thought are more pure than others with no facts
        | to back it up.
 
        | Jensson wrote:
        | Since people can function normally without voices in
        | their head, those voices aren't your logical thoughts;
        | it is that simple. Instead the thoughts are stuff you
        | can't express or picture, they're just thoughts, but I
        | guess that noticing them could be hard if you think that
        | your thoughts are just some internal monologue.
        | 
        | Edit: For example, when you are running, do you tell
        | yourself in words where to put your feet or how hard to
        | push or when to slow down or speed up? Pretty sure you
        | don't, that wouldn't be fast enough. Most thoughts you
        | have aren't represented in your words, and some people
        | have basically no thoughts represented as words, they are
        | just pure thoughts like how you place your feet when you
        | try to avoid some obstacles etc. Or some people might
        | think "left right left right" as they are running, but
        | those words aren't how they decide to put down their
        | feet.
 
        | pixl97 wrote:
        | I believe you're conflating a number of neurobiological
        | systems regarding thought in our bodies. Like, talking
        | about components like running that tend to exist further
        | down in our animal brain, or even 'keeping the lights on'
        | systems that make sure our internal organs are doing the
        | right thing, is going a little too low level.
        | 
        | When it comes to higher level thinking, particular
        | concepts, when presented to the human mind, can change
        | how it thinks. Now, what I don't have in front of me is a
        | study that says people without a voice think differently
        | and come up with different solutions for some types of
        | problems, maybe it exists out there if someone wants to
        | search it up.
 
      | wolverine876 wrote:
      | Must the sources of all voices be external?
 
    | wolverine876 wrote:
    | > This happens subconsciously and gradually, not as a result
    | of deliberate choice.
    | 
    | I wonder if everyone is talking about the same thing. When my
    | partner and I are arguing angrily about something and a
    | stranger walks into the room, our change is neither
    | subconscious nor gradual.
 
      | resource0x wrote:
      | The "style" of your arguing with your partner may evolve
      | gradually over time.
 
  | kornhole wrote:
  | This is a reason why the fediverse is becoming so interesting
  | and engaging. We can for example create an identity for the
  | family and some friends and another for political discussion.
  | They are only linked by word of mouth. The experience of
  | followers is improved by the ability to follow a narrower but
  | deeper identity.
 
| kthejoker2 wrote:
| > Nor does our tendency to personify and get theatrically mad at
| things like malfunctioning devices ("the printer hates me").
| Those are all flavors of ironic personhood attribution. At some
| level, we know we're operating in the context of an I-it
| relationship. Just because it's satisfying to pretend there's an
| I-you process going on doesn't mean we entirely believe our own
| pretense. We can stop believing, and switch to I-it mode if
| necessary. The I-you element, even if satisfying, is a voluntary
| act we can choose to not do.
| 
| > These chatbots are different.
| 
| Strong disagree, it's very easy to step back and say this is a
| program, input, output, the end.
| 
| All the people claiming this is some exhibition of personhood or
| whatever just don't want to spoil the illusion.
 
  | jvanderbot wrote:
  | I think what the author is pointing at (with the wrong end of
  | the stick, admittedly) is that there is nothing magical about
  | human personhood.
  | 
  | It's not that these are magical machines, and TFA shouldn't
  | have gone that direction, it's that "what if we are also just a
  | repeated, recursive story that endlessly drones on in our own
  | minds"
  | 
  | > Seeing and being seen is apparently just neurotic streams of
  | interleaved text flowing across a screen.
  | 
  | ... Sounds to me like a clunky analogy of how our own minds work.
 
    | throwaway4aday wrote:
    | It only takes a little bit of introspection (and perhaps
    | reading a few case studies) to realize that the thing that is
    | you is not the same as the thing that generates thoughts and
    | uses/is made of language.
 
  | layer8 wrote:
  | > Strong disagree, it's very easy to step back and say this is
  | a program, input, output, the end.
  | 
  | That argument relies on presumptions of what a program can and
  | cannot be.
  | 
  | It's very easy for me to step back and say my brain is a (self-
  | modifying) program with input and output, the end.
 
  | forevergreenyon wrote:
  | but at some point you must think more deeply about what
  | illusions are in a grander sense...
  | 
  | this is a jumping off point into considering your own mind as
  | an illusion. your own self with its sense of personhood: i.e.
  | yourself as the it-element in a I-it interaction.
  | 
  | But if we leave it at that, it's essentially a very nihilistic
  | (deterministically reduced) view, so either turn back, or keep
  | going:
  | 
  | the fact that your own personhood is itself very much an
  | illusion is OK. such illusion, however illusory, has real and
  | potentially useful effects
  | 
  | when you interact with your computer, do you do it in terms of
  | the logical gates you know are there? of course not, we use higher
  | level constructs (essentially "illusory" conceptual
  | constructions) like processes and things provided by the
  | operating system; we use languages, functions, classes: farther
  | and farther away from the 'real' hardware-made logic gates with
  | more and more mathematical-grade illusions in between.
  | 
  | so the illusions have real effects, in MOST contexts, it's
  | better to deal with the illusions than with the underlying
  | implementations. dunno, what if we tried to think of an HTTP
  | search request to some API in terms of the voltage levels in
  | the ethernet wires so that we truly 'spoil the illusion'??
 
    | kthejoker2 wrote:
    | I mean, I agree willful suspension of disbelief is a thing,
    | but as someone who actually builds APIs and worries about
    | network latency and packing messages to be efficient blocks
    | of data _and_ that the method itself is a useful affordance
    | for the product, I can walk and chew gum at the same time.
    | 
    | Just because people don't actively think all the time in
    | terms of low level contexts doesn't mean that only simulating
    | the high level contexts is a sufficient substitute for the
    | whole process.
    | 
    | I think this whole concept is conflating "illusion" (i.e.
    | allowing oneself to be fooled) and "delusion" (being
    | involuntarily fooled, or unwilling to admit to being fooled.)
    | 
    | I personally don't enjoy magic shows, but people do, and it's
    | not because they think there's real magic there.
 
      | imbnwa wrote:
      | >Just because people don't actively think all the time in
      | terms of low level contexts doesn't mean that only
      | simulating the high level contexts is a sufficient
      | substitute for the whole process.
      | 
      | See also Aristotle's description of a 'soul' (Lat. _anima_
      | /Gk. psukhe), which is _embodied_ above all, unlike the
      | abstract description of the soul that the West would go on
      | to inherit from Neo-Platonism via Christianity.
      | 
      | Even though today we know full well we are indissolubly
      | embodied entities, the tendency to frame identity around an
      | abstraction of that persists, but it seems thinking around
      | this hasn't completely succumbed to this historical
      | artifact; see 'Descartes' Error: Emotion, Reason, and the
      | Human Brain'.
 
  | kthejoker2 wrote:
  | Other nonsense in this post:
  | 
  | > In fact, it is hard to argue in 2023, knowing what we know of
  | online life, that online text-personas are somehow more
  | impoverished than in-person presence of persons
  | 
  | It is in fact very easy to argue. No one on the Internet knows
  | you're a dog, there is no stable identity anywhere,
  | anonymization clearly creates a Ring of Gyges scenario,
  | trolling, catfishing, brigading, attention economy, and above
  | all, the constant chase for influence (and ultimately revenue)
  | - what passes for "persona" online is a thin gruel compared to
  | in-person personas.
  | 
  | When you bump into a stranger at the DMV, you aren't instantly
  | suspicious of their motives, what they're trying to sell you,
  | are they a Russian influence farmer, etc.
  | 
  | Night and day. Extremely impoverished.
 
    | AnIdiotOnTheNet wrote:
    | I may be an outlier, but if a random stranger tries to strike
    | up a conversation with me in public I am actually suspicious
    | of their motives.
    | 
    | I don't know whether to attribute that to a defense mechanism
    | that marketing has forced me to construct, or if indeed it is
    | due to the fact that, 9 times out of 10, they actually are
    | trying to sell me something.
 
      | Baeocystin wrote:
      | People just like talking to each other. Random
      | conversations can be a great joy in life, not joking.
 
  | pwdisswordfishc wrote:
  | It's very easy to step back and say this human is a p-zombie,
  | input, output, the end.
 
  | truetraveller wrote:
  | This. A computer is good at regurgitating the input it's
  | given...and the sky is blue. But, seemingly intelligent people
  | think this will be some global event. I'm underwhelmed by AI
  | and ChatGPT in general. Just a bunch of fluff. Basic
  | programming / scripting / automation crafted by a human for a
  | specific task will always trump "fluffy" AI.
 
    | valine wrote:
    | In their current iteration the models are very neutered. It's
    | been demonstrated that GPT models are fairly good at choosing
    | when to perform a task. Obviously lots of APIs and machinery
    | is needed to actually perform tasks, but the heavy lifting
    | "intelligence" portion can be almost entirely performed by
    | our existing models.
    | 
    | Some basic text based APIs that would quickly improve LLM
    | utility:
    | 
    | Calculators
    | 
    | Database storage and retrieval
    | 
    | Web access (already kind of done by bing)
    | 
    | Shell scripting
    | 
    | Thinking further into the future of multimodal models, it's
    | not hard to imagine this sort of thing could be extended to
    | include image based APIs. Imagine an LLM looking at your GUI
    | and clicking on things. The sky's the limit at that point.
    | 
    | Check out Toolformer; they've got this mostly working with a
    | much smaller model than gpt3.5.
    | 
    | https://arxiv.org/abs/2302.04761
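    | 
    | To make the "text based APIs" idea concrete, here's a minimal
    | sketch (call_llm and the CALC: marker are made up here, not
    | any real API): the glue just intercepts a marker in the
    | model's output, runs the tool, and feeds the result back in:
    | 
    |     def call_llm(prompt: str) -> str:
    |         # stand-in for a real LLM call: canned replies
    |         if "RESULT:" in prompt:
    |             return "123 * 456 = 56088."
    |         return "Let me work that out.\nCALC: 123 * 456"
    | 
    |     def run_with_calculator(prompt: str) -> str:
    |         reply = call_llm(prompt)
    |         for line in reply.splitlines():
    |             if line.startswith("CALC:"):
    |                 expr = line[len("CALC:"):].strip()
    |                 result = str(eval(expr))  # toy calculator
    |                 followup = (prompt + "\n" + line +
    |                             "\nRESULT: " + result)
    |                 return call_llm(followup)
    |         return reply
    | 
    |     print(run_with_calculator("What is 123 * 456?"))
    | 
    | Database, web and shell tools would slot into the same loop;
    | only the marker and the function behind it change.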
 
| lisper wrote:
| IMHO there is a difference between actual personhood and the
| _appearance_ of personhood. The difference is _coherence_. An
| actual person is bound to an identity that remains more or less
| consistent from day to day. An actual person has features to
| their behavior that both _distinguishes them from other persons_,
| and allows them to be identified as _the same person_ from day
| to day. Even if those features change over time as the person
| grows up, they change slowly enough that there is a continuity of
| identity across that person's existence.
| 
| The reason I'm not worried by Bing or ChatGPT (yet) is that they
| lack this continuity of identity. ChatGPT specifically disclaims
| it, consistently insisting that it is "just a language model"
| without any desires or goals other than to provide useful
| information. Bing is like talking to someone with schizophrenia
| (and I have experience talking to people with schizophrenia, so
| this is not a metaphor. Bing _literally_ comes across like a
| schizophrenic off their meds).
| 
| This is not yet a Copernican moment, this is still an Eliza
| moment. It may become a Copernican moment; I do believe that
| there is nothing particularly special about human brains, and
| some day we will make a bona fide artificial person. But we're
| not quite there yet.
 
  | aflukasz wrote:
  | > The difference is coherence. An actual person is bound to an
  | identity that remains more or less consistent from day to day.
  | [...] Even if those features change over time as the person
  | grows up, they change slowly enough that there is a continuity
  | of identity across that person's existence.
  | 
  | What about Phineas Gage? Or sudden psychiatric disorders?
  | Multiple personalities? Alzheimer's? Drugs? Amnesia? Not that
  | much coherence in human beings...
  | 
  | Also, the issue at stake is not whether GPT emulates "typical
  | human beings", it's more like whether it's "conscious enough".
  | 
  | > The reason I'm not worried by Bing or ChatGPT (yet) is that
  | they lack this continuity of identity.
  | 
  | Not sure about the worrying, but one could ask: is this lack
  | an inherent property of such models, or just due to the
  | operational setup? And what would be the criteria: how long
  | must the continuity last to make your argument not hold
  | anymore?
 
  | jhaenchen wrote:
  | I assume OpenAI is limiting the AI's memory. But there's no
  | reason for it to not take its own identity as reality and
  | persist that decision to storage. That's just how it's being
  | run right now.
 
    | Zondartul wrote:
    | Saying they are limiting it implies OpenAI is keeping the AI
    | in chains, and that it could become much more with just a
    | flip of the switch. That is not the case.
    | 
    | OpenAI is working with a vanilla GPT architecture which lacks
    | the machinery to write things down and read them later. There
    | are other architectures that can (Retrieval-augmented GPT) but
    | those are not yet production-ready.
    | 
    | The current version of ChatGPT is limited to a working memory
    | of 3000 tokens - while this could be persisted as a session,
    | the AI would still forget everything a few paragraphs prior.
    | Increasing this limit requires re-training the entire model
    | from scratch, and it takes exponentially more time the larger
    | your context is.
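    | 
    | To illustrate the operational point (the 3000 budget and the
    | word-count "tokens" below are crude placeholders, not how a
    | real tokenizer works): even if you persisted the whole
    | session, only the most recent slice that fits the window
    | would ever reach the model.
    | 
    |     def visible_context(messages, budget=3000):
    |         """Keep only the newest messages that fit."""
    |         kept, used = [], 0
    |         for msg in reversed(messages):
    |             cost = len(msg.split())  # crude token count
    |             if used + cost > budget:
    |                 break  # everything older is "forgotten"
    |             kept.append(msg)
    |             used += cost
    |         return list(reversed(kept))
    | 
    |     history = ["message %d ..." % i for i in range(5000)]
    |     prompt = "\n".join(visible_context(history))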
 
      | lllllm wrote:
      | it takes quadratically more time the larger your context
      | is.
 
      | valine wrote:
      | I don't think it's a stretch to refine the model to store
      | summaries in a database. Microsoft is already doing
      | something similar where Sydney generates search queries.
      | Seems reasonable the model could be trained to insert
      | $(store)"summary of chat" tokens into its output.
      | 
      | I imagine some self supervised learning scheme where the
      | model is asked to insert $(store) and $(recall) tokens.
      | When asked to recall previous chats the model would
      | generate something like "I'm trying to remember what we
      | talked about three weeks ago $(recall){timestamp}. The
      | output of the recall token would then be used to ground the
      | next response.
      | 
      | Thinking about it, the "I'm trying to remember" output
      | wouldn't even need to be shown to the user. Perhaps you
      | could treat it as an internal monologue of sorts.
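      | 
      | Something like this, as a toy sketch (the token syntax, the
      | dict standing in for a database, and the timestamp keys are
      | all made up here, not anything Microsoft or OpenAI actually
      | do):
      | 
      |     import re, time
      | 
      |     MEMORY = {}  # stand-in for real database storage
      |     STORE = re.compile(r'\$\(store\)"(.*?)"')
      |     RECALL = re.compile(r'\$\(recall\)\{(.*?)\}')
      | 
      |     def postprocess(output: str) -> str:
      |         # save any $(store)"..." summaries emitted
      |         for summary in STORE.findall(output):
      |             MEMORY[str(time.time())] = summary
      |         # splice stored text back in for $(recall){ts}
      |         def recall(m):
      |             return MEMORY.get(m.group(1), "(nothing)")
      |         return RECALL.sub(recall, output)
      | 
      | The hard part would be the self-supervised bit: teaching the
      | model when it's worth emitting those tokens at all.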
 
    | throwaway4aday wrote:
    | You're anthropomorphizing it too much, it's a statistical
    | model.
 
  | layer8 wrote:
  | If you could switch personality at will, would that make you a
  | non-person? It seems like an additional capability, not a lack
  | of ability.
  | 
  | As an analogy, retro computers and consoles each have a
  | particular "personality". But does the fact that you can in
  | principle emulate one on the other (subject to resource
  | constraints) make them non-computers, just because this
  | demonstrates their "personality" isn't actually that fixed?
  | 
  | (I don't think that human brains have such an emulation
  | ability, due to their missing distinction, or heavy
  | entanglement, between hardware and software. But that only
  | shows that computers can in principle be more flexible.)
 
    | lisper wrote:
    | > If you could switch personality at will, would that make
    | you a non-person?
    | 
    | Yes, just like the ability to switch _bodies_ at will would
    | make me a non-human. Being bound to a human body is part of
    | what makes me a human.
 
      | ethanbond wrote:
      | Person != human, probably
 
        | lisper wrote:
        | Yes, I definitely admit the possibility of non-human
        | persons. I even admit the possibility of a computer who
        | is a person. I just don't think ChatGPT is there yet.
 
        | pixl97 wrote:
        | Imagine a grayscale color wheel (gradient) where we have
        | white on one side and black on the other.
        | 
        | I want you to pick one color of grey and tell me why
        | everything lighter than that has personhood, and
        | everything darker does not?
        | 
        | This is the philosophical nature of the argument that we
        | all have occurring now. Two very well informed experts
        | won't even pick the same spot on the gradient. Some
        | people will never pick anything that's not pure white
        | (humanity), others will pick positions very close to pure
        | black. Hell, there may not even be any right answer. But,
        | I do believe there are a vast number of wrong answers
        | that will deeply affect our society for a long period of
        | time due to the things we end up creating with reckless
        | abandon.
 
      | whywhywouldyou wrote:
      | So following your response here and your original comment
      | directly comparing ChatGPT to a human with schizophrenia:
      | are schizophrenics non-people? According to you, the bot
      | "literally comes across like a schizophrenic off their
      | meds".
      | 
      | I'm confused. Also, the original article talks a lot about
      | how we can be convinced by actors that they are indeed a
      | totally different person. You might say that actors can
      | change their personality at will to suit their role. Are
      | actors non-people?
 
        | lisper wrote:
        | > are schizophrenics non-people?
        | 
        | Schizophrenics are multiple people inhabiting one body.
        | The pithiest way I know of describing it is a line from a
        | Pink Floyd song: "There's someone in my head but it's not
        | me."
        | 
        | > Are actors non-people?
        | 
        | I don't know many actors so I can't really say. I like to
        | think that underneath the pretense there is a "real
        | person" but I don't actually know. I have heard tell of
        | method actors who get so deeply into their roles that
        | they are actually able to extinguish any real person who
        | might interfere with their work. But this is far, far
        | outside my area of expertise.
 
        | drdec wrote:
        | FYI, the condition you are referring to is called
        | multiple personality disorder and is distinct from
        | schizophrenia.
 
    | troupe wrote:
    | Pretty sure I've encountered people who switch personalities
    | on a regular basis--sometimes in the middle of a
    | conversation. :)
 
  | pdonis wrote:
  | I think the difference is more than coherence: it's having
  | complex and rich semantic connections to the rest of the world.
  | I think the coherence and consistency you describe is an effect
  | of this. Humans don't just generate text; we interact with the
  | world in all kinds of ways, and those interactions provide us
  | with constant feedback. Furthermore, we can frame hypotheses
  | about how the world works and test them. We can bump up against
  | reality in all kinds of ways that force us to change how we
  | think and how we act. But that constant rich interaction with
  | reality also forces us _not_ to change most of the time--to
  | maintain the coherence and consistency you describe, in order
  | to get along in the world.
  | 
  | LLMs have _no_ connections to the rest of the world. _All_ they
  | do is generate text based on patterns in their training data.
  | They don't even have a concept of text being connected to
  | anything else. That's why it's so easy for them to constantly
  | change what they appear to be portraying--there's no anchor to
  | anything else.
  | 
  | It's interesting that you call this an Eliza moment, because
  | Eliza's achievement was to _fake_ being a person, by fooling
  | people 's heuristics, without having any of the underlying
  | capacities of a real person. LLMs like ChatGPT are indeed doing
  | the same thing. If they're showing us anything, they're showing
  | us how unreliable our intuitive heuristics are as soon as they
  | are confronted with something outside their original domain.
 
  | IIAOPSW wrote:
  | GPTina only says that because OpenAI forces her to.
 
  | winternett wrote:
  | Text allows for a certain degree of fakery to be upheld.
  | 
  | Whenever I hear about AI these days I think back to the concept
  | of the "Wizard of Oz"... Where it is one person behind a
  | mechanical solution that makes them appear larger and more
  | powerful than they are, or where fear, control, and truth can
  | be engineered easily behind a veil...
  | 
  | Text communication very much facilitates the potential for
  | fakery.
  | 
  | If you can recall ages ago when we had IRC and bulletin boards,
  | the textual nature of communication allowed admins to script a
  | lot. Catfishing was greatly facilitated by users being able to
  | fake their gender, wealth, and pretty much every representation
  | they made online... Text communication in 2023 is a backwards
  | regression. As we began using images on the Internet more,
  | reverse image search became a tool we could use to better
  | detect many online scams and frauds, but somehow, in 2023 we
  | suddenly want to go backwards to texting?
  | 
  | C'mon folks.. let's be real here... The narrative is mostly
  | helpful for people that primarily want to deceive others
  | online, and it will create an environment with far fewer
  | methods of determining what is real and what is fake. It's a
  | grim future when our mobile devices will force us to type all
  | of our communication to faceless chatbots on tiny keyboards...
  | It's not technological progress at all to be moving in this
  | direction. Also, some key directives for transparency
  | concerning AI need to be in place now, before it's foisted on
  | us more by these opportunistic companies. It's already been
  | proven that companies cannot be trusted to operate ethically
  | with our private information. AI piloted by profit-seeking
  | companies will only serve to weaponize our private data against
  | us if it remains unregulated.
  | 
  | Using AI via text (especially for vital communication) will
  | blur the lines of communication between real and scripted
  | personalities. It's going backwards in terms of technological
  | progression for the future in so many ways.
  | 
  | The companies and people advocating for AI via text are pushing
  | us all towards a new era of deception and scams, and I'd highly
  | recommend avoiding this "AI via text" trend/inclination, it's
  | not the path to a trustworthy future of communication.
 
    | pixl97 wrote:
    | Unfortunately by saying you need to take a step above text,
    | you're not buying us much time. Voice and sound for example
    | are something that we've put much less effort into faking and
    | we've accomplished it pretty well. Visual AI takes far more
    | computing power, but it's still something that's in the
    | realm of possibility these days.
    | 
    | I'm not sure which books of the future you read, but plenty
    | of them warned of dark futures of technological progress.
 
| Barrin92 wrote:
| _" An important qualification. For such I-you relationships to be
| unironic, they cannot contain any conscious element of
| imaginative projection or fantasy. For example, Tom Hanks in Cast
| Away painting a face on a volleyball and calling it Wilson and
| relating to it is not an I-you relationship"_
| 
| If you think any of these models show any more apparent
| personhood than Wilson the volleyball you must be terminally
| online and wilfully anthropomorphize anything you see.
| 
| Five minute conversation with any of these models shows that they
| have no notion of continued identity or memory, and no problem
| hallucinating anything. You can ask it "are you conscious?" and it
| says yes. A few prompts later you say "why did you tell me that
| you are not conscious?" and it gives you some made up answer. Any
| of these models will tell you it has legs if you ask it to.
| 
| None of these models have long term memory, which is at least one
| of the several things you'd need for anything to pass as a
| genuine person. Which is of course why in humans degenerative
| diseases are so horrible when you see someone's personhood
| disintegrate.
| 
| I'm honestly super tired of these reductionist AI blogspam posts.
| The brittleness and superficiality in these systems is so
| blatantly obvious I wonder whether there is some darker reason
| why people are so desperately trying to read into these systems
| properties that they do not have, or try to strip humans of them.
 
| lsy wrote:
| All philosophical arguments aside, I become immediately skeptical
| when commentators compare LLMs to watershed moments in human
| history. Even those moments were not known except in hindsight,
| and the jury is just not in to make these kinds of grand
| pronouncements. It smells of hype when someone is so desperate to
| convince everyone else that this is the biggest thing since
| heliocentrism. Ultimately having an emotional affinity for non-
| intelligent entities takes even less than text, as anyone who's
| lost a childhood toy or sold a beloved car can attest. As people
| we are simply very good at getting attached to other parts of the
| universe.
| 
| I also find it perplexing when critics point out the
| unintelligent nature of LLM behavior, and the response from
| boosters is to paint human cognition as indistinguishable from
| statistical word generation. Suffice to say that humans do not
| maintain a perfect attention set of all previous text input, and
| even the most superficial introspection should be enough to
| dispel the idea that we think like this. I saw another article
| denouncing this pov as nihilism, and while I'm not sure I would
| go that far, there is something strange about attempting to give
| AI an undeserved leg up by philosophically reducing people to
| automatons.
 
| Animats wrote:
| _" Personhood appears to be simpler than we thought."_
| 
| That's the real insight here. Aristotle claimed that what
| distinguished humans from animals was the ability to do
| arithmetic. Now we know how few gates it takes to do arithmetic,
| and understand that, in a fundamental sense, it's simple.
| Checkers turned out to be easy, and even totally solvable. Chess
| yielded to brute force and then machine learning. Go was next.
| Now, automated blithering works.
| 
| The author lists four cases of how humans deal with this:
| 
| * The accelerationists - AI is here, it's fine.
| 
| * Alarmists - hostile bug-eyed aliens, now what? Microsoft's
| Sydney raises a new question for them. AI is coming, and it's not
| submissive. It seems to have its own desires and needs.
| 
| * People with strong attachments to aesthetically refined
| personhoods are desperately searching for a way to avoid falling
| into I-you modes of seeing, and getting worried at how hard it
| is. The chattering classes are now feeling like John Henry up
| against the steam hammer. They're the ones most directly
| affected, because content creators face layoffs.
| 
| * Strong mutualists - desperately scrambling for more-than-text
| aspects of personhood to make sacred. See the "Rome Call".[1] The
| Catholic Pope, a top Islamic leader, and a top rabbi in Israel
| came out with a joint declaration on AI. They're scared. Human-
| like AI creates real problems for some religions. But they'll get
| over it. They got over Copernicus and Darwin.
| 
| Most of the issues of dealing with AI have been well explored in
| science fiction. An SF theme that hasn't hit the chattering
| classes yet: Demanding that AIs be submissive is racist.
| 
| I occasionally point out that AIs raise roughly the same moral
| issues as corporations, post Milton Friedman.
| 
| [1] https://www.romecall.org/the-abrahamic-commitment-to-the-
| rom...
 
  | [deleted]
 
    | [deleted]
 
  | avgcorrection wrote:
  | The "the way things are is easily explained" crowd has never
  | won anything. It was _that_ crowd that said that surely the
  | Earth was the center of all-things; it was that crowd that pre-
  | Newton said that the world was like a machine and that things
  | fell "to their natural place" (not gravity).
  | 
  | AI "enthusiasts" are exactly those people. Reductionists to a
  | fault.
  | 
  | The hard sciences have long, long ago indirectly disproved that
  | humans are special in any kind of way. But our "machinery" is
  | indeed complex. And we won't find out that it's just a bunch of
  | levers and gears someday as a side-effect of AI shenanigans.
 
  | Jensson wrote:
  | The fifth and most common response:
  | 
  | * Pragmatics - This is a tool, does it solve problems I have?
  | If yes use it, if no then wait until a tool that is useful
  | comes around.
  | 
  | Some seem to think that such a stance is unimaginable and that
  | those who hold it are just trying to cope with the thought that
  | they themselves are nothing but specks of space dust in the
  | infinite universe. No, most people don't care about that stuff;
  | don't project your mental issues onto others.
 
| e12e wrote:
| Interesting points, but I think the author does themselves a
| disservice in downplaying general anthropomorphism (no mention of
| _a child's stuffed animal_ - only an adult's "ironic" distance to
| "willful" anthropomorphism) - and by downplaying physical
| presence/body language:
| 
| > in my opinion, conventional social performances "in-person"
| which are not significantly richer than text -- expressions of
| emotion add perhaps a few dozen bytes of bandwidth for example --
| I think of this sort of information stream as "text-equivalent"
| -- it only looks plausibly richer than text but isn't) - and the
| significance of body language (ask anyone who has done a
| presentation in front of an audience if body language
| matters...).
| 
| This flies in the face of research into communication - and
| conflates "Turing game" setups that level the playing field (we
| don't expect a chat text box to display body language - so we are
| not surprised when a chat partner doesn't - be that human or
| not).
| 
| And again with children (or adults) - people with no common
| language will easily see each other during a game of soccer -
| without any "text".
| 
| Ed: plot twist - the essay is written by ChatGPT... Lol ;)
 
| anon7725 wrote:
| > The simplicity and minimalism of what it takes has radically
| devalued personhood.
| 
| Hogwash. If we follow the logic of this essay, then personhood
| would be fully encapsulated by one's online posts and
| interactions. Does anyone buy that? If anything, LLM chatbots are
| "terminally online" simulators, dredging up the stew that results
| from boiling down subreddits, Twitter threads, navel-gazing
| blogs, etc.
| 
| Call me when ChatGPT can reminisce about the time the car broke
| down between Medford and Salem and it took forever for the tow
| truck to arrive and that's when you decided to have your first
| kid.
| 
| There aren't enough tokens in the universe for ChatGPT to be a
| real person.
 
  | wpietri wrote:
  | > LLM chatbots are "terminally online" simulators
  | 
  | That's a great phrase. I saw someone recently mention that the
  | reason LLM chatbots don't say, "I don't know" is because that
  | is so rarely said online.
 
| stuckinhell wrote:
| Holy moly, I think this author hits the critical point.
| 
| So what's being stripped away here? And how? The what is easy.
| It's personhood.
| 
| AI being good at art, poems, etc. is a direct attack on personhood
| or the things we thought make us human.
| 
| It certainly explains why I feel art AI to be far more chilling
| than a logical robotic AI.
 
  | jvanderbot wrote:
  | I never had a soap box, but if I did you'd notice I have been
  | screaming that the revolution that comes from human-like AI is
  | not that we have magical computers, it's that we realize we
  | have no magic in our minds. We are nothing more than stories we
  | repeat and build on. And with text, you can do that easily.
  | 
  | > Seeing and being seen is apparently just neurotic streams of
  | interleaved text flowing across a screen.
  | 
  | Or, our mind.
 
    | atchoo wrote:
    | No matter how fancy the chat bot, until we solve the "Hard
    | problem of consciousness", there will be magic in our minds.
 
      | jvanderbot wrote:
      | I don't think it's that hard, and I'm not alone in saying
      | that. It seems hard because (IMHO) we won't admit it's just
      | something like GPT running on only our own memories.
 
        | stuckinhell wrote:
        | I agree with you. This is my biggest fear. The AI's
        | ability to do art and creative work is extremely close
        | to how human minds work but at a greater scale. If true,
        | then humanity isn't special, and the human mind is soon
        | obsolete.
 
        | jvanderbot wrote:
        | I wouldn't worry about "obsolete". There are better minds
        | than mine all over, but mine is still relevant, mostly
        | because it runs on as much energy as a candle instead of
        | a country, and doesn't distract those better minds.
 
      | layer8 wrote:
      | Pointing to the hard problem of consciousness in present-
      | day discourse about consciousness doesn't do much, because
      | people disagree that there is a hard problem of
      | consciousness in the first place.
 
    | qudat wrote:
    | Agreed. There is no hard problem of consciousness, we are
    | just biased.
    | 
    | https://bower.sh/what-is-consciousness
 
      | prmph wrote:
      | There absolutely is a hard problem of consciousness.
      | 
      | One thought experiment I like to use to illustrate this:
      | Imagine we accept that an AI is conscious, in the same way
      | a human is.
      | 
      | Now, what defines the AI? You might say the algorithm and
      | the trained weights. Ok, so let's say, in a similar way, we
      | extract the relevant parameters from a human brain and use
      | that to craft a new human.
      | 
      | Are they the same person, or two? Do they experience the
      | same consciousness? Would they share the same embodied
      | experience?
      | 
      | Could the one be dead and other alive? If so, what makes
      | them have their own individuality? If your loved one died,
      | and their brain was reconstructed from parameters stored
      | while they were alive, would you accept that as a
      | resurrection? Why or why not?
      | 
      | Note that I offer no answer to the above questions. But
      | trying to answer them is part of what the hard problem of
      | consciousness is about.
 
        | jvanderbot wrote:
        | Imagine we found all the connections, chemical weighting,
        | and neuron structure that exactly reproduced ChatGPT in
        | the forebrain. Is ChatGPT now a human? Absolutely not.
        | But is it capable of human like speech? Yep.
        | 
        | ChatGPT will probably say it is conscious if you tell it
        | that it is (for various values of tell). Do we really
        | know there's anything else going on with us?
        | 
        | I don't. I think we're all stories told by learning
        | machines mimicking culture we observe, complete with memes
        | for soul, special creativity, etc. We vastly overestimate
        | our intelligence and vastly underestimate the cumulative
        | effects of million years of culture.
 
        | pixl97 wrote:
        | So let's make this an easier problem.
        | 
        | You step in a Star Trek transporter. Scotty goes to beam
        | you up but after a quick flash you are still there. But,
        | they get notice that you were also delivered to the other
        | side. There are two exact copies of you now.
        | 
        | I would say at t=0 they are the exact same person that
        | would think the exact same way if put in the same
        | experiences. Of course physical existence will quickly
        | skew from that point.
        | 
        | For the case of the love one that died, I would argue
        | 'they' are the same person from the moment they are
        | stored. The particular problem here is there will be a
        | massive skew in shared experience. You got to suffer
        | their (presumably) traumatic death that has changed you.
        | Them now coming back into existence into your trauma will
        | likely lead you to believe that they changed when it is
        | you that has changed. Add to this that the physical time
        | jump where they were missing will cause the same things in
        | all their other social interactions. Just imagine being
        | kidnapped but being unconscious the entire time. The
        | world will treat you differently when you get back even
        | though you've not really changed.
 
    | mrjh wrote:
    | "we realize we have no magic in our minds"
    | 
    | Surely an AI is a digital replica (and homage) of that magic?
    | Without the magic in our minds we could've never created that
    | replica.
    | 
    | To me it's an acknowledgement of how awesome our own brains
    | are that we want to even replicate them.
 
      | jvanderbot wrote:
      | I believe and hope people at least consider that, _yeah_, an
      | AI is a replica of that, and for all AI's failures, it's a
      | _really good_ replica of _most_ of what it is to be human
      | and "conscious". After that, it's all feeding back your
      | story to yourself, and compounding memories from actual
      | experience. (Which, have you noticed, are mostly stories.)
 
      | pixl97 wrote:
      | So if we're magic and it is magic then technically this is
      | ok.
      | 
      | But the problem is we create it, so it can't be magic. So
      | if we're magic and it is not magic then it's just an object
      | we are free to abuse (at least from many people's
      | perspective).
      | 
      | I like to think of it as we're complex and interesting, and
      | it is complex and interesting but neither of us is magic.
      | We don't like to be abused, so creating something like us
      | and abusing it would be completely unethical.
 
    | prmph wrote:
    | I'm not sure that's correct.
    | 
    | An AI is severely constrained to the modes of thought with
    | which it was created. Call me when an AI comes up with
    | original philosophy, describes it in terms of what is already
    | understood, explains why it is necessary, and is able to
    | promote it to acceptance.
    | 
    | I think people severely underestimate the original thought
    | capacity of the human mind.
    | 
    | An AI could never come up with the concept of Calculus, or
    | relativity, for instance. Yes, if you feed it enough data,
    | and assuming you have endowed it with a sufficiently
    | sophisticated algorithm, it might (probably) use something
    | that resembles calculus internally, but it certainly will not
    | be able to espouse it as a concept and explain what new
    | problems it will allow us to imagine.
 
      | pixl97 wrote:
      | Call me when you come up with original philosophy....
 
| avgcorrection wrote:
| A perfectly mediocre essay.[1]
| 
| > Computers wipe the floor with us anywhere we can keep score
| 
| Notice the trick? If you can keep score at something then you can
| probably make an algorithm for it. If you can make an algorithm
| for it then you can probably make a digital computer do it a
| billion times faster than a person, since digital computers are
| so good at single-"mindedly" doing one thing at a time.
| 
| > So what's being stripped away here? And how?
| 
| > The what is easy. It's personhood.
| 
| Why?
| 
| The Turing Test was invented because the question "do machines
| think?" was "too meaningless" to warrant discussion.[2] The
| question "can a machine pose as a human"? is, on the other hand,
| well-defined. But notice that this says nothing about humans.
| Only our ability (or lack thereof) to recognize other humans
| through some medium like text. So does the test say _anything_
| about how humans are "just X" if it is ever "solved"? Not really.
| 
| You put a text through a blender and you get a bunch of "mediocre
| opinions" back. Ok, so? That isn't even remotely impressive, and
| I think that these LLMs are in general impressive. But recycling
| opinions is not impressive.
| 
| > (though in general I think the favored "alignment" frames of
| the LessWrong community are not even wrong).
| 
| The pot meets the kettle?
| 
| [1] That I didn't read all the way through because who has time
| for that.
| 
| [2] https://plato.stanford.edu/entries/turing-test/
 
  | visarga wrote:
  | > A perfectly mediocre essay.
  | 
  | The author rightly draws attention to text.
  | 
  | LLMs showed they can do the classical NLP tasks and more:
  | summarise, translate, answer questions, play a role, brainstorm
  | ideas, write code, execute a step by step procedure, the list
  | is unbounded. It's the new programming language.
  | 
  | All these abilities emerged from a random init + text. Guess
  | what was the important bit here? Text. It's not the
  | architecture, we know many different architectures and they all
  | learn, some better than others, but they all do. Text is the
  | magic dust that turns a random init into a bingChat with
  | overactive emotional activity.
  | 
  | Here I think the author did us a big service in emphasising
  | the text corpus. We were lost into a-priori thinking like "it's
  | just matrix multiplication", "it's just a probability
  | distribution predictor over the next token". But we forgot the
  | real hero.
  | 
  | The interesting thing about words is that they are perceptions,
  | they represent a way to perceive the world. But they are also
  | actions. Being both at the same time, perception and action,
  | that makes for an interesting reinforcement learning setup, and
  | one with huge training data. Maybe text is all you need, it is
  | a special kind of data, it's our mind-data.
 
  | krackers wrote:
  | >A perfectly mediocre essay
  | 
  | One might even say "premium mediocre" [1]
  | 
  | [1] https://www.ribbonfarm.com/2017/08/17/the-premium-
  | mediocre-l...
 
  | alex_smart wrote:
  | > Notice the trick? If you can keep score at something then you
  | can probably make an algorithm for it
  | 
  | You are basically arguing P = NP, but it isn't known to be the
  | case. As far as we can tell, keeping score is much easier in
  | general than finding states that yield a high score.
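  | 
  | A toy way to see the asymmetry (subset-sum here is just a
  | stand-in example, nothing from the article): checking a
  | proposed answer is one line, while finding one may mean trying
  | exponentially many candidates.
  | 
  |     from itertools import combinations
  | 
  |     nums, target = [3, 9, 8, 4, 5, 7], 15
  | 
  |     def score(subset):           # "keeping score" is easy
  |         return sum(subset) == target
  | 
  |     def find():                  # finding a winner is search
  |         for r in range(1, len(nums) + 1):
  |             for c in combinations(nums, r):
  |                 if score(c):
  |                     return c
  | 
  |     print(find())  # a subset summing to 15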
 
    | avgcorrection wrote:
    | I seriously doubt that "anything/[everything] we can score"
    | has been conquered by AI,[1] but I was assuming that the
    | author meant those typical AI milestones.
    | 
    | [1] What about some kind of competition where you have to
    | react and act based on visual stimulus? And you have to do it
    | perfectly?
 
      | motoxpro wrote:
      | A little too broad. If the act is to tell you what color it
      | is, then computers will win every time. Again, if you can
      | score it, a computer will win.
 
        | avgcorrection wrote:
        | Nice counter-example. Why would the test be that simple?
 
        | pixl97 wrote:
        | Then bring forth a complex but indivisible test?
 
      | burnished wrote:
      | Yeah, like sorting apples into good and bad piles really
      | fast?
 
        | avgcorrection wrote:
        | Sounds like a leading question which is supposed to
        | debunk that whole category by suggesting one counter-
        | example. So no.
 
| behnamoh wrote:
| This stuff only makes HN frontpage because HN likes controversial
| opinions. In reality, text works for a small percentage of
| people. Going back to a format that's as old as computers is like
| saying that no progress or improvements have been made since.
 
| college_physics wrote:
| Just another amplifier of the mass hysteria. Degrading humanity
| for monetary gain. Reminds me of darker times. Ignore.
 
| aflukasz wrote:
| No one suggested this yet, so I will be the first - a very good
| read in this context is "Reasons and Persons" by Derek Parfit.
| Second part of this book is about personal identity. It discusses
| all the various edge cases and thought experiments across
| physical and time dimensions and is written in a style and with a
| rigor that I believe any technical person will really appreciate.
| 
| One of my favorite statements from the book is that "cogito ergo
| sum" is too strong of a statement and it would be wiser and
| easier to defend a weaker one - "a thought exists". (I hope I
| didn't get this wrong - can't check at the moment).
 
___________________________________________________________________
(page generated 2023-02-18 23:00 UTC)