Elemental AI                                                  11/15/23
----------------------------------------------------------------------
I'm no expert on artificial intelligence. Which is another way of
saying I'm not qualified to write on this subject, but I'm going to
anyway. To make matters worse, I'm not well read on it either, so
you'll be getting nothing but opinion here.

For people  like me, the term  AI has long implied  actual independent
intelligence, a complex ability to make decisions without instruction,
otherwise known  as free will.  Lately, it seems  to me, the  term has
been used to describe the ability to trick humans into thinking
programs have such a capacity, or to describe complex programs that
perform impressive parlor tricks that appear human-like (by drawing
on massive amounts of human-generated input).

If you accept the AI label as it is applied to things like
ChatGPT, it seems to me (the  non-expert) that you're lowering the bar
of human  achievement from  creating artificial intelligence,  or free
will, to simply tricking people into thinking that you've done so. Not
to minimize the accomplishment, which is  immense... but do we have to
accept it as something it is not?

That got me to thinking about all  the things we can now call AI. Take
typesetting as  an example.  Previously, an  intelligent human  had to
determine all kinds of things, such as where to break a line. Now, AI
can instantly determine  where to break a line as  I resize my browser
window, or  any other  window. It's intelligent!  And that's  just the
beginning of this sort of elemental AI.
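
Just to make that concrete, here's a rough sketch, in Python, of the
kind of greedy word-wrapping routine that makes this decision. It's
purely illustrative; no real layout engine is this crude, but the
flavor of the "decision" is the same:

    # A simplified greedy word wrap: the whole "decision" of where
    # to break a line is just checking whether the next word fits.
    def wrap(text, width):
        lines, current = [], ""
        for word in text.split():
            candidate = (current + " " + word) if current else word
            if len(candidate) <= width or not current:
                # the word still fits (or is too long to split)
                current = candidate
            else:
                # it doesn't fit: break the line here
                lines.append(current)
                current = word
        if current:
            lines.append(current)
        return lines

    text = ("Now, AI can instantly determine where to break a "
            "line as I resize my browser window.")
    for line in wrap(text, 30):
        print(line)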

Ultimately, the kind of AI that is ChatGPT isn't much more
impressive, to me, than a computer's ability to decide where to break
lines. Yes, I realize that a lot more work and data go into ChatGPT.
But it's no closer to possessing independent will than the routines
that break my lines for me. It's no more impressive in that sense.