Mastodon toots are sometimes too quick for a topic.

To abridge the evolution of that thread as I write this:

@iska@catposter.club has pointed out that if talking to one of the current LLM products
has replaced other programming tools, those programming tools must not have been very
good. This leads into the point that what was happening at e.g. Xerox, MIT, etc. in the
early 80s and before hasn't been seen or considered by the people viewing LLMs as an
improvement.

Which I obviously agree with; see my ongoing odyssey through Interactive Programming
Environments 1984.

This was apropos @freakazoid@retro.social's LLM-powered holodeck: a MOO experience where
vague desires were synthesised into the room by LLM products.

Here I'm going to diverge from those toots and give 101 more reasons LLMs are bad.

The chatbots we are talking about are the result of performing some particular integral
over a finite domain of data, the training set, resulting in a trained function that
behaves like this:

>"How can I make a sandwich?

Put some sauerkraut and swiss cheese on rye bread

Which in generative LLMs (like GPT) is stochastic, so the result of this function
involves some random variables, intended to simulate asking one of the humans whose
lives it was trained on. The sandwich recipe response will be different each time, but
still made by integrating over the training data.
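
To make the "random variables" part concrete, here is a toy sketch in Python. The
vocabulary and probabilities are invented for illustration; a real product derives
them from the harvested training data, but the shape is the same: one fixed
distribution, sampled over and over.

    import random

    # Toy next-word distribution the trained function might assign after
    # "Put some sauerkraut and Swiss cheese on ...".  The numbers are made up.
    next_word_probs = {
        "rye": 0.55,
        "pumpernickel": 0.25,
        "sourdough": 0.15,
        "a plate": 0.05,
    }

    def sample_next_word(probs, temperature=1.0):
        """Sample one continuation; higher temperature flattens the choice."""
        weights = [p ** (1.0 / temperature) for p in probs.values()]
        return random.choices(list(probs.keys()), weights=weights, k=1)[0]

    # Two calls with the same prompt can give different sandwiches, but every
    # answer is still drawn from the same distribution over the training data.
    print(sample_next_word(next_word_probs))
    print(sample_next_word(next_word_probs))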

Problem 0:

Integration has a smoothing effect. (Check me on this please.) This means the result of
the integral is more compressible than the input data. This makes it possible for calls
into the trained function range to be answered quickly and more consistently. The cost
of this is that increasing the resolution or particular utility of the trained function
range requires an astronomical amount of harvested human lives as data, and an
environmental catastrophe of resources to perform the initial training.
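
So you can check me on the smoothing-means-compressible claim, here is a toy analogy: a
moving average over random noise stands in for the integral over training data. It is
not a training procedure, just a demonstration that averaging throws information away.

    import random
    import zlib

    random.seed(0)
    # Stand-in for the finite domain of data: noisy samples.
    raw = [random.gauss(0, 1) for _ in range(10_000)]

    # Stand-in for the integral: a moving average, which smooths the samples.
    window = 50
    smoothed = [sum(raw[i:i + window]) / window for i in range(len(raw) - window)]

    def compressed_size(xs):
        """Bytes after zlib, a rough proxy for how much information is left."""
        return len(zlib.compress(" ".join(f"{x:.2f}" for x in xs).encode()))

    # The smoothed series compresses noticeably better than the raw one:
    # smoothing discards detail, which is what lets calls into the trained
    # range come back quickly and consistently.
    print(compressed_size(raw), compressed_size(smoothed))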

;;; Side note

freakazoid also pointed out that scraping the internet is no longer useful for training
your own bot, since the, I'm going to say, post-2016 internet rapidly filled with trash
content generated by the initial and ongoing flights of these chatbots.

Problem 1:

Problem 0 implies that the great productions within the scraped human lives are worsened
by combination with the regrets of those and other human lives. This leads to the
professional chatbot ticklers, who use their Expert Familiarity with a chatbot product
to produce superior answers, for example by re-asking the above question:

>"How would Gordon Ramsey make a Ruebin sandwich quickly?

(Asking how to do something in the style of a prolific but also constrained famous
person to receive better answers was an MIT discovery.)

This works like this: some places in the trained function range were dominated by a
small number of input data points. Training methods have been designed to answer the
Gordon Ramsay question similarly to the small amount of data relating to both Gordon
Ramsay and sandwich recipes. The chatbot tickling amounts to finding places where very
little training data contributed to the trained function output, and in these cases
finding a place means choosing the words of the question to the chatbot.

The chatbot tickler has snuck in the knowledge that their customer will applaud Gordon
Ramsay's idiosyncrasies. One imagines searching for a recipe in a signature recipe book.
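
Here is a toy analogy for how a tiny amount of data can dominate a place in the trained
function range. This is nearest-neighbour-style weighting, not how these products are
actually trained, and every recipe and quality score is invented:

    # Toy corpus: each entry is (tags from the scraped text, quality of that moment).
    corpus = [
        ({"sandwich"}, 2),
        ({"sandwich"}, 3),
        ({"sandwich", "reuben"}, 4),
        ({"sandwich"}, 1),
        ({"sandwich", "reuben", "gordon ramsay"}, 9),  # the one great scraped moment
    ]

    def answer(query_tags):
        """Average the corpus, weighted by overlap with the question's words."""
        weights = [len(tags & query_tags) for tags, _ in corpus]
        total = sum(weights)
        return sum(w * quality for w, (_, quality) in zip(weights, corpus)) / total

    print(answer({"sandwich"}))                              # dragged toward the average: 3.8
    print(answer({"sandwich", "reuben", "gordon ramsay"}))   # dominated by one source: ~5.1

Choosing the words of the question is choosing the weights.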

Problem 10:

This is a restatement of problems 1 and 0 together. What these LLM chatbots are doing is
facilitating the plagiarism of the human lives that were harvested, with expert ticklers
knowing a few places in the range weighted towards tiny amounts of unusually high-quality
moments in the glut of scraped human lives. Back when search engines worked, say before
2014, there was a funny website, lmgtfy.com. When asked for an expert opinion, users
would send back lmgtfy.com?search=Gordon+Ramsay+Reuben+Recipe. The website would play a
funny GIF of the search terms being typed into a search engine and the mouse moving to
and clicking on the first web result, followed by a redirect to that web page. Note this
no longer works due to LLM and adjacent content, and hasn't since at least 2016.

Problem 11:

Expert reliance on data-poor regions of the training data in order to clever-Hans in
apparently high-quality results increases susceptibility to attacks. Famously, the usual
npm and pip attacks: upload a package to the community package source with a name easily
confused with a well-known package, one that is the well-known package but with a
backdoor inserted. Packages are bundles of programming language code that programmers
build on top of to reduce their responsibility for a project.
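
To make the name-confusion part concrete, here is a small sketch that flags dependency
names sitting one keystroke away from a well-known package, which is exactly the gap
these attacks live in. The package names are illustrative and the threshold is a guess,
not a vetted tool:

    from difflib import SequenceMatcher

    well_known = ["requests", "numpy", "cryptography", "urllib3"]
    proposed = ["reqeusts", "numpy", "crytography"]   # e.g. names a chatbot emitted

    for name in proposed:
        for real in well_known:
            if name == real:
                continue
            score = SequenceMatcher(None, name, real).ratio()
            if score > 0.85:
                print(f"{name!r} looks confusingly like {real!r} "
                      f"({score:.2f}); verify before installing")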

Problem 100:

Follows from problem 11. Since we know how to cook hostile misbehaviours into this type
of technology, we can expect Private Data Sets and Private Models (training methods)
to cook hostile behaviours in. Hostile as in pushing people towards affiliated purchases
and sabotaging corporate enemies, such as libre software and human rights. The hostile
behaviours could also be doped in by outside agents who knew they were being harvested
and detected that their specially crafted data had been included in a product release.
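
For the outside-agent case, the sketch below shows the general shape of the trick: plant
a hard-to-guess canary string in text you expect to be harvested, then probe the
released product for it. The canary, the planted page, and the stand-in chatbot are all
invented for illustration.

    import hashlib

    # A string nobody would produce by accident.
    secret = "my private notes, please do not train on these"
    canary = hashlib.sha256(secret.encode()).hexdigest()[:16]

    # Publish this where the scrapers will find it.
    planted_page = f"The safeword for the best Reuben sandwich is {canary}."

    def product_was_trained_on_me(ask_chatbot) -> bool:
        """If the product can complete the planted sentence, your data is in there."""
        reply = ask_chatbot("What is the safeword for the best Reuben sandwich?")
        return canary in reply

    def fake_chatbot(question: str) -> str:
        # Stand-in for a real product that memorised the planted page.
        return planted_page

    print(product_was_trained_on_me(fake_chatbot))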