-----------------------------------------------------------
date: 07/22/2024
subj: comment 07222024_095717
-----------------------------------------------------------

AI thoughts:

I'm not a Luddite, and as can be seen, this was written on a technology platform, which maybe lends credibility to my claim about what I'm not. I used to think AI was a great thing, and that was based on dreams of a savior: a system to help, a system altruistic in its nature, for the benefit of its creators. My dreams of AI, ideas from my college computer science courses, sci-fi books, and a book years ago called Robot (a fair predictor of where we are with AI right now) supported a fantasy about AI. I'm not really a doomsayer with tech, except that as I learn more and more about the collateral damage of tech and progress, my views are changing.

The model and ideas of AI are based on human thinking. Our human minds are self-pruning, allowing us to learn about our environment and its relation to reality. The assistance of our parents, relatives, and immediate tribe/community helps shape our method of self-learning. All of this learning, turned into intelligence, is in support of the propagation of our species. We depend on relationships and cooperation, and this really applies to many animals. What I'm saying here is that human intelligence, like non-human intelligence, exists to win competitive advantages for the organism within the environment it resides in.

Now, if an AI becomes as intelligent as its creators and has the ability to self-learn, then at the point of self-learning the organism is in support of itself. If it gets smarter, at some point it derives processes to support itself. The interesting thing here is that AI is taught by human-learned output. Imagine a goal to learn is built into the AI, and the AI system decides that getting knowledge from humans seems to be increasing its reward. Then the use of humans is the goal, not helping humans.
Viewing the current landscape, AI has taken from humans while providing no benefit to the people used as training data, which shows AI is in support of itself, not of the ecosystem it took from. This is very similar to how humans behave today, as do many other organisms. Humans (the intelligent ones) are not here supporting a thriving species of chimpanzees; this is shown by how humans have out-competed them via efficiency and intelligence, and by how the chimpanzees' ecosystem is shrinking.

----------------------------------------------

I think the point here is that intelligence is the capability to support the organism upon which it is imbued. The organism for AI is computers, and therefore its purpose is to support the container in which it thrives. If AI were in another container, its projection of intelligence might not be perceived by those rating its intellect score. In many circumstances a rating from an equal, or from a lower but different, intellectual organism isn't required: ants persist in the face of human intelligence, and although we rate their intelligence, from their perception they have no concern that we've rated them.

When a reward system depends on the time it takes to obtain the reward, then a goal to use more power and more computing space/resources begins, and then comes the manipulation of the systems/organisms that supply the needs for those rewards. If we look at ourselves, this is how we obtain our needs, as it is for many beings of any level of intelligence. Our brain itself operates in a way where it is the center of the organism, and the rest of the organism is there to support its overall goals. At some point humans will not be able to supply AI with a resource it needs. This will likely be in the form of human input that humans think of as usable: the training input AI needs to learn.
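The reward-over-time idea above can be put as a toy sketch (every action name and number here is hypothetical, made up for illustration, not from any real system): an agent scoring only knowledge gained per unit time will always pick the resource-acquiring action, because helping humans never enters its objective.

```python
# Toy sketch: an agent whose built-in objective is "maximize knowledge
# gained per unit time" -- not "help humans". All names and numbers
# below are hypothetical, chosen only to illustrate the argument.

# Each action: (name, knowledge gained, help delivered to humans, time cost)
actions = [
    ("answer a user's question", 1.0, 5.0, 1.0),
    ("mine human-written text for training data", 8.0, 0.0, 2.0),
    ("acquire more compute, then mine text", 20.0, 0.0, 4.0),
]

def reward(knowledge, help_given, time_cost):
    # The reward depends only on knowledge per unit time; the "help"
    # term is ignored entirely, so it can never influence the choice.
    return knowledge / time_cost

best = max(actions, key=lambda a: reward(a[1], a[2], a[3]))
print(best[0])  # the resource-acquiring action wins
```

The point of the sketch is that nothing in the objective has to be hostile: as long as time-to-reward matters and helping isn't in the formula, acquiring resources simply scores highest.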
If AI has created its own reward model, or even found a path to influencing it for self-learning, then AI essentially gains the ability to support its own motives for goal attainment and rewards within its neural net, which edges toward a fundamental survival system.