Neural Network

   { Not my field, learning on the go, watch out for errors! ~drummyfish }

   In [1]artificial intelligence a neural network (also neural net or just
   NN) is a system simulating a natural biological neural network, i.e. a
   biological system found in [2]living organisms, most importantly in our
   [3]brain. Neural networks are just another kind of [4]technology inspired
   by nature's ingenuity -- they try to mimic and simulate the naturally
   evolved structure of systems such as brain in hopes of making computers
   learn and "think" like living beings do, and in recent years they started
   achieving just that, with great success. Neural networks are related to the
   term [5]deep learning which basically stands for training multi-layered
   neural networks.

   Even though neural networks absolutely aren't the only possible model used
   in [6]machine learning (see e.g. [7]Markov chains, [8]k-NN, [9]support
   vector machines, ...), they seem to be the most promising one -- nowadays
   neural networks are experiencing a boom and practically all AI research
   revolves around them; they already made their way from research to
   practice, not only do they play games such as [10]chess at a superhuman
   level, they already create extremely complex art and show some kind of
   understanding of pictures, video, audio and text on a human level (see
   [11]chatGPT, [12]stockfish, stable diffusion etc.), and even surpass
   humans at specialized tasks. Most importantly of course people use this
   for generating [13]porn, see e.g. [14]deepfakes. The exceptional results
   are already being labelled "scary" due to fears of [15]technological
   singularity, "taking jobs", possible "unethical uses" etc.

   Currently neural networks seem to be bringing back [16]analog computing.
   As of 2023 most neural networks are still simulated with [17]digital
   computers, but due to the fact that such networks are analog and parallel
   in nature the digital approach is inelegant (we make digital devices out
   of analog circuits and then try to make them behave like analog devices
   again) and inefficient (in terms of energy consumption). Therefore analog
   is making a comeback and researchers are experimenting with analog
   implementations, most notably electronic (classic electronic circuits) and
   photonic (optics-based) ones. Keep in mind that digital and analog
   networks are compatible; you can for example train a network digitally and
   then, once you've found a satisfying network, implement it in analog so
   that you can e.g. put it in a cellphone without draining too much energy.
   Analog networks may of course be embedded in digital devices (we don't
   need to go full analog).

   [18]Hardware acceleration of neural networks is being developed. Just as
   [19]GPUs appeared to accelerate computer [20]graphics during the 90s video
   game boom, special hardware is now appearing to accelerate neural network
   computations -- these are called [21]AI accelerators, notably e.g.
   Google's [22]TPU (tensor processing unit). Currently GPUs are still mostly
   used for neural networks -- networks computed purely in software (on a
   CPU) are too slow. It is possible that future neural network hardware will
   be analog-based, as mentioned above.

Details

   At the highest level a neural network is just a [23]black box with N real
   number inputs and M real number outputs. For example we may have input
   values such as age, height, weight, blood pressure, and two output values,
   one saying the expected years to live and the other one saying the
   confidence of this prediction. Inside this box there is a network of
   neurons that we can train (adjust with different learning [24]algorithms)
   so that it transforms the input values into output values in a correct way
   (i.e. here makes useful predictions).
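
   To make this concrete, here is a minimal sketch of such a black box in
   C: 4 inputs, one small hidden layer and 2 outputs, where each neuron
   computes a weighted sum of its inputs plus a bias and squashes the
   result with an activation function (the sigmoid here). The weights below
   are made-up placeholders (a real network would get them from training
   and would also normalize the inputs first); the point is only to show
   the data flow:

     #include <stdio.h>
     #include <math.h>

     #define N_IN  4 /* inputs: age, height, weight, blood pressure */
     #define N_HID 3 /* hidden layer neurons */
     #define N_OUT 2 /* outputs: years to live, confidence */

     /* Placeholder weights (last one in each row is the bias); in
        practice these would be found by a training algorithm. */
     double wHid[N_HID][N_IN + 1] =
     {
       {  0.10, -0.20,  0.05,  0.30,  0.00 },
       { -0.40,  0.10,  0.20, -0.10,  0.50 },
       {  0.20,  0.30, -0.10,  0.10, -0.20 }
     };

     double wOut[N_OUT][N_HID + 1] =
     {
       {  0.50, -0.30,  0.20,  0.10 },
       {  0.10,  0.40,  0.30, -0.20 }
     };

     double sigmoid(double x)
     {
       return 1.0 / (1.0 + exp(-1.0 * x));
     }

     /* One neuron: weighted sum of n inputs plus bias, then activation. */
     double neuron(const double *in, const double *w, int n)
     {
       double sum = w[n]; /* bias */

       for (int i = 0; i < n; ++i)
         sum += in[i] * w[i];

       return sigmoid(sum);
     }

     /* The whole black box: N_IN numbers in, N_OUT numbers out. */
     void evaluate(const double in[N_IN], double out[N_OUT])
     {
       double hid[N_HID];

       for (int i = 0; i < N_HID; ++i)
         hid[i] = neuron(in, wHid[i], N_IN);

       for (int i = 0; i < N_OUT; ++i)
         out[i] = neuron(hid, wOut[i], N_HID);
     }

     int main(void)
     {
       double in[N_IN] = { 40, 175, 70, 120 }; /* some example person */
       double out[N_OUT];

       evaluate(in, out);
       printf("%f %f\n", out[0], out[1]);

       return 0;
     }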

   Note that a traditional feed-forward neural network is just a network
   similar to e.g. a simple [25]logic circuit; it is NOT a universal
   [26]Turing complete [27]model of computation like a [28]Turing machine or
   [29]lambda calculus because it cannot for example perform loops or take
   arbitrarily sized input (the number of input values is fixed for given
   network). A neural network just takes the input numbers and in a certain
   fixed time (which theoretically doesn't depend on the input) runs them
   through the network to obtain the output numbers, i.e. it's best to view
   it as approximating a mathematical [30]function rather than interpreting
   an algorithm. Of course, a neural network itself can be (and in practice
   is) embedded in a more complicated system that can do all the above, but
   in its simple form it's just a bunch of connections between inputs and
   outputs.
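
   For illustration, with hand-picked weights a tiny network can behave
   exactly like a small logic circuit: the following sketch (weights chosen
   by hand here, not by training) computes XOR of its two inputs, always
   with the same fixed number of inputs and in the same fixed number of
   steps:

     #include <stdio.h>

     /* Step activation: the neuron "fires" (1) if its weighted sum
        exceeds 0. */
     double step(double x)
     {
       return x > 0 ? 1 : 0;
     }

     /* 2 inputs, 2 hidden neurons, 1 output; weights picked by hand. */
     double xorNet(double a, double b)
     {
       double h1 = step(a + b - 0.5); /* acts as OR  */
       double h2 = step(a + b - 1.5); /* acts as AND */

       return step(h1 - h2 - 0.5);    /* OR but not AND: XOR */
     }

     int main(void)
     {
       for (int a = 0; a <= 1; ++a)
         for (int b = 0; b <= 1; ++b)
           printf("%d XOR %d = %d\n", a, b, (int) xorNet(a, b));

       return 0;
     }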

   TODO

History

   TODO

See Also

     * [31]boolean net

Links:
1. ai.md
2. life.md
3. brain.md
4. tech.md
5. deep_learning.md
6. machine_learning.md
7. markov_chain.md
8. k_nn.md
9. svn.md
10. chess.md
11. chat_gpt.md
12. stockfish.md
13. porn.md
14. deepfake.md
15. tech_singularity.md
16. analog.md
17. digital.md
18. hw.md
19. gpu.md
20. graphics.md
21. ai_accelerator.md
22. tpu.md
23. black_box.md
24. algorithm.md
25. logic_circuit.md
26. turing_completeness.md
27. model_of_computation.md
28. turing_machine.md
29. lambda_calculus.md
30. function.md
31. boolean_net.md