# [2021.04.08] Neuroevolution

Today I continued to attend a virtual school on reinforcement learning, this time mostly about genetic algorithms. I never thought this field was still being developed, and so intensely. Of course, genetic algorithms can't beat deep learning models in terms of accuracy, but sometimes they do well enough. For practical applications, that means precisely that: enough is enough :) And some things, like Pareto optimisation, are even more natural for evolutionary algorithms. Yes, they usually don't use GPUs, but they are easily parallelisable and thus can be quite fast. Genetic algorithms don't care about the differentiability of continuous functions, not to mention second-order derivatives.

In general, that sounds amazing, especially if one thinks about applications to automated provers. When we develop a mathematical theory, it doesn't matter whether we can prove some particular theorems. What matters is whether we can consistently move the theory forward and prove more and more valuable results. If humans couldn't find proofs of some statements for hundreds of years, why do we think machines should? But computers can certainly develop theories in their own way, I believe. That might be helpful.
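To make the "no derivatives needed" point concrete, here is a minimal sketch of a genetic algorithm in plain Python. It is not from the school's materials; the function names, the truncation-selection scheme, and all parameter values are my own illustrative choices. The only thing the algorithm ever asks of the objective is a fitness value, so the `abs()` kink in the test function, which would bother a naive gradient method, causes no trouble at all.

```python
import random

def evolve(fitness, dim, pop_size=50, generations=150,
           mutation_rate=0.1, seed=0):
    """Minimal genetic algorithm: uses only fitness values, no gradients."""
    rng = random.Random(seed)
    # Random initial population of real-valued genomes.
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        # Truncation selection: keep the fitter half (elitist, so the best
        # individual is never lost).
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        # Crossover + mutation to refill the population.
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, dim)  # one-point crossover
            child = a[:cut] + b[cut:]
            # Gaussian mutation applied gene-by-gene with small probability.
            child = [g + rng.gauss(0, 1) if rng.random() < mutation_rate else g
                     for g in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# A non-differentiable fitness: reward being close to 3 in every coordinate.
best = evolve(lambda xs: -sum(abs(x - 3) for x in xs), dim=3)
print(best)
```

Nothing here touches a derivative, and the fitness evaluations inside each generation are independent of one another, which is exactly why these algorithms parallelise so easily across cores or machines.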