Learnable Natural Systems
Learnable natural systems are natural patterns or processes that classical learning algorithms can discover and model efficiently, because physics, evolution, and selection leave exploitable structure behind.
Key points
- Hassabis’s Nobel-lecture conjecture frames many natural systems as learnable by classical algorithms, especially when the system was generated or shaped by nature rather than arbitrary randomness [src-063].
- The core intuition is that nature has already performed a search process; organisms, proteins, and physical structures that persist are not uniformly random but shaped by constraints and survival [src-063].
- AlphaGo and AlphaFold are used as analogies: models learn enough structure to guide search through spaces that brute force cannot cover [src-063].
- Boundary cases include chaotic systems, abstract number-theory-like problems, and phenomena where no learnable pattern exists or where quantum effects may be essential [src-063].
- The concept suggests a possible problem class for natural systems that classical neural networks can model efficiently [src-063].
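The gap between nature-shaped structure and arbitrary randomness can be made concrete with a toy sketch (illustrative only, not from the source): a simple bigram predictor fully learns a sequence produced by a generating rule, but does no better than chance on a uniformly random sequence over the same alphabet.

```python
import random

def bigram_accuracy(seq):
    """Train a bigram next-symbol predictor on the first half of seq,
    then measure its prediction accuracy on the second half."""
    half = len(seq) // 2
    train, test = seq[:half], seq[half:]
    # Count symbol transitions observed in the training half.
    counts = {}
    for a, b in zip(train, train[1:]):
        counts.setdefault(a, {}).setdefault(b, 0)
        counts[a][b] += 1
    # Predict the most frequent successor seen in training.
    def predict(a):
        succ = counts.get(a)
        return max(succ, key=succ.get) if succ else None
    hits = sum(predict(a) == b for a, b in zip(test, test[1:]))
    return hits / (len(test) - 1)

random.seed(0)
structured = "ABCD" * 250                                     # shaped by a simple rule
unstructured = [random.choice("ABCD") for _ in range(1000)]   # arbitrary randomness

print(bigram_accuracy(structured))    # perfect: the generating rule is learnable
print(bigram_accuracy(unstructured))  # near 1/4: no exploitable structure
```

The predictor stands in for a learned model guiding search: on the structured sequence it recovers the rule from limited data, while on the random sequence no amount of classical learning helps, mirroring the conjectured boundary of the problem class.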
Source references
- [src-063] Lex Fridman – “Demis Hassabis: Future of AI, Simulating Reality, Physics and Video Games | Lex Fridman Podcast #475” (2025-07-23)