Natural abstractions don’t overlap
Chan et al. (2023) argue that the space of natural abstractions is discrete, i.e. natural abstractions don’t overlap. They might blur into each other at the borders, but they probably don’t overlap to any large extent.
Thus, cognitive systems trying to predict a rotating gear are likely to learn an abstraction built from quantities such as angular velocity and rotation; whether they learn any abstraction at all is up to them. But they probably won’t learn a gear abstraction that is only somewhat similar to another system’s gear abstraction.
Nevertheless, different cognitive systems might end up with slightly different abstractions, depending on their goals and architectures, but those differences should amount to small variations on the same abstraction rather than a continuum of heavily overlapping ones.
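To make this concrete, here is a minimal toy sketch (my own illustration, not from the source): two linear predictors with different regularisation strengths, trained on the same observations of a rotating gear, both recover essentially the same predictive rule, equivalent to “current angle plus angular velocity times the time step”, rather than a spread of partially overlapping gear abstractions.

```python
import numpy as np

# Hypothetical toy setup (not from Chan et al.): a gear rotating at a constant
# angular velocity, observed at three consecutive time steps.
rng = np.random.default_rng(0)
dt = 0.1
n = 1000
omega = rng.uniform(0.5, 5.0, size=n)         # angular velocity per trajectory
theta0 = rng.uniform(0.0, 2 * np.pi, size=n)  # angle at time t
theta1 = theta0 + omega * dt                  # angle at t + dt
theta2 = theta0 + 2 * omega * dt              # angle at t + 2*dt (prediction target)

X = np.column_stack([theta0, theta1])         # observations available to the predictor
y = theta2

def fit_ridge(X, y, lam):
    """Closed-form ridge regression; the regularisation strength stands in
    for 'different cognitive architectures'."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w_a = fit_ridge(X, y, lam=1e-6)   # predictor A
w_b = fit_ridge(X, y, lam=10.0)   # predictor B, differently regularised

# Both recover weights close to (-1, 2), i.e. theta2 ≈ 2*theta1 - theta0,
# which is "current angle plus angular velocity * dt" in disguise:
# the same low-dimensional abstraction, not two heavily overlapping ones.
print("predictor A:", np.round(w_a, 3))
print("predictor B:", np.round(w_b, 3))
```

The regularisation difference only shrinks the weights slightly; it does not push either predictor onto a qualitatively different gear abstraction, which is the intuition behind the discreteness claim.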
Source: Chan, L., Lang, L. and Jenner, E. (2023) ‘Natural Abstractions: Key claims, Theorems, and Critiques’. AI Alignment Forum. Available at: https://www.alignmentforum.org/posts/gvzW46Z3BsaZsLc25/natural-abstractions-key-claims-theorems-and-critiques-1 (Accessed: 19 March 2023).