Connecting my Dots: My Zettelkasten
My process basically looks like this:
- I read something I find interesting.
- I write down the idea atomically in my reference box.
- I think about where the idea fits into my already existing Zettelkasten. This means that I tag the zettel accordingly and find relevant notes to link it to.
- I create new ideas from this in my main box. This is the most difficult part.
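Structurally, a zettel boils down to a small record: one atomic idea, some tags, and links to related notes; the reference box holds ideas taken from my reading, the main box holds my own. Purely as an illustration of that structure (a sketch, not my actual tooling; the class, field names, and ids are made up), it might look like this in Python:

```python
from dataclasses import dataclass, field

@dataclass
class Zettel:
    """One atomic note: a single idea, tagged and linked to related zettels."""
    note_id: str                                     # unique identifier
    idea: str                                        # the atomic idea itself
    box: str                                         # "reference" or "main"
    tags: list[str] = field(default_factory=list)    # categories the idea fits into
    links: list[str] = field(default_factory=list)   # ids of related zettels

# Steps 1-2: capture an idea from something I read as a reference-box note.
source_note = Zettel(note_id="ref-001", idea="an idea taken from my reading", box="reference")

# Step 3: tag it and link it to notes that already exist in the Zettelkasten.
source_note.tags.append("some-fitting-category")
source_note.links.append("ref-000")  # hypothetical id of an earlier, related note

# Step 4 (the hard part): develop a new idea of my own out of it in the main box.
own_note = Zettel(
    note_id="main-001",
    idea="a new thought of my own, built on the reference note",
    box="main",
    links=[source_note.note_id],
)
```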
In general, I write down everything I find worth thinking about. This means that if I don’t have a category for something already, I will make one.
Disclaimer: Some of this might not make sense to you. That’s okay, as I mostly write my zettels for myself. Nevertheless, if you have any questions, don’t hesitate to ask me. My main motivation for this is twofold:
- I wish to write more zettels again, as I usually put off writing down what I learned in favour of reading more.
- I wish to write better zettels again, as they are often unintelligible to me after only a few weeks. If I have to write so that somebody else can understand a zettel, I am simultaneously making it intelligible to my future self two weeks from now.
Also: the zettels in the reference box are not based on my own thoughts, so I claim no credit for them. If you don’t want me to regurgitate your ideas by making a zettel out of them, contact me and I will take it down 🙂.
Reference Box
Spontaneous activity in sensory systems could be responsible for what we call ‘creativity’
Neurons with similar orientation selectivity are coupled across the cortex
Retinal ganglion cells and LGN cells have symmetrical circular receptive fields
In most mammals, retinal ganglion cells project to the superior colliculus and the LGN
Avoid emotional need monopolies
In our universe, most information is not redundant
Redundant information can be formalized via resampling or minimal latents
Natural abstractions are functions of redundantly encoded information
What is redundantly encoded information?
Humans use natural abstractions
Most general cognitive systems can learn the same abstractions
Natural abstractions don’t overlap
Most cognitive systems learn subsets of the same abstractions
What is a leaky abstraction?
Why should we expect abstractions to be natural?
What is an abstraction?
The Redundant Information Hypothesis
The Universality Hypothesis
The Natural Abstractions Hypothesis
What are attentional sets?
Hedonistic well-being theory
What is psychological moral patiency?
What is moral status?
Consequences of consciousness inessentialism
Sophisticated behavioural functions without consciousness
Consciousness inessentialism
AI progress is not mostly driven by progress in hardware efficiency
So far, AI progress is driven by the amount of data and compute during training
So far, the best predictor of AI progress has been compute scaling
The only possible evidence we can acquire for something to be desirable is that it is desired.
Phenomenology overflows access
A lot of the time people don’t know what they are doing
What is the difference between co-dominant and intermediate inheritance?
What is a random variable?
What is the time constant when modeling a biological neuron?
Main Box
Happiness and intelligence are orthogonal
We should expect natural abstractions to be in the thing
Natural abstractions can be translated between different degrees of complexity
Sophisticated AI systems will learn natural abstractions that we don’t understand
Natural abstractions form relatively to their agent
Journaling helps with navigating out of local optima
Don’t (always) trust your gut
What does failure in life look like?
Why should you trust other people if they consistently disappoint you?