I didn’t do any work on any of the things I “wanted” to work on/learn; I didn’t make any Anki notes.
I still found interesting things to look at though. Here are some highlights of my past week’s browsing history:
- Slate Star Codex
- LessWrong
- Resist the Happy Death Spiral – LessWrong 2.0: rarely is there a single “Great Thingy” (his words) that fixes everything; guard against the spiral by splitting the idea into its smaller subcomponents and evaluating each one on its own
- I find that the “Great Thingy” is often an aesthetic thing. I sort of feel like there could be a “religion” built around science and rationalism: “Great Thingy”s that actually do work (though still not outright magic)
- The Logical Fallacy of Generalization from Fictional Evidence – LessWrong 2.0: fictional evidence is, well, not evidence. Certain hypotheticals will be more story-friendly, even if they aren’t probable. It’s dangerous when these contaminate the way we see things; they “do your thinking for you”
- The ground of optimization – LessWrong 2.0: defines optimizing systems as those that tend toward a certain target (a “configuration set”) from anywhere within a certain range (a “basin of attraction”). Useful for thinking about AI.
- Causal Universes – LessWrong 2.0 (kind of confusing for me): time travel that doesn’t change the past is non-causal; causal universes are easily computable, non-causal ones aren’t. Reality seems somehow tied to causality.
- The Least Convenient Possible World – LessWrong 2.0: when answering hypotheticals/philosophical thought experiments, think of the version of the hypothetical where it is the hardest to answer
- Commenter davidamann frames it instead as asking what would need to change for the belief to change. That seems better because it’s done by the answerer. It reminds me of Anthony Magnabosco’s videos, where he asks that a lot.
- … there are a lot more. I spend a lot of/too much time binge reading LessWrong.
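The “ground of optimization” definition above can be sketched in code. This is my own toy illustration, not from the essay: gradient descent on a quadratic is an optimizing system whose configuration set is the single point x = 3, and whose basin of attraction (for this function and step size) is the whole real line, so perturbing the start still leaves the system tending toward the target.

```python
# Toy "optimizing system": gradient descent on f(x) = (x - 3)^2.
# The configuration set is {3}; the basin of attraction here is all of R,
# since every starting point gets pushed toward the target.
# (Function and step size are illustrative choices, not from the essay.)

def step(x, lr=0.1):
    grad = 2 * (x - 3)    # derivative of (x - 3)^2
    return x - lr * grad  # move against the gradient

def run(x0, iters=200):
    x = x0
    for _ in range(iters):
        x = step(x)
    return x

# Perturb the start anywhere in the basin; the system still tends toward 3.
for start in (-50.0, 0.0, 10.0):
    print(round(run(start), 6))  # → 3.0 each time
```

The point of the essay’s framing is exactly this robustness to perturbation: a ball rolling to the bottom of a bowl counts as an optimizing system, a ball rolling in a straight line does not.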
I think I need some sort of process of turning these hours and hours spent reading into something useful: namely, I think I should Ankify them.
I do find the content of these essays really fascinating and useful (the rationality stuff), so I should memorize it. In addition, maybe the overhead of having to Ankify them will make me read less out of work-boredom, freeing me up to do something different (and hopefully productive).
I think a good process for Ankifying these kinds of essays, which are somewhat subjective by nature, is to be cognizant of the source (the title and author). After reading an essay (maybe more than once if I had trouble understanding it), I would work out its central thesis/claim, then memorize that claim and the reasons for it.
I have kind of told myself to “just Ankify everything I see”, but I rarely do that; often I’m kind of tired, but mostly it seems that I just never started. (I used to run every morning, and it was really quite simple to tell myself that I needed to do it, despite not really feeling like it and still not really enjoying the running; but right now, I don’t have that habit, so there’s an extra mental obstacle that prevents me from “feeling” like running.)
A problem: a lot of essays are really long (e.g., Meditations On Moloch). I suppose I should just get more comfortable with closing out of an essay to finish it later, and with having to reread large chunks of it when I’m Ankifying it. I feel like reading these gives me an “emotion” of rationality, of the ability to do better (not just me, but society as a whole); but if I actually want to use these ideas in my thoughts and actions, I need to memorize them.