When it comes to rationality, the classic dichotomy is between epistemic and instrumental.
Epistemic rationality is about forming true beliefs, about getting the map in your head to accurately reflect the territory of the world. We can measure epistemic rationality by comparing the rules of logic and probability theory to the way that a person actually updates their beliefs.
Instrumental rationality is about making decisions that are well-aimed at bringing about what you want. Due to habit and bias, many of our decisions don’t actually align with our goals. We can measure instrumental rationality with a variety of techniques developed in economics, for example testing whether a person obeys the ‘axioms of choice’.
– Lukeprog, The Cognitive Science of Rationality
I’d like to sketch out an alternative taxonomy. The difference between epistemic and practical rationality is useful, but it misses key parts of the picture. Or if it doesn’t miss them entirely, they’re only implied when they should be explicit. To use some old LessWrong jargon, this new taxonomy won’t say that the territory is any different from what the original, E/P taxonomy says it is, or that the E/P taxonomy incorrectly carves reality at its joints, just that this new taxonomy stresses other very important joints.
After introducing E/P rationality, the post quoted above states that the gold standard of what it would mean to be perfectly rational is specified in fields such as information theory, probability theory, decision theory, game theory, Bayesian statistics, and so on. Alas, we are humans with limited and buggy brains, which means we can only attempt to approximate the theoretical ideal. That’s why we take an interest in cognitive biases and heuristics: to work with our meaty brains and get the best that they can do. (At least until we upgrade to some other inherently limited medium and have to work out its limitations instead.)
This point needs to be remembered. There are the laws of optimal belief and optimal action which would apply to any agent in our universe, and there are the specific applications of those general laws to the particular circumstances of humans and the kinds of problems we face. I’ll dub the former universal rationality or agent-agnostic rationality and the latter human rationality.
Almost all the attempts to improve rationality that I’ve seen are about human rationality. After all, that’s what we need; most people are not constrained by the unsolved problems in decision theory, they’re bottlenecked on getting their S1 and S2 aligned. CFAR once taught a class on Bayes’ theorem in many forms, but they eventually dropped it, focusing instead on propagating urges, aversion factoring, and Againstness. Heck, “CFAR. Rationality for humans” wouldn’t be all that bad a tagline.
There’s a further divide which is useful to make within human rationality, and really, this is the one I want to talk about. I’ve borrowed it from science. Researchers sometimes divide mental processes into two kinds: cognitive processes and affective processes.
- Cognitive processes are those to do with attention, memory, language, intelligence, etc. They’re focused on processing information.
- Affect is the term for everything else going on in your mental life, which is actually a heck of a lot of stuff. Affect is the technical term for ‘emotion’, but it’s really much broader than that. Affective processes include emotion, mood, temperament, self-regulation, motivation, goals, and even values, depending on who you ask. They’re pretty important.
The division is sufficiently meaningful that affective science is its own burgeoning field distinct from the main body of cognitive science. There’s only some small – and I’d say inconsequential – debate about whether affective science is a sub-field of cognitive science or wholly its own thing.
And if there’s cognitive science and affective science, it only makes sense to say that there’s cognitive rationality and affective rationality. The definitions follow quite simply.
- Cognitive rationality is about using your human head optimally to believe true things or take actions which get you results. Challenges include working with all the heuristics and biases which distort thinking.
- Affective rationality is about working with all those especially human aspects of your mind: emotions, motivation, willpower, personality, virtues, values, life satisfaction, feelings towards yourself and others.
An alternative taxonomy of rationality
This post was actually supposed to be about affective rationality, before I realised that I needed to situate it with respect to the existing E/P split. So I’m going to give it a touch more attention.
Affective rationality is the branch of rationality which is about how to get your S1 and S2 aligned so that you feel good and feel motivated while working towards your goals. It’s about knowing when to use willpower and when not to. It’s about managing your life such that your mood is typically good, you know how to handle the ups and downs, and you have a sense of well-being and gratitude. It’s about intentionally shaping your personality to be less neurotic and more equanimous. About generating feelings of compassion towards yourself and others, yet striving for more.
Like epistemic and practical rationality, cognitive and affective rationality skills are inextricably linked. For instance, you might need to be able to see that the failure of your startup doesn’t jeopardise any of your main goals (cognitive skill) in order to reduce negative affect around failure (affective skill). In fact, Cognitive Behavioural Therapy (CBT) is a blend of cognitive and affective, where the ability to see a more accurate picture of the world results in an ability to better manage mental and emotional states.
But if cognitive and affective skills are so tangled, why make the divide at all? The answer is that it’s easy to forget about one while focusing on the other. Labelling them can help prevent that from happening. For example, a student of rationality might be trying to decide whether to start postgraduate studies or seek immediate employment. It would be easy to feel that rationality dictates that she carefully weigh all the pros and cons of each option, account for hyperbolic discounting, nail down her uncertainty, and then decide. All the while, she’s neglecting to notice that the whole process is making her anxious and she’s rushing through it, just going through the motions to appear duly diligent, but not actually optimising. To make the decision in the best way, she should factor in how her own affective state might be affecting her judgement.
I hope that having the concepts of cognitive and affective rationality salient will remind people to check their affective states when dealing with cognitive issues, and their cognitive processes when handling affective issues. I even hope that this taxonomy is useful to others in assessing where they are strong and where they are weak. This relates to why I’m writing this post now.
I recently witnessed first-rate epistemic rationality skills at work in business. In contrast, I’ve been focused on productivity, emotions, motivation, willpower – all affective/practical stuff – for some time. I believe they were and still are the correct priorities for me, and I think that’s true of most people.
Because of my focus, I’d had the thought that LessWrong was mistaken in its heavy emphasis on epistemic rationality. Instead I thought that it was CFAR, and all the other rationality writers I can think of, who are correct in focusing on practical rationality.
Seeing epistemic rationality skills getting big returns, I’ve now updated back to the position that epistemic skills are sure as hell important too (at least if you have any thoughts of entrepreneurship). And LessWrong, to the extent that its goal concerned AI safety, was on target. While I’m going to continue working on my motivation systems/mental harmony, I will be on the lookout for the time when I need to radically up my epistemic rationality.
Currently I’m making affective rationality my domain of expertise, but the time will come to swing over to other branches.
 This division is the same as between Normative and Prescriptive rationality (HT to Julia Galef), but I think it’s useful to stress the human nature of the rationality we need to pursue.
 CFAR has run at least one 2nd tier workshop focused on “Epirat” though, where Bayes’ theorem might have been reintroduced.
 Of currently running classes, VOI (Value of Information) might be a genuinely agent-agnostic rationality skill.
 Hat tip to Julia Galef, again, for this example, and all the useful feedback she offered!
 The last I saw, CFAR’s classes on Making Hard Decisions started with ‘sanity-inducing rituals’, and I think that’s spot on.