What Artyom is thinking about, issues #1–4

What Artyom is thinking about is a newsletter with comments that are too short to turn into posts and too long to go on Twitter. Some of them might become posts anyway though. Would be kinda weird if I wasn’t writing posts about anything I am thinking about.

What to do if you haven’t subscribed yet

Here you go:

What’s inside

In addition to my own posts, the first four issues covered:

#1 – January 16

#2 – January 24

#3 – January 30

#4 – February 8

What to do if you have already subscribed

Good! You can subscribe again. If you subscribe twice, you will be subscribed twice.


Against against disputing definitions

Disputing definitions is thought to be a faux pas – at least by rationalists. Nowadays I can think of at least three aspects of disputing definitions that are not an obvious faux pas. Some arguments about definitions are silly anyway – but far less often than I used to think.

Ontological remodeling

First, an argument about definitions can be an earnest attempt to figure out what your own ontology should be, where “ontology” roughly means “which things you treat as existing”. E. g. for some people “introvert” is a meaningful thing that exists, and for other people “introvert” is a bunch of traits that kinda correlate, but they never explicitly think about people “being” introverts.

The question of “what ontology should I adopt?” is hard to think about clearly, especially if:

  • you lack an explicit understanding of how exactly ontologies can be useful, and
  • you don’t yet know how to switch between ontologies without strongly committing to them.

So, instead of a discussion you engage in a play fight.

Play fights work because you have to actually defend your worldview instead of merely talking about it. Sure, you might not get anywhere during the fight itself, and it might look silly to outsiders; the real process of figuring things out will commence after the fight, e. g. by looking at all the things you have committed to and realizing that some of them are dumb. (Without committing to anything, it might take you much longer to realize which of the potential commitments would have been dumb – if you manage to do it at all.)

If you only see the value of “objective truths” and not of ontologies/paradigms/worldviews, this doesn’t make sense. But otherwise it should.

Defending formal systems

Second, an argument about definitions can be an attack: “fuck you for not following formal systems”. This is where the cursed dictionary comes in – it specifies the formal system people are “supposed” to follow. This is also where people start searching for inconsistencies in each other’s arguments: “you’re saying you have a different formal system, but I don’t believe you, and if I can prove it’s not consistent, your claim is bunk and you’re a liar”.

If you feel that formal systems are great and thinking inside formal systems is an essential skill, it might make sense to attack people who try to hurt your cause by disregarding formal systems. Cf. STEM-inclined people hating postmodernists. However, I think that out of the three arguments in favor of disputing definitions, this is the worst one.

(An aside: some people also rely on the authority of “official” formal system makers, and denying that authority is perceived as “I want to watch the world burn”. I don’t have much to say about it yet, but it probably also makes sense.)

Attacking hostile ontologies / defending your own ontologies

Third, an argument about definitions can be a different sort of attack: “I understand what ontology you are operating within, I just think it supports %bad paradigms% that lead to %bad things%, and I would like nothing more than for this ontology to disappear”. Is abortion murder?

In general, a lot of paradigms rely on drawing sharp distinctions between A and B. “There is actually no sharp distinction between A and B” can be used to shut down a discussion about A and B – and it’s very keenly felt by both sides. An example: a lot of anti-homosexuality people rely on homosexuality being a thing – it’s easier to wage a war on an identity (“gay”) than on a practice (gay sex). But a lot of pro-homosexuality people also rely on homosexuality being a thing!

The result is a curious phenomenon of biphobia, where both sides are discriminating heavily against bisexuals – who don’t fit into the ontology, and thus undermine paradigms based on this ontology. What happens next is bisexual erasure, where both sides literally claim bisexuality is not a thing. You’d think the LGBT community is the last place where bisexuals would be discriminated against, but – no.

The same plays out with #NotAllMen and #YesAllWomen. If you want to talk about patriarchy, it’s not enough to observe that “men seem to have more power” – it ties you down, heavily. You can only say “we need to dismantle patriarchy” within an ontology where patriarchy exists, and otherwise you are limited to “we need the power to be more equally distributed”. Discussions, theories, hypotheses, thoughts within this ontology are very different from the ones outside of this ontology, even if both sides agree on the facts.

(Note: I’m not expressing any opinion on patriarchy one way or the other.)

At this point some rationalists claim: “Yeah, but since the distinctions aren’t as sharp as you say they are, you are wrong. More truth = better”. The unseen assumption here is that locally optimizing for maximum precision will also strategically maximize the amount of truth you find. I think it’s a wrong assumption.

So, at worst arguments about definitions are attacks on hostile ontologies – and this is only silly if you think ontologies can’t be hostile.

At best, they are defenses of your own ontology – “Hey, your proposed ontology is more precise, but less amenable to analysis. In other words, you are trying to erase concepts that we are relying on to think. No thanks”.

Further reading

Does Race Exist? Does Culture? provides a very catchy, very easy to grasp introduction to nebulosity, which is essential to believing that you are allowed to play with ontologies. Using “culture” as an example of a nebulous concept works great:

  • you can’t say how many cultures there are,
  • two people from the same culture might be more different than two people from different cultures,
  • there is nothing about a culture that applies to all members of this culture,
  • and yet we can talk about cultures as Things That Exist and even get something useful out of it.

A bridge to meta-rationality vs. civilizational collapse, by David Chapman, goes deeper regarding the defense of formal systems – and quite a bit more strongly than I am willing to:

Deconstructive postmodernism, their critique of stage 4 modernism/systematicity/rationality, is the basis of the contemporary university humanities curriculum. This is a disaster. The critique is largely correct; but, as Kegan observed, to teach it to young adults is harmful.

Ontological remodeling, also by Chapman, is helpful for understanding the concept of ontologies, but not as helpful as I would like. Kuhn’s The Structure of Scientific Revolutions is somewhat better. Reading the whole of Chapman’s Meaningness is a good thing to do after reading Kuhn, though.


When it’s done, it’s done


Aperture: Senior QA (2004-2005) is an account by an ex-Apple engineer who worked on Aperture, a photo organizing app. There was some unpleasant stuff, apparently, but at the same time while reading it I acutely felt that I missed something important in life.

Here is a sample of the unpleasant stuff. Requiring people to work nights and weekends:

One day, they sent out a group email saying everyone needed to start working nights and weekends until the very end of the project. Keep in mind that the project still had roughly six months to go! People with kids would be sacrificing their entire summers.

Feels unthinkable to me. And so does this shouting incident:

I replied that I couldn’t go along with the mandatory work hours. [...] The person that sent that email came to my office within minutes. He slammed the door and shouted at me so loud that people heard it five offices away. HOW DARE YOU, he bellowed. I was fucking up the whole project, he said.

It’s not that “shouting” and “work” never occurred together in my life – I did once shout at a colleague for making my life harder by misunderstanding “git rebase”. But still, something feels very off there. As well as here:

Another fun story was that I was dragged into bug review meetings several times with management. They seriously yelled at us for writing bugs. ‘This bug should never have been written!’ they shouted. They argued that we shouldn’t write bugs on incomplete features. [...] My feeling was if you find a bug, you have to record it, or you won’t remember to check it later. [...] These managers were trying to keep their bug counts down, so they adopted all these tactics.


Managers love to plot bug counts and try to see if they can make the graph get to zero by a specific date. I found out second-hand that a high-level meeting took place late in the project where they were discussing risks to the project shipping on time. I was considered a risk to the project because of the number of bugs I filed.



Looking back, I have always worked according to “when it’s done, it’s done”. Even when there were deadlines, I would try to spend as much time as possible on the parts that did not have deadlines attached – refactoring, writing specs/documentation, improving the build process, working on continuous integration, etc.

When I worked on features with hard deadlines, I was stressed about getting them done in time, but what I cared about was getting them perfect. I remember a dialog from the time I was coordinating a two-week feature sprint at Wire:

— Me: Let’s rename this field in our public API from X to Y.
— Web team: But two days ago you said it would be called X.
— Me: Yeah, but now I think Y is a better name.
— Web team: Let’s leave it as X.

I wanted to say “BUT THEN A SUBOPTIMAL NAME WILL BE IN OUR PUBLIC API FOREVER”, but didn’t. I’m still slightly upset about it.

On the other hand, I did not care about temporary infelicities as long as I could eventually fix them:

— Me: Let’s add a fake endpoint implementation until I’m done with the real implementation.
— Backend team: What? Let’s not.
— Me: But then the web team will be able to move forward faster.
— Backend team: Still, let’s not.
— Backend team: Fine :/


What does it mean to have something “done”?

On the implementation side, things have to be as consistent as possible – same naming conventions, same code formatting, same technologies used. No duplication anywhere – pieces should be abstracted away where possible. Automatic tests for everything. No known bugs. (How can you ever require people to file fewer bugs? You might miss some if you do it!)

The codebase doesn’t have to be perfect as in “written exactly like you would write it if it was the N-th version of the codebase, with N approaching infinity”. But it should be in a local optimum – no easy ways to improve it, no low-hanging fruit.

On the product side, it gets trickier – something along the lines of “it’s perfect when any possible complaint can be dismissed as illegitimate”. If your app only works on Windows, well, you can say “it’s Windows-only”. If it works on Windows and macOS but not Linux, you no longer have an excuse and you should work on the Linux version.

In other words, you need to have every feature that you can’t justify not having, and the only admissible justifications are either “I don’t know how to do it” or “it goes against the product vision”. Note that “it’s not a priority” is not an admissible justification.

These quotes from a Twitter thread by Allen Holub perfectly capture my ideal workflow:

One assumption that underlies the belief that Jira is good is that a backlog is an infinitely long list of up-front requirements captured by experts who pass those requirements to teams to implement as specified. 1/6

Those requirements must be tracked to make sure they don’t get lost, and you succeed by retiring the requirement. 2/6

The requirements are (a) fixing all bugs anyone was able to find, and (b) implementing all features I can think of. When it’s done, the product deserves to graduate out of beta, but no earlier.

Maybe in some cases it’s even a good approach – I don’t know. But so far it seems to be a disaster.

I don’t want to work like this anymore.


Going back to the Aperture story, I see why I feel that “everyone needed to start working nights and weekends” is unthinkable. Not just unacceptable – “you can’t treat people this way” – but more like... why would you ever want to? When it’s done, it’s done; why does it matter if it gets done earlier or later?

And this is what I missed – the sense of urgency caused by having goals that are not about producing a perfect artifact. Those goals can be anywhere from “not getting fired” to “beating competitors” to “solving an important problem for customers”; they don’t necessarily justify asking people to work overtime, but they move it out of the territory of “why would I” into the territory of “I choose not to”.

I’m not saying I want to have worked somewhere I am shouted at, or made to work nights and weekends. But maybe it would have helped.


Against being a “blogger”

I. The problem

I want to write the kind of stuff that I like to read.

For instance, overarching theories are great – Kegan, The Gervais Principle, etc. I would also be happy to write about “things aren’t what you naively think they are!” – some of my favorite posts belong to that genre:

There are other preferences, too. I want to write “canonical” things, i.e. if somebody wants to learn why exactly X is wrong, my blog should be the place to start. Oh, and also entertaining and funny, somewhere between “laced with micro-humor” and “sarcastic drunken pirate” kind of funny.

So, what’s the problem with this? The problem is that I don’t care about anything. There is no goal. No benchmark. Nothing to optimize for. No “failure” or “success”.

I don’t have specific ideas that I want to convey to specific people. There are no lies that I am fighting against. No causes to promote, no changes to bring about. When I’m in the “gotta be cool” mode, I don’t have fun while writing, and I don’t learn anything.


II. The solution?

When writing this post, I had a specific goal: I wanted to internalize the problem by writing about it. And so, writing this post turned out to be quite a bit less tedious. (Unfortunately, I didn’t have a specific audience in mind – and so, it was still kinda tedious.)

I guess caring about more things would help, as well as writing for specific audiences / people. Or, to generalize a bit: I should excise “I’m a blogger” from my identity, and treat writing as just another tool in the toolbox.


Right hemisphere neglect

I am reading Iain McGilchrist’s The Master and his Emissary (available on LibGen) – a very long book about the left/right hemisphere divide. The first few chapters mostly detail what we know about the differences between the left and the right hemisphere, often just one sentence per study.

So far I am very tempted to conclude that my right hemisphere is – unfortunately – either atrophied, heavily repressed, or both. If you, like me, live your life with a constant feeling of “what’s wrong with all those people” – then read on.

Note: I think that strengthening the “right hemisphere” functions could be very useful in terms of (a) leading a happier life and (b) getting things done. The quotes below mostly cover (a) only. I will talk about (b) in later posts.


If it is the right hemisphere that is vigilant for whatever it is that exists ‘out there’, it alone can bring us something other than what we already know. The left hemisphere deals with what it knows, and therefore prioritises the expected – its process is predictive. It positively prefers what it knows.

I don’t enjoy unfamiliar things. I am scared of eating anything I can’t predict the taste of. I don’t get how people can listen to strangers’ playlists on Spotify all day long – when I do it, it’s because I’m willing to do the hard work of finding new music I would like, but I don’t enjoy anything about the process.

I almost always read plots of movies/shows before watching them – I don’t want to spend even a minute thinking “will they, won’t they”. I ask people to always check with me before buying me gifts, so that they would not accidentally get me something I would not like. Nobody seems to get that “it won’t be a surprise anymore” doesn’t matter at all – the feeling of surprise is momentary, and being stuck with a slightly inferior gift is forever.


[...] the right hemisphere uses unique referents, where the left hemisphere uses non-unique referents. It is with the right hemisphere that we distinguish individuals of all kinds, places as well as faces.

[...] Right-brain-damaged patients are not only poorer at identifying faces, compared with left-brain-damaged patients, but are poor at assessing such features as the age of a face with which they are not familiar.

I can’t distinguish faces well. When I’m watching a movie with someone, I would occasionally ask “is this guy the same guy we’ve seen before?”. On one level it’s not obvious to me that it’s the same guy, but on another level, even if I could take a guess, I don’t want to. I feel somewhat proud of not being able to recognize faces, or of not being able to spot familiar actors in movies, and I always make it known.

I always felt that written communication was in every aspect as good as face-to-face communication, and assumed that people were vehemently disagreeing because they couldn’t type fast, or lacked the ability to express themselves well via text, or something along those lines.

I used to never make eye contact when talking to people. I do now, but I have to force myself and it’s a bit scary.


In fact it is precisely its capacity for holistic processing that enables the right hemisphere to recognise individuals. Individuals are, after all, Gestalt wholes: that face, that voice, that gait, that sheer ‘quiddity’ of the person or thing, defying analysis into parts.

I remember a conundrum from a few years ago: everyone says you are supposed to treat your friends as “unique”, but they just happen to be my friends because I met them. If I met someone else, I would be friends with someone else. How can I justify treating anyone in my life as “unique” then?

Oh, and Tim Minchin’s If I Didn’t Have You:

Your love is one in a million
You couldn’t buy it at any price
But of the nine-point-nine-nine-nine-hundred-thousand other possible loves
Statistically, some of them would be equally nice

So I trust it would go without saying
That I would feel really very sad
If tomorrow you were to fall off something high
Or catch something bad
But I’m just saying
I don’t think you’re special
I-I mean, I think you’re special
But you fall within a bell curve


Although there has been much debate about the particular emotional timbre of each hemisphere (of which more shortly), there is evidence that in all forms of emotional perception, regardless of the type of emotion, and in most forms of expression, the right hemisphere is dominant.

[...] As well as emotional recognition, the right hemisphere plays a vital role in emotional expression, via the face or the prosody of the voice. The right frontal lobe is of critical importance for emotional expression of virtually every kind through the face and body posture. The one exception to the right hemisphere superiority for the expression of emotion is anger.

I’m low on empathy, I am somewhat unemotional, and I am often told that what my face shows doesn’t match the emotional content of what I’m saying. Bingo.


In keeping with its capacity for emotion, and its predisposition to understand mental experience within the context of the body, rather than abstracting it, the right hemisphere is deeply connected to the self as embodied.

[...] The right hemisphere, as one can tell from the fascinating changes that occur after unilateral brain damage, is responsible for our sense of the body as something we ‘live’, something that is part of our identity, and which is, if I can put it that way, the phase of intersection between our selves and the world at large. For the left hemisphere, by contrast, the body is something from which we are relatively detached, a thing in the world, like other things [...], devitalised, a ‘corpse’.

I don’t have any special feelings towards my body. I think it would be fun living in a different one as long as it was not worse than my current one.

I don’t have an attachment to my gender identity either (though if I woke up one morning and found out I was turned into a girl, I would be worried about the periods). I used to heavily dislike both excessive masculinity and excessive femininity.


How do you know it’s winter?

Anything that requires indirect interpretation, which is not explicit or literal, that in other words requires contextual understanding, depends on the right frontal lobe for its meaning to be conveyed or received. The right hemisphere understands from indirect contextual clues, not only from explicit statement, whereas the left hemisphere will identify by labels rather than context (e. g. identifies that it must be winter because it is ‘January’, not by looking at the trees).

Here is what I see outside – take a look:

It so happens that it’s January right now, but there is no snow, and the sky is blue. It took me about ten seconds to find anything that could be used to determine the season, even though the previous paragraph literally says “looking at the trees”. Huh.

Gear shift. I remember how my ex-girlfriend didn’t want to call me her boyfriend – because she disliked labels. Not only was it super scary, but I also couldn’t get what she even meant by disliking labels. “This is not a ‘label’, this is how things are – why would you ever not want to admit it?”

Taking things literally

The right hemisphere takes whatever is said within its entire context. It is specialised in pragmatics, the art of contextual understanding of meaning, and in using metaphor. It is the right hemisphere which processes the non-literal aspects of language, of which more later. This is why the left hemisphere is not good at understanding the higher level meaning of utterances such as ‘it’s a bit hot in here today’ (while the right hemisphere understands ‘please open a window’, the left hemisphere assumes this is just helpful supply of meteorological data).

I will not take “it’s a bit hot in here today” as anything other than “it’s a bit hot in here today”. If you tell me that you meant “please open the window”, and I’m generally allowed to shout at you, I will be tempted to. Why didn’t you just say that you want me to open the window? Ugh.

I know that “how are you doing?” means “hello”, but nine times out of ten I will still tell you how I’m doing. I used to actually give people a rundown – this is how I’m feeling at the moment, today, this week, and this month. I don’t do this anymore, at least.


The left hemisphere, because its thinking is decontextualised, tends towards a slavish following of the internal logic of the situation, even if this is in contravention of everything experience tells us. This can be a strength, for example in philosophy, when it gets us beyond intuition, although it could also be seen as the disease for which philosophy itself must be the cure; but it is a weakness when it permits too ready a capitulation to theory.

For me, consistency and “following of the internal logic of the situation” are very important. I readily bite philosophical bullets. For instance: given how much chickens suffer in farms, buying a chicken breast is not worse – in terms of consequences – than torturing a cat, so if you don’t hate people who enjoy going to KFC, you are not allowed to hate people who torture cats. (I found a way around it – statistically there is probably something wrong with people who torture cats, so not wanting to be around them makes sense. Still, hating them doesn’t.)

Whenever I remember about the heat death of the universe, for a moment I distinctly feel “maybe I should just die, because in the end the universe won’t exist anyway and nothing I did would matter at all”. As long as I don’t remember about it, though, I’m fine. On the other hand, I bite the “cloning consciousness” bullet as well – it’s okay to clone me and kill the original version. It doesn’t mean I wouldn’t be scared – it only means that I feel like I shouldn’t be.

Knowing things vs. looking things up

When I’m in an unfamiliar place, I will use Google Maps for everything. Even if I’m going somewhere that is five minutes away, and I have been there before, I will still check the map. If someone tells me they don’t have maps installed, I pity them, and if it’s on purpose, I treat them with suspicion. In fact, I’m actively rooting for maps in the maps–humans battle – if someone tells me “oh, I know a faster route”, I don’t think “this is cool”. I think “aw shit, I want the map to always know the best route, I want it to always beat you, ugh”.

Things that exist vs. hypotheticals

The left hemisphere operates an abstract visual-form system, storing information that remains relatively invariant across specific instances, producing abstracted types or classes of things; whereas the right hemisphere is aware of and remembers what it is that distinguishes specific instances of a type, one from another. The right hemisphere deals preferentially with actually existing things, as they are encountered in the real world.

I hate it when people say “some things shouldn’t be joked about”. I sort of get the concept, but it still makes me very irritated. A joke is a hypothetical, and all hypotheticals are allowed. “Ugh, I have a headache” — “Maybe you have head cancer?”. Alright, granted, not the most amusing joke. But why would you get angry?

On the other hand, thinking about hypotheticals is just as interesting as thinking about the real world. Some people try to subvert this – you tell them “let’s assume X” and they tell you “but this is not what happens in the real world”. Geez, I specifically said “assume”. Why won’t you slavishly follow the internal logic of the situation? Why?

In the Harris–Klein debate, I am firmly on Harris’s side.

Abstract paintings

It has to be said that, though it is involved with emotion, the left hemisphere remains, by comparison with the right, emotionally relatively neutral, something which is evidenced by its affinity for ‘non-emotional’, abstract paintings.

A friend was choosing a T-shirt for me. Here are literally the instructions I gave:

I like abstract things
I don’t like logos or text or generally things that show affiliation (cartoon characters etc)
I also don’t like patterns covering the whole T-shirt, but I don’t know why

When I was a kid, I had “crafts” lessons and I hated them. I can only remember two drawings I did: one was a tracing of my hand, with a different color for each finger, and the other was an eye with circle-shaped patterns in it. And of course, I used a pair of compasses instead of drawing the circle-shaped patterns by hand.


I have a need to take account of myself as a member of my social group, to see potential allies, and beyond that to see potential mates and potential enemies. Here I may feel myself to be part of something much bigger than myself, [which requires] more of an open, receptive, widely diffused alertness [that the right hemisphere specializes in]

Until recently, I didn’t feel the need to belong. I figured out that some people wanted to achieve things, and some people felt more comfortable helping others achieve things, and my feelings towards the latter group could be summarized as “you are useful, granted, but I would never ever want to be you, or even to be friends with you”.

A co-worker mentioned they were going to a protest; I made an incredulous face and told them that surely it was much more useful to organize one than to attend one.


I think that all those things are fixable.

In other words, I don’t think that I lack the actual capacity to do anything from the above – e. g. improv seems to be squarely in the right hemisphere department, but when I tried it, it was fine. (I was still scared of it, but it was fine.)

I don’t know how exactly it could be fixed, but I have a hunch. We’ll see. In the meantime, if you have any ideas, leave a comment or DM me on Twitter: @availablegreen.

Weird experiences: free will mishaps

This has been moved out of the Vipassana post.

Why do I feel like I have free will? Well, for instance, I can pretty reliably control my body, e. g. I want my hand to move and it moves. If I couldn’t control my body or my thoughts at all – e. g. when heavily drugged – I would probably say I did not have much free will at that moment.

Furthermore, if I lost my internal monologue as well and could only process sensory data – had awareness of what I see, hear, etc, but no thoughts about it – I would say that perhaps I still had some degree of consciousness, but no free will at all.

Here are some weird experiences that rather complicate both of those intuitions. (For me, they render the whole question of “do I have free will?” rather meaningless. Your take might differ.)

Alien hand syndrome

The alien hand syndrome goes like this: you are trying to light a cigarette and your left hand actively prevents you from doing so. There’s no observable thought process, either – it just moves against your will. Or rather, it is certainly controlled by your brain, but it no longer feels like it’s you.

From Arin Bhattacharya’s An Overview On Rare Diseases, Volume II:

[...] patients frequently exhibit “intermanual conflict” in which one hand acts at cross-purposes with the other “good hand”. For example, one patient was observed putting a cigarette into her mouth with her intact, “controlled” hand (her right, dominant hand), following which her alien, non-dominant, left hand came up to grasp the cigarette, pull the cigarette out of her mouth, and toss it away before it could be lit by the controlled, dominant, right hand. The patient then surmised that “I guess ‘he’ doesn’t want me to smoke that cigarette.” Another patient was observed to be buttoning up her blouse with her controlled dominant hand while the alien non-dominant hand, at the same time, was unbuttoning her blouse.


It gets worse. Anosognosia: you want to move your hand, it doesn’t move (e. g. it is paralyzed), and your brain immediately makes up a justification for why you actually didn’t want or didn’t try to move it after all, preserving the belief that you are in full control of your body. When called out on your bullshit, you make up another one, and another, and another.

Apparently, the only way to wake up from anosognosia is to get cold water squirted into your ear (?!), and even that doesn’t last for long.

From The Apologist and the Revolutionary:

Anosognosia is the condition of not being aware of your own disabilities. To be clear, we’re not talking minor disabilities here, the sort that only show up during a comprehensive clinical exam. We’re talking paralysis or even blindness. Things that should be pretty hard to miss.

Take the example of the woman discussed in Lishman’s Organic Psychiatry. After a right-hemisphere stroke, she lost movement in her left arm but continuously denied it. When the doctor asked her to move her arm, and she observed it not moving, she claimed that it wasn’t actually her arm, it was her daughter’s. Why was her daughter’s arm attached to her shoulder? The patient claimed her daughter had been there in the bed with her all week. Why was her wedding ring on her daughter’s hand? The patient said her daughter had borrowed it. Where was the patient’s arm? The patient “turned her head and searched in a bemused way over her left shoulder”.


If somebody severs the connection between your brain’s hemispheres, tells one hemisphere to do something, and asks the other hemisphere “why did you do it?”, it will make something up – and, again, be completely convinced that this is the true justification. No matter what your actual actions are, you can still find a way to believe that “you” caused them, even when you didn’t:

[...] a split-brain patient was shown two images, one in each visual field. The left hemisphere received the image of a chicken claw, and the right hemisphere received the image of a snowed-in house. The patient was asked verbally to describe what he saw, activating the left (more verbal) hemisphere. The patient said he saw a chicken claw, as expected. Then the patient was asked to point with his left hand (controlled by the right hemisphere) to a picture related to the scene. Among the pictures available were a shovel and a chicken. He pointed to the shovel. So far, no crazier than what we’ve come to expect from neuroscience.

Now the doctor verbally asked the patient to describe why he just pointed to the shovel. The patient verbally (left hemisphere!) answered that he saw a chicken claw, and of course shovels are necessary to clean out chicken sheds, so he pointed to the shovel to indicate chickens.

Persistent non-symbolic experiences

Even given all these aberrations, it is still hard to believe that all of our actions, not just some, could be post hoc rationalizations. And sure, I cannot prove it.

But I can throw one more stone into the bucket of doubt. Here is Gary Weber, a man who has no inner monologue at all and seems to function exactly as he did before he lost it:

For the next 25 years, as Weber finished his PhD, married and raised two kids and made his way through a string of industry jobs – eventually culminating in a senior management position running the R&D operations of big manufacturing business – he got spiritual. He read lots of books, he meditated with Zen teachers, mastered complicated yoga postures, and practiced what is known in Vedic philosophy as “self-enquiry” – a way of directing attention backwards into the center of the mind. To make time for all this, Weber would get up at 4am and put in two hours of spiritual practice before work.

Although he says he never had the sense he was making progress, Weber kept at it anyway. Then, on a morning like any other, something happened. He got into a yoga pose – a pose he had done thousands of times before – and when he moved out of it his thoughts stopped. Permanently.

“That was fourteen years ago,” says Weber. “I entered into a state of complete inner stillness. Except for a few stray thoughts first thing in the morning, and a few more when my blood sugar gets low, my mind is quiet. The old thought-track has never come back.”

[...] What he cared about was that in an hour he needed to go to work, where he was supposed to run four research labs and manage a thousand employees and a quarter of a billion dollar budget, and he had no thoughts. How was that going to work?

“There was no problem at all,” Weber says, which he admits may say more about corporate management than about him. “No one noticed. I’d go into a meeting with nothing prepared, no list of points in my head. I’d just sit there and wait to see what came up. And what came up when I opened my mouth were solutions to problems smarter and more elegant than any I could have developed on my own.”

This sounds a bit / a lot like enlightenment. However, if you are curious but don’t feel like reading about spirituality much, you can search for “PNSE” instead. Scott Alexander’s review of “Clusters Of Individual Experiences Form A Continuum Of Persistent Non-Symbolic Experiences In Adults” is a good place to start.


Are our actions determined by our brains? Not always and not entirely, but often enough that I feel comfortable saying “yes”.

Is it hard to predict what we will do? Also yes. (And if you want to drag quantum uncertainty into it to upgrade the status from “hard” to impossible, Scott Aaronson’s The Ghost in the Quantum Turing Machine is a good read.)

Are we more complicated than, say, worms? Hell yeah, even though worms are pretty complicated too.

Where does the strong feeling of having free will come from? I don’t know, but perhaps it is simply because we observe the correlation between our thoughts and our actions so often that it seems correlation must imply causation. An interesting read: neuroscience of free will on Wikipedia. Perhaps also Baer, Kaufman, Baumeister – Are We Free? Psychology and Free Will, but I haven’t read it yet.

Finally: do we have free will? Eh.

Resolving internal conflicts

Discovering the conflict

Internal double crux is sometimes sold as a technique for resolving internal conflicts, but in reality it’s closer to “discovering internal conflicts when the only things you have are some very inaccurate clues”.

The idea is: if you have an internal conflict, write it out in the form of a dialogue and see where it leads you.

Let’s say you noticed that sometimes you want A and sometimes you want B, and they are at odds. Then the process goes like this:

— Statement by A
— Statement by B
— [while writing as A, empathize with the previous statement by B] Statement by A
— [while writing as B, empathize with the previous statement by A] Statement by B
— ...

Usually it leads to figuring out that neither A nor B was even close to how you actually feel. Here’s someone else’s experience after trying internal double crux:

The first IDC I tried started with two plainly-named sides “I should floss” and “Flossing is a waste of time.” After further focusing and felt shifts, the two sides sound more like “Flossing is a ritual of self-care showing myself I deserve love” and “Flossing is one of infinitely many impositions by which my parents want to curtail my liberty.” The underlying conflict finally emerges!

Note: writing on paper seems to work better than typing, and either is better than saying words out loud. I don’t know why, but I suspect it’s because writing is more permanent. Typing doesn’t work well because then I’m just tempted to edit myself all the time.

Also note: after trying internal double crux several times, it turned out to be useful enough that I internalized “huh, apparently I literally can’t have some thoughts unless I write them down”, which kickstarted the process of Writing Things Down All The Time. I will talk about it in future posts, though I hinted at it in Giving advice.

Okay, now you know what the internal conflict looks like. Where do you go from there?

Giving voice to the conflict

Ego Analysis as a Deeper Form of Cognitive Therapy introduces a crucial insight: sometimes arguments are silly enough that you can reject them on your own, without any outside help, if you actually try to argue for them. This is in sharp contrast with, say, cognitive behavioral therapy, where you and the therapist argue against your thoughts:

What the standard cognitive therapist (CT) does is to be an attorney in this case you are bringing against yourself. [...] Here you are in the courtroom and the case against yourself seems to have been decided. You have been found guilty, but the charges are vague and poorly substantiated, as Kafka knew. The CT shows that they demonstrate cognitive errors, like overgeneralizing, catastrophizing, all-or-nothing thinking, and jumping to conclusions.

The problem with arguing against your thoughts is that your brain is not dumb. It knows that a thought being “vague and poorly substantiated” is not a good reason to abandon it. (And it’s right.) Instead, you should bring out the thought, let it develop, and then see for yourself – is it total bullshit? Or not?

[...] What you find when you don’t try to refute the automatic thoughts or pathogenic beliefs, is that patients themselves not only have automatic thoughts, they have automatic refutations. So the patient may hear “You can’t do anything right,” but he also hears the rejoinder that “I do so do things right.” [...] Those refutations only work momentarily; we do a kind of broken-field running.

[...] Our approach is to bring out the whole internal argument. The whole courtroom scene. [...] [CTs] will argue, as your attorney in this courtroom drama, that just because you screwed up your VCR you are not a total washout. That can be very relieving. But, as I will get to, it can be even more relieving to have the person experience the full impact of the internal condemnation and of the weak internal refutations that only kept it hidden.

An example: I used to feel that I have to admit that Beethoven is great – so I tried to explain, as well as I could, why Beethoven is great. And then I explained why I felt it was a misguided argument. It took about half an hour, and what happened next is that I lost all desire to argue about Beethoven. I understand why he’s considered great, I understand why I don’t care, and I accept that I might start caring in the future. It will also be much easier to actually like Beethoven if I decide to spend more time exploring classical music, because now my values no longer depend on whether I’m right about Beethoven or not – I made up my mind.

What to google if you want to learn more about this, find a therapist, or whatever

If you need further guidance, the right keyword is coherence therapy. I have been recommended the Coherence Therapy Practice Manual & Training Guide by Bruce Ecker and Laurel Hulley (available on LibGen), but I’m not sure it can be used as a self-help book.

Gendlin’s Focusing is a somewhat different technique that additionally relies on bodily sensations. I have never tried it, but everyone recommends it.

Giving advice


Patri Friedman gives a very sensible hierarchy of advice. Cheap, individual experiences are useless. Right?

I think a lot of internet advice comes from the wrong place in the experience hierarchy, and that’s especially bad because advice has a power law distribution. Here is a draft hierarchy.

  1. I heard about this cool idea.
  2. I read for a few minutes about this cool idea...
  3. I actually considered trying this cool idea!
  4. I actually briefly tried this new thing!!
  5. I did this new thing for weeks or months and it so worked!!!
  6. I did this thing for years or decades and it’s deeply woven into my life.
  7. I’ve spoken with / taught a few others how to do the thing.
  8. I’ve studied most extant research about this thing & can summarize.
  9. I’ve spoken to / worked with / interviewed hundreds of people working on this thing (generally as a professional, or through net communities).

So many people preach shallow wisdom based on 1, 2, 3, or 4 (I used to do so a lot myself). But those are cheap, superficial, individual experiences. I have come to believe that 5 & 6 should usually be the minimum to give advice, and that 8 & 9 are vastly better.

Advice that has worked for someone long-term is easily 10x or 100x more valuable than 1,2,3,4,5. And advice based on a large cross-section of people is easily 10x or 100x more valuable than one person’s experience.

[...] Finally, I will admit this tweet is only a 6-7.

I will also admit that on the hierarchy of advice, this post (the one you are reading right now) is a zero. Where zero is defined as follows:

0) “Based on my personal beliefs and biases, this sounds like a good idea so I would recommend it.”

That is: I already thought about “cheap, individual experiences seem useless” before and it sounds like a good thought, so here I am, quoting it at you, even though I have literally no datapoints concerning whether cheap experiences are useless or not.

Oh well.


So here is what else I think.

If people were not allowed to recommend anything based solely on their personal beliefs and biases, a lot of interesting things to try would never get recommended. Even if you are at the level 9 of expertise in some topic, what you probably learned is “half the people in my industry feel one way, and the other half feels the opposite way, and it’s a giant mess”.

And generally, in addition to advice people also need an environment in which they can generate their own thoughts – by sparring with others’ thoughts. It’s no fun to spar with “eh, maybe you should do X but I really really don’t know, sorry”. It’s also no fun to spar with “the scientific consensus is that you should do X”, because you would look stupid arguing with that.

Finally, giving advice is performative. Putting a piece of advice out there makes it easier to move on and admit that my advice is bullshit, which is kinda useful. (Hence this blog. Ha!)

Ranking these three points on the hierarchy of advice, I get 4, 5, and 6. Not that bad, but not great either. Perhaps in a few years I will be able to finally tell you whether the hierarchy of advice is good or not, but for now this is all I have.

Prisoner’s dilemma is the mind killer?

A follow-up to Decision theories, LW-style.

The contradiction goes like this: you want your players to use a very specific reasoning process that you like (“I will do whatever is better for me”), but you also want them to somehow end up cooperating.

Let’s say I propose the following algorithm: “always cooperate”. You go:

But let’s consider the first player. Isn’t it better for them to defect?

I say “sure, but they don’t do what is better for them”. You retort:

Then the payoff matrix is different!

I say “the payoff matrix is the same, the algorithm is different”. You beg to disagree, and proceed to construct a payoff matrix that corresponds to my algorithm: “cooperate = 1, defect = 0”. I comment: “sure, IF YOU INSIST ON SELFISH PLAYERS, then you would have to use a different payoff matrix to replicate the behavior of cooperating players, and now your game is not a prisoner’s dilemma anymore”.
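To see the point concretely, here is a tiny sketch (my own numbers, not anything canonical): the matrix you would need to make selfish players cooperate – “cooperate = 1, defect = 0” regardless of what the other player does – fails the usual prisoner’s dilemma ordering, where the temptation payoff must exceed the mutual-cooperation reward.

```python
# The matrix you'd need for selfish players to cooperate ("cooperate = 1,
# defect = 0", from the argument above), keyed by (my move, their move).
ALWAYS_COOPERATE = {("C", "C"): 1, ("C", "D"): 1, ("D", "C"): 0, ("D", "D"): 0}

T = ALWAYS_COOPERATE[("D", "C")]  # temptation payoff
R = ALWAYS_COOPERATE[("C", "C")]  # mutual-cooperation reward

# A prisoner's dilemma requires T > R; here cooperation simply dominates.
print("still a prisoner's dilemma?", T > R)  # prints: still a prisoner's dilemma? False
```

Selfish players now cooperate happily – but only because the game is no longer a prisoner’s dilemma.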

Here is an uncharitable interpretation. The dilemma does not arise when you explicitly treat the players as dumb automatons that you want to cooperate; it only arises when you put yourself into the shoes of one of the players. “I want to cooperate, but I want to choose it on my own and not be forced into it.”

Stop it! If you only consider decision algorithms that are acceptable for you to adopt, you will never find a good one. You will never consider, for instance, stealing tricks from religion’s playbook to create better-cooperating players.

Decision algorithms are not for you. They are for stupid agents without free will. You have the power to make them do whatever you want. Don’t try to bestow free will upon them – they don’t need it. And don’t limit them to what you would do.

Decision theories, LW-style

There is a lot of confusion about Newcomb’s paradox, various decision theories discussed on LessWrong, free will, determinism, and so on. In this post, I try to make all of this less confusing.

I am not going to talk about the domain of decision theory in general. This post is purely about the parts LW is interested in.


Popular decision paradoxes are inherently contradictory – e. g. “what if you had free will but were still completely predictable” (Newcomb’s paradox), or “what if you really wanted agents to cooperate but also wanted them to be ruthless” (prisoner’s dilemma). Useful questions become useless mind-benders.

Decision theories are expressly for agents that don’t have free will – choosing a decision theory for an agent literally means asking “what algorithm should this agent blindly follow?”. Furthermore, a lot of interesting questions about decision theories only apply in settings where you have more than one agent following similar theories and you want something from them (like “cooperate without prior communication”).

Trying to design algorithms for those scenarios is much more productive than trying to spot contradictions in decision paradoxes where the roles of the algorithm designer and the algorithm executor coincide.

I. Newcomb’s paradox: what if you had free will but didn’t?

To mess with you, The Ultimate Predictor, also known as “Omega”, has maybe put an iPad into a box. Or maybe not.

On the top of the box, there is a note: if you punch yourself before opening the box, the box will have an iPad in it. You can take it and go home and brag to everyone and waste the next week watching funny British panel shows, especially Would I Lie To You?, alone, in the dark. (I would like to preemptively note that nothing of the sort has ever happened to me, except for the whole second part.)

Omega is not a god, but they are never wrong and cannot possibly ever be wrong. This you know for sure. Do you obey and punch yourself before opening the box?

It seems like you definitely should. However! Omega cannot change what’s inside the box, so let’s be Smart™. If there is an iPad, you should not punch yourself, because then you will have both the iPad and your dignity. If there isn’t an iPad, you should not punch yourself either, because why would you? So in both cases you can just skip that bit and open the box. Surely, it sounds like the proponents of this point of view – affectionately called “two-boxers” for complicated reasons – have a smart argument on their side.

However, those who obey Omega – “one-boxers” – have a good argument too, which is that they have iPads and the other side does not, despite being so very smart. So what should you do?
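For the code-minded, the disagreement can be made concrete with a toy sketch (my own framing and labels, not anything canonical): model Omega as a predictor that inspects your policy before filling the box.

```python
# Toy Newcomb setup. Omega inspects your decision policy in advance and
# puts the iPad in the box only if it predicts you will punch yourself.

def omega_fills_box(policy):
    # A perfect predictor: it simply knows what your policy will output.
    return policy() == "punch"

def play(policy):
    ipad_inside = omega_fills_box(policy)  # box is filled BEFORE you act
    action = policy()
    dignity = 0 if action == "punch" else 1
    return {"ipad": ipad_inside, "dignity": dignity}

one_boxer = lambda: "punch"       # obeys Omega
two_boxer = lambda: "just open"   # reasons "the box is already filled or not"

print(play(one_boxer))  # {'ipad': True, 'dignity': 0}
print(play(two_boxer))  # {'ipad': False, 'dignity': 1}
```

Against a predictor that conditions on your policy, the one-boxers walk away with iPads – which is exactly their argument.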

II. A rule of thumb: decision theories are for your kids, not for you

You are now living on the home planet of Omega. Not a day passes without you being offered a box, or two boxes, or three boxes, and it gets old really fast. You resolve to never leave the house, and instead you have coaxed your kid into running your errands. Naturally, all iPads accumulated by the kid during those errands are your rightful property.

Now the question becomes: what should you teach the kid? And it’s a much easier question. You can teach the kid Evidential Decision Theory and off you go, while still believing Causal Decision Theory is smart and the previous one is dumb.

This is the deal with decision theories. Treat them as “what is the most useful behavior for a stupid agent in some stupidly convoluted world?”. They are not about you, the Mastermind Plotter, a free agent who is impossible to predict, yet somehow also possible to predict. They are about a kid, or maybe a self-driving car, or a religious community – i.e. an agent or set of agents that can be influenced.

III. Prisoner’s dilemma: what if you really wanted agents to cooperate but also wanted them to be ruthless?

Let’s apply this principle to another paradox, the prisoner’s dilemma. I don’t feel like inventing a silly framing for it, so you can read about it on Wikipedia.

Two players are playing a game:

  • If they both choose to cooperate, each gets a reward.
  • If one of them defects, the cheater gets a bigger reward and the nice player is, counterintuitively, punished.
  • If both of them defect, nobody gets anything.

The Smart™ reasoning goes like this: if the other player cooperates, you should defect and get a big reward. If the other player defects, you should also defect – to avoid punishment. Therefore, you should defect, period.

The dilemma lies in the fact that when the players are smart, the outcome is not. So being nice turns out to be better than being smart. Huh.
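If you like, the Smart™ reasoning can be checked mechanically. A minimal sketch with made-up payoff numbers (reward 2, temptation 3, punishment −1, mutual defection 0 – consistent with the rules above, but the exact values are my own):

```python
# Illustrative prisoner's dilemma payoffs: (my payoff, their payoff),
# keyed by (my move, their move). "C" = cooperate, "D" = defect.
PAYOFF = {
    ("C", "C"): (2, 2),   # both cooperate: each gets a reward
    ("C", "D"): (-1, 3),  # I'm nice, they cheat: I'm punished
    ("D", "C"): (3, -1),  # I cheat: bigger reward for me
    ("D", "D"): (0, 0),   # both defect: nobody gets anything
}

# Whatever the other player does, defecting pays more for me...
for their_move in ("C", "D"):
    assert PAYOFF[("D", their_move)][0] > PAYOFF[("C", their_move)][0]

# ...and yet two "smart" players do strictly worse than two "nice" ones.
assert PAYOFF[("D", "D")][0] < PAYOFF[("C", "C")][0]
print("defection dominates, but mutual cooperation beats mutual defection")
```

Both assertions hold at once – that coexistence is the whole dilemma.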

What is the right decision algorithm? To figure this out, again, think about a kid.

If you have a kid, and you only care about their success in life, and nothing else in the world, you should teach them to be smart. Maybe even psychopathic, though it is debatable.

If you have two kids, however, you should teach them to be smart but be nice to each other, so that they will get rewards whenever they happen to play with each other. (Or, if the defector’s reward is much bigger than the cooperator’s punishment, they should take turns at defecting and exploit the system.)

Why? Because you care about both kids! If you only care about one of them, you can teach that one to be ruthless and the other – to be nice and turn the other cheek. If you care about both of them but want them to be ruthless, teach them to cooperate with each other and no one else. If you want them to be maximally ruthless and then ask “oh, but why don’t they cooperate?”, well, you want a contradiction.

Again, this is the deal with decision theories. I’m not going to use a fancy decision theory to decide how to live my life, but I am very interested in a fancy decision theory that I can instill into the malleable minds of my kids, readers, self-driving cars, whatever. And being Smart™ doesn’t quite cut it here – this is how you get, for instance, a fleet of murderous cars. This is why we need something better, and this is why thinking about decision theories is worth spending time on.

IV. XOR blackmail problem

In the bottomless chest of decision theory edge cases, there is another wacky one that we have to deal with.

An agent hears a rumor that their house has been infested by termites, at a repair cost of $1,000,000. The next day, the agent receives a letter from the trustworthy predictor Omega saying:

“I know whether or not you have termites, and I have sent you this letter if and only if exactly one of the following is true: (i) the rumor is false, and you are going to pay me $1,000 upon receiving this letter; or (ii) the rumor is true, and you will not pay me upon receiving this letter.”

Thinking “if I do X, it must be Y” does not work here. Evidential Decision Theory wants you to pay up to somehow magically end up in the universe where the rumor is false, because according to the problem statement, deciding to pay must mean that the rumor is false.

“But didn’t the same thing happen in Newcomb’s paradox, and didn’t you say exactly the opposite thing?” Once more, think of a kid to make it easier.

Do you want to teach your kid to pay upon receiving a letter like this? Then Omega will send them letters when someone spreads false rumors about them, and they will be bleeding money. Furthermore, they will not get any letters when the rumors are true.

Do you want to teach your kid not to pay? Then they will get the letters only when the rumors are true, and won’t have to pay anything. Awesome! They don’t bleed money and they get to know when their house is infested with termites, straight from the infallible Omega.

The difference between the two problems is that in Newcomb’s paradox, the contents of the box depend on your decision. In this problem, whether you have termites or not (i.e. what you care about) does not depend on your decision, the only thing that changes is whether you get a letter about it.
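Again, for the code-minded, a toy sketch of both policies across both worlds (the dollar amounts come from the problem statement; everything else is my own framing):

```python
# Toy XOR-blackmail worlds: $1,000,000 termite repair, $1,000 fee.
def outcome(pays_on_letter, termites):
    # Omega sends the letter iff exactly one clause holds:
    # (no termites and you will pay) or (termites and you won't pay).
    letter = (not termites and pays_on_letter) or (termites and not pays_on_letter)
    cost = 0
    if termites:
        cost += 1_000_000          # repair bill, regardless of policy
    if letter and pays_on_letter:
        cost += 1_000              # the fee, only if a letter arrives
    return {"letter": letter, "cost": cost}

for termites in (False, True):
    print("payer:    ", termites, outcome(True, termites))
    print("non-payer:", termites, outcome(False, termites))
```

The payer bleeds $1,000 whenever the rumor is false and gets no warning when it’s true; the non-payer pays nothing extra and receives a letter exactly when the termites are real. The policy never changes the termites themselves, only the mail.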

This way of thinking is called Functional Decision Theory. In particular, when you are confronted with entities that supposedly know exactly how you think, just go “What should I be thinking to get the best outcome? Okay, then I will think that.”

Note: I suspect that in real life it translates to “if you notice that people reward genuine kindness, try to figure out how to be genuinely kind and at the same time still screw people over, so that you get both the benefits of being kind and the benefits of screwing people over”. And it works!

V. Simulation as a causal mechanism

Going back to Newcomb’s paradox, it is still irritating that there is no causal link between “you decide to disobey Omega” and “the box is empty”. Maybe a kid can’t decide anything, but you definitely can, right?

The simulation argument could provide such a causal link.

If we know Omega is never ever wrong, they are probably simulating you to figure out what you will decide, kinda like in Black Mirror (e. g. Hang the DJ). So when you are deciding to take the box, you don’t know if it’s actually you, or you-in-the-simulation. By making the right decision while in the simulation, you can help out your non-simulation version.

This even works with otherwise mind-boggling variants of Newcomb’s paradox, like a variant where everything is the same except that the box is made of glass. You literally look at the box, see the iPad in it, and yet somehow you still have to obey Omega to get the iPad. Why? Because you might be in the simulation, and by choosing to obey Omega, you ensure that the real-world version of you will be presented with a full box instead of an empty one.

A possible objection is: but what if Omega is just really good at psychology and statistics and so on, but doesn’t actually simulate anything? In this case...

VI. Determinism is a great answer to everything

There is no free will, it’s all an illusion, “what should you decide” is not a meaningful question. In fact, if Omega can look at your past life and predict which box you will choose, you personally don’t have much free will, sorry. “Omega probably just noticed that I always two-box when I’m having a grumpy day”. So why are you asking what you should choose, then? Are you having a grumpy day or not? It’s settled then.

Like, okay, you are staring at a glass box with an iPad in it. “Should” you obey Omega and punch yourself anyway? Or for people who have skipped my variant of Newcomb’s paradox entirely: should you one-box even when both boxes are transparent? The answer is: if you find yourself in this situation, you have learned something about yourself. Specifically, that you are a one-boxer. Or, to quote The Last Psychiatrist:

If some street hustler challenges you to a game of three card monte you don’t need to bother to play, just hand him the money, not because you’re going to lose but because you owe him for the insight: he selected you. Whatever he saw in you everyone sees in you, from the dumb blonde at the bar to your elderly father you’ve dismissed as out of touch, the only person who doesn’t see it is you, which is why you fell for it.

Note that this does not mean thinking about decision theories is meaningless – the question of “how should you indoctrinate your kid?” or “what should the self-driving car do?” is still relevant. The difference between you and the self-driving car is that the self-driving car does not have free will, but you supposedly do. Of course the question “what algorithm should I use?” becomes maddening then – you cannot, at the same time, (a) follow an algorithm and (b) have free will, aka the ability to overrule the algorithm whenever you feel like it.

VII. The psychopath button

Here is another illustration: the psychopath button problem.

Paul is debating whether to press the ‘kill all psychopaths’ button. It would, he thinks, be much better to live in a world with no psychopaths. Unfortunately, Paul is quite confident that only a psychopath would press such a button. Paul very strongly prefers living in a world with psychopaths to dying. Should Paul press the button?

Should Paul press the button? If he does, he’s a psychopath and he shouldn’t have pressed it. If he doesn’t, he’s not a psychopath and he should have pressed it.

If you treat the button press as a choice between being a psychopath and not being one, the answer is clear: Paul should not press the button, i.e. should not be a psychopath.

If you assume that Paul does not have a choice, the question disappears completely – he will press the button if he’s a psychopath, he won’t if he’s not, in both cases the consequences won’t be good, but that’s how life is sometimes.

The question is only a conundrum when you insist on it being a choice and not being a choice at the same time. Well, good luck with that.

VIII. Conclusion

This is how I recommend approaching decision problems.

If you want to figure out how your robots/kids/agents/cars should behave, mostly drop the philosophy. Look at the history of e. g. cooperation tournaments and what tends to work well there. Do your own experiments. Think about whether you care about the agents, or about the world that the agents are in, and in what proportion. Think about whether you can build a reliable way for agents to read each other’s intentions – e. g. humans can’t hide being angry because their faces get red, stuff like that. Trusted computing, remote attestation. Vitalik Buterin’s vision for Ethereum is ultimately a cooperation platform: inspectable agents, non-forgeable identities, zero-knowledge proofs.

If you want to figure out how you should behave, there are usually two separate questions: “what kind of behavior will win in this implausible scenario?” and “how do I justify this to my/someone’s intuition?”. The first one is often straightforward, and the second one is often resolvable with a combination of the determinism hypothesis and the simulation hypothesis.

Finally, if the problem happens to lie along the lines of “you will do X, but doing X is bad for you, so what should you do, huh?”, just reduce it to this form explicitly and banish it from your mind forever. There are more interesting things to think about.
