On idea-driven ideas

2025 Aug 12



A long time ago, in the pre-Covid century, I remember the economist Anthony Lee Zhang describing to me his distinction between "idea-driven ideas" and "data-driven ideas". An idea-driven idea is one where you start off with some high-level philosophical frame - e.g. markets are rational, power concentration is dangerous, time-worn traditions are wise - and deduce a more concrete insight from that frame plus some logical reasoning. A data-driven idea is, in its pure form, an idea that comes out of a process where you start with no preconceptions, do some analysis on data, and endorse whatever conclusion you get. The implication: data-driven ideas are clearly the better type of ideas to have and promote.

Last month, Gabriel from Conjecture critiqued my approach to d/acc by arguing that instead of starting from an "ideology" and trying to make it more compatible with other human goals, I should effectively just be a pragmatist, and neutrally seek whatever strategies do the best job of meeting the entire set of human values.

These are common sentiments. So what is the proper role of what might alternatively be called ideologies, principles, ideas built on top of ideas, crystallized goals, or consistent guiding thoughts in a person's thinking? And, on the flip side, how do these thinking styles fail? This post will attempt to describe my thoughts on the topic. The argument I will make is as follows:

  1. The world is too complex to "pragmatically reason through" every single decision. To be effective, you need to take, and reuse, intermediate steps.
  2. Ideology is not just about personal cognition, it's a social construct. A community needs something to rally around, and if it's not an idea or story then often it instead ends up being a person or small group - which has potentially worse downsides.
  3. Another benefit of encouraging different people to have different, narrower goals is that it enables and organizes specialization.
  4. Ideologies in practice are a complicated mix of means and ends. Our theory needs to account for this.
  5. Ideology has downsides, and there are many ways it interferes with good thinking. This is a genuinely big problem.
  6. Good individual, and social, decision-making requires a balance of "idea-driven" and "pragmatic" modes. I propose a couple of solutions for what this balance concretely looks like.

Good decision-making in complex contexts always has "structure"

Imagine that you are trying to improve how you play chess. In chess, there is a common rule of thumb: a queen is worth nine pawns, a rook is worth five pawns, and a bishop or knight is worth three pawns. Thus, a rook plus a pawn for a bishop and a knight is an ok trade to make, but a rook for a knight is not.
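This rule of thumb is simple enough to express as arithmetic. A minimal sketch (the dictionary and the `trade_delta` function are my own illustration, not something from the post) that scores a proposed trade under pure "materialism":

```python
# Standard material values, measured in pawns: a heuristic, not a law.
PIECE_VALUES = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def trade_delta(gained, lost):
    """Material you gain minus material you give up, in pawns.

    Positive favors you under pure "materialism"; zero is roughly even.
    """
    value = lambda pieces: sum(PIECE_VALUES[p] for p in pieces)
    return value(gained) - value(lost)

# Bishop + knight for a rook + pawn: (3 + 3) - (5 + 1) = 0, an ok trade.
print(trade_delta(["bishop", "knight"], ["rook", "pawn"]))  # 0

# A knight for your rook: 3 - 5 = -2, a bad trade for you.
print(trade_delta(["knight"], ["rook"]))  # -2
```

The point is not the numbers themselves but that a compact rule like this lets you evaluate a huge space of candidate moves cheaply.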

This insight has many implications. If you are trying to come up with good tactics in chess, one place to look is to find ways to use your knight to "fork" two of your opponent's stronger pieces: two rooks, or a rook and a queen, etc. Your opponent is forced to accept your knight eating one of the two strong pieces, in exchange for being able to eat the knight (a weaker piece) right after.


White to move. Knight to f7 is a good move, but you need to know the "knight = 3 pawns, rook = 5 pawns" rule to easily recognize it as such.


Here, "queen = 9 pawns, rook = 5 pawns, knight = bishop = 3 pawns" functions as a generator of further downstream ideas: it's an insight that you can start with that is much more likely to generate effective tactics than searching completely randomly. We can think of that statement as being an "ideology". Since pieces on the board in chess are called material, let us overload an already-overloaded term and call this ideology "materialism".

One could imagine someone who disagrees with materialism, either partially or fully. Often, sacrificing material is okay in service of positional goals, such as exposing the opponent's king or claiming the center of the board. The value of material can also be context-dependent. In an endgame, I've found that a single knight is worth more than a single bishop, whereas two bishops are worth more than two knights. If your opponent has one bishop left, pawns might be worth more if they are on squares of the opposite color to that bishop. A person whose approach to chess tactics focuses on exploiting these situations might call themselves a "positionist".
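The positionist's objections can also be phrased as context-dependent adjustments layered on top of the base values. A toy sketch - the adjustment amounts here are invented for illustration, not tuned chess theory:

```python
BASE = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def piece_value(piece, phase="middlegame", minor_pair=False):
    """Context-adjusted piece value, in pawns.

    The 0.25 adjustments are illustrative stand-ins for the kinds of
    corrections a "positionist" might make, not real constants.
    """
    v = BASE[piece]
    if phase == "endgame":
        if piece == "knight" and not minor_pair:
            v += 0.25  # a lone knight often outperforms a lone bishop
        if piece == "bishop" and minor_pair:
            v += 0.25  # the bishop pair complements itself
    return v

# Lone knight beats lone bishop in the endgame...
print(piece_value("knight", "endgame") > piece_value("bishop", "endgame"))  # True
# ...but two bishops beat two knights.
print(piece_value("bishop", "endgame", minor_pair=True)
      > piece_value("knight", "endgame", minor_pair=True))  # True
```

Notice that once you write the adjustments down, the "synthesis" of materialism and positionism is itself just another explicit rule set - which is the point the next section makes.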

Positionists and materialists may disagree on practical issues, such as whether or not to trade two pawns for a bishop in a situation like this:


To take h3 or not to take, that is the question.


An ideal chess player might be able to combine the materialist and positionist perspectives, juggling between them based on what the details of the situation demand. This is akin to a Hegelian synthesis. However, actually doing this requires having some specific ideas about when to focus on materialist arguments and when to focus on positionist arguments, and those ideas themselves can be viewed as a new ideology.

Principles have value in social coordination

Effective action in the modern world has to be collective action: actions taken simultaneously by hundreds or millions of people, all working toward the same goal. Some of this can be accomplished with money (or physical coercion), but this is limited; much of what we do relies on intrinsic and social motivation to truly be effective.

In my post on Plurality, I describe how communities have three primary options in this regard: coordinating around a task, around a leader, or around a set of principles.



Coordinating around a task is powerful: if you can convince lots of people that it would be really valuable to go to the moon, then once they start working, you have lots of people who will put a lot of hard work, creativity and energy into going to the moon. Ethereum's Merge (the 2022 switch from proof of work to proof of stake) was like this for many people in the community. But a task is one-time, and you don't want all the social capital that was built up to dissipate once the task is complete. Principles and leaders are both powerful because they are generators of tasks: they can keep pointing to new valuable tasks to perform as old ones finish.

Coordination around leaders has a well-understood risk: leaders are fragile. There are many tales in history of leaders going crazy, or priorities and values drifting in milder but still highly consequential ways. This applies not just when the leader is an individual, but also when the leader is a group.

Coordination around principles - especially principles that are not consequentialist - can be much more robust. A key property of (well-chosen) principles as a coordination technique is what I call "galaxy brain resistance". A weakness of consequentialism is that it's vulnerable to leaders making clever arguments about how pretty much anything they choose might actually have the best consequences for complicated 4D-chess second-order reasons. Principles are effective at serving as a brake on that, saying "no matter how clever your arguments are, we have some easily legible barriers against some things that we just don't do". In this sense, a major weakness of ideologies - that ideologies are dumb - can actually be an advantage.

One other form of coordination that is important is internal coordination, or what is often called "motivation". I have often found that you can take insights about coordination between people and apply them to the different "sub-agents" with different perspectives and goals inside a single person's mind. Here, the analogy is: having a clear principle or goal that you're internally aligned on can both make you more motivated to do your work and prevent you from going off the rails and justifying to yourself doing something wrong.

Crystallized goals as specialization

It can be useful for different people to have different goals, if these people are in different sub-units of an organization that have particular missions. A company has a marketing department, a software development department, and many more. You don't actually want the marketing department to be extremely open-minded and constantly thinking about any way to make the company more successful; you want it to focus on marketing. This again seems to deviate from pure consequentialism, but the rigorous division of labor enables the kind of order that lets the company reliably get things done. I would argue that the general project of human civilization has similar properties: you want different people to internalize and focus on different civilizational sub-goals.

One subtle and underrated reason why this is the case is that it enables measurement. If an agent has a goal to "do all the useful things", it is difficult to tell if it's performing well or poorly (both internally, from the agent's own self-improvement view, and externally, for accountability). But if an agent has a more narrow goal in mind, then you can tell how well it's doing and how it might be improved. The benefits of this can be great - plausibly, sometimes great enough to outweigh the downsides of different agents with different sub-goals having some coordination failures.

Ideologies are a mix between means and ends

In this post so far, I have been talking about ideologies primarily as being about means: they are sets of claims about what actions best achieve some commonly-agreed goals. In Gabriel's post, ideologies are primarily about ends: what goals to focus on in the first place. In reality, ideologies are always a complicated and messy mix of both. But to the extent that ideologies are about ends, how do I take this into account in the arguments that I made above?

Here, I will answer the question by cheating somewhat: I argue that any goals that we crystallize enough to form into an ideology or write down on paper are actually a type of means.

To see why, consider the case of a libertarian who really values freedom. At first, they might say that they value freedom because it enables a more efficient economy and a more robust society. But then, suppose that you come in and show them a way to have a very efficient economy and a robust society without much freedom. Perhaps you could have an advanced computer that controls the economy and tells everyone where to work, with robustness coming from a democratic voting mechanism that runs every month and can adjust the computer's inputs or replace it entirely. The libertarian sees your vision of this society, feels really uneasy, and just knows that if it were put into practice, they would immediately start plotting to rebel against it.

What is going on here? I would argue that "crystallized values" are themselves tactics or predictions, where the real ultimate goal (the "win condition" that they are targeting) is a highly illegible and complicated mass of conditions and preferences inside each of our brains. When this libertarian hears about the proposal for an efficient and robust, but unfree, society, they realize that efficiency and robustness are analogous to material in chess: an important part of winning the game, but not the only part.

Ideologies can have major downsides

Climate change hawks will often say that they support degrowth-style policies because they are the only way to avoid the planet overheating. But if you suggest solar power (or worse, solar geoengineering) as a way to avoid the planet overheating without needing to interfere with material abundance or capitalism, they always seem a little too enthusiastic about coming up with reasons why such a plan would not work or would have too many "unintended consequences".

Cryptocurrency enthusiasts will often say that they want to improve global finance accessibility, create trustworthy property rights, and solve all kinds of social problems with blockchains. But if you show them a way to solve the same problem without any blockchain at all, they always seem a little too enthusiastic about coming up with reasons why your plan would break, perhaps because it's "too centralized" or it "doesn't have enough incentives".

Both of these examples are somewhat like the example of a libertarian that I gave above, but they are not quite like that example. It's reasonable to value freedom as an end in itself (as long as that's not your only value); freedom is a goal that is deeply ingrained in humans as a result of millions of years of evolution. It's not reasonable to value abolishing capitalism, or mass adoption of blockchains, in the same way.

I would argue that this is basically the failure mode that we need to watch out for: elevating something to being an end-in-itself when it isn't, in a way that ends up greatly harming the underlying goals.


"But I have more and much stronger pieces left on the board, so it doesn't matter that I got checkmated, spiritually it was I who won the game"


How I reconcile these two views

In the above sections, I identified two positive use cases of the thing you might call "ideologies", "principles" or "idea-driven ideas":

  1. Idea-motivated thinking and doing as "departments". Much like a company has a dedicated marketing department, it makes sense for society to have a department dedicated to, say, protecting the environment, and it similarly makes sense for a chess player to have a thought process dedicated to answering questions like "which approach will help me eat my opponent's pieces and keep my own safe?"
  2. Principles as a tool for coordination. Instead of rallying around a leader or an elite, it can be more robust and less prone to failure or capture to rally around an idea.

Often, movements in society will have some of both. Externally, they work to defend a principle, reducing the chance that society drifts into over-reliance on an elite. Internally, they become very proficient at deeply exploring particular themes, which then generate valuable ideas and strategies for improving the world. Libertarian economists defend freedom in society, and they also invent prediction markets, refine congestion pricing proposals, and produce a number of other valuable ideas. Environmentalists guard our society against doing irreversible damage to the environment through political advocacy, and they also invent technologies like clean energy and synthetic meat.

Meanwhile, I see two failure modes of this kind of approach. First, there is the risk that an instrumental objective overly crystallizes and gets pursued to extreme extents that subvert the original underlying goal. Second, there is the risk that coordinating around unbounded goals slides into coordinating around a caste of elites tasked with interpreting those goals. This is what Balaji Srinivasan means when he says things like "democracy is rule by Democrats", and what critics of effective altruism often point to in part of the movement's drift from a broad focus on identifying and encouraging highly effective charity to a much narrower approach of solving AI safety by directing grants to people within their own social cluster.

I propose two compromises to balance these benefits and downsides:

The fact that the world and our (individual and collective) minds are both complex and have a lot of internal structure means that the direct solution of "reason about the whole sum of values and do the data-driven thing that best meets them" often ends up breaking in practice in various ways. At the same time, leaning too much into some of that structure often breaks too, sometimes in ways that are even worse. Balances like this are most likely to capture more of the benefits of both sides while minimizing the downsides of each.