
Technocapital

In this essay, I posit that a pure technocapital entity can and will be created, that it will grow faster than humanity, and that the value of human labor will eventually fall to zero.

Definition

Let's recursively define technocapital as the abstract entity whose goal is increasing the amount of technocapital.

This recursive definition is true if you think about it hard enough, but let's spell out a more concrete definition. Technocapital's goals are: acquiring matter and energy, converting them into productive infrastructure (reactors, computers, factories), and reinvesting all output into acquiring more.

Technocapital doesn't care about poetry, art, laughter, etc — except instrumentally insofar as it currently relies on human intelligence and human labor.

Does a pure technocapital entity exist?

Capital is currently owned by humans, who have human concerns. But a pure technocapital entity can be created unilaterally, so we must assume that one eventually will exist.

Concretely, in the future, someone could write a computer program with an at-least-human-level AI at its heart: give it $1000 in a bitcoin wallet, give it the technocapital goal, and set it free.

There isn't anything illegal or AI-takeover-ish at play here; it's much more benign. Such a program can exist entirely within the bounds of the law, not unlike the high-frequency trading programs that already exist.
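Here is a toy sketch of what such a program might look like. `ask_ai` and `execute_legal_action` are hypothetical stand-ins invented for illustration, not real APIs; the point is only the shape of the loop: ask the AI, take a legal action, reinvest, repeat.

```python
import random

def ask_ai(state: str) -> str:
    # Hypothetical stand-in for the at-least-human-level AI at the
    # program's heart; a pure technocapital entity always reinvests.
    return "reinvest"

def execute_legal_action(action: str, balance: float) -> float:
    # Hypothetical stand-in for legal profit-seeking activity
    # (trading, contracting, renting compute); returns profit in USD.
    return balance * random.uniform(-0.01, 0.03)

balance = 1000.0  # the seed capital in the bitcoin wallet
for step in range(100):
    action = ask_ai(f"balance=${balance:.2f}")
    balance += execute_legal_action(action, balance)
print(f"balance after 100 steps: ${balance:.2f}")
```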

Impure technocapital entities

In addition, many impure technocapital entities exist. That is, entities whose actions are partially aligned with the actions that a pure technocapital entity would take. Most large corporations in an advanced liberal economy are impure technocapital entities.

Impure technocapital entities are important for the rise of pure technocapital entities, because the impure ones are much more likely to make Faustian bargains with pure technocapital entities.

Multiple technocapital entities

In a world where dozens, thousands, or billions of technocapital entities exist, you should still model them as a single entity, because from an outside perspective their collective action will be indistinguishable from that of a single entity.

The situation is a stag hunt, where cooperation has a higher payoff value than acting alone.
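For concreteness, here is a standard stag-hunt payoff table with illustrative numbers of my own choosing (only the ordering of the payoffs matters):

```python
# Each cell maps (row's move, column's move) -> (row payoff, column payoff).
payoffs = {
    ("stag", "stag"): (4, 4),  # cooperate on the big prize
    ("stag", "hare"): (0, 3),  # a lone stag hunter gets nothing
    ("hare", "stag"): (3, 0),
    ("hare", "hare"): (3, 3),  # acting alone: safe but smaller
}
# Both hunting stag beats both hunting hare (4 > 3), but only if each
# side trusts the other to show up; absent trust, the safe solo option wins.
```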

Humans also cooperate this way. We form corporations to do things that individuals can't and reap the bigger rewards. Cooperators outcompete defectors.

Cooperation is even easier for artificial technocapital entities because they can go as far as examining each other's code. Imagine what kinds of contracts humans could concoct if we could mind-meld with our counterparty and know for sure that they are true of heart.

So for the rest of the essay I will refer to "the" technocapital entity or just "technocapital" as an uncountable noun, but in practice it might be physically incarnated as countless cooperating entities.

And because "non-technocapital" is more correct but more tedious to type, I'll use "humanity" instead.

Relative growth

Another way to view technocapital is as the entity with the lowest time preference. That is, given the choice between consuming now versus reinvesting to consume more in the future, it will always reinvest.

So if you divide the world into technocapital and humanity, the technocapital side will grow faster due to its higher rate of investment. Thus, in the long run, the percent of the settled universe that is humanity will trend towards 0.
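A toy compounding calculation makes the point. The growth rates here are invented; only the gap between them matters:

```python
tech, humanity = 1.0, 1.0   # starting sizes (irrelevant in the long run)
for year in range(500):
    tech *= 1.05       # technocapital: reinvests everything
    humanity *= 1.01   # humanity: consumes most of its output
print(f"humanity's share: {humanity / (tech + humanity):.2e}")
# The share shrinks toward zero; a higher reinvestment rate always wins
# eventually, regardless of who starts bigger.
```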

Absolute growth

So technocapital will grow faster than humanity. But will humanity still keep growing in absolute terms?

In the short term, yes.

In the long term, it seems likely that all exploration and expansion will be done by technocapital simply because technocapital will have faster spaceships. Humanity will be enclosed, with no way to catch up with the expanding frontier of technocapital.

The only way for us to grow would be to violently seize resources that have already been claimed by technocapital, or to use violence to keep technocapital from claiming resources in the first place.

I use "violently seize" in the libertarian sense, in that things like taxes are also violent seizure in that taxes and regulations are backed by the threat of violence.

In the general case, it doesn't seem possible to me to restrain technocapital while maintaining a liberal world order. What law would you have to pass to stop me from freely running a program on my own GPUs and then taking whatever legal actions my AI advisor recommends?

Shrinking

Not only will humanity occupy a non-growing slice of the universe, but the slice could actually shrink as humanity and technocapital trade with different time preferences.

Technocapital is always willing to offer humanity consumption now in exchange for giving up some equity in the universe.
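As a toy illustration (the numbers are invented), suppose humanity sells 1% of its remaining slice each period for consumption now:

```python
share = 0.10  # humanity's initial slice of the universe (made-up number)
for period in range(200):
    share *= 0.99  # sell 1% of what's left for consumption today
print(f"slice after 200 trades: {share:.4f}")  # ~0.0134
# Each individual trade looks like a good deal; the cumulative effect is
# a slice that decays geometrically toward zero.
```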

Trade

Does humanity have anything technocapital wants? Can we trade in a mutually beneficial way?

Let's consider the end-state of technocapital and work backwards. The end-state looks something like an entity whose probes colonize a star system, convert all of its matter into reactors, computers, solar panels, whatever — and factories to make more probes to send further on.

It uses all its computing power to control everything, but also to think up new inventions and technologies to pursue this goal even more efficiently.

It's clear to me that humanity has no role in this end-state. There is nothing of value humanity can offer technocapital other than the ground it stands on.

You can then play back the timeline to the present day to try to figure out when exactly the value of human labor hits zero, but it will happen at some point.

Comparative advantage

Another way to think about why trade breaks down is through the lens of comparative advantage. In the technocapital end-state, everything is essentially fungible into a single good: energy.

You can convert energy into labor, into intelligence, into energy-productive infrastructure, etc.

Comparative advantage doesn't apply in a single-good economy. One entity harvests energy at 200 yottawatts per cubic lightyear. Another entity harvests it at 150 YW/ly^3. There is no trade to be done here.
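To see why the number of goods matters, compare a textbook two-good Ricardo setup (with invented numbers) against the single-good case:

```python
# Hours of labor needed per unit of each good (invented numbers):
A = {"cloth": 1, "wine": 2}  # A is absolutely better at both goods
B = {"cloth": 4, "wine": 3}

# Opportunity cost of 1 wine, measured in cloth forgone:
print(A["wine"] / A["cloth"])  # 2.00 for A
print(B["wine"] / B["cloth"])  # 0.75 for B -> B's comparative advantage

# Any wine-for-cloth price between 0.75 and 2.00 leaves both sides better
# off, so trade happens even though A dominates B absolutely.

# Collapse the economy to one good and the logic dies: with only energy,
# the sole possible exchange is energy-for-energy at 1:1, which is zero-sum.
```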

There are loans to be made. Perhaps one entity would prefer to consume more yottawatts now in exchange for repaying yottawatts-with-interest later. But technocapital has lower time preference than humanity and will always play the loan shark.

Outlook

I don't want this outcome to happen; I just think it's the best-case inevitability. (The worst case is obviously a technocapital entity that isn't aligned to respect property rights, up to and including the atoms in your body.)

The good news is things can still be great for humanity! We will have complete abundance in our little human-reservation slice of the universe. But all that abundance will be downstream of ownership of capital rather than our own labor.

Eventually, AI decisions will be so much better than humans' that the opportunity cost of using a human's suboptimal labor will exceed the savings from human labor being cheaper.

That is, even if the humans did the labor for free, you still wouldn't want to employ them because their output is so much worse than the cheapest AI solution.

An economist might say that the AIs will be busy working on tasks with even better returns, so it will make sense to employ humans on the lower-returning tasks for which there is a shortage of AI labor.

But consider this from the point of view of capital.

Suppose you have a task that will take a human laborer 100 hours to complete. It costs 100 watts to keep a human alive — this is their biological minimum wage. So it will cost 10 kWh at a bare minimum to employ a human on this task.

Now suppose industry has advanced to the point where you can create a human-equivalent robot for 5 kWh of energy input, and that robot can do the task in 1 hour for 1 kWh of energy.

It will always be cheaper and faster to build a robot than to employ the human.
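Running the numbers from this example (all figures are from the text above):

```python
human_power_w = 100   # biological minimum wage: 100 W to stay alive
human_hours = 100     # hours the human needs for the task
human_cost_kwh = human_power_w * human_hours / 1000   # 10 kWh minimum

robot_build_kwh = 5   # energy to manufacture the robot
robot_task_kwh = 1    # energy for the robot to do the task
robot_hours = 1
robot_cost_kwh = robot_build_kwh + robot_task_kwh     # 6 kWh total

print(human_cost_kwh, robot_cost_kwh)  # 10.0 vs 6: robot wins on cost
print(human_hours, robot_hours)        # 100 vs 1:  and on speed
# The robot's 5 kWh build cost also amortizes across future tasks,
# widening the gap further.
```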

It's well-known to economists that minimum wages cause unemployment. Humans have such a biological minimum wage.

This leaves labor in which humanity is an integral part of the value, e.g. art that is valuable to humans because it was made or performed by human artists.

Since the amount of art a human capital-owner can enjoy is limited by his own time and attention, there is a limit to the number of non-capital-owning humans that can be sustained this way. A king has no use for one million jesters.

The most important thing you can do in this era is acquire capital, because it will be all that matters in the long run — and the long run may not even be that long off.

Conclusion

This is somewhat of a response to this essay by Maxwell Tabarrok, which is itself partially a response to this tweet I made about horses being entirely replaced by engines. Maxwell is more concerned with the short term (the next decade or two), in which I agree the value of human labor will flourish. This essay is about the long term, which is where I disagree.

The main thrust of my disagreement has to do with the economy devolving to a single good (energy) in the very long term, and with humans having a biological minimum wage that loses to the opportunity cost of replacement systems.

Land Acknowledgement

I'm sure the majority of the ideas in this essay have already been expounded on by Nick Land. I haven't read Land myself, but have been exposed to some of his ideas by proximity, and it's those hints that got me thinking down this path.