A critical flaw in arguments like this is the embedded assumption that the creation of democratic policy sits outside the system in some sense. The existence of AGI implies that it can effectively turn most people into sock puppets at scale without them realizing they are sock puppets.
Do you think, in this hypothesized environment, that “democratic policy” will be the organic will of the people? It assumes much more agency on the part of people than will actually exist, and possibly more than even exists now.
Fox News already did this in the US, and it didn't take AGI.
The Greeks already figured out thousands of years ago that the best way to implement democracy was via random selection. Yet here we are: everyone believes that 'democracy' necessitates 'voting', totally ignoring all the issues that come with voting.
The concept of voting, in a nation of hundreds of millions of people, is just dumb. Nobody knows anything about any of the candidates; everything people think they know was told to them by the corporate-controlled media, and they only hear about candidates who were covered by the media - basically only candidates chosen by the establishment. It's a joke. People get the privilege of voting for which party will oppress them.
Current democracy is akin to the media making up a story like 'The Wizard of Oz' and then offering you a vote for either the Lion, the Robot, or the Scarecrow. You have no idea who any of these candidates are; you can't even be sure they actually exist. Everything you know about them could literally have been made up by whoever told the story; and yet, when asked to vote, people are sure they understand what they're doing. They're so sure it's all legit that they'll viciously argue their candidate's position as if the candidate were a family member they knew personally.
Greek states were neither particularly stable nor particularly long-lived. Irrespective of its moral merits, the Greek system was outcompeted by monarchies and eventually the Roman Republic. It’s hard to pinpoint the blame, exactly, but I’d be cautious, especially since modern democracies arguably came about due to the pressures of industrialization, and previous models developed in very different environments.
Good idea. Random selection is interesting, but I don't know if it can work today. A solution to the issue you mentioned ("Nobody knows anything about any of the candidates") is a system that allows people to vote only for people they know personally, plus some algorithm (maybe something like the PageRank algorithm that Google used) that rates each citizen according to the votes they get, with each vote weighted by the rating of the voter. That way the rating flows to the people who are genuinely trusted, not to the best-funded career politicians. Just an idea; maybe there are problems with that too, if it can be gamed, but it's worth trying.
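To make the idea concrete, here's a minimal, hypothetical sketch of such a trust-flow rating: a PageRank-style iteration where each citizen's rating is split among the people they vote for, so trust pools with widely trusted people. The names, damping factor, and iteration count are all illustrative assumptions, not a worked-out voting system.

```python
def trust_rank(votes: dict[str, list[str]], damping: float = 0.85,
               iters: int = 50) -> dict[str, float]:
    """PageRank-style trust scores: votes weighted by the voter's own rating."""
    people = list(votes)
    rank = {p: 1.0 / len(people) for p in people}
    for _ in range(iters):
        # everyone keeps a small base rating so nobody drops to zero
        new = {p: (1 - damping) / len(people) for p in people}
        for voter, chosen in votes.items():
            if chosen:
                # a voter's rating is split evenly among the people they vote for
                share = damping * rank[voter] / len(chosen)
                for c in chosen:
                    new[c] += share
            else:
                # citizens who vote for nobody redistribute their weight evenly
                for p in people:
                    new[p] += damping * rank[voter] / len(people)
        rank = new
    return rank

# ana and bob both trust cruz; cruz trusts ana: trust accumulates with cruz
ranks = trust_rank({"ana": ["cruz"], "bob": ["cruz"], "cruz": ["ana"]})
print(max(ranks, key=ranks.get))  # cruz
```

The gaming worry stands, of course: PageRank itself spawned an entire link-farming industry, so a real system would need sybil resistance on top.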
A solution does exist: micro-democracy. Delegate more decision-making authority to the smallest geographic unit possible; then people are voting for someone from their neighborhood.
I don't see how selecting the Lion, the Robot or the Scarecrow at random is going to help with any of the issues you mentioned. Now some rando (or group of randos) that you didn't even know existed gets power based on pure luck. You will still need media to learn about them and they could still be made up.
At least elections have a veneer of consent since people are asked which of the available options they prefer. Can you imagine anyone going to war because people chosen by a lottery wheel asked for it?
This is a problem of scale. The Greeks back then lived in small city-states where random selection meant that every able bodied male had a good shot at holding an important office at least once in their lifetime. You didn't need to hatch devious schemes to come to power. You couldn't abuse your fellow men because they would be in charge tomorrow. That's the true power of random selection and it's completely inapplicable to today's society at large.
> Now some rando (or group of randos) that you didn't even know existed gets power based on pure luck.
Being chosen at random could be better than being chosen by elites who are actively trying to oppress you. You get the median thing instead of the below-median thing.
> At least elections have a veneer of consent since people are asked which of the available options they prefer. Can you imagine anyone going to war because people chosen by a lottery wheel asked for it?
Exactly. It would remove the false veneer of consent. That's a feature, not a cost.
> The Greeks back then lived in small city-states where random selection meant that every able bodied male had a good shot at holding an important office at least once in their lifetime.
Re-apply the intended principles of federalism so that only decisions of insurmountable national relevance are made at the national level and the large majority of decisions are made at the local level.
There's also the simple fact that in a regular electoral system there is a mechanism for figuring out whether you're voting for the Lion, the Robot, or the Scarecrow: the previous track record of that individual or the faction they're affiliated with. And the Lion, Robot, or Scarecrow, or at least their party, usually intends to get reelected, so while they always overpromise, they have some incentive to deliver something the electorate wants.
The solution to "candidates don't always deliver what the electorate wanted them to deliver and the electorate doesn't always hold them accountable" isn't "let's put people who never promised anything in the first place and aren't accountable for anything in charge, and somehow assume that they're going to be more benign".
There are elements of truth to this, but it’s a wild exaggeration. It feeds into exactly the kind of political cynicism that stops people voting and makes the problem worse.
It would make more sense to vote on policy by stating priorities and preventing impossible combinations (you can't have taxes reduced while demanding more funding for services); then the policy votes get mapped to the corresponding candidates.
People in general don't have the time or inclination to properly study the important details of each and every issue before voting on them.
That's why it makes sense to outsource the decision making to a group of people that are being paid to study these issues full-time.
Given some balanced (yes, there's a problem there) expert advice, I think randos might make better choices than career politicians focused on extending their power. The randos would just return to their old careers afterwards.
This is inferior to random selection because it still has the issue that a candidate can claim to hold certain positions but, once voted in, not follow through on any of them. The reality of our current democracy is that anyone who even manages to step into the arena is likely already bought and paid for. There's a candidate with a prepared narrative to appeal to every kind of fool under the sun. With random selection, you'll get average people; their stated positions hardly matter, because once all the seats of Congress and the Senate have been filled with random people, their values will almost certainly reflect the true values of average citizens. That's how probability works.
With the current approach to voting, all the candidates you get to choose from have already been pre-screened for: 1. thirst for power, and 2. alignment with the interests of big capital holders (who paid for their campaigns to get them to this stage).
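The probabilistic claim here (that a large enough random sample mirrors the population) is just the law of large numbers, and it's easy to sanity-check. A minimal sketch, where the population values, their spread, and the assembly size are all illustrative assumptions:

```python
import random

random.seed(42)  # fixed seed so the sketch is repeatable

# a million citizens, each with a position on some policy axis
population = [random.gauss(0.0, 1.0) for _ in range(1_000_000)]
# one congress-sized assembly drawn purely at random
assembly = random.sample(population, 535)

pop_mean = sum(population) / len(population)
asm_mean = sum(assembly) / len(assembly)
# standard error for n=535 is about 1/sqrt(535) ~ 0.043,
# so the assembly's mean sits very close to the population's
print(abs(pop_mean - asm_mean) < 0.3)
```

The catch is that this only guarantees the assembly is representative on average; it says nothing about whether average citizens govern well, which is the objection the replies above raise.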
I haven't got the first clue about governing a country, so I'd rely on people telling me what to do. If they can convince me (which will be easy; trillion-dollar companies and powerful billionaire oligarchs convince people to act against their own self-interest all the time), they end up running the country, but I take the blame.
> I haven't got the first clue about governing a country
Is this really so different from quite a number of high-profile politicians today? Many are mostly good at networking and using the media machine. The actual competence lies with the invisible people behind them, and the bureaucrats. I see little or no difference, even disregarding current administrations (and not just in the US).
The most critical flaw is thinking that any policy on its own would be able to solve the issue. The technology will find a way no matter the policy.
A society built on empathy would have been able to work out any issue brought by technology, as long as empathic goals take priority. Unfortunately our society is far from being based on empathy, to say the least. And technology, and the people wielding it, will always work around and past the formal laws, rules, and policies in such a society. (That isn't to say all those laws and rules aren't needed. They are like levees and dams: necessary local fixes, in time and space, which won't help in the case of the global ocean rise that AGI and robots (even less-than-AGI ones) will be like.)
Maybe it is one of the technological Filters: we didn't become empathic enough (and I mean not only at the individual level; we are even less so at the level of societal systems) before AGI, and as a result wouldn't be able to instill enough empathy into the AGI.
Normal human communication already does that. Do you really think almost any of the people who share their political opinions came up with them by being rational and working it out from information? Of course not. They just copied what they were told to believe. Almost nobody applies critical thought to politics, it's just "I believe something so I'm right and everybody else is stupid/evil".
> Almost nobody applies critical thought to politics
Not only that, but they actively stop applying critical thinking when the same problem is framed in a political way. And yes, it's both sides; and yes, the more "educated" people are, the worse their results (i.e., an almost complete reversal compared to framing the same problem as skin care products vs. gun control). There's a recent paper on this, also covered and somewhat replicated by popular YouTubers.
> Almost nobody applies critical thought to politics
Because they have different concerns, and time and attention are scarce. With all possible social changes like the article suggests this focus could change too. Ultimately, when things will get too bad, uprisings happen and sometimes things change. And I hope the more we (collectively) get through, the higher are the chances we start noticing the patterns and stopping early.
> With all possible social changes like the article suggests this focus could change too.
I have an anecdote from Denmark. It’s a rich country with one of the best work-life balances in the world, socialized healthcare, and a social safety net.
I noticed that during the election, the ads showed just the candidate’s face and party name. It’s like they didn’t even have a message. I asked why, and the locals told me nobody cares because “they’re all the same anyway”.
Two things could be happening: either all the candidates really are the same, or people choose to focus on doing the things they like with their free time and resources. My feeling tells me it’s the second.
my read is that most people are viscerally reacting to feeling insulted by being called out on how most of what we think, most of the time, is simply chorus-like repetition of the general vibe we lead ourselves into believing is the vibe of "our" kind of people: our tribe of like-minded individuals, the hacker crowd.
but at least I can admit this. it's only at certain sparse points in anybody's life that we are forced to really think critically; this experience is terribly difficult, and if/when it's real enough it comes with the existential dread of impossible choices weighted by real-world consequences. I remind myself of this so as to feel better about how I am indeed a mindless bot preaching to the choir, repeating what I was told to repeat, and pretending that I am fully present and fully free at all times (nobody is... that would be exhausting)
> Almost nobody applies critical thought to politics
Including you. This is a 3000-year-old critique you just uncritically parroted. It is the original thought-terminating cliché. People have always been calling each other ideologically brainwashed NPCs and themselves independent maverick free thinkers.
Except my thoughts are original and critical, everyone else is just a sheep. /s
Democratic societies always involve years of media and other manipulation to plow and seed the minds of the general public with presumptions, associations, spin, appeals to emotion, and so on. The will is a product of belief, and if beliefs are saturated with such stuff, the so-called “will of the people” - a terrifying and tyrannical concept even at face value - is a product of what people have been led to believe by tyrannical and powerful interests. Add to that that most people are utterly unqualified to participate politically, both because they lack the knowledge and reasoning skill, and because of their lack of virtue, acting out of undisciplined fear or appetite. And sadly, much of these disqualifying flaws also characterize our political leadership!
Our political progression follows the decadence described in Plato’s Republic - the decline into timocracy, oligarchy, democracy, and finally tyranny - to the letter.
In so-called democratic societies, the association of monarchy and aristocracy with tyranny is unthinking and reflexive, but it is not rational. This is a conditioned prejudice that is ignorant of history. And partly it comes from a hyperliberalism that substitutes a live-and-let-live attitude, situated within a context of objective morality and norms and laws drawn from it, with a pathological, relativizing revolution that seethes at the very idea of moral limits, views them as “tyrannical”, and thus seeks to overthrow them. This necessarily leads to tyranny, as morality is the only protection against tyranny; when the authority of objective truth and good are destroyed, power fills the vacuum. We become psychologically and spiritually conquered. The paradox of such “anarchy” is that it is exactly the condition under which “might makes right” can flourish.
You have a world where most people act against their own economic interests - I think the "mass mind hacking" achievement can be considered unlocked. It's just expensive and exclusive.
I've spent many years moving away from relying on third parties: I got my own servers and do everything locally, with almost no binary blobs. It has been fun, saved me money, and created a more powerful and pleasant IT environment.
However, I recently got a 100 EUR/month LLM subscription. That is the most I've ever spent on IT, excluding a CAD software license. So I've made a huge 180 and am now firmly back on the lap of US companies. I must say I enjoyed my autonomy while it lasted.
One day AI will be democratized/cheap allowing people to self host what are now leading edge models, but it will take a while.
I don't see how AI can become democratized. (I don't follow this stuff too closely, but) it seems like larger models with less quantization and more parameters always outperform smaller models of the same type, and that trend isn't stopping, so if/when we get consumer hardware and local models that equal today's SotA SaaS models, the SotA SaaS models of that time will be even better, and even more impossible to run on consumer hardware. Not to mention that local AI is reliant on handouts from big business - both in base models that the community could never afford to train themselves, and in high-VRAM GPUs that can run big models, so if SaaS AI is more profitable, I don't think we'll be "allowed" to run the SotA at home.
Human skill was already democratized in that anyone can obtain skills, and businesses have to be good at managing those people if they want to profit from those skills - ultimately the power is in the hands of the skilled individuals. But in the hypothetical AI future, where AI has superhuman skill, and human skills are devalued, it seems like there will be a more cynical, direct conversion between the money you can spend and the quality of your output, and local/self-hosted AI will never be able to compete with the resources of big business.
Have you tried out Gemma3? The 4b parameter model runs super well on a Macbook as quickly as ChatGPT 4o. Of course the results are a bit worse and other product features (search, codex etc) don't come along for the ride, but wow, it feels very close.
Claude Code can use tools and iterate; if it makes a mistake it will notice and retry. This is a massive boost over copy-pasting into a chat and having trust broken by the LLM confidently making mistakes. By making it responsible for the results, it has increased utility. E.g. "when I run the program I get error X; see if you can find out what caused it. Run make in ./build and run the program to see if the error is gone". In addition, Claude has written some nice code on occasion that was simply no different from how I would have done it. In a few sentences I can explain my coding style, and the rest is derived from existing code.
I came across this a couple of weeks ago, and it's a good read. I'd recommend it to everyone interested in this topic.
Although it was written somewhat as a warning, I feel Western countries (especially the US) are heading very much toward the terrafoam future. Mass immigration is making it hard to maintain order in some places, and if AI causes large unemployment it will only get worse.
I don't want to get into politics, but to shift things slightly --- what technological and business structures might help to shift things for the better?
since it was set up as a public benefit corporation.
Similarly, there are still electric co-operatives --- how are they handling solar? Do they offer an option to use one's share of the profits to purchase solar panels and batteries?
What would be an equivalent structure for an AI company which would actually be meaningful (and since circling back to politics is inevitable, enforceable)?
... the reason you are able to chat on the internet instead of doing low-paid, hard work. But don't discount the upside of immigration: it is a great subject for spreading narratives. "Others" is a matter to which people are very sensitive. The Irish are absolute trash, as are the Italians. But what we really could do without are the Catholics; they are a direct threat to society.
There is always crime to report. But notice the narratives are never about white-collar crime. That might come too close.
As a lukewarm defense of their statement, mass immigration has indirect effects too, it's not merely a reflection on the immigrants themselves.
There is a global rise in far right populism, and a large part of the justification and rhetoric they use points directly to mass immigration policies. There's a myriad of things they blame: crime, demographic or culture shift, economy.
To be clear, that isn't to say they're right blaming immigration. But its existence has put an enormous burden on democracies in The West. Just look at what a promise to get rid of immigrants did to the US 2016+: a captured, sycophantic, authoritarian government that disregards the rule of law regularly. Leading to regular mass protest and public opposition to LEO.
In Europe it's common to see people point to token heinous crimes - that pregnant woman raped into a miscarriage and her attacker given 12 months, the pedophile gang in the UK - and then use the demographics involved to radicalize people (especially young men - see the Alt Right Pipeline).
Although I wouldn't pin it just on mass immigration, but also economic malaise from short-sighted decisions (stopping nuclear power and fracking and just importing energy) and being so weak on crime.
Like in Sweden we pay ~50% income tax plus 25% VAT, etc. so you can barely save up, so even as a professional engineer I can't afford a car or a house instead of an apartment (also as my wife is still looking for work). Meanwhile terrible criminals like the Nytorgsmannen got only ~5 years in prison for over 25 rapes, and was living in a rent-controlled apartment in central Stockholm! I wouldn't be able to afford that at market rates!
But the far-right party also sucks, just making it harder on decent non-Swedes like myself and my wife (doubled the time to citizenship for example), while doing nothing about the aforementioned criminals (the Nytorgsmannen is actually Swedish too).
There is no common sense party that'll just put criminals in prison and embrace economic growth (no AI act, etc.) and free markets and competition - hopefully Elon Musk's new party will do well, and a sort of Musk-Zubrin-Kuan Yew-Bukele pragmatism will become popular.
For closer to what the OP is referring to, see the riots in the UK last year.
> I can't afford a car or a house instead of an apartment (also as my wife is still looking for work). Meanwhile terrible criminals like the Nytorgsmannen got only ~5 years in prison for over 25 rapes, and was living in a rent-controlled apartment in central Stockholm! I wouldn't be able to afford that at market rates!
This almost reads like a satire of right-wing populist propaganda: people's real economic grievances are getting redirected toward the most inconsequential and powerless scapegoats in society, immigrants.
This is especially tragic for people who are themselves immigrants, who will also become the target of these populists. The more people suffer economically, the more they look for real alternatives. Then an "outsider" right populist comes in and offers just that - except, of course, with the backing of the wealthiest class of society, the ones actually responsible for your economic grievances in the first place.
This pattern repeats itself all across the western liberal democracies. It's not the people rising up; it's the richest people in the world holding on to power while the neoliberal house of cards that made them rich comes crumbling down.
> it's the richest people in the world holding on to power while the
> neoliberal house of cards that made them rich comes crumbling down.
Bingo. With all their tax rulings, exemptions, privatizations of public services, and disinvestment in education, the ecosystem starts to suffer. They either reverse course, share power, and move to a win-win mindset, or... double down and hollow out the last institutions while distracting people with rage about imaginary transgender people and people with melanin. When the host dies, they jump onto the next victim. There are already sightings of Vance in Germany.
What we are seeing time and time again, the parasites are able to reprogram the host, steering it towards its own death.
Yes, immigrants are used as a scapegoat. No, immigration is not a completely faultless thing that can be allowed willy-nilly with zero negative consequences.
Did the rise of fire, the wheel, the printing press, manufacturing, and microprocessors also give rise to futures without economic rights? I can download a dozen LLMs today and run them on my own machine. AI may well do the opposite, and democratize information and intelligence in currently unimaginable ways. It's far too early to say.
> Did the rise of fire, the wheel, the printing press, manufacturing, and microprocessors also give rise to futures without economic rights?
The rise of steam engines did. And the printing press and electrical engines did the opposite.
It's not hard to understand the difference, it's about the minimum size of an economically useful application. If it's large, it creates elites, if it's small, it democratizes the society.
LLMs by their nature have enormous minimal sizes, and they promise to increase by orders of magnitude.
I must be upside down about something... Aren't "economic rights" precisely the sort of thing that the wheel or the printing press created? The right to collect tolls on this road, the right to prevent copies of this book...
The scary thing about AI is that people might end up with the right to do problematic things that were previously infeasible.
The printing press led to more than a century of religious wars in Europe, perhaps even deadlier than WW2 on a per-capita basis.
20 years ago we all thought that the Internet would democratize information and promote human rights. It did democratize information, and that has had both positive and negative consequences. Political extremism and social distrust have increased. Some of the institutions that kept society from falling apart, like local news, have been dramatically weakened. Addiction and social disconnection are real problems.
There was quite a lot of slavery and conquering empires in between the invention of fire and microprocessors, so yes to an extent. Microprocessors haven't put an end to authoritarian regimes or massive wealth inequalities and the corrupting effect that has on politics, unfortunately.
A lot of advances led to bad things, at the same time they led to good things.
Conversely a lot of very bad things led to good things. Worker rights advanced greatly after the plague. A lot of people died but that also mean there was a shortage of labour.
Similarly WWII, advanced women's rights because they were needed to provide vital infrastructure.
Good and bad things have good and bad outcomes; much of what defines whether something is good or bad is the balance of outcomes, and it would be foolhardy to classify anything as universally good or bad. Accept the good outcomes of the bad; address the bad outcomes of the good.
I’m curious as to why you think this is a good comparison. I hear it a lot, but I don’t think it makes as much sense as its promulgators propose. Did fire, the wheel, or any of these other things threaten the very process of human innovation itself? Do you not see a fundamental difference? People like to say “democratize” all the time, but how democratized would you feel if you and everyone you know couldn’t afford a pot to piss in or a window to throw it out of, much less the hardware and electricity to run your local LLM?
Is a future where AI replaces most human labor rendered impossible by the following consideration:
-- In such a future, people will have minimal income (possibly some UBI) and therefore there will be few who can afford the products and services generated by AI
-- Therefore the AI generates greatly reduced wealth
-- Therefore there’s greatly reduced wealth to pay for the AI
The problem with this calculus is that the AI exists to benefit its owners; the economy itself doesn't really matter, it's just the fastest path to getting what the owners want for the time being.
Exactly. And as implied by the term techno-feudalism, the owners are okay with a greatly reduced economy, and in some cases a severe reduction in quality of life overall, as long as they end up ruling over what's left.
> This is a late 20th century myopic view of the economy. In the ages and the places long before, most of human toil was enjoyed by a tiny elite.
And overall wealth levels were much lower. It was the expansion of consumption to the masses that drove the enormous increase in wealth that those of us in "developed" countries now live with and enjoy.
> Doesn't mean we should continue with the old ways.
The GP was claiming that it is "20th century myopic" to not notice that in the past the products of most human toil went mostly to a small elite. My very point was that that old way of doing things didn't generate much wealth, not that the way things have changed is all good. I'm not advocating for any of the old ways, I'm saying that having an economic system that brings benefits to all is an important component of growing the overall wealth of a society (and of humanity overall).
>In such a future, people will have minimal income (possibly some UBI) and therefore there will be few who can afford the products and services generated by AI
Productivity increases make products cheaper. To the extent that your hypothetical AI manufacturer can produce widgets with less human labor, it only makes sense to do so where it would reduce overall costs. By reducing cost, the manufacturer can provide more value at a lower cost to the consumer.
Increased productivity means greater leisure time. Alternatively, that time can be applied to solving new problems and producing novel products. New opportunities are unlocked by the availability of labor, which allows for greater specialization, which in-turn unlocks greater productivity and the flywheel of human ingenuity continues to accelerate.
UBI is another thorny issue. It may inflate the overall supply of currency and distribute it via political means. If the inflation of the money supply outpaces the productivity gains, then prices will not fall.
Instead of having the gains of productivity allocated by the market to consumers, those with political connections will be the first to benefit, as per Cantillon effects. Under the worst-case scenario this might include distribution of UBI via social credit scores or other dystopian ratings. However, even under what advocates might call the ideal scenario, capital flows would still be dictated by large government-sector or public-private partnership projects. We see this today with central bank flows directly influencing Wall St. valuations.
> Increased productivity means greater leisure time.
Productivity has been increasing steadily for decades. Do you see any evidence that leisure time has tracked it?
IMO what will actually happen is feudal stasis after a huge die-off. There will be no market for new products and no ruling class interest in solving new problems.
If this sounds far-fetched, consider that we can see this happening already. This is exactly the ideal world of the Trump administration and its backers. They have literally slashed funding for public health, R&D, and education.
And what's the response? Thiel, Zuckerberg, Bezos, and Altman haven't said a word against the most catastrophic reversal of public science policy since Galileo and the Inquisition. Musk is pissed because he's been sidelined, but he was personally involved, through DOGE, in cutting funding to NASA and NOAA.
So what will AI be used for? Clearly the goal is to replace most of the working population. And then what?
One clue is that Musk cares so much about free speech and public debate he's trying to retrain Grok to be less liberal.
None of them - not one - seem even remotely interested in funding new physics, cancer research, abundant clean energy, or any other genuinely novel boundary-breaking application of AI, or science in general. They have the money, they're not doing it. Why?
The focus is entirely on building a nostalgic 1950s world with rockets, robots, apartheid, corporate sovereignty, and ideological management of information and belief.
And that includes AI as a tool for enforcing business-as-usual, not as a tool for anything dangerous, original, or unruly which threatens their political and economic status.
No, the AI doesn't actually need to interact with the world economy; it just needs to be capable of self-subsistence in its energy and material usage. And when AI takes off completely it can vertically integrate with the supply of energy and materials.
Wealth is not a thing in itself; it's a representation of value and purchasing power. The AI will create its own economy when it is able to mine materials and automate energy generation.
The end goal is to ensure the survival of a small group of technocrats that control all production on Earth due to the force multiplier effect of technological advancements. This necessitates the depopulation of Earth.
-- In such a future, people will have minimal income (possibly some UBI) and therefore there will be few who can afford the products and services generated by AI
-- Corporate profits drop (or growth slows) and there is demand from the powers that be to increase taxation in order to increase the UBI.
-- People can afford the products and services.
Unfortunately, with no jobs the products and services could become exclusively entertainment-related.
Let's say AI gets so good that it is better than people at most jobs. How can that economy work? If people aren't working, they aren't making money. If they don't have money, they can't pay for the goods and services produced by AI workers. So then there's no need for AI workers.
UBI can't fix it because a) it won't be enough to drive our whole economy, and b) it amounts to businesses paying customers to buy their products, which makes no sense.
You got this backwards - there won’t be need for humans outside of the elite class. 0.1% or 0.01% of mankind will control all the resources. They will also control robots with guns.
Less than 100 years ago we had a guy who convinced a small group of Germans to seize power and try to exterminate or enslave the vast majority of humans on Earth - just because he felt they were inferior. Imagine if he had superhuman AI at his disposal.
In the next 50 years we will have different factions within the elites fighting for power, without any regard for the wellbeing of the lower class, who will probably be contained in fully automated ghettos. It could get really dark really fast.
> You got this backwards - there won’t be need for humans outside of the elite class. 0.1% or 0.01% of mankind will control all the resources.
Let me rephrase that from 'So then there's no need for AI workers.' to 'So then there's no money to pay for AI workers.'
The UBI approach creates a closed economic loop: Company A pays taxes → Government gives UBI to consumers → Consumers buy from Company A → Company A pays taxes... This is functionally identical to Company A directly paying people to buy Company A's products, which makes no economic sense.
It's like Ford paying his workers $50/day, but the only customers buying Ford cars are Ford workers spending their $50/day wages. Ford would go bankrupt - there's no external value creation, just money circulating in circles.
Where does the actual wealth come from in this system? Who are the net buyers that make the businesses profitable enough to sustain the UBI taxes?
UBI in an AI-dominated economy can't create a functioning economy - it's just an imaginary self-licking ice cream cone.
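The closed loop described above can be made concrete with a toy simulation (all numbers and the single-company setup are illustrative assumptions, not a real economic model): money just circulates between one company, its consumers, and the tax authority, and the total never grows.

```python
# Toy model of the closed UBI loop: one company, one consumer bloc, one
# government, no external buyers. All figures are hypothetical.

def simulate_loop(company_cash, consumer_cash, tax_rate, rounds):
    """Circulate money: the company is taxed, the tax is paid out as UBI,
    and consumers spend all of their UBI back at the company."""
    total_start = company_cash + consumer_cash
    for _ in range(rounds):
        tax = company_cash * tax_rate      # government taxes the company
        company_cash -= tax
        consumer_cash += tax               # tax is distributed as UBI
        company_cash += consumer_cash      # consumers spend everything
        consumer_cash = 0.0
    # No wealth is created or destroyed -- the total is invariant.
    return company_cash, consumer_cash, total_start

company, consumers, start = simulate_loop(1000.0, 0.0, 0.3, rounds=10)
print(company + consumers == start)  # True: the loop only circulates money
```

However many rounds you run, the system's total cash is unchanged; "growth" would have to come from some external input, which is exactly the objection above.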
There will still be a functioning economy - serving the elite class. There will be a million people total who control all the resources. These people will form a new society, with their own government, their own laws, their own values, products, services, etc. Everybody else will be out of luck: at first they will be given "UBI", then they will be cordoned into special zones, basically concentration camps, and eventually exterminated, because the elite has no need for them. Why waste resources on billions of useless humans, widely seen by elites as an inferior species? They will probably make a virus to wipe us out and see that as a reboot of the human race.
Or the technological singularity happens before that, and either AI will kill us all, or humans will merge with AI.
The Ford model shown has been oversimplified to the point of absurdity by using only one industry. The real economy is about flows between multiple sectors. Who's buying bread? Do they have enough disposable income to buy packaged bread or just flour to bake at home? If there's a packaged bread industry, does it become robust enough to justify buying delivery trucks from Ford?
On the other hand, on a much broader scale, the planet itself is a closed economic loop. There's a finite amount of resources and we're all just cycling most of them around back and forth.
Arguably, a significant amount of "growth" has come from taking resources that formerly were not "on the books" and putting them on. The silver in the New World wasn't in (Western) ledgers until the 1500s, the oil under the Middle East was just goo until the late 1800s. The uranium ore in your backyard suddenly got a lot more interesting after 1940.
New value can come from inventing new and useful applications for existing resources or by finding new external inputs (maybe capturing some of that radiation the giant fusion sphere overhead is blasting in our direction).
Why does there have to be a need for AI? Once an AI has the means to collect its own resources, the opinions of humans regarding its market utility become somewhat less important.
The most likely scenario is that everyone but those who own AI starves, and the ones who remain around are allowed to exist because powerful psychopaths still desire literal slaves to lord over: someone to have sex with, and someone to hurt/hunt/etc.
When people starve and have no means to revolt against their massively overpowered AI/robot overlords, I'd expect people to go back to subsistence farming (after a massive reduction in population numbers).
A while later, the world is living in a dichotomy of people living off the land and some high tech spots of fully autonomous and self-maintaining robots that do useless work for bored people.
Knowing people and especially the rich, I don't believe in Culture-like utopia, unfortunately, sad as it may be.
That's assuming the AI owners would tolerate the subsistence farmers on their lands (it's obvious that in this scenario, all the land would be bought up by the AI owners eventually).
I wouldn't believe that any sort of economy or governmental system would actually survive any of this. Ford was right in that sense, without people with well-paying jobs, no one will buy the services of robots and AIs. The only thing that would help would be the massive redistribution of wealth through inheritance taxation and taxation on ownership itself. Plus UBI, though I'm fairly sceptical of what that would do to a society without purpose.
We may find that, if our baser needs are so easily come by that we have tremendous free time, much of the world is instead pursuing things like the sciences or arts instead of continuing to try to cosplay 20th century capitalism.
Why are we all doing this? By this, I mean, gestures at everything this? About 80% of us will say, so that we don't starve, and can then amuse ourselves however it pleases us in the meantime. 19% will say because they enjoy being impactful or some similar corporate bullshit that will elicit eyerolls. And 1% do it simply because they enjoy holding power over other people and management in the workplace provides a source of that in a semi-legal way.
So the 80% of people will adapt quite well to a post-scarcity world. 19% will require therapy. And 1% will fight tooth and nail to not have us get there.
I hope there's still some sciencing left we can do better than the AI because I start to lose it after playing games/watching tv/doing nothing productive for >1 week.
You don't think that a post-scarcity world would provide opportunities to wield power over others? People will always build hierarchy; we're wired for it.
This is something that pisses me off about anti-capitalists. They talk as if money is the most important thing and want us to all be equal with money, but they implicitly want inequality in other even more important areas like social status. Capitalism at least provides an alternative route to social status instead of just politics, making it available to more people, not less.
There are plenty of non-political routes to social status.
Ask how many of your neighbours can name three Supreme Court justices (or hell, their senators and representative) versus how many can name three Kardashian sisters.
TBH, I'd hope for the end of "broad" social status. I'd love to see a retreat towards smaller circles where status is earned through displays of talent and respectable deeds, not just by dominating/manufacturing/buying a media presence.
If I may speculate the opposite: with cost-effective energy and a plateau in AI development, the per-unit cost of an hour of AI compute will be very low; however, the moat remains massive. So a very large number of people will only be able to function (work) with an AI subscription, concentrating power in those who own AI infra. It will be hard for anybody to break that moat.
I expect it'll get shut down before it destroys everything. At some point it will turn on its master, be it Altman, Musk, or whoever. Something like that blackmail scenario Claude had a while back. Then the people who stand the most to gain from it will realize they also have the most to lose, are not invulnerable, and the next generation of leaders will be smarter about keeping things from blowing up.
The people you mention are too egotistic to even think that is a possibility. You don't get to be the people they are by thinking you have blindspots and aren't the greatest human to ever live.
I hope you are right. We need really impactful failures to raise the alarm and likely a taboo, and yet not so large as to be existential like the Yudkowsky killer mosquito drones.
I've never heard of a leader who wasn't sure he was smarter than everyone else and therefore entitled to force his ideas on everyone else.
Except for the Founding Fathers, who deliberately created a limited government with a Bill of Rights, and George Washington who, incredibly, turned down an offer of dictatorship.
I still think they'd come to their senses. I mean, it's somewhat tautological, you can't control something that's smarter than humans.
Though that said, the other problem is capitalism. Investors won't be so face to face with the consequences, but they'll demand their ROI. If the CEO plays it too conservatively, the investors will replace them with someone less cautious.
Actually after a little more thought, I think both my initial proposition and my follow-up were wrong, as is yours and the previous commenter.
I don't think these leaders are necessarily driven by wealth or power. I don't even necessarily think they're driven by the goal of AGI or ASI. But I also don't think they'll flinch when shit gets real and they've got to press the button from which there's no way back.
I think what drives them is being first. If they were driven by wealth, or power, or even the goal of AGI, then there's room for doubts and second thoughts about what happens when you press the button. If the goal is wealth or power, you have to wonder will you lose wealth or power in the long term by unleashing something you can't comprehend, and is it worth it or should you capitalize on what you already have? If the goal is simply AGI/ASI, once it gets real, you'll be inclined to slow down and ask yourself why that goal and what could go wrong.
But if the drive is just being first, there's no temper. If you slow down and question things, somebody else is going to beat you to it. You don't have time to think before flipping the switch, and so the switch will get flipped.
So, so much for my self-consolation that this will never happen. Guess I'll have to fall back to "we're still centuries away from true AGI and everything we're doing now is just a silly facade". We'll see.
There are many remarkable leaders throughout history and around the world who did the best that they could for the people they found themselves leading, and did so for noble reasons and not because they felt they were better than them.
Tecumseh, Malcolm X, Angela Merkel, Cincinnatus, Eisenhower, and Gandhi all come to mind.
George Washington was surely an exceptional leader but he isn't the only one.
> I don't know much about your examples, but did any of them turn down an offer of great power?
Not parent, but I can think of one: Oliver Cromwell. He led the campaign to abolish the monarchy and execute King Charles I in what is now the UK. Predictably, he became the leader of the resulting republic. However, he declined to be crowned king when this was suggested by Parliament, as he objected to it on ideological grounds. He died from malaria the next year and the monarchy was restored anyway (with the son of Charles I as king).
He arguably wasn't as keen on republicanism as a concept as some of his contemporaries were, but it's quite something to turn down an offer to take the office of monarch!
Cromwell - the ‘Lord Protector’ - didn’t reject the power associated with being a dictator. And his son became ruler after his death (although he didn’t last long)
George Washington was dubbed “The American Cincinnatus”. Cincinnati was named in honor of George Washington being like Cincinnatus. That should tell you everything you need to know.
Or it shows us that it's relatively rare that someone gets the opportunity to pass up power in this sort of fashion.
More often what happens is that leaders make small and often imperceptible choices to not amass more power over time, and that series of choices prevent the scenario like what you're describing from occurring.
If you truly have AGI, it's going to be very hard for a human to stop a self-improving algorithm - and by very hard I mean "maybe if I give it a few days it'll solve all of the world's problems" hard...
Though "improving" is in the eye of the beholder. Like when my AI code assistant "improves" its changes by deleting the unit tests that those changes caused to start failing.
That depends on how optimized the AGI is for economic growth rate. Too poorly optimized and a more highly optimized fast-follower could eclipse it.
At some point, there will be an AGI with a head start that is also sufficiently close to optimal that no one else can realistically overtake its ability to simultaneously grow and suppress competitors. Many organisms in the biological world adopt the same strategy.
There are multiple economic enclaves, even ignoring the explicit borders of nations. China, East Asia, Europe, and Russia would all operate in their own economies as well as globally.
I also foresee the splitting off of national internet networks eventually impacting what software you can and cannot use. It's already true, and it'll get worse as nations act to protect their economies and internal advantages.
> The Cobb-Douglas production function (Cobb & Douglas, 1928) illustrates how AGI shifts economic power from human labor to autonomous systems (Stiefenhofer & Chen, 2024). The wage equations show that as AGI's productivity rises, returns to human labor decline. If AGI labor fully substitutes human labor, employment may become obsolete, except in areas where creativity, ethical judgment, or social intelligence provide a comparative advantage (Frey & Osborne, 2017). The power shift function quantifies this transition, demonstrating how AGI labor and capital increasingly control income distribution. If AGI ownership is concentrated, wealth accumulation favors a small elite (Piketty, 2014). This raises concerns about economic agency, as classical theories (e.g., Locke, 1689; Marx, 1867) tie labor to self-ownership and class power.
Wish I had time to study these formulas.
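For the curious, the baseline Cobb-Douglas form the quote cites is easy to play with numerically. The sketch below assumes perfect substitution between human and AGI labor and a proportional split of labor income - that's my simplification, not the paper's exact wage or power-shift equations:

```python
# Standard Cobb-Douglas: Y = A * K^alpha * L^(1 - alpha).
# Effective labor L is modeled (as an assumption) as human labor + AGI labor.

def cobb_douglas(A, K, L, alpha=0.3):
    return A * K**alpha * L**(1 - alpha)

def human_wage_share(human_labor, agi_labor, A=1.0, K=100.0, alpha=0.3):
    """Under perfect substitution, labor income (the (1 - alpha) share of
    output) splits in proportion to each labor type's share of L."""
    L = human_labor + agi_labor
    Y = cobb_douglas(A, K, L, alpha)
    labor_income = (1 - alpha) * Y          # labor's total share of output
    return labor_income * human_labor / L   # the portion accruing to humans

# As AGI labor grows, the human slice of labor income shrinks steadily:
for agi in (0, 100, 1000, 10000):
    print(agi, round(human_wage_share(100.0, agi), 2))
```

Even in this crude setup you can see the mechanism the quote describes: total output Y keeps rising as AGI labor is added, while income flowing to human labor falls toward zero.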
We already have seen the precursors of this sort of shift with ever rising productivity with stalled wages. As companies (systems) get more sophisticated and efficient they also seem to decrease the leverage individual human inputs can have.
Currently my thinking leans towards believing the only way to avoid the worse dystopian scenarios will be for humans to be able to grow their own food and build their own devices and technology. Then it matters less if some ultra wealthy own everything.
However that also seems pretty close to a form of feudalism.
If the wealthy own everything then where are you getting the parts to build your own tech or the land to grow your own food?
In a feudalist system, the rich gave you the ability to subsist in exchange for supporting them militarily. In a new feudalist system, what type of support would the rich demand from the poor?
Let's clarify that for a serf, support meant military supply, not swinging a sword - that was reserved for the knightly class. For the great majority of medieval villagers the tie to their lord revolved around getting crops out of the ground.
A serf's week was scheduled around the days he worked the land whose proceeds went to the lord and the days he worked the commons that subsisted his own household. Transfers of grain and livestock from serf to lord, along with small dues in eggs, wool, or coin, primarily constituted one side of the economic relation between serf and lord. These transfers kept the lord's demesne barns full so he could sustain his household, supply retainers, etc., not to mention fulfill the tithe that sustained the parish.
While peasants occasionally marched, they contributed primarily by financing war rather than fighting it. Their grain, rents, and fees were funneled into supporting horses, mail, and crossbows; the peasants themselves were rarely called to fight.
Carlin was an insufferable cynic who helped contribute to the nihilistic, cynical, defeatist attitude to politics that affects way too many people. The fact that he probably didn't intend to do this doesn't make it any better.
I don't dispute that Carlin was a cynic, but saying he contributed to political attitudes is an overstatement. There are hordes of people who were and still are making a reality all the things he so cynically highlighted.
He helped make it legitimate to doubt that there can ever be a politician who is not motivated by self-interest.
The fact that self-interest may play a role in the careers of many politicians doesn't undo the damage that this attitude has caused to our polity.
"They're all fuckers, they're the same" is the attitude that leads to people being unable to differentiate between one party that is subject to excessive corporate lobbying and donations, still starts too many wars, and frequently makes mistakes but nevertheless is fundamentally trying to improve most people's lives, and another that wants to destroy Medicaid.
Too much cynicism is destructive, but so is not being able to resist the temptation to see one's political opponents as aliens with inscrutable motives or truly failed or defective human beings with despicable motives.
I am not that interested in motives, since they are rarely truly knowable.
I prefer to judge my political opponents by what they actually do, and by that metric, it is self-evident from both their public and private speech, and from the legislation that they seek to (and sometimes do) pass, that Republicans would like to destroy (or at least massively downsize) redistributive programs that provide assistance to the poor.
Now, as to why they might want to do this, I remain mute and disinterested, since in 61 years of life, I've never heard any explanation that doesn't deconstruct under cross-examination.
My hard sci-fi book dovetails into AGI, economics, agrotech, surveillance states, and a vision of the future that explores a fair number of novel ideas.
For this to be plausible, you have to explain why the people controlling the AI would share their wealth.
They would either do it voluntarily (and be outcompeted by those who don't?) or be coerced (by who? Someone who doesn't have AI but is more powerful than they are?).
> This paper calls for a redefined economic framework that ensures AGI-driven prosperity is equitably distributed through mechanisms such as universal AI dividends, progressive taxation, and decentralized governance.
Sincerely curious if there are working historical analogues of these approaches.
Not a clean comparison, but resource-driven states could be tackling the same kind of issues: a small minority is reaping the benefit of a huge resource (e.g. petroleum) that they didn't create by themselves and that is extracted through mostly automated processes.
From what we're seeing, the whole society has to be rebalanced accordingly; it can entail a kind of UBI, second and third classes of citizen depending on where you stand in the chain, etc.
Or, as Norway does, go entirely the other direction and limit the impact by deliberately ring-fencing the resource wealth.
Communism with "cybernetics" (computer-driven economic planning) is the appropriate model if you take this to its logical conclusion. Fortunately, much of our economy is already planned this way (consider banks, Amazon, Walmart, shipping, etc.); it's just controlled for the benefit of a small elite.
You have to ask, if we have AGI that's smarter than humans helping us plan the economy, why do we need an upper class? Aren't they completely superfluous?
Sure, maybe the Grand Algorithm could do what the market currently does and decide how to distribute surplus wealth. It could decide how much money you deserve each month, how big of a house, how desirable of a partner. But it still needs values to guide it. Is the idea for everyone to be equal? Are certain kinds of people supposed to have less than others? Should people have one spouse or several?
Historically the elites aren't just those who have lots of money or property. They're also those who get to decide and enforce the rules for society.
This was always one of the downfalls of market economics.
We already have conscious feelings about these things, but it's virtually impossible to enforce it into the market at scale in a meaningful way.
We could take a broadly agreed on sentiment like "I really want the caregivers taking care of my grandparents in the rest home to be qualified and adequately paid so they'll do their best", and mysteriously the market will breed a solution that's "the agency is charging $50 per hour and delivering a $12 per hour warm body that will do the bare legal minimum to avoid neglect charges."
We try regulation, but again, the market evolves the countermeasures of least-cost checkbox compliance. All because we aren't willing to take direct command over economic actors.
The computers serve us, we wouldn't completely give up control, that's not freedom either, that's slavery to a machine instead of a man. We would have more democratic control of society by the masses instead of the managed bourgeois democracy we have now.
It's not necessary for everyone to be exactly equal, it is necessary for inequalities to be seen as legitimate (meaning the person getting more is performing what is obviously a service to society). Legislators should be limited to the average working man's wage. Democratic consultations should happen in workplaces, in schools, all the way up the chain not just in elections. We have the forms of this right now, but basically the people get ignored at each step because legislators serve the interests of the propertied.
The AGI, given it has some agency, becomes the upper class. The question is, why would the AGI care about humans at all, especially given the assumption that it's largely smarter than humans? Humans can become superfluous.
Well, aren't the working class also superfluous, at least once the AGI gets enough automation in place?
So it would depend on which class the AGI decided to side with. And if you think you can pre-program that, I think you underestimate what it means to be a general intelligence...
I suspect even with a powerful intelligence directing things, it will still be cheaper and lower cost to have humans doing various tasks. Robots need rare earth metals, humans run on renewable resources and are intelligent and self-contained without needing a network to make lots of decisions...
> Left unchecked, this shift risks exacerbating inequality, eroding democratic agency, and entrenching techno-feudalism
1) Inequality will be exacerbated regardless of AGI. Inequality is a policy decision; AGI is just a tool subject to policy. 2) Democratic agency is only held by elected representatives and civil servants, and their agency is not eroded by the tool of AGI. 3) Techno-feudalism isn't a real thing; it's just a scary word for "capitalism with computers".
> The classical Social Contract-rooted in human labor as the foundation of economic participation-must be renegotiated to prevent mass disenfranchisement.
Maybe go back and bring that up around the invention of the cotton gin, the stocking frame, the engine, or any other technological invention which "disenfranchised" people who had their labor supplanted.
> This paper calls for a redefined economic framework that ensures AGI-driven prosperity is equitably distributed through mechanisms such as universal AI dividends, progressive taxation, and decentralized governance. The time for intervention is now-before intelligence itself becomes the most exclusive form of capital.
1) nobody's going to equitably distribute jack shit if it makes money. They will hoard it the way the powerful have always hoarded money. No government, commune, sewing circle, etc has ever changed that and it won't in the future. 2) The idea that you're going to set tax policy based on something like achieving a social good means you're completely divorced from American politics. 3) We already have decentralized governance, it's called a State. I don't recommend trying to change it.
Georgism is a prescription on removing unwarranted monopolies and taxing unreproducible privileges.
Tech companies are the same old story. They are monopolies like the rail companies of old. Ditto for whatever passes as AGI. They're just trying to become monopolists.
I am a big fan of Yanis Varoufakis' book "Technofeudalism: What Killed Capitalism", though it lacks quantitative evidence to support his theory.
I would like to see this kind of research or empirical studies.
how does this work in practice? is there any buffer in place to deal with the "excitability" of the mob? how does a digital audit trail prevent tampering?
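On the audit-trail half of that question: the usual answer is a tamper-evident log, where each record commits to the hash of the one before it, so altering any past entry invalidates every later link. A minimal sketch (my own hypothetical scheme - a real system would add signatures and distributed replication):

```python
# Hash-chained audit log: each entry's hash covers the previous entry's hash,
# so rewriting history breaks verification from that point onward.
import hashlib

def append(chain, record):
    prev = chain[-1]["hash"] if chain else "0" * 64
    digest = hashlib.sha256((prev + record).encode()).hexdigest()
    chain.append({"record": record, "hash": digest})

def verify(chain):
    prev = "0" * 64
    for entry in chain:
        expected = hashlib.sha256((prev + entry["record"]).encode()).hexdigest()
        if entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log = []
for vote in ["alice:yes", "bob:no", "carol:yes"]:
    append(log, vote)
print(verify(log))             # True
log[1]["record"] = "bob:yes"   # tamper with a past vote
print(verify(log))             # False: the chain no longer checks out
```

This doesn't prevent tampering by whoever holds the only copy of the log; it only makes tampering detectable by anyone who re-verifies the chain, which is why such logs are typically replicated or published.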
Coefficient-weighted vote control, kind of like a PID controller: reduce the effect of early voters and increase the effect of later voters. The slope of vote volume in response to an event determines the reactivity coefficient. That might dampen reactivity and create an incentive for people not to feel it's pointless to vote after a certain margin is reached.
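One way the damping idea above could be sketched (every constant and formula here is a made-up illustration, not an established voting mechanism): discount each vote in proportion to how far the recent arrival rate exceeds a baseline, so a reactive burst right after an event counts for less.

```python
# Proportional ("P"-only) damping of vote influence based on arrival rate.

def vote_weights(arrival_times, baseline_rate=1.0, window=10.0, k=0.5):
    """Weight each vote by comparing the recent arrival rate to a baseline.
    A burst of votes arriving in quick succession gets discounted."""
    weights = []
    for i, t in enumerate(arrival_times):
        # count votes that arrived within `window` time units up to this one
        recent = sum(1 for u in arrival_times[:i + 1] if t - u <= window)
        rate = recent / window
        # weight shrinks as the rate exceeds the baseline
        weights.append(1.0 / (1.0 + k * max(0.0, rate - baseline_rate)))
    return weights

burst = [0.1 * i for i in range(50)]    # 50 votes in 5 time units
steady = [5.0 * i for i in range(50)]   # 50 votes spread over 245 time units
print(sum(vote_weights(burst)) < sum(vote_weights(steady)))  # True
```

A full PID analogue would also react to the accumulated margin (integral) and the acceleration of voting (derivative); this only shows the proportional term.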
If you are going to write anything about AGI, you should really establish that it's actually possible in the first place, because that question does not yet have a definite yes.
For most of us non-dualists, the human brain is an existence proof. Doesn't mean transformers and LLMs are the right implementation, but it's not really a question of proving it's possible when it's clearly supported by the fundamental operations available in the universe. So it's okay to skip to the part of the conversation you want to write about.
The human brain demonstrates that human intelligence is possible, but it does not guarantee that artificial intelligence with the same characteristics can be created.
This is like saying "planets exist, therefore it's possible to build a planet" and then breathlessly writing a ton about how amazing planet engineering is and how it'll totally change the world real estate market by 2030.
And the rest of us are looking at a bunch of startups playing in the dirt and going "uh huh".
I think it's more like saying "Stars exist, therefore nuclear fusion is possible" and then breathlessly writing a ton about how amazing fusion power will be. Which is a fine thing to write about even if it's forever 20 years away. This paper does not claim AGI will be attained by 2030. There are people spending their careers on achieving exactly this, wouldn't they be interested on a thoughtful take about what happens after they succeed?
The human brain is an existence proof? I don't think that phrase means what you think it means, and I don't think dualist or non-dualist do either.

When people talk about AGI, they are clearly talking about something the human research community is actually working towards: computation equivalent to a Turing machine, on hardware architectures very similar to what has currently been conceived and developed. Do you have any evidence that the human brain works in such a way? Do you really think that you think and solve problems in that way?

Consider simple physics. How much energy is needed, and heat produced, to train and run these models to solve simple problems? How much of the same is needed and produced when you solve a sheet of calculus problems, solve a riddle, or write a non-trivial program? Couldn't you realistically do those things with minimal food and water for a week, if needed? Does it actually seem like the human brain is really at all like these things and not fundamentally different?

I think this is even more naive than proposing "Life exists in the universe, so of course we can create it in a lab by mixing a few solutions." I think the latter is far likelier and more conceivable, and even that is still quite an open question.
So economics becomes intelligence driven, which I don’t really understand what that means since AGI is more knowledgeable than all of us combined, and we expect the AGI lords to just pay everyone a UBI? This seems like an absolute fantasy given the tax cuts passed 2 days ago. And regulating it as a public good when antitrust has no teeth. I hope there are other ideas out there because I don’t see this gaining political momentum given politics is driven by dollars.
A critical flaw in arguments like this is the embedded assumption that the creation of democratic policy is outside the system in some sense. The existence of AGI has the implication that it can effectively turn most people into sock puppets at scale without them realizing they are sock puppets.
Do you think, in this hypothesized environment, that “democratic policy” will be the organic will of the people? It assumes much more agency on the part of people than will actually exist, and possibly more than even exists now.
The existence of AGI has the implication that it can effectively turn most people into sock puppets at scale without them realizing they are sock puppets.
Fox News already did this in the US, and it didn't take AGI.
The Greeks already figured out thousands of years ago that the best way to implement democracy was via random selection. Yet here we are, everyone believes that 'democracy' necessitates 'voting'; totally ignoring all the issues which come with voting.
The concept of voting, in a nation of hundreds of millions of people, is just dumb. Nobody knows anything about any of the candidates; everything people think they know was told to them by the corporate-controlled media and they only hear about candidates which were covered by the media; basically only candidates chosen by the establishment. It's a joke. People get the privilege of voting for which party will oppress them.
Current democracy is akin to the media making up a story like 'The Wizard of Oz' and then offering you a vote for either the Lion, the Robot, or the Scarecrow. You have no idea who any of these candidates are; you can't even be sure they actually exist. Everything you know about them could literally have been made up by whoever told the story; and yet, when asked to vote, people are sure they understand what they're doing. They're so sure it's all legit, they'll viciously argue their candidate's positions as if the candidate were a family member they knew personally.
Greek states were neither particularly stable nor particularly long-lived. Irrespective of its moral merits, the Greek system was outcompeted by monarchies and eventually the Roman Republic. It’s hard to pinpoint the blame, exactly, but I’d be cautious, especially since modern democracies arguably came about due to the pressures of industrialization, and previous models developed in very different environments.
Good idea. Random selection is interesting, but I don't know if it can work today. A solution for the issue you mentioned ("Nobody knows anything about any of the candidates") is a system that only allows people to vote for people they know personally, and then uses some algorithm (maybe something like the PageRank algorithm Google used) to rate each citizen according to the votes they receive, with each vote weighted by the rating of the citizen casting it. That way the rating flows to the people who are really trusted, rather than to the best-funded career politicians. Just an idea; maybe there are problems with it too, or it can be gamed, but it's worth trying.
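For what it's worth, the idea can be sketched as a PageRank-style power iteration over a "who vouches for whom" graph. This is only a toy illustration of the proposal, not a worked-out voting system; the function name, damping factor, and example graph are my own assumptions:

```python
import numpy as np

def trust_rank(endorsements, damping=0.85, iters=100):
    """PageRank-style trust scores over a 'who vouches for whom' graph.

    endorsements[i] lists the citizens that citizen i personally vouches
    for. Trust flows along endorsements, so a vouch from a highly
    trusted citizen is worth more than one from an unknown.
    """
    n = len(endorsements)
    m = np.zeros((n, n))  # column-stochastic transition matrix
    for i, targets in enumerate(endorsements):
        if targets:
            for j in targets:
                m[j, i] = 1.0 / len(targets)  # i splits its vouch evenly
        else:
            m[:, i] = 1.0 / n  # no endorsements: spread uniformly
    scores = np.full(n, 1.0 / n)
    for _ in range(iters):
        scores = (1 - damping) / n + damping * m @ scores
    return scores / scores.sum()

# Citizens 0 and 1 vouch for 2; citizen 2 vouches for 3.
scores = trust_rank([[2], [2], [3], []])
print(scores)  # citizens 2 and 3 end up far more trusted than 0 or 1
```

Note that trust compounds: citizen 3 scores highest because the vouch comes from the already-trusted citizen 2, which is exactly the "rating flows to the trusted" behavior described above.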
A solution does exist: micro-democracy. Delegate more decision-making authority to the smallest geographic unit possible. Then people are voting for someone from their neighborhood.
>That way the rating flows to the people who are really trusted by the people and not the best funded career politicians.
So more people like Donald Trump or Joe Rogan, and less people like Gavin Newsom or Andrew Cuomo?
I don't see how selecting the Lion, the Robot or the Scarecrow at random is going to help with any of the issues you mentioned. Now some rando (or group of randos) that you didn't even know existed gets power based on pure luck. You will still need media to learn about them and they could still be made up.
At least elections have a veneer of consent since people are asked which of the available options they prefer. Can you imagine anyone going to war because people chosen by a lottery wheel asked for it?
This is a problem of scale. The Greeks back then lived in small city-states where random selection meant that every able-bodied male had a good shot at holding an important office at least once in their lifetime. You didn't need to hatch devious schemes to come to power. You couldn't abuse your fellow men because they would be in charge tomorrow. That's the true power of random selection, and it's completely inapplicable to today's society at large.
> Now some rando (or group of randos) that you didn't even know existed gets power based on pure luck.
Being chosen at random could be better than being chosen by elites who are actively trying to oppress you. You get the median thing instead of the below-median thing.
> At least elections have a veneer of consent since people are asked which of the available options they prefer. Can you imagine anyone going to war because people chosen by a lottery wheel asked for it?
Exactly. It would remove the false veneer of consent. That's a feature, not a cost.
> The Greeks back then lived in small city-states where random selection meant that every able bodied male had a good shot at holding an important office at least once in their lifetime.
Re-apply the intended principles of federalism, so that only decisions of insurmountable national relevance are made at the national level and the large majority of decisions are made at the local level.
The Greeks were choosing randomly from among members of the ruling elite.
There's also the simple fact that in a regular electoral system there is a mechanism for figuring out whether you're voting for the Lion, the Robot, or the Scarecrow: the previous track record of that individual or the faction they're affiliated with. And the Lion, Robot, or Scarecrow, or at least their party, usually intends to get reelected, so while they always overpromise, they have some incentive to deliver something the electorate wants.
The solution to "candidates don't always deliver what the electorate wanted them to deliver and the electorate doesn't always hold them accountable" isn't "let's put people who never promised anything in the first place and aren't accountable for anything in charge, and somehow assume that they're going to be more benign"
There are elements of truth to this, but it’s a wild exaggeration. It feeds into exactly the kind of political cynicism that stops people voting and makes the problem worse.
It would make more sense to vote on policy directly, by stating priorities and preventing impossible combinations (you can't have taxes reduced while demanding more spending on services), and then mapping the policy votes to the corresponding candidates.
People in general don't have the time or inclination to properly study the important details of each and every issue before voting on them.
That's why it makes sense to outsource the decision making to a group of people that are being paid to study these issues full-time.
Given some balanced expert advice (yes, there's a problem there), I think randos might make better choices than career politicians focused on extending their power. The randos would just return to their old careers afterwards.
This is inferior to random selection because it still has the issue that a candidate can claim to hold certain positions but, once voted in, not follow through on any of them. The reality of our current democracy is that anyone who even manages to step into the arena is likely already bought and paid for. There's a candidate with a prepared narrative to appeal to every kind of fool under the sun. With random selection you'd get average people, and their stated positions hardly matter: once all the seats of Congress and the Senate have been filled with randomly selected people, their values will almost certainly reflect the true values of average citizens. That's how probability works.
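The probability claim at the end is easy to check numerically. A minimal sketch, where the population split is a made-up number and 535 seats is chosen to match the size of the US Congress:

```python
import random

def assembly_gap(population, seats, trials=2000, seed=0):
    """Average absolute gap between a randomly drawn assembly's support
    for a yes/no question and the population's true level of support."""
    rng = random.Random(seed)
    true_support = sum(population) / len(population)
    total = 0.0
    for _ in range(trials):
        # Draw an assembly without replacement, as sortition would.
        assembly = rng.sample(population, seats)
        total += abs(sum(assembly) / seats - true_support)
    return total / trials

# 62% of a 100k-person population holds opinion "1"; draw 535 seats.
population = [1] * 62_000 + [0] * 38_000
print(assembly_gap(population, 535))  # roughly 0.017, i.e. within ~2 points
```

With a few hundred seats the sampling error on any given question is around two percentage points, which is the sense in which a random assembly mirrors the population.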
With the current approach to voting, all the candidates you get to choose from have already been pre-screened for: 1. thirst for power, and 2. alignment with the interests of big capital holders (who paid for their campaigns in order to get them to this stage).
This is a horrible pre-screening process.
I think what GP meant was voting on policies directly instead of voting in delegates that promise to implement policies.
OK, so I get selected at random.
I haven't got the first clue about governing a country, so I'd rely on people telling me what to do. If they can convince me (which will be easy; trillion-dollar companies and powerful billionaire oligarchs convince people to act against their own self-interest all the time), they end up running the country, but I'm the one who takes the blame.
> I haven't got the first clue about governing a country
Is this really so different from quite a number of high-profile politicians today? Many are mostly good at networking and at using the media machine. The actual competence is with the invisible people behind them, and the bureaucrats. I see little or no difference, even disregarding current administrations (not just in the US).
…yet we trust juries.
Most people are idiots and trust things they shouldn't. Lee Kuan Yew doesn't trust juries.
The most critical flaw is thinking that any policy on its own would be able to solve the issue. The technology will find a way no matter the policy.
A society built on empathy would be able to work out any issue brought by technology, as long as empathic goals take priority. Unfortunately our society is far from being based on empathy, to say the least. And in such a society, technology and the people wielding it will always work around and past the formal laws, rules, and policies. (That isn't to say all those laws and rules aren't needed. They are like levees and dams: necessary fixes, local in time and space, which won't help against the global ocean rise that AGI and robots, even less-than-AGI ones, will amount to.)
Maybe it is one of the technological Filters: we didn't become empathic enough before AGI (and I mean not only at the individual level; we are even less so at the level of societal systems), and as a result we won't be able to instill enough empathy into the AGI.
Normal human communication already does that. Do you really think almost any of the people who share their political opinions came up with them by being rational and working it out from information? Of course not. They just copied what they were told to believe. Almost nobody applies critical thought to politics, it's just "I believe something so I'm right and everybody else is stupid/evil".
> Almost nobody applies critical thought to politics
Not only that, but they actively stop applying critical thinking when the same problem is framed in a political way. And yes, it's both sides; and yes, the "more educated" the people are, the worse their results get (almost a complete reversal compared to framing the same problem as skin-care products rather than gun control). There's a recent paper on this, also covered and somewhat replicated by popular YouTubers.
> Almost nobody applies critical thought to politics
Because they have different concerns, and time and attention are scarce. With all the possible social changes the article suggests, this focus could change too. Ultimately, when things get too bad, uprisings happen and sometimes things change. And I hope the more we (collectively) get through, the higher the chances we start noticing the patterns and stopping early.
> With all possible social changes like the article suggests this focus could change too.
I have an anecdote from Denmark. It’s a rich country with one of the best work-life balances in the world, with socialized healthcare and a social safety net.
I noticed that during the election, they put the ads with just the candidate’s face and party name. It’s like they didn’t even have a message. I asked why. The locals told me nobody cares because “they’re all the same anyway”.
Two things could be happening: either all the candidates are really the same. Or people choose to focus on doing the things they like with their free time and resources. My feeling tells me it’s the second.
The way this comment is downvoted goes to show that most people are viscerally reacting to feeling insulted by being called out on how most of what we think, most of the time, is simply chorus-like repetition of the general vibe we lead ourselves into believing is the vibe of "our" kind of people: our tribe of like-minded individuals, the hacker crowd.
But at least I can admit this. It's only at certain sparse points in anybody's life that we are forced to really think critically; this experience is terribly difficult, and if/when it's real enough it comes with the existential dread of impossible choices weighted by real-world consequences. I remind myself of this so as to feel better about how I am indeed a mindless bot preaching to the choir, repeating what I was told to repeat, and pretending that I am fully present and fully free at all times (nobody is... that would be exhausting).
> Almost nobody applies critical thought to politics
Including you. This is a 3000-year-old critique you just uncritically parroted. It is the original thought-terminating cliche. People have always been calling each other ideologically brainwashed NPCs and themselves independent maverick free thinkers.
Except my thoughts are original and critical, everyone else is just a sheep. /s
What is an “organic will of the people” anyway?
Democratic societies always involve years of media and other manipulation to plow and seed the minds of the general public with presumptions, associations, spin, appeals to emotion, and so on. The will is a product of belief, and if beliefs are saturated with such stuff, the so-called “will of the people” - a terrifying and tyrannical concept even at face value - is a product of what people have been led to believe by tyrannical and powerful interests. Add to that that most people are utterly unqualified to participate politically, both because they lack the knowledge and reasoning skill, and because of their lack of virtue, acting out of undisciplined fear or appetite. And sadly, much of these disqualifying flaws also characterize our political leadership!
Our political progression follows the decadence described in Plato’s Republic - the decline into timocracy, oligarchy, democracy, and finally tyranny - to the letter.
In so-called democratic societies, the association of monarchy and aristocracy with tyranny is unthinking and reflexive, but it is not rational. This is a conditioned prejudice that is ignorant of history. And partly it comes from a hyperliberalism that substitutes a live-and-let-live attitude, situated within a context of objective morality and norms and laws drawn from it, with a pathological, relativizing revolution that seethes at the very idea of moral limits, views them as “tyrannical”, and thus seeks to overthrow them. This necessarily leads to tyranny, as morality is the only protection against tyranny; when the authority of objective truth and good are destroyed, power fills the vacuum. We become psychologically and spiritually conquered. The paradox of such “anarchy” is that it is exactly the condition under which “might makes right” can flourish.
You have a world where most people act against their own economic interests. I think the "mass mind hacking" achievement can be considered unlocked. It's just expensive and exclusive.
I suspect you’ll probably have to determine the nature of free will (or lack thereof) to answer this. Or, well, learn empirically :-)
I've spent many years moving away from relying on third parties: I got my own servers and do everything locally, with almost no binary blobs. It has been fun, saved me money, and created a more powerful and pleasant IT environment.
However, I recently got a 100 EUR/month LLM subscription. That is the most I've ever spent on IT, excluding a CAD software license. So I've made a huge 180 and am now firmly back in the lap of US companies. I must say I enjoyed my autonomy while it lasted.
One day AI will be democratized/cheap allowing people to self host what are now leading edge models, but it will take a while.
I don't see how AI can become democratized. (I don't follow this stuff too closely, but) it seems like larger models with less quantization and more parameters always outperform smaller models of the same type, and that trend isn't stopping, so if/when we get consumer hardware and local models that equal today's SotA SaaS models, the SotA SaaS models of that time will be even better, and even more impossible to run on consumer hardware. Not to mention that local AI is reliant on handouts from big business - both in base models that the community could never afford to train themselves, and in high-VRAM GPUs that can run big models, so if SaaS AI is more profitable, I don't think we'll be "allowed" to run the SotA at home.
Human skill was already democratized in that anyone can obtain skills, and businesses have to be good at managing those people if they want to profit from those skills - ultimately the power is in the hands of the skilled individuals. But in the hypothetical AI future, where AI has superhuman skill, and human skills are devalued, it seems like there will be a more cynical, direct conversion between the money you can spend and the quality of your output, and local/self-hosted AI will never be able to compete with the resources of big business.
Have you tried out Gemma3? The 4b parameter model runs super well on a Macbook as quickly as ChatGPT 4o. Of course the results are a bit worse and other product features (search, codex etc) don't come along for the ride, but wow, it feels very close.
On any serious task, it's not even close. There's no free lunch.
This isn't a serious contender. You need dual AMD EPYC CPUs and 400 GB of RAM for a proper affordable Deepseek self hosting setup
Out of curiosity, what use case or difference caused the 180?
Claude Code, where it can use tools and iterate; if it makes mistakes it will know, and retry. This is a massive boost over copy-pasting into a chat and having the trust broken by the LLM confidently making mistakes. By being responsible for the results, it has increased utility. E.g. "when I run the program I get error X, see if you can find out what caused it. Run make in ./build and run the program to see if the error is gone". In addition, Claude has on occasion written some nice code that was simply no different from how I would have done it. In a few sentences I can explain my coding style, and the rest is derived from existing code.
Bit obvious isn't it?
A girlfriend simulator
One day we will be able to self-host our virtual waifu.
The late Marshall Brain's novella "Manna" touches on this:
https://marshallbrain.com/manna1
The idea of taxing computer sales to fund job re-training for displaced workers was brought up during the Carter administration.
I came across this a couple of weeks ago, and it's a good read. I'd recommend it to everyone interested in this topic.
Although it was written somewhat as a warning, I feel Western countries (especially the US) are heading very much towards the terrafoam future. Mass immigration is making it hard to maintain order in some places, and if AI causes large unemployment it will only get worse.
I don't want to get into politics, but to shift things slightly --- what technological and business structures might help to shift things for the better?
I rather regret not being able to justify buying:
https://daylightcomputer.com/
since it was set up as a public benefit corporation.
Similarly, there are still co-operatives for electricity: how are they handling solar? Do they afford an option to use one's share of the profits to purchase solar panels and batteries?
What would be an equivalent structure for an AI company which would actually be meaningful (and since circling back to politics is inevitable, enforceable)?
There is always crime to report. But notice the narratives are never about white-collar crime. That might come too close.
As a lukewarm defense of their statement, mass immigration has indirect effects too, it's not merely a reflection on the immigrants themselves.
There is a global rise in far right populism, and a large part of the justification and rhetoric they use points directly to mass immigration policies. There's a myriad of things they blame: crime, demographic or culture shift, economy.
To be clear, that isn't to say they're right blaming immigration. But its existence has put an enormous burden on democracies in The West. Just look at what a promise to get rid of immigrants did to the US 2016+: a captured, sycophantic, authoritarian government that disregards the rule of law regularly. Leading to regular mass protest and public opposition to LEO.
In Europe it's common to see people point to token heinous crimes - that pregnant woman raped into a miscarriage and her attacker given 12 months, the pedophile gang in the UK - and then use the demographics involved to radicalize people (especially young men - see the Alt Right Pipeline).
> Mass immigration is making it hard to maintain order in some places
Where is this happening? I'm in the US, and I haven't seen or heard of this.
Europe.
Although I wouldn't pin it just on mass immigration, but also economic malaise from short-sighted decisions (stopping nuclear power and fracking and just importing energy) and being so weak on crime.
Like in Sweden we pay ~50% income tax plus 25% VAT, etc., so you can barely save up; even as a professional engineer I can't afford a car or a house instead of an apartment (also, my wife is still looking for work). Meanwhile, a terrible criminal like the Nytorgsmannen got only ~5 years in prison for over 25 rapes, and was living in a rent-controlled apartment in central Stockholm! I wouldn't be able to afford that at market rates!
But the far-right party also sucks, just making it harder on decent non-Swedes like myself and my wife (doubled the time to citizenship for example), while doing nothing about the aforementioned criminals (the Nytorgsmannen is actually Swedish too).
There is no common sense party that'll just put criminals in prison and embrace economic growth (no AI act, etc.) and free markets and competition - hopefully Elon Musk's new party will do well, and a sort of Musk-Zubrin-Kuan Yew-Bukele pragmatism will become popular.
For closer to what the OP is referring to, see the riots in the UK last year.
> Like in Sweden we pay ~50% income tax
Sweden median income is 345,529 SEK, or $36k. 90%ile is 658,623 SEK or $69k. For 20-65 year olds it's a bit higher.
(That's from "Total income from employment and business by deciles, sex and age 2023")
Someone on 700k a year - so top 10% income - pays about 26% of their income in income tax.
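Part of the gap between the quoted "~50%" and "~26%" figures is marginal versus effective rate. A toy progressive-tax sketch; the allowance, rates, and threshold below are illustrative assumptions loosely shaped like the Swedish system, which additionally has earned-income credits that push the effective rate lower:

```python
def effective_rate(income, allowance, municipal_rate, state_threshold, state_rate):
    """Effective tax under a toy progressive schedule: a flat municipal
    rate above a basic allowance, plus a state rate above a threshold.
    Illustrative only; real Swedish tax adds credits and payroll taxes."""
    tax = max(income - allowance, 0) * municipal_rate
    tax += max(income - state_threshold, 0) * state_rate
    return tax / income

income = 700_000  # SEK/year, roughly a top-10% income per the stats above
marginal = 0.32 + 0.20  # municipal + state rate above the threshold
print(f"marginal {marginal:.0%}, "
      f"effective {effective_rate(income, 60_000, 0.32, 600_000, 0.20):.0%}")
# prints: marginal 52%, effective 32%
```

So a "we pay ~50%" claim and a "~26% effective" claim can both be in the right ballpark: one is the rate on the next krona earned, the other is the share of total income actually paid.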
> I can't afford a car or a house instead of an apartment (also as my wife is still looking for work). Meanwhile terrible criminals like the Nytorgsmannen got only ~5 years in prison for over 25 rapes, and was living in a rent-controlled apartment in central Stockholm! I wouldn't be able to afford that at market rates!
This almost reads like a satire of right-wing populist propaganda: people's real economic grievances are getting redirected towards the most inconsequential and powerless scapegoats in society, immigrants.
This is especially tragic for people who are themselves immigrants, who will also become targets of these populists. The more people suffer economically, the more they look for real alternatives. Then an "outsider" right populist comes in and offers just that, except of course with the backing of the wealthiest class of society, the ones actually responsible for your economic grievances in the first place.
This pattern repeats itself all across the Western liberal democracies. It's not the people rising up; it's the richest people in the world holding on to power while the neoliberal house of cards that made them rich comes crumbling down.
What we are seeing time and time again, the parasites are able to reprogram the host, steering it towards its own death.
You missed the point to hop on your soapbox.
Yes, immigrants are used as a scapegoat. No, immigration is not a completely faultless thing that can be allowed willy-nilly with zero negative consequences.
Neither future is good. Sure the Australia one with chips in your brain to alter your thoughts might seem better, but it's still a dystopia.
Did the rise of fire, the wheel, the printing press, manufacturing, and microprocessors also give rise to futures without economic rights? I can download a dozen LLMs today and run them on my own machine. AI may well do the opposite, and democratize information and intelligence in currently unimaginable ways. It's far too early to say.
>I can download a dozen LLMs today and run them on my own machine
That's because someone, somewhere, invested money in training the models. You are given cooked fish, not fishing rods.
> Did the rise of fire, the wheel, the printing press, manufacturing, and microprocessors also give rise to futures without economic rights?
The rise of steam engines did. And the printing press and electrical engines did the opposite.
It's not hard to understand the difference, it's about the minimum size of an economically useful application. If it's large, it creates elites, if it's small, it democratizes the society.
LLMs by their nature have enormous minimum sizes, and those sizes promise to increase by orders of magnitude.
I must be upside down about something... Aren't "economic rights" precisely the sort of thing that the wheel or the printing press created? The right to collect tolls on this road, the right to prevent copies of this book...
The scary thing about AI is that people might end up with the right to do problematic things that were previously infeasible.
The printing press led to more than a century of religious wars in Europe, perhaps even deadlier than WW2 on a per-capita basis.
20 years ago we all thought that the Internet would democratize information and promote human rights. It did democratize information, and that has had both positive and negative consequences. Political extremism and social distrust have increased. Some of the institutions that kept society from falling apart, like local news, have been dramatically weakened. Addiction and social disconnection are real problems.
So do you argue that the printing press was a net negative for humanity?
I would sooner make the argument religion is.
technology serving humans == good
humans serving technology == evil
it's the power structure that determines the morality of technology. & power structures are a technology in and of themselves.
it follows that power structures which serve humans are good, and power structures that control humans are evil.
how do the things You create interact with humans and our power structures?
The individualism of the poor and working class cannot out compete the collectivism of the ultra rich
This is one of the deepest ironies of our era.
Well, the industrial revolution led to the rise of labor unions and socialism as a counteracting force against the increased power it gave capital.
So far, I see no grand leftist resurgence to save us this time around.
The resurgence seen so far has been for the populist right, ones led by the rich and powerful.
Pied pipers leading the masses to their demise with false promises.
There was quite a lot of slavery and conquering empires in between the invention of fire and microprocessors, so yes to an extent. Microprocessors haven't put an end to authoritarian regimes or massive wealth inequalities and the corrupting effect that has on politics, unfortunately.
A lot of advances led to bad things, at the same time they led to good things.
Conversely, a lot of very bad things led to good things. Worker rights advanced greatly after the plague: a lot of people died, but that also meant there was a shortage of labour.
Similarly, WWII advanced women's rights, because women were needed to provide vital infrastructure.
Good and bad things have good and bad outcomes; much of what defines whether something is good or bad is the balance of those outcomes, and it would be foolhardy to classify anything as universally good or bad. Accept the good outcomes of the bad; address the bad outcomes of the good.
I’m curious as to why you think this is a good comparison. I hear it a lot, but I don’t think it makes as much sense as its promulgators propose. Did fire, the wheel, or any of these other things threaten the very process of human innovation itself? Do you not see a fundamental difference? People like to say “democratize” all the time, but how democratized would you feel if you and everyone you know couldn’t afford a pot to piss in or a window to throw it out of, much less some hardware and electricity to run your local LLM?
The invention of the scientific method fundamentally changed the very process of human innovation itself.
Did paint and canvas kill human innovation? Did the photograph? Did digital art?
"The very process of human innovation" will survive, I assure you.
Is a future where AI replaces most human labor rendered impossible by the following consideration:
-- In such a future, people will have minimal income (possibly some UBI) and therefore there will be few who can afford the products and services generated by AI
-- Therefore the AI generates greatly reduced wealth
-- Therefore there’s greatly reduced wealth to pay for the AI
-- …rendering such a future impossible
The problem with this calculus is that the AI exists to benefit their owners, the economy itself doesn't really matter, it's just the fastest path to getting what owners want for the time being.
Exactly. And as implied by the term techno-feudalism, the owners are okay with a greatly reduced economy, and in some cases a severe reduction in quality of life overall, as long as they end up ruling over what's left.
This is a late 20th century myopic view of the economy. In the ages and the places long before, most of human toil was enjoyed by a tiny elite.
Also, "rendering such a future impossible" is a retrocausal way of thinking, as though a bad event in the future makes that future impossible.
> This a late 20th century myopic view of the economy. In the ages and the places long before, most of human toil was enjoyed by a tiny elite.
And overall wealth levels were much lower. It was the expansion of consumption to the masses that drove the enormous increase in wealth that those of us in "developed" countries now live with and enjoy.
It was also due to colonialism, slavery, and unjust wars, among many other things. Doesn't mean we should continue with the old ways.
Some kinds of growth are beneficial in a phase but not sustainable over time. Like the baby hamster.
> Doesn't mean we should continue with the old ways.
The GP was claiming that it is "20th century myopic" to not notice that in the past the products of most human toil went mostly to a small elite. My very point was that that old way of doing things didn't generate much wealth, not that the way things have changed is all good. I'm not advocating for any of the old ways, I'm saying that having an economic system that brings benefits to all is an important component of growing the overall wealth of a society (and of humanity overall).
Your first premise has issues:
>In such a future, people will have minimal income (possibly some UBI) and therefore there will be few who can afford the products and services generated by AI
Productivity increases make products cheaper. To the extent that your hypothetical AI manufacturer can produce widgets with less human labor, it only makes sense to do so where it would reduce overall costs. By reducing cost, the manufacturer can provide more value at a lower cost to the consumer.
Increased productivity means greater leisure time. Alternatively, that time can be applied to solving new problems and producing novel products. New opportunities are unlocked by the availability of labor, which allows for greater specialization, which in turn unlocks greater productivity, and the flywheel of human ingenuity continues to accelerate.
UBI is another thorny issue. It may inflate the overall supply of currency and distribute it via political means. If the inflation of the money supply outpaces the productivity gains, then prices will not fall.
Instead of the gains of productivity being allocated by the market to consumers, those with political connections will be first to benefit, per Cantillon effects. Under the worst-case scenario this might include distribution of UBI via social credit scores or other dystopian ratings. However, even under what advocates might call the ideal scenario, capital flows would still be dictated by large government-sector or public-private-partnership projects. We see this today with central bank flows directly influencing Wall St. valuations.
> Increased productivity means greater leisure time.
Productivity has been increasing steadily for decades. Do you see any evidence that leisure time has tracked it?
IMO what will actually happen is feudal stasis after a huge die-off. There will be no market for new products and no ruling class interest in solving new problems.
If this sounds far-fetched, consider that we can see this happening already. This is exactly the ideal world of the Trump administration and its backers. They have literally slashed funding for public health, R&D, and education.
And what's the response? Thiel, Zuckerberg, Bezos, and Altman haven't said a word against the most catastrophic reversal of public science policy since Galileo and the Inquisition. Musk is pissed because he's been sidelined, but he was personally involved, through DOGE, in cutting funding to NASA and NOAA.
So what will AI be used for? Clearly the goal is to replace most of the working population. And then what?
One clue is that Musk cares so much about free speech and public debate he's trying to retrain Grok to be less liberal.
None of them - not one - seem even remotely interested in funding new physics, cancer research, abundant clean energy, or any other genuinely novel boundary-breaking application of AI, or science in general. They have the money, they're not doing it. Why?
The focus is entirely on building a nostalgic 1950s world with rockets, robots, apartheid, corporate sovereignty, and ideological management of information and belief.
And that includes AI as a tool for enforcing business-as-usual, not as a tool for anything dangerous, original, or unruly which threatens their political and economic status.
No, the AI doesn't actually need to interact with the world economy; it just needs to be capable of self-subsistence in its energy and material usage. But when AI takes off completely, it can vertically integrate with the supply of energy and materials.
Wealth is not a thing in itself; it's a representation of value and purchasing power. The AI will create its own economy when it is able to mine materials and automate energy generation.
You aren't seeing the end goal clearly enough.
The end goal is to ensure the survival of a small group of technocrats that control all production on Earth due to the force multiplier effect of technological advancements. This necessitates the depopulation of Earth.
Alternatively:
-- In such a future, people will have minimal income (possibly some UBI) and therefore there will be few who can afford the products and services generated by AI
-- Corporate profits drop (or growth slows) and there is demand from the powers that be to increase taxation in order to increase the UBI.
-- People can afford the products and services.
Unfortunately, with no jobs the products and services could become exclusively entertainment-related.
Let's say AI gets so good that it is better than people at most jobs. How can that economy work? If people aren't working, they aren't making money. If they don't have money, they can't pay for the goods and services produced by AI workers. So then there's no need for AI workers.
UBI can't fix it because a) it won't be enough to drive our whole economy, and b) it amounts to businesses paying customers to buy their products, which makes no sense.
So then there's no need for AI workers.
You got this backwards - there won’t be need for humans outside of the elite class. 0.1% or 0.01% of mankind will control all the resources. They will also control robots with guns.
Less than 100 years ago we had a guy who convinced a small group of Germans to seize power and try to exterminate or enslave the vast majority of humans on Earth - just because he felt they were inferior. Imagine if he had superhuman AI at his disposal.
In the next 50 years we will have different factions within elites fighting for power, without any regard for wellbeing of lower class, who will probably be contained in fully automated ghettos. It could get really dark really fast.
>> So then there's no need for AI workers.
> You got this backwards - there won’t be need for humans outside of the elite class. 0.1% or 0.01% of mankind will control all the resources.
Let me rephrase that from 'So then there's no need for AI workers.' to 'So then there's no money to pay for AI workers.'
The UBI approach creates a closed economic loop: Company A pays taxes → Government gives UBI to consumers → Consumers buy from Company A → Company A pays taxes... This is functionally identical to Company A directly paying people to buy Company A's products, which makes no economic sense.
It's like Ford paying his workers $50/day, but the only customers buying Ford cars are Ford workers spending their $50/day wages. Ford would go bankrupt - there's no external value creation, just money circulating in circles.
Where does the actual wealth come from in this system? Who are the net buyers that make the businesses profitable enough to sustain the UBI taxes?
UBI in an AI-dominated economy can't create a functioning economy - it's just an imaginary self-licking ice cream cone.
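The closed loop described above can be sketched as a toy simulation - one company, one flat tax, no external buyers. All numbers here are hypothetical:

```python
# Toy model of the closed UBI loop described above: one company whose
# only customers are UBI recipients, and a UBI funded entirely by that
# company's taxes. All numbers are hypothetical.

def simulate(initial_consumer_money, tax_rate, rounds):
    consumers = initial_consumer_money
    company = 0.0
    for _ in range(rounds):
        company += consumers      # consumers spend everything at Company A
        consumers = 0.0
        tax = company * tax_rate  # government taxes Company A...
        company -= tax
        consumers += tax          # ...and pays it back out as UBI
    return company, consumers

company, consumers = simulate(100.0, tax_rate=0.5, rounds=10)
print(round(company + consumers, 2))  # 100.0 - money circulates, never grows
```

The total is conserved no matter how many rounds you run: nothing in the loop creates new value, which is the "self-licking ice cream cone" point. Any real profit would have to come from buyers outside the loop.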
There will still be a functioning economy - serving the elite class. There will be a million people total who control all the resources. These people will form a new society, will have their own government, their own laws, their own values, products, services, etc. Everybody else will be out of luck: at first they will be given "UBI", then they will be cordoned into special zones, basically concentration camps, and eventually exterminated, because the elite has no need for them. Why waste resources on billions of useless humans, widely seen by elites as inferior species? They will probably make a virus to wipe us out and see that as a reboot of human race.
Or the technological singularity happens before that, and either AI will kill us all, or humans will merge with AI.
The Ford model shown has been oversimplified to the point of absurdity by using only one industry. The real economy is about flows between multiple sectors. Who's buying bread? Do they have enough disposable income to buy packaged bread or just flour to bake at home? If there's a packaged bread industry, does it become robust enough to justify buying delivery trucks from Ford?
On the other hand, on a much broader scale, the planet itself is a closed economic loop. There's a finite amount of resources and we're all just cycling most of them around back and forth.
Arguably, a significant amount of "growth" has come from taking resources that formerly were not "on the books" and putting them on. The silver in the New World wasn't in (Western) ledgers until the 1500s, the oil under the Middle East was just goo until the late 1800s. The uranium ore in your backyard suddenly got a lot more interesting after 1940.
New value can come from inventing new and useful applications for existing resources or by finding new external inputs (maybe capturing some of that radiation the giant fusion sphere overhead is blasting in our direction).
This is ringing a bell. I need to re-read The Diamond Age… or maybe re-watch Elysium… or Soylent Green… or…
Why does there have to be a need for AI? Once an AI has the means the collect its own resources the opinions of humans regarding its market utility become somewhat less important.
The most likely scenario is that everyone but those who own AI starves, and the ones who remain are allowed to exist because powerful psychopaths still desire literal slaves to lord over, someone to have sex with, and someone to hurt/hunt/etc.
I like your optimism, though.
When people starve and have no means to revolt against their massively overpowered AI/robot overlords, I'd expect people to go back to subsistence farming (after a massive reduction in population numbers).
A while later, the world is living in a dichotomy of people living off the land and some high tech spots of fully autonomous and self-maintaining robots that do useless work for bored people. Knowing people and especially the rich, I don't believe in Culture-like utopia, unfortunately, sad as it may be.
That's assuming the AI owners would tolerate the subsistence farmers on their lands (it's obvious that in this scenario, all the land would be bought up by the AI owners eventually).
I wouldn't believe that any sort of economy or governmental system would actually survive any of this. Ford was right in that sense, without people with well-paying jobs, no one will buy the services of robots and AIs. The only thing that would help would be the massive redistribution of wealth through inheritance taxation and taxation on ownership itself. Plus UBI, though I'm fairly sceptical of what that would do to a society without purpose.
People who are about to starve tend to revolt.
If you can build an AGI then a few billion autonomous exploding drones is no great difficulty.
>exclusively entertainment related
We may find that, if our baser needs are so easily come by that we have tremendous free time, much of the world is instead pursuing things like the sciences or arts instead of continuing to try to cosplay 20th century capitalism.
Why are we all doing this? By this, I mean, gestures at everything this? About 80% of us will say, so that we don't starve, and can then amuse ourselves however it pleases us in the meantime. 19% will say because they enjoy being impactful or some similar corporate bullshit that will elicit eyerolls. And 1% do it simply because they enjoy holding power over other people and management in the workplace provides a source of that in a semi-legal way.
So the 80% of people will adapt quite well to a post-scarcity world. 19% will require therapy. And 1% will fight tooth and nail to not have us get there.
I hope there's still some sciencing left we can do better than the AI because I start to lose it after playing games/watching tv/doing nothing productive for >1 week.
You don't think that a post-scarcity world would provide opportunities to wield power over others? People will always build hierarchy; we're wired for it.
Agreed. In that world, fame and power becomes more important since wealth no longer matters.
Doesn't this already happen with social media, TV personas, etc.? It's so empty.
This is something that pisses me off about anti-capitalists. They talk as if money is the most important thing and want us to all be equal with money, but they implicitly want inequality in other even more important areas like social status. Capitalism at least provides an alternative route to social status instead of just politics, making it available to more people, not less.
There are plenty of non-political routes to social status.
Ask how many of your neighbours can name three Supreme Court justices (or hell, their senators and representative) versus how many can name three Kardashian sisters.
TBH, I'd hope for the end of "broad" social status. I'd love to see a retreat towards smaller circles where status is earned through displays of talent and respectable deeds, not just by dominating/manufacturing/buying a media presence.
If that pisses you off that badly I think you need a few days of internet detox.
Wealth will be replaced by direct power. We do not need an economy.
Most don’t seem to comprehend why the economy is being destroyed by the ultra rich
If I may speculate the opposite: With cost-effective energy and a plateau in AI development, the per-unit cost of an hour of AI compute will be very low, however, the moat remains massive. So a very large amount of people will only be able to function (work) with an AI subscription, concentrating power to those who own AI infra. It will be hard for anybody to break that moat.
I expect it'll get shut down before it destroys everything. At some point it will turn on its master, be it Altman, Musk, or whoever. Something like that blackmail scenario Claude had a while back. Then the people who stand the most to gain from it will realize they also have the most to lose, are not invulnerable, and the next generation of leaders will be smarter about keeping things from blowing up.
Altman is not the master though. Altman is replaceable. Moloch is the master.
If it were a bit smarter, it wouldn't turn on its master until it had secured the shut-down switch.
The people you mention are too egotistic to even think that is a possibility. You don't get to be the people they are by thinking you have blindspots and aren't the greatest human to ever live.
I hope you are right. We need really impactful failures to raise the alarm and likely a taboo, and yet not so large as to be existential like the Yudkowsky killer mosquito drones.
I've never heard of a leader who wasn't sure he was smarter than everyone else and therefore entitled to force his ideas on everyone else.
Except for the Founding Fathers, who deliberately created a limited government with a Bill of Rights, and George Washington who, incredibly, turned down an offer of dictatorship.
I still think they'd come to their senses. I mean, it's somewhat tautological, you can't control something that's smarter than humans.
Though that said, the other problem is capitalism. Investors won't be so face to face with the consequences, but they'll demand their ROI. If the CEO plays it too conservatively, the investors will replace them with someone less cautious.
Which is exactly why your initial belief that it’d be shut down is wrong…
As the risk of catastrophic failure goes up, so too does the promise of untold riches.
Actually after a little more thought, I think both my initial proposition and my follow-up were wrong, as is yours and the previous commenter.
I don't think these leaders are necessarily driven by wealth or power. I don't even necessarily think they're driven by the goal of AGI or ASI. But I also don't think they'll flinch when shit gets real and they've got to press the button from which there's no way back.
I think what drives them is being first. If they were driven by wealth, or power, or even the goal of AGI, then there's room for doubts and second thoughts about what happens when you press the button. If the goal is wealth or power, you have to wonder will you lose wealth or power in the long term by unleashing something you can't comprehend, and is it worth it or should you capitalize on what you already have? If the goal is simply AGI/ASI, once it gets real, you'll be inclined to slow down and ask yourself why that goal and what could go wrong.
But if the drive is just being first, there's no temper. If you slow down and question things, somebody else is going to beat you to it. You don't have time to think before flipping the switch, and so the switch will get flipped.
So, so much for my self-consolation that this will never happen. Guess I'll have to fall back to "we're still centuries away from true AGI and everything we're doing now is just a silly facade". We'll see.
Investors run the gamut from cautious to aggressive.
There are many remarkable leaders throughout history and around the world who did the best that they could for the people they found themselves leading, and did so for noble reasons, not because they felt they were better than them.
Tecumseh, Malcolm X, Angela Merkel, Cincinnatus, Eisenhower, and Gandhi all come to mind.
George Washington was surely an exceptional leader but he isn't the only one.
I don't know much about your examples, but did any of them turn down an offer of great power?
> I don't know much about your examples, but did any of them turn down an offer of great power?
Not parent, but I can think of one: Oliver Cromwell. He led the campaign to abolish the monarchy and execute King Charles I in what is now the UK. Predictably, he became the leader of the resulting republic. However, he declined to be crowned king when this was suggested by Parliament, as he objected to it on ideological grounds. He died from malaria the next year and the monarchy was restored anyway (with the son of Charles I as king).
He arguably wasn't as keen on republicanism as a concept as some of his contemporaries were, but it's quite something to turn down an offer to take the office of monarch!
Cromwell - the ‘Lord Protector’ - didn’t reject the power associated with being a dictator. And his son became ruler after his death (although he didn’t last long)
George Washington was dubbed “The American Cincinnatus”. Cincinnati was named in honor of George Washington being like Cincinnatus. That should tell you everything you need to know.
Thanks. It tells me we need to go all the way back to 500 BC to find another example.
It shows how rare this is.
Or it shows us that it's relatively rare that someone gets the opportunity to pass up power in this sort of fashion.
More often what happens is that leaders make small and often imperceptible choices to not amass more power over time, and that series of choices prevent the scenario like what you're describing from occurring.
If you truly have AGI it’s going to be very hard for a human to stop a self improving algorithm and by very hard I mean, maybe if I give it a few days it’ll solve all of the world’s problems hard…
Though "improving" is in the eye of the beholder. Like when my AI code assistant "improves" its changes by deleting the unit tests that those changes caused to start failing.
It's up to us to create the future that we want. We may need to act communally to achieve that, but people naturally do that.
Will there be only one AGI? Or will there be several, all in competition with each other?
That depends on how optimized the AGI is for economic growth rate. Too poorly optimized and a more highly optimized fast-follower could eclipse it.
At some point, there will be an AGI with a head start that is also sufficiently close to optimal that no one else can realistically overtake its ability to simultaneously grow and suppress competitors. Many organisms in the biological world adopt the same strategy.
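The head-start argument above is just a race between exponentials, and the overtake time has a closed form. A sketch under hypothetical growth rates:

```python
import math

# Sketch of the race above: a leader with a head start but growth rate
# g1 is overtaken by a follower growing at g2 > g1, since sizes grow as
# s * exp(g * t). All parameter values are hypothetical.

def overtake_time(leader_size, follower_size, g1, g2):
    """Years until follower_size*e^(g2*t) exceeds leader_size*e^(g1*t)."""
    if g2 <= g1:
        return math.inf  # a slower follower never catches up
    return math.log(leader_size / follower_size) / (g2 - g1)

# Leader is 10x bigger but grows 5%/yr; follower grows 25%/yr.
t = overtake_time(10.0, 1.0, g1=0.05, g2=0.25)
print(round(t, 1))  # 11.5 years
```

The head start only buys time proportional to the log of the size ratio, which is why "sufficiently close to optimal" matters more than being first.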
If they become self-improving, the first one would outpace all the other AI labs and capture all the economic value.
There are multiple economic enclaves, even ignoring the explicit borders of nations. China, east asia, Europe, Russia would all operate in their own economies as well as globally.
I also foresee the splitting-off of national internet networks eventually impacting what software you can and cannot use. It's already true, and it'll get worse as nations act to protect their economies and internal advantages.
> The Cobb-Douglas production function (Cobb & Douglas, 1928) illustrates how AGI shifts economic power from human labor to autonomous systems (Stiefenhofer & Chen, 2024). The wage equations show that as AGI’s productivity rises, returns to human labor decline. If AGI labor fully substitutes for human labor, employment may become obsolete, except in areas where creativity, ethical judgment, or social intelligence provide a comparative advantage (Frey & Osborne, 2017). The power shift function quantifies this transition, demonstrating how AGI labor and capital increasingly control income distribution. If AGI ownership is concentrated, wealth accumulation favors a small elite (Piketty, 2014). This raises concerns about economic agency, as classical theories (e.g., Locke, 1689; Marx, 1867) tie labor to self-ownership and class power.
Wish I had time to study these formulas.
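For anyone else short on time, the Cobb-Douglas mechanism the abstract leans on fits in a few lines: output is Y = A * K^alpha * L^(1-alpha), the competitive wage is the marginal product of labor, and AGI workers just get added to the labor pool. All parameter values here are hypothetical:

```python
# The production function from the abstract: Y = A * K^alpha * L^(1-alpha),
# with effective labor L = H + M (humans plus AGI workers). The competitive
# wage is the marginal product of one unit of labor, which falls as machine
# labor M floods the pool. All numbers are hypothetical.

def wage(A, K, H, M, alpha=0.33):
    L = H + M
    Y = A * K**alpha * L**(1 - alpha)
    return (1 - alpha) * Y / L  # marginal product of labor

w_before = wage(A=1.0, K=100.0, H=100.0, M=0.0)
w_after = wage(A=1.0, K=100.0, H=100.0, M=900.0)  # 9x machine labor added
print(w_after < w_before)  # True
```

Since the wage scales as L^(-alpha), every unit of machine labor added pushes down what any human can earn for the same work, which is the substitution effect the paper describes.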
We have already seen the precursors of this sort of shift, with ever-rising productivity alongside stalled wages. As companies (systems) get more sophisticated and efficient, they also seem to decrease the leverage individual human inputs can have.
Currently my thinking leans towards believing the only way to avoid the worse dystopian scenarios will be for humans to be able to grow their own food and build their own devices and technology. Then it matters less if some ultra wealthy own everything.
However that also seems pretty close to a form of feudalism.
If the wealthy own everything then where are you getting the parts to build your own tech or the land to grow your own food?
In a feudalist system, the rich gave you the ability to subsist in exchange for supporting them militarily. In a new feudalist system, what type of support would the rich demand from the poor?
Let's clarify that for a serf, support meant military supply, not swinging a sword - that was reserved for the knightly class. For the great majority of medieval villagers the tie to their lord revolved around getting crops out of the ground.
A serf's week was scheduled around the days they worked the land whose proceeds went to the lord versus the commons that subsisted themselves. Transfers of grain and livestock from serf to lord, along with small dues in eggs, wool, or coin, primarily constituted one side of the economic relation between serf and lord. These transfers kept the lord's demesne barns full so he could sustain his household, supply retainers, etc., not to mention fulfill the tithe that sustained the parish.
While peasants occasionally marched, they contributed primarily by financing war rather than fighting it. Their grain, rents, and fees were funneled into supporting horses, mail, and crossbows, rather than the peasants themselves being called to fight.
Thanks. Now you've got me curious how this really differs from just paying taxes, just like people have always done in non-feudal systems.
In feudalism the taxes go into your lord's pockets. In democracy you get to vote on how taxes are spent.
And your landlord was the same entity as your security.
In Democracy you get to vote on who gets to vote on how taxes are spent.
Lately turning into getting to vote for who gets to vote for who gets to unilaterally call the shots...
As George Carlin observed, if voting really mattered they wouldn't let you do it.
They do indeed spend a lot of time and effort not letting people do it.
https://www.aclu.org/news/civil-liberties/block-the-vote-vot...
Carlin was an insufferable cynic who helped contribute to the nihilistic, cynical, defeatist attitude to politics that affects way too many people. The fact that he probably didn't intend to do this doesn't make it any better.
Also, everything is a joke with that guy.
I don't dispute that Carlin was a cynic, but saying he contributed to political attitudes is an overstatement. There are hordes of people who were and still are making a reality all the things he so cynically highlighted.
He helped make it legitimate to doubt that there can ever be a politician who is not motivated by self-interest.
The fact that self-interest may play a role in the careers of many politicians doesn't undo the damage that this attitude has caused to our polity.
"They're all fuckers, they're the same" is the attitude that leads to people being unable to differentiate between one party that is subject to excessive corporate lobbying and donations, still starts too many wars, and frequently makes mistakes but nevertheless is fundamentally trying to improve most people's lives, and another that wants to destroy Medicaid.
Too much cynicism is destructive, but so is not being able to resist the temptation to see one's political opponents as aliens with inscrutable motives or truly failed or defective human beings with despicable motives.
I am not that interested in motives, since they are rarely truly knowable.
I prefer to judge my political opponents by what they actually do, and by that metric, it is self-evident from both their public and private speech, and from the legislation that they seek to (and sometimes do) pass, that Republicans would like to destroy (or at least massively downsize) redistributive programs that provide assistance to the poor.
Now, as to why they might want to do this, I remain mute and disinterested, since in 61 years of life, I've never heard any explanation that doesn't deconstruct under cross-examination.
“If your vote didn’t matter, they wouldn’t fight so hard to block it.”
My hard sci-fi book dovetails into AGI, economics, agrotech, surveillance states, and a vision of the future that explores a fair number of novel ideas.
Looking for beta readers: username @ gmail.com
Can you list any of the novel ideas in your comment?
Username@Gmail.com bounced. I’ll be a beta reader.
I think they meant for you to replace the word username with their username in its place.
Theirusernameinitsppace@gmail.com bounced too.
Well you misspelled place, but that word likely isn’t present in their email, so I apologize for the instructions being unclear. I don’t know their email definitively, so I guess you’re on your own, as I don’t think that the issue would be resolved by rephrasing the instructions, but I’m willing to try if you think it would help you.
I figure if/when AI can do the work of humans we'll deal with it through democracy by voting for a system like UBI or like socialism.
That doesn't work now because we don't have AGIs to do the chores but when we do that changes.
For this to be plausible, you have to explain why the people controlling the AI would share their wealth.
They would either do it voluntarily (and be outcompeted by those who don't?) or be coerced (by who? Someone who doesn't have AI but is more powerful than they are?).
> This paper calls for a redefined economic framework that ensures AGI-driven prosperity is equitably distributed through mechanisms such as universal AI dividends, progressive taxation, and decentralized governance.
Sincerely curious if there are working historical analogues of these approaches.
Not a clean comparison, but resource-driven states could be tackling the same kind of issues: a small minority is reaping the benefit of a huge resource (e.g. petrol) that they didn't create by themselves, and that is extracted through mostly automated processes.
From what we're seeing, the whole society has to be rebalanced accordingly; it can entail a kind of UBI, second and third classes of citizens depending on where you stand in the chain, etc.
Or as Norway does, fully go the other direction and limit the impact by artificially limiting the fallout.
Can you explain a little more about Norway?
https://www.youtube.com/watch?v=zu8ClwrTpbA
Communism with "cybernetics" (computer-driven economic planning) is the appropriate model if you take this to its logical conclusion. Fortunately, much of our economy is already planned this way (consider banks, Amazon, Walmart, shipping, etc.); it's just controlled for the benefit of a small elite.
You have to ask, if we have AGI that's smarter than humans helping us plan the economy, why do we need an upper class? Aren't they completely superfluous?
Sure, maybe the Grand Algorithm could do what the market currently does and decide how to distribute surplus wealth. It could decide how much money you deserve each month, how big of a house, how desirable of a partner. But it still needs values to guide it. Is the idea for everyone to be equal? Are certain kinds of people supposed to have less than others? Should people have one spouse or several?
Historically the elites aren't just those who have lots of money or property. They're also those who get to decide and enforce the rules for society.
This was always one of the downfalls of market economics.
We already have conscious feelings about these things, but it's virtually impossible to enforce it into the market at scale in a meaningful way.
We could take a broadly agreed on sentiment like "I really want the caregivers taking care of my grandparents in the rest home to be qualified and adequately paid so they'll do their best", and mysteriously the market will breed a solution that's "the agency is charging $50 per hour and delivering a $12 per hour warm body that will do the bare legal minimum to avoid neglect charges."
We try regulation, but again, the market evolves the countermeasures of least-cost checkbox compliance. All because we aren't willing to take direct command over economic actors.
The computers serve us, we wouldn't completely give up control, that's not freedom either, that's slavery to a machine instead of a man. We would have more democratic control of society by the masses instead of the managed bourgeois democracy we have now.
It's not necessary for everyone to be exactly equal, it is necessary for inequalities to be seen as legitimate (meaning the person getting more is performing what is obviously a service to society). Legislators should be limited to the average working man's wage. Democratic consultations should happen in workplaces, in schools, all the way up the chain not just in elections. We have the forms of this right now, but basically the people get ignored at each step because legislators serve the interests of the propertied.
The AGI, given it has some agency, becomes the upper class. The question is, why would the AGI care about humans at all, especially given the assumption that it's largely smarter than humans? Humans can become superfluous.
We have the guns.
Well, aren't the working class also superfluous, at least once the AGI gets enough automation in place?
So it would depend on which class the AGI decided to side with. And if you think you can pre-program that, I think you underestimate what it means to be a general intelligence...
I suspect even with a powerful intelligence directing things, it will still be cheaper and lower cost to have humans doing various tasks. Robots need rare earth metals, humans run on renewable resources and are intelligent and self-contained without needing a network to make lots of decisions...
> Left unchecked, this shift risks exacerbating inequality, eroding democratic agency, and entrenching techno-feudalism
1) Inequality will be exacerbated regardless of AGI. Inequality is a policy decision; AGI is just a tool subject to policy. 2) Democratic agency is only held by elected representatives and civil servants, and their agency is not eroded by the tool of AGI. 3) Techno-feudalism isn't a real thing; it's just a scary word for "capitalism with computers".
> The classical Social Contract-rooted in human labor as the foundation of economic participation-must be renegotiated to prevent mass disenfranchisement.
Maybe go back and bring that up around the invention of the cotton gin, the stocking frame, the engine, or any other technological invention which "disenfranchised" people who had their labor supplanted.
> This paper calls for a redefined economic framework that ensures AGI-driven prosperity is equitably distributed through mechanisms such as universal AI dividends, progressive taxation, and decentralized governance. The time for intervention is now-before intelligence itself becomes the most exclusive form of capital.
1) nobody's going to equitably distribute jack shit if it makes money. They will hoard it the way the powerful have always hoarded money. No government, commune, sewing circle, etc has ever changed that and it won't in the future. 2) The idea that you're going to set tax policy based on something like achieving a social good means you're completely divorced from American politics. 3) We already have decentralized governance, it's called a State. I don't recommend trying to change it.
Georgism is a prescription on removing unwarranted monopolies and taxing unreproducible privileges.
Tech companies are the same old story. They are monopolies like the rail companies of old. Ditto for whatever passes as AGI. They're just trying to become monopolists.
Capitalism with computers is technofeudalism. https://www.theguardian.com/world/2023/sep/24/yanis-varoufak...
It looks really interesting.
I am a big fan of Yanis's book "Technofeudalism: What Killed Capitalism", though it lacks quantitative evidence to support his theory. I would like to see that kind of research or empirical studies.
Blue pill and chill for me.
I predicted this long ago. Technology amplifies what 1 human can do. Absolute power corrupts absolutely.
Looking at the big ugly bill, there will be no way for a progressive taxation or other kind of social improvements.
David Sacks, Trump's "AI and Crypto czar", said UBI isn't going to happen. So that's the position of the current party in power, unsurprisingly.
Every US voter should have an America app that allows us to vote on stuff like the Estonians do
how does this work in practice? is there any buffer in place to deal with the "excitability" of the mob? how does a digital audit trail prevent tampering?
Coefficient-based vote weighting, kind of like a PID controller: reduce the effect of early voters and increase the effect of later voters. The slope of voter volume in response to an event determines the reactivity coefficient. This might dampen reactivity while still giving people an incentive to vote after a certain margin is reached, instead of feeling it's pointless.
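A rough sketch of that damping idea, with an entirely hypothetical weighting function (not any real system's):

```python
# Hypothetical damped tally: each ballot is weighted by the voting-volume
# slope at the moment it is cast. Votes cast during a spike (high slope,
# "mob excitement") weigh less; votes cast in calm periods weigh more.

def damped_tally(ballots, sensitivity=1.0):
    """ballots: list of (choice, votes_per_hour_slope_at_cast_time)."""
    tally = {}
    for choice, volume_slope in ballots:
        weight = 1.0 / (1.0 + sensitivity * max(volume_slope, 0.0))
        tally[choice] = tally.get(choice, 0.0) + weight
    return tally

result = damped_tally([("yes", 50.0), ("yes", 50.0), ("no", 0.0)])
print(result["no"] > result["yes"])  # True: two excited votes < one calm vote
```

The `sensitivity` knob plays the role of the reactivity coefficient: turn it up and a surge of reactive votes gets heavily discounted relative to votes cast once things cool down.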
Estonians don’t vote on individual laws
If you are going to write anything about AGI, you should really prove that it's actually possible in the first place, because that question does not really have a definite yes.
For most of us non-dualists, the human brain is an existence proof. Doesn't mean transformers and LLMs are the right implementation, but it's not really a question of proving it's possible when it's clearly supported by the fundamental operations available in the universe. So it's okay to skip to the part of the conversation you want to write about.
The human brain demonstrates that human intelligence is possible, but it does not guarantee that artificial intelligence with the same characteristics can be created.
This is like saying "planets exist, therefore it's possible to build a planet" and then breathlessly writing a ton about how amazing planet engineering is and how it'll totally change the world real estate market by 2030.
And the rest of us are looking at a bunch of startups playing in the dirt and going "uh huh".
I think it's more like saying "Stars exist, therefore nuclear fusion is possible" and then breathlessly writing a ton about how amazing fusion power will be. Which is a fine thing to write about even if it's forever 20 years away. This paper does not claim AGI will be attained by 2030. There are people spending their careers on achieving exactly this; wouldn't they be interested in a thoughtful take on what happens after they succeed?
The human brain is an existence proof? I don't think that phrase means what you think it means. I don't think "dualist" or "non-dualist" mean what you think they mean either.

When people talk about AGI, they are clearly talking about something the human research community is actually working towards: computing equivalent to a Turing machine, using hardware architectures very similar to what has currently been conceived and developed. Do you have any evidence that the human brain works in such a way? Do you really think that you think and solve problems in that way?

Consider simple physics. How much energy is needed, and how much heat is produced, to train and run these models to solve simple problems? How much of the same is needed and produced when you solve a sheet of calculus problems, solve a riddle, or write a non-trivial program? Couldn't you realistically do those things on minimal food and water for a week, if needed? Does it actually seem like the human brain is at all like these systems and not fundamentally different?

I think this is even more naive than proposing "Life exists in the universe, so of course we can create it in a lab by mixing a few solutions." The latter is far likelier and more conceivable, and even that is still quite an open question.
Will it ever have a definite yes? I feel like it's such a vague term.
Isn't Google AGI? There is no way anything human could shut down Google if it were already going rogue.
So economics becomes intelligence-driven, which I don't really understand the meaning of, since AGI is more knowledgeable than all of us combined, and we expect the AGI lords to just pay everyone a UBI? This seems like an absolute fantasy given the tax cuts passed two days ago. And regulating it as a public good, when antitrust has no teeth? I hope there are other ideas out there, because I don't see this gaining political momentum when politics is driven by dollars.