sus

Moderator
There are other strands and influences, histories of early commentators and volunteers who shaped the intellectual and social direction of the site, but those are the big ones.

Nearly a decade before he started LessWrong, in 2000 (at age 21), Yudkowsky founded the Singularity Institute, now MIRI (the Machine Intelligence Research Institute).

I think it's hard to overstate how important Yudkowsky has been in mainstreaming AI alignment concerns. Yudkowsky didn't come up with the notion of the singularity (that goes back to John von Neumann), but he and Ray Kurzweil were about the only people seriously thinking about it around the turn of the century. Musk, famously, met Grimes via a Rococo Basilisk joke; his interest in AI alignment, and the large donations he has made to alignment projects, are largely a result of rationalist efforts.

In the 2000s, in an interview for Luke Muehlhauser's Pale Blue Dot, Yudkowsky quips that humanity spends less money on AI safety research than New York City spends on lipstick. By 2020, there is so much money (billions, maybe tens of billions) going towards AI alignment and safety that some rationalists begin discouraging further donations, and it's an active community joke that you can get money for any project so long as you call it an alignment project.
 

sus

Moderator
I think this is a good amount of information for now. I'll let it stew and if there's interest I will continue the story.
 

thirdform

pass the sick bucket
It is clear that box A is the phantom, and box B is the only box one should ever take, precisely because it could be empty, not because it could contain a million dollars.

The reason? Perfection, or almost certainly correct prediction, is feudal ideology tout court.
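
(For anyone who wants the bare arithmetic behind the two policies, here is a minimal Python sketch. The $1,000 / $1,000,000 figures and the expected-value framing are the standard Newcomb numbers, assumed here purely for illustration; the predictor's accuracy p is left as a parameter rather than taken to be near-perfect.)

```python
# Minimal sketch of the standard Newcomb payoffs (assumed figures, not from
# the post above): $1,000 visible in box A, $1,000,000 or nothing in box B,
# with a predictor that is right with probability p.

def expected_value(one_box: bool, p: float) -> float:
    """Expected payout for a given choice and predictor accuracy p."""
    if one_box:
        # Predictor right (prob p): box B was filled with $1,000,000.
        # Predictor wrong (prob 1 - p): box B is empty, payout 0.
        return p * 1_000_000
    # Two-boxing:
    # Predictor right (prob p): box B is empty, so only box A's $1,000.
    # Predictor wrong (prob 1 - p): both boxes pay out, $1,001,000.
    return p * 1_000 + (1 - p) * 1_001_000

if __name__ == "__main__":
    for p in (0.5, 0.51, 0.6, 0.9, 0.99):
        print(f"p={p:.2f}  one-box: ${expected_value(True, p):>12,.0f}"
              f"   two-box: ${expected_value(False, p):>12,.0f}")
```

Under those assumptions the two strategies break even at p ≈ 0.5005; anything above that favours one-boxing in expectation, which is why the whole argument turns on how seriously one takes the premise of a reliable predictor.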
 

thirdform

pass the sick bucket
Let us chew on this a bit more: suppose that someone who was starving were presented with this game. Box A contains the $1,000 good and proper, and box B, if taken together with box A, contains nothing.

Then it should follow that the predictor should always predict box B, and if it doesn't, it is manifestly as stupid as the person who confuses nationalisation with socialism.

If one had a box with one tube of quinine and another with a potential 1,000 tubes, and one picked both boxes as predicted, then he would not be able to treat anyone's malaria but his own.

If, on the other hand, he took only the second box, there would be four options:
A) If the box is empty, the predictor erred and must as such provide the contents of box A, as it has already been decided that they are a charlatan and an ignoramus who has appointed themselves as possessing the authority to decide that the state budget has now replaced the grace of God; hence bankruptcy.
B) The taker of box B can distribute all the quinine to the town.
C) He could hoard the quinine.
D) He could poison himself.
 

sus

Moderator
I would never have guessed that thirdform would be such a decision theory junkie! I love this for him
 

thirdform

pass the sick bucket
For instance, it would be more logical for Roko's basilisk to torture cigarette smokers, for being traitors to the health of the species and demanding the manufacture of such destructive goods, than to torture those who did not bring it into existence in time. I say this as one who used to be an avid smoker, as every Turk is at some point in their life...
 

thirdform

pass the sick bucket
It would also be more logical for said basilisk AI to torture past generations for mourning the dead and attending funerals, rather than realising the energy that courses through the sun and through the earth, and hence acknowledging the true living dead. But this was too advanced for the blog.
 

thirdform

pass the sick bucket
I should say I'm a proponent of neither, despite my tendencies to inebriation and hacking out my lungs. But the thought experiment can be made much more potent and reasonable, with significant rational incentive.
 

mixed_biscuits

_________________________
Transhumanism, all the nootropic stuff, is just fruitless intensification of competition within an existing cognitive elite.

I don't understand why they're starting transhumanism on people rather than trying to upgrade something simpler first as a proof of concept, e.g. a snail.

All these tech bros foregoing foie gras in order to live forever are going to be kicking themselves once they're reminded they're immortal on expiry.

Trying to upload human consciousness to the cloud is needless duplication as this happens anyway.
 

mixed_biscuits

_________________________
The best LessWrong find is the Dual n-Back game. Re transhumanism: if everyone in the US did Dn-B for the next five years, you would see a noticeably different kind of society there; even the buildings would look different.
 