The Inhuman and the Inhumane

constant escape

winter withered, warm
Been interested in elaborating ideologies that are not centered around the human (which is arguably impossible), but perhaps around something upon which the human is superstructural.

Spending time here, at Dissensus, granted me an understanding of accelerationism as an inhuman ideology (at least the Nick Land brand - but again, I cannot elaborate much on his thoughts here. I believe someone here described accelerationism as a communism oriented around capital, which stuck with me). To be sure, whether or not accelerationism is effectively dead, exhausted, or has ever carried any political purchase is another question. My question: Is accelerationism a taste of the kind of inhuman ideologies we may adopt at some point in the future? Perhaps ideology is no longer the right word, at this point.

What kind of ideology, or value system, would be centered "around something upon which the human is superstructural"? Perhaps a vitalism, but I'd be inclined to go deeper. Something upon which even the vital/alive is superstructural? I lack the words here, but I would say: the manner in which matter forms and compounds upon itself is the very force that amounts to intelligence. Perhaps this force is intelligence, or perhaps intelligence emerges from this force - but whether or not this force is predestined, whether or not its development is predetermined, is almost an irrelevant question. This development can be stochastic or pseudorandom and still, in hindsight, abide by some telos/purpose (rather than seeming random in the present and in hindsight).

My question is this: is an inhuman value system necessarily anti-human, if carried to some conclusion? Could humans adopt an inhuman ideology without becoming inhumane, but rather indifferent towards the human? What could such an ideology orient around? The expansion/evolution of physical ordered complexity in the cosmos? The ever-deepening information density of computers, etc.?

Perhaps humans wouldn't be the ones to "adopt" such an ideology, which is why I hesitate to even use the word ideology - perhaps there is a better word from computational theory, some set of values that operates within/behind/beyond the algorithmic? ("algorithm" as a set of instructions for a computer, instructions that spell out what is true and what is not. I welcome criticism here, but I think that is a good point of departure)

In thinking about the intersections of various developing technologies (blockchain, surveillance, machine learning, quantum computing, data/attention economies, internet of things), as well as the fact that covid is (arguably) accelerating our exodus from meatspace into the entirely manmade online realm, a realm which almost nobody understands enough to prevent their utter manipulation by the techies who do - in thinking about all this, a radically new day-to-day epistemology seems well warranted.

To be clear, it's not like most people have a thorough understanding of how "meatspace" works, but the key point is that meatspace is not manmade, at least most of it isn't. With "cyberspace", everything is manmade, every quantum/pixel is determined, even if arbitrarily, according to a human's will. Thus, the navigator of such a world is not navigating a neutral space, but a space imbued with the ideological residue/inheritance of meatspace. This promises to enable a magnitude of power unseen in human history, right? There are people who construct/determine/curate search engines, which have become veritable extensions of our cognitions - and yet we traffic through these spaces as if they were spontaneously realized by some neutral yet benevolent programmers, merely because the infrastructure is practically alien to us.

Might as well end the rant here - but I am looking for a way to smuggle a cryptohumanism into an ostensibly posthuman/inhuman ideology. Many knots to tie and untie here.
 

constant escape

winter withered, warm
So I guess this question can be split in two:

Can the human, in their ideological capacity, come to adopt inhuman ideologies in a world where the human is no longer the primary creative/intelligent force?

Can ideologies apply to non-human entities, such as computers? What do we call the set of values, inscribed/programmed by humans, that govern the decisions made by the computer?
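
For concreteness, one partial answer from programming practice: the "values" a human inscribes into a machine typically surface as an objective (or utility) function that the machine mechanically maximizes. A minimal, hypothetical Python sketch - the weights and options here are invented purely for illustration, not drawn from any real system:

def utility(option):
    # The human-chosen weights ARE the inscribed value system.
    weights = {"speed": 0.7, "safety": 0.3}
    return sum(weights[k] * option[k] for k in weights)

options = [
    {"speed": 0.9, "safety": 0.2},
    {"speed": 0.4, "safety": 0.95},
]

# The machine's "decision" is nothing beyond maximizing those inscribed values.
best = max(options, key=utility)
print(best)  # -> {'speed': 0.9, 'safety': 0.2}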
 

constant escape

winter withered, warm
You mean, just telling myself that I am friendly/hospitable to a humanism, while in actuality being indifferent to it? A very real and worthy consideration, if that is indeed what you mean.

I've a tough time with this, and it might even be the central problem with the kind of "transvaluation" I have in mind: how can you base/ground an excursion-into-alternative-values in your current set of values? Perhaps the poster-child example of this: how can you become fascist in order to destabilize fascism, by way of infiltrating the (conceptual/ideological, not social) ranks and planting at its center a new model that renders the entire infrastructure obsolete?

Not quite sure how pertinent the fascism example is, but how can one move from one value system to another, and operate in the latter in the interest of the former? This is what I was on about regarding second-order pragmatism: one pragmatism nested within a larger pragmatism. It almost necessarily entails contradiction and, perhaps, bad faith. Bad faith nested within good faith - but at that point, how can you be sure? You can't, I suppose; you can only have faith.
 

vimothy

yurp
bakker is absurdly verbose so here's the punch line:

What does ‘human flourishing’ mean in such cognitive ecologies? What can it mean? Pinker doesn’t know. Nobody does. He can only speculate in an age when the gobsmacking power of science has revealed his guesswork for what it is. This was why Adorno referred to the possibility of knowing the good as the ‘Messianic moment.’ Until that moment comes, until we find a form of rationality that doesn’t collapse into instrumentalism, we have only toothless guesses, allowing the pointless optimization of appetite to command all. It doesn’t matter whether you call it the will to power or identity thinking or negentropy or selfish genes or what have you, the process is blind and it lies entirely outside good and evil. We’re just along for the ride.

Human cognition is not ontologically distinct. Like all biological systems, it possesses its own ecology, its own environmental conditions. And just as scientific progress has brought about the crash of countless ecosystems across this planet, it is poised to precipitate the crash of our shared cognitive ecology as well, the collapse of our ability to trust and believe, let alone to choose or take responsibility. Once every suboptimal behaviour has an etiology, what then? Once every one of us has artificial friends, heaping us with praise, priming our insecurities, doing everything they can to prevent non-commercial—ancestral—engagements, what then?

‘Semantic apocalypse’ is the dramatic term I coined to capture this process in my 2008 novel, Neuropath. Terminology aside, the crashing of ancestral (shallow information) cognitive ecologies is entirely of a piece with the Anthropocene, yet one more way that science and technology are disrupting the biology of our planet. This is a worst-case scenario, make no mistake. I’ll be damned if I see any way out of it.

Humans cognize themselves and one another via systems that take as much for granted as they possibly can. This is a fact. Given this, it is not only possible, but exceedingly probable, that we would find squaring our intuitive self-understanding with our scientific understanding impossible. Why should we evolve the extravagant capacity to intuit our nature beyond the demands of ancestral life? The shallow cognitive ecology arising out of those demands constitutes our baseline self-understanding, one that bears the imprimatur of evolutionary contingency at every turn. There's no replacing this system short of replacing our humanity.
 

constant escape

winter withered, warm
Wow, that's a lot. I don't understand the central argument very well, but it seems to be asserting that we shouldn't expect our intellect to outpace its evolutionary demands? I think I can see that, but then the question is: how broad, or vast, are those demands? Maybe our survival is predicated on much, much more than mere material subsistence, and we need to develop intellectually/cerebrally in order to meet those demands. I don't know.
 

vimothy

yurp
the way we understand ourselves as humans - as moral beings, as intentional beings, as beings whose experience of the world is heavily mediated by interactions with others - is rapidly being undercut by science and technology. if you extrapolate that trend forwards just a bit, what you're left with is perhaps not human at all.
 

constant escape

winter withered, warm
As far as I know, a case could be made for that. A cognition optimized for (relatively) complex problem-solving, even at the cost of the more "human" faculties such as emotion, sensitivity, etc.

Predator's got a clue.
 

constant escape

winter withered, warm
vimothy said:
the way we understand ourselves as humans - as moral beings, as intentional beings, as beings whose experience of the world is heavily mediated by interactions with others - is rapidly being undercut by science and technology. if you extrapolate that trend forwards just a bit, what you're left with is perhaps not human at all.

I guess my question is: what are some ways we can prepare for this, mentally? Other than nihilism, that is. Or mere spin-doctoring.
 

vimothy

yurp
one response would be to say, humans are just temporary equilibria in the flow of matter-energy. it's not necessarily a bad thing if they disappear, we should see what we can produce instead. that's sort of the D&G / delanda / acc take.
 

constant escape

winter withered, warm
Yeah, I like how that obliterates the sacredness we grant ourselves (well, I don't exactly like it, but I think it could be a step in the right direction: away from human-matter exceptionalism?)

Can we view humans as temporary equilibria, yes, but at the cutting edge of matter complexity? If the human brain is, in fact, the most complex matter out there. This could bring in a whole new slew of ideological hangups, perhaps.

Full disclosure, I do believe that the Omega point, the dematerialization of intelligent matter, will be reached by some iteration of the universe, and thus reached period. So maybe I'm pushing this thought in that direction - but it seems to be the only thing worth pushing for, ultimately. A way to actually escape/transcend all this?
 

vimothy

yurp
the "scientific perspective" - causality - just postpones explanation, shifts the demand back a level, ultimately it doesnt explain things
 

vimothy

yurp
A particle of matter is because of an act of existence for which it itself is not responsible. It is what it is because of its microstructure, the specific and stable organization of its constitutive elements – in a word, its form, which it itself does not produce. The same is true, mutatis mutandis, of forces and laws of nature, which neither bring themselves into being nor cause their specific and essential character.

The materialist would like to explain the world in full by means of the attributes and arrangements of material particles in conjunction with natural forces and the laws that determine their appearance and application. But any such explanation necessarily presupposes the existence and ordered constitution of the particles, the forces, and the laws themselves. Matter and its properties, natural forces and the laws that govern them, are neither self-generating nor self-explanatory; they depend utterly upon the ontologically prior acts of existence and form. Without these metaphysical principles there can be no physical reality.

Moral: physics, being derivative, will never provide the fundamental explanation of reality.

Mark Anderson, Pure: Modernity, Philosophy and the One
 