
suspendedreason

Well-known member
Clearly, none of the Brits trying to oldsplain algorithms understand algorithms, nor what NLP is

Also, you can be opposed to algorithms "encroaching on every aspect of life" for reasons that aren't "wow racist!"

Also, important to remember that human beings are machines running black-box algorithms already.
 

suspendedreason

Well-known member
You don't know why the hiring manager made that decision and to a large extent he doesn't either. Any vision of "structural" or "implicit" discrimination includes, as its bedrock, unconscious bias. Brains are just as much a black box as algorithms.
 

suspendedreason

Well-known member
"I don't want to be evaluated by the optics and data I create" Sorry my man, you already are. Your reputation is all you've ever had. And while in some ideal ancestral community, maybe your Holistic Values Built Up And Signaled Over Decades would shine and land you that blacksmith's gig despite your inability to nail job interviews or "test well," that hasn't been the case in any society I've been a part of.
 

suspendedreason

Well-known member
I think one thing that's scary about algorithms is that they basically defamiliarize the current landscape—they show it for what it really is, and always has been.
 

suspendedreason

Well-known member
And of course we see the entities in latex skin masks pretending to be human telling us that this is the utopia we have been waiting for. The scientific mathematical rationalisation of society the palace of crystal now let us insert the microchip beneath the epidermis.
I wouldn't go that far, but I'd note that our entire society is already run off a system of social credits, and formalizing it would have less effect on whether that's true, and more on whether it's done properly. "We are already as Gods" but adapted for the social era—"We are already reputation mongers, massaging optics for our potential evaluators"
 

suspendedreason

Well-known member
It’s common to look back, in our readings of Homer, on the dysfunctions of the honor culture it presents. Reputation takes precedence over strategy; legacy preoccupies; Achilles’ glory hunt is his psyche’s heel. And it is common — whether the writer seeks to redeem that honor culture, as in Tamler’s writing, or to condemn it as barbarous, as in Jonathan Haidt’s work — to contrast this behavior with that of our modern civilization: against dignity, against liberty, against victim paradigms. But future civilizations, where we perceive breakage and rupture, may perceive a continuous dysfunction of status and achievement, the same system of reputation management, incentive, and punishment that has permeated even the most utopian, even the most “primitive-naïve” societies. The differences in how honor or victim cultures handle reputational disputes, or the specific instantiated values they publicly prize, pale next to the general practice of reputation management, or to the constant striving-toward and assignment of prestige. Because the pragmatism of our solutions, like the pragmatism of the Greeks, will fade in the light of retroactive comparison to a better, as-of-yet undiscovered solution, this future civilization will see primarily the most barbarous elements of these reputational economies: their shortcuts and half-measures, inefficiencies and irrationalities; the exploitations and cheaters and the constant looming over all of human life.
 

padraig (u.s.)

a monkey that will go ape
I like how the response is literally just "you're old and you don't get it"

these objections - that algorithms are subject to the same opacity and bias as anything else, only without the accountability - are hardly outlandish, or new. they've been widely raised - including by people who understand very well what algorithms are - because they're obvious. saying humans are algorithmic machines isn't an answer, either. there are, again, processes for holding humans accountable. neither is a strawman about the idyllic olden days. the issue isn't that everything was fair before, it's that algorithms are no more fair, but with less or no means of recourse.

I would agree that popular understanding of algorithms is limited, that they're perceived as almost dark sorcery instead of yunno, just a set of instructions you give a computer to complete x task, but even that popular understanding essentially nails the obvious dangers of algocracy

address the issues or don't, but don't try to pass off an ideological position as an unassailable truth
 

padraig (u.s.)

a monkey that will go ape
like if your argument is that programmers should run society, just make that argument, you wouldn't be the first

I strongly doubt I'd agree with it for a number of reasons, but it would be a more honest argument to make

(in b4 one of the young guys says programmers already de facto run society)
 

suspendedreason

Well-known member
I like how the response is literally just "you're old and you don't get it"

these objections - that algorithms are subject to the same opacity and bias as anything else, only without the accountability - are hardly outlandish, or new. they've been widely raised - including by people who understand very well what algorithms are - because they're obvious. saying humans are algorithmic machines isn't an answer, either. there are, again, processes for holding humans accountable. neither is a strawman about the idyllic olden days. the issue isn't that everything was fair before, it's that algorithms are no more fair, but with less or no means of recourse.

I would agree that popular understanding of algorithms is limited, that they're perceived as almost dark sorcery instead of yunno, just a set of instructions you give a computer to complete x task, but even that popular understanding essentially nails the obvious dangers of algocracy
OK first, that wasn't my real response, gimme some credit. I liked "oldsplain" as a coinage—I gotta use it, gimme a chance.

I don't see why we couldn't set up the same system of holding algorithms accountable. "We have laws for carriages, but not cars!" wasn't a good argument against cars in the early 20th C, and yes, a lot of people died before they figured out seat belts and speed limits.

Algorithms are probably not more fair right now, and I wouldn't even advocate for them to be rolled out widely, but a priori I don't see anything wrong with the tech.
 

suspendedreason

Well-known member
like if your argument is that programmers should run society, just make that argument, you wouldn't be the first

I strongly doubt I'd agree with it for a number of reasons, but it would be a more honest argument to make

(in b4 one of the young guys says programmers already de facto run society)
I don't think that should be the case at all. Rather, if you understand how machine learning works, you realize these systems aren't programmed; they're just a generalisation of a dataset.

The real question then is merely "should more information and better information-processors take over the information jobs we already have" and my answer is yes, it should be done right, obviously it should be done carefully, but yes.
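To make the "learned, not programmed" point concrete, here's a minimal sketch in plain Python with made-up data (no real library or system is implied): the only "rule" the model has is whatever threshold best fits its training examples. Nobody writes the decision logic; swapping the dataset swaps the behavior.

```python
# A "model" whose only rule is derived from data: pick the threshold
# that best separates the labeled examples on the training set.
# Change the examples and the learned rule changes with them.

def fit_threshold(examples):
    """examples: list of (value, label) pairs, label in {0, 1}.
    Returns the midpoint threshold minimizing training error."""
    values = sorted(v for v, _ in examples)
    candidates = [(a + b) / 2 for a, b in zip(values, values[1:])]
    best_t, best_err = None, float("inf")
    for t in candidates:
        err = sum((v > t) != bool(label) for v, label in examples)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

# Hypothetical training data: (score, was_flagged)
data = [(1, 0), (2, 0), (3, 0), (7, 1), (8, 1), (9, 1)]
t = fit_threshold(data)
predict = lambda v: int(v > t)  # the entire learned "program"
```

The point of the sketch: `predict` encodes no human-authored policy, only a summary of `data`.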
 

constant escape

winter withered, warm
Even if certain/all machine learning derives natural results from a dataset, we could still say that the dataset is only an approximation, albeit a relatively raw and neutral one. And an approximation necessarily omits certain, perhaps infinite, dimensions, no?
 

constant escape

winter withered, warm
But sure, these kinks get buffed out, and I would say that this iterative correction process necessarily closes in on the territory, but never fully captures it. That could be said to be a matter of belief, but perhaps there is some kind of metascience to back it up. Some study about the robustness of conventional metrics, and how this robustness increases.
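The map-closes-in-on-the-territory intuition has a standard statistical form, sketched here in plain Python with a toy setup (the "true mean" and sample sizes are invented for illustration): the estimate's error tends to shrink as the dataset grows, but for any finite dataset it is essentially never exactly zero.

```python
import random

random.seed(0)  # deterministic toy run

# The "territory": a true quantity we never observe directly.
TRUE_MEAN = 0.7

def estimate(n):
    """Approximate the true mean from n noisy samples (the 'dataset')."""
    return sum(random.gauss(TRUE_MEAN, 1.0) for _ in range(n)) / n

# Growing datasets: the approximation improves but never coincides
# with the territory for any finite sample.
errors = [abs(estimate(n) - TRUE_MEAN) for n in (10, 1000, 100000)]
```

The design point: error falls roughly like one over the square root of the sample size, so iterative correction converges on the territory without ever reaching it.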
 

suspendedreason

Well-known member
It'd be interesting to think about what kind of "stereotyping" we're OK with and what kinds we aren't—and I mean stereotype just in the inductive, pattern-forming sense.

We can imagine a camera + ML system set up around an elementary school that is in charge of sending the school into lockdown in the case of emergencies. The actual details of this thought experiment are probably iffy, but I don't think they matter in painting the contours of the problem at hand.

Now, e.g. statistically black Americans commit violent felonies at higher rates, and a perfectly well-tuned, sensitive facial threat-level indicator might tick up a little on purely statistical grounds. That seems pretty scummy and gross. But realistically, if you were trying to profile school shooters, you'd probably actually start profiling white teenagers with greasy hair, acne, and military jackets, because non-white school shooters are pretty damn rare. Would this be OK?

Then there's the next level—let's say there are certain colors that especially violent subcultures wear, like white long-sleeve Ts. And there've been a couple prominent school shooters with white long-sleeve Ts, lots of people on 4chan are wearing white long-sleeves and talking about shooting up schools, so if a teen in a white long-sleeve is walking around campus with a giant duffel bag, maybe that threat level indicator should go up.

Then there's really explicit symbolism—what about a kid with Nazi tattoos on his knuckles, whom the camera system picked up getting into a fight last week with a bullying jock? At that point, you're working off strong statistical precedent, and more precise correlations than phenotypic descriptors. And yet, this is still the same premise—statistical correlations between appearance and behaviors, or between life histories and actions.

And of course, everything I just described can be, and is, a "threat indicator" to administrators who run these schools already.

So where do you draw the line, between prejudice and statistical correlations? You can't, they're the exact same thing mathematically.
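The "mathematically the same thing" claim can be made literal with a toy naive-Bayes-style score, sketched in plain Python. Every number and feature name below is invented for illustration: the score is just a sum of log-likelihood ratios over observed features, and the formula treats a duffel bag, a prior fight, a clothing choice, and a demographic trait identically—the moral distinctions among features live outside the math.

```python
import math

# Toy likelihood ratios: P(feature | threat) / P(feature | no threat).
# All values are made up. The scoring rule cannot distinguish a
# "behavioral" cue from a "demographic" one; only we can.
likelihood_ratios = {
    "large_duffel_bag": 4.0,
    "fight_last_week": 3.0,
    "white_long_sleeve": 1.5,
    "demographic_trait": 1.2,  # same formula, morally different feature
}

def threat_score(observed_features):
    """Total evidence: sum of log-likelihood ratios (naive Bayes style)."""
    return sum(math.log(likelihood_ratios[f]) for f in observed_features)

score = threat_score(["large_duffel_bag", "fight_last_week"])
```

Whatever line gets drawn between prejudice and inference, it has to be drawn over the feature list, not inside `threat_score`.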
 

suspendedreason

Well-known member
And as importantly, is the way you could interrogate a school administrator's rationale, in setting off a lockdown alarm, meaningfully different than the way you could interrogate an algorithm?
 