sus

Well-known member
What I'm trying to convey is that we are an opticratic society—and, moreover, almost every human society is, but it's especially pronounced in large, urban, globalized societies. In an ancestral community, you get to know people as individuals over years and decades; your parents knew each other, you grew up together, etc.

In contemporary society, we are constantly sizing people up—in job interviews, in online dating, in deciding who is "your tribe" & you wanna be friends with vs "not your tribe." What's the old expression? "Follow your vibe, find your tribe." Yeah, a "vibe" is a set of aesthetic signifiers that correlate with people who are a certain way. In other words, a stereotype.
 

padraig (u.s.)

a monkey that will go ape
I don't think that should be the case at all; rather, if you understand how machine learning works, you realize they're not programmed, they're learned: a generalisation over a dataset.

The real question then is merely "should more information and better information-processors take over the information jobs we already have" and my answer is yes, it should be done right, obviously it should be done carefully, but yes.
but there are still humans selecting the dataset. and data bias in machine learning is, again, a widely acknowledged issue. really, "programming" is semantics.
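
a minimal sketch of what I mean (my own illustration, assumes numpy + scikit-learn): nobody writes a biased rule anywhere, but the humans' choice of what got recorded comes out the other end as a learned weight.

```python
# illustrative sketch: the bias lives in how the data was recorded,
# not in any rule a programmer wrote
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B
skill = rng.normal(0, 1, n)          # the thing we actually care about
outcome = (skill > 0).astype(int)    # true success rate is identical per group

# reporting bias: successes in group B only get recorded half the time
recorded = outcome.copy()
hidden = (group == 1) & (outcome == 1) & (rng.random(n) < 0.5)
recorded[hidden] = 0

X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, recorded)
print(model.coef_)  # group picks up a negative weight nobody wrote down
```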

more information and more powerful information processing automatically leading to better outcomes is only true in a narrow sense. you can optimize a given process, but that doesn't tell you what process you should optimize or how you should optimize it - those are value judgments. the vaguer the concept, the less use it is. algorithms are probably a good way to manage traffic - whether they're a good way to decide who gets into college is another question entirely.

cars aren't a good comparison because cars didn't create a fundamental change in decision-making compared to carriages. cars radically changed social organization but they don't decide where people can live or go to school, whether or not they're likely to commit crimes, etc.
 

padraig (u.s.)

a monkey that will go ape
again tho, I don't think you've addressed the real issue, which is the valuation of universalized technical knowledge over localized personal knowledge. and not just valuation - you're arguing for ceding autonomy to whoever is creating the means of information processing on the grounds that it's preferable. that's an ideological position.

before stan says "that's already happening" - yes, and plenty of people are uneasy about it. that's why it's a hot-button issue. plenty of people would prefer the freedom to make their own decisions, even if they're not what a given algorithm says is optimal. I don't think that's an unreasonable position to have at all.
 

sus

Well-known member
One distinction that's probably worth making: ML works fundamentally differently than metrics, and is a separate issue. I'm all for avoiding human-made metrics, because they're prone to a whole slew of well-documented problems and are often outperformed by managers' "gut" reactions.

What's important to understand is that machine learning is the same thing as a gut reaction. A gut decision is the result of black-box calculations performed unconsciously by your (probabilistic) cognitive model, built up over many years in the field getting a sense of the territory. Doctors regularly outperform more formalized, metricized evaluative procedures with their gut. A romantic "feeling" is just mathematics all the way down, a set of computations inscrutable to the conscious mind.
 

padraig (u.s.)

a monkey that will go ape
dude there is no such thing as avoiding human-made metrics. all metrics humans use are human-made in one way or another.
 

sus

Well-known member
I'm pretty skeptical about this holding-algorithms-accountable business as well. you can't - you can only hold an algorithm's creator accountable. but you can't, really, unless you can prove that it was intentionally racist or had some such problem. all you can do is stop using it. in the meantime, it's fucked up a bunch of people's lives. e.g., a couple years ago the state I live in cancelled a splashy predictive analytics program to prevent child abuse because it kept returning false positives.
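
The false-positive problem isn't even mysterious. For a rare outcome, the base-rate arithmetic is brutal even when the model itself is decent (made-up numbers, just to illustrate):

```python
# back-of-envelope: why rare-event prediction drowns in false positives
prevalence = 0.005           # 0.5% of screened cases are true positives
sensitivity = 0.90           # the model catches 90% of the real cases
false_positive_rate = 0.05   # and wrongly flags 5% of everyone else

true_flags = prevalence * sensitivity
false_flags = (1 - prevalence) * false_positive_rate
precision = true_flags / (true_flags + false_flags)
print(f"{precision:.1%} of flags are real")  # ~8.3%: most flags are false
```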

I'm also against the state implementing algorithmic programs for life-altering decisions in the year 2020, primarily because the state is infamously tech-illiterate and basically bad at everything, and because these technologies are brand new and not vetted. I mean Jesus Christ you look at the Dem primary app rollout, it's like you pooled together the ten biggest morons in America and put them to a huge task, of course it was a disaster.

But opposing premature rollout in certain sectors is different from opposing the technology's use in a blanket sense.
 

sus

Well-known member
dude there is no such thing as avoiding human-made metrics. all metrics humans use are human-made in one way or another.

That might be true in a technical sense, but not in a meaningful one. There is a huge spectrum of how "designed" vs "evolved" an algorithm is; that difference matters immensely and is critical to the argument I'm advancing.
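
A minimal sketch of the two ends of that spectrum (illustrative, assumes numpy + scikit-learn): in one, a human decrees the weights; in the other, the human still picks the features and the data, but the weights fall out of the fit.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# fully designed: a human picks the features AND the weights
def designed_score(gpa, test, essays):
    return 0.5 * gpa + 0.3 * test + 0.2 * essays  # weights chosen in a meeting

print(designed_score(0.9, 0.8, 0.7))

# evolved: the weights are estimated from outcomes, not decreed
rng = np.random.default_rng(0)
X = rng.random((1000, 3))                        # stand-in applicant features
y = X @ np.array([0.6, 0.35, 0.05]) + rng.normal(0, 0.1, 1000)
evolved = LinearRegression().fit(X, y)
print(evolved.coef_)                             # fitted, not designed
```

Both are "human-made" in your sense, but the design surface is completely different.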
 

sus

Well-known member
again tho, I don't think you've addressed the real issue which is valuation of universalized technical knowledge over localized personal knowledge. and not just valuation - you're arguing for ceding autonomy to whoever is creating the means of information processing on the grounds that it's preferable. that's an ideological position.

That's an interesting issue, but not the primary one in my mind, which is that the concept of "bias" is inextricably tied up with the concept of statistical correlation, that both are already happening in humans, and that you can't draw an objective/conceptual/mathematical line between them. I'm curious what you think of the school shooter thought experiment.
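
Part of why you can't draw that line: deleting the protected attribute from a dataset doesn't delete the correlation, because any correlated feature reconstructs it. A toy sketch (my illustration, assumes numpy + scikit-learn):

```python
# a proxy feature smuggles the "deleted" attribute back in
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000
protected = rng.integers(0, 2, n)
proxy = protected ^ (rng.random(n) < 0.1)   # e.g. zip code, ~90% correlated
target = protected                          # worst case: label tracks the attribute

model = LogisticRegression().fit(proxy.reshape(-1, 1), target)
print(model.score(proxy.reshape(-1, 1), target))  # ~0.9, never saw `protected`
```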
 

constant escape

winter withered, warm
Yeah the gut reaction is based on your lifetime's supply of feedback, albeit perhaps in a manner that is tougher to quantify.

And yeah all metrics are, directly or indirectly, human-made, but perhaps relying on the man-via-machine-made metrics would weed out a lot of the error we're talking about, in terms of optimization and/or equality.
 

sus

Well-known member
I apologize for any "you don't understand it cuz you're old" rudeness earlier in the thread. It's a bad habit—I hate when people here pull the inverse on me, that I'm limited or stunted in a certain way because I'm young, so I won't perpetuate it.

My position is more that, unless you're up close to the technical world involved (actually working in machine learning, or reading the academic papers coming out), you're getting your information via journalists who are themselves not very tech literate and predisposed to click-mongering apocalyptica, and it's important to remember that.
 

padraig (u.s.)

a monkey that will go ape
this reminds me of McNamara etc. thinking you could measure a war with statistics alone, fundamentally misunderstanding that what they were measuring had nothing to do with progressing toward the goal of winning that war. only that might have been correctable if they'd understood what kind of war they were fighting (probably not, but they would at least have been measuring applicable things), whereas in this case there is no goal, because the goal is a value judgment. you cannot algorithm your way to an optimized society.
 

sus

Well-known member
Yeah the gut reaction is based on your lifetime's supply of feedback, albeit perhaps in a manner that is tougher to quantify.

And yeah all metrics are, directly or indirectly, human-made, but perhaps relying on the man-via-machine-made metrics would weed out a lot of the error we're talking about, in terms of optimization and/or equality.

"Training" or "teaching" in Luka's sense
 

padraig (u.s.)

a monkey that will go ape
perhaps relying on the man-via-machine-made metrics would weed out a lot of the error we're talking about, in terms of optimization and/or equality
that's basically what the dispute is about

as well as the idea that "optimization" in itself is desirable in the first place
 

sus

Well-known member
this reminds me of McNamara etc. thinking you could measure a war with statistics alone, fundamentally misunderstanding that what they were measuring had nothing to do with progressing toward the goal of winning that war. only that might have been correctable if they'd understood what kind of war they were fighting (probably not, but they would at least have been measuring applicable things), whereas in this case there is no goal, because the goal is a value judgment. you cannot algorithm your way to an optimized society.

This is why I'm trying to draw the distinction. I've spent the last year deep-researching the failures of metrics in institutional decision-making, including the Iraq war, so I'm not drawing this distinction out of my ass.

Picking a dataset, and the biases of how that dataset is secured—most prominently, reporting bias—is a big issue. That's not what people generally mean when they talk about "racism" in NLP datasets, tho.
 

constant escape

winter withered, warm
Well, real, robust, widespread optimization is a pretty safe bet, in that it tautologically means that things are getting better, in a net way. When the particular definition of optimization becomes more partisan, that is where things get sticky, no?
 

sus

Well-known member
that's basically what the dispute is about

as well as the idea that "optimization" in itself is desirable in the first place

I'm a little surprised someone as misanthropic as yourself is so opposed to investigating alternatives to human gut judgment, with its own black boxes.

Again, I'm all for cautious roll-out, and not putting untested algorithms in charge of life-altering decisions. But we already have really shitty bureaucrats making life-altering decisions about people, and they make them unfairly constantly. Isn't investigating alternatives important?
 

sus

Well-known member
I still want someone to deal with the school threat-detection thought experiment, and weigh in on how to handle this question of statistical correlation vs. unfair stereotyping, because that question is at the heart of the algorithmic bias discourse, and therefore IMO the issue at hand.
 