Encountered this quotation on Facebook today, in a post intended to dismiss fears of job loss through technology:
It is much easier to imagine someone losing their job to a new technology than it is to imagine many people gaining jobs that haven’t been invented yet.
Yes, it’s definitely easier to imagine something that’s happened than something that hasn’t. But what does that prove? Does it prove that fears about job loss are unfounded? Or does it prove the reverse, that those who deride such fears lack common sense?
The latter, I think. It should take little imagination to grasp that when you lose your job to a new technology, you lose your income, suffer insecurity, and face repercussions right now that require resolution right now. Imagining jobs that have yet to materialize does nothing to respond to any of those concerns in a timely way. And time is of the essence here. You can’t accept a job, earn an income, or pay for necessities in the present by imagining the jobs of the future. An imagined job only pays an imaginary income, and an imaginary income only pays for imaginary commodities, none of which are much good in the real world. If you lack the non-imaginary resources to survive until the relevant jobs materialize, you might not survive. And then your chances of employment really plummet.
You might at this point say that anyone who loses their job ought to have made provision for that contingency, and deserves to suffer if they don’t. But if you think that, you should say that. You should also come up with evidence that making such provision is actually feasible. You should stop pretending that future technological developments will, absent that assumption, make a looming problem evaporate through vigorous acts of imagination. You should also stop guilt-tripping people who worry about job loss through technology. They’re more right to be worried than you are right to be guilt-tripping them.
To gesture cavalierly at as-yet non-existent technologies (creating as-yet non-existent jobs) as the answer to imminent unemployment is an evasion. It’s to assume that anyone who loses their job in the present can, without trouble, tide themselves over until whenever these forthcoming jobs materialize.
And what if they can’t? I guess some people find that hard to imagine. Strange that they should. If only there were an app to help them.
Yes, absolutely. This argument is raised a lot in regard to job losses associated with the transition to green energy. When the coal mine shuts down, you’ll just shift over to the hydrogen plant or whatever. It’s not real-world thinking. On the other hand, it’s inevitable and necessary that coal plants be shut down. Telling people about to lose their jobs to stop whingeing and get with the program is no more helpful than deluding ourselves that the tide will pause for us.
Exactly. I was thinking of artificial intelligence and robotics, but the same applies in either case. It’s one thing if there is an infrastructure in place to handle the transition, unemployment insurance of some kind, and accessible, feasible training programs. But in the US, self-styled anti-Luddite rhetoric comes from people who want all of the following: smiley faced optimism about technological progress; indifference to the plight of those rendered unemployed; zero concern for transitional issues; opposition to subsidized unemployment insurance; opposition to subsidized job training; opposition to labor unions; and total credulousness about the motives and claims of business executives. That’s basically a recipe for immiseration-by-technology.
The author of the post I was criticizing linked to this piece in a triumphalist spirit:
In other words, for 100 years, people have been wringing their hands about unemployment through technological progress, and yet everything has worked out fine!
Yet the article ends with this:
The Facebook poster conveniently deleted the first sentence from his post. I guess the complexity involved–aka reality–didn’t fit his script. But the article he himself regards as Exhibit A for “Don’t worry, be happy” suggests that transitions are the whole problem. Are transitions so easy or happy? Can we just blithely smile our way through them? Not really.
And in no case does the linked article deal with the question it leaves to the end: what happened in the transitions between one technology and the next in all the cases it mentions? Were they seamless and friction-free? Believe it or not, they weren’t. So what is the point of the argument these people are making? How do they deal with the most obvious real-world problems that technology generates? For all the confidence and smug posturing involved here, the hard truth is that the polemicists involved have no fucking clue. Nor do they feel any obligation to get one.
Also omitted in the whole discussion is how gigantically overhyped the benefits of technology have often been. I work in Big Data. Big Data is one of the most colossally overhyped industries on the planet. The one thing no one ever has any reliable data on in this industry is whether the data we’re processing is of net value to human beings. Highly, highly doubtful it is. The irony is, after inducing so many people to go into data analytics–yesterday’s wave of the future–the Captains of Industry are now threatening those same people with unemployment via artificial intelligence, over-selling that, and trying to put a smiley face on the BS involved. Naturally, the people smiling are the ones making the threats, not the ones receiving them.
I think there is a kind of sanguine attitude here, with respect to human suffering, that is a double-edged sword. You don’t want to be categorically indifferent (and, at the policy level, you probably want some kind of safety net to deal with the disruption that new technology brings). But arguably, if there is too much focus on the right-now real-world suffering, it will be less likely that the long-term, big-picture good things get done (if I, or most citizens, vividly imagine the immediate suffering of real people due to long-term-good policy X, then X won’t happen). So there is social utility in the eight-miles-high quasi-utilitarian view (that is, at least in some respects, indifferent to right-now, real-world suffering). Maybe there is always something morally non-ideal about this kind of stance. We don’t want the individual or social equilibrium to tip into utter indifference, but neither do we want to sacrifice long-term to short-term social gain. I guess the question is whether, for any given stance that is indifferent to suffering in some ways, the indifference is morally culpable (in this sort of context). I think that articulating the relevant standards here is tricky. As is achieving the right sort of psycho-moral balancing act in one’s mind (and publicly, in society).
I agree with that. We have two considerations in play, both true:
The simplest solution is to allow progress to continue, but ameliorate the suffering of those adversely affected by it.
What’s objectionable is rhetoric that simply equates progress for some with progress as such, and then ignores, evades, or downplays the suffering of those adversely affected by progress, or blames them for their suffering or for complaining about it.
Ideally, though, we need a more adequate non-utilitarian framework for thinking about the all-things-considered welfare of whole societies, and a non-consequentialist framework for thinking about the consequences of actions or trends affecting whole societies.
In the individual case, we have a semi-adequate grasp of what it means to realize or sacrifice your own well-being in a given case. Likewise in a two-person or n-person interaction, where n is small. But once n becomes very large, and the causal variables proliferate, it becomes very difficult to identify courses of action or states of affairs that are, all things considered, net beneficial for literally everyone. There’s a tendency to treat quantifiable material gains as a proxy for well-being, and infer that if X leads to (say) more income for Group 1 over Group 2, then X promotes the interests of Group 1 while sacrificing those of Group 2. That can’t be right.
On the other hand, if virtually every policy change or trend benefits Group 1 while either harming or failing to help Group 2, we have prima facie evidence that something is amiss there. Once the evidence starts mounting past the merely “prima facie amiss,” it becomes culpable to gesture at Group 1’s good fortunes and equate that with Progress.
This is especially true if the membership of Groups 1 and 2 (or their literal descendants) tends to remain relatively fixed. If the same demographic cohort is rich, and the same one poor, decade after decade, and progress tends to benefit the rich and immiserate the poor, then it starts to look as though the sheer wealth of the rich and the poverty of the poor are what explain the distribution of wealth in that society, in which case appeals to progress conceal injustice.
Pingback: Failing the Empathy Exams Yet Again, Starring Chat GPT | Policy of Truth