On Oct. 9, 1779, a group of textile workers in Manchester, England, worried that mechanization would cost them their jobs, took matters into their own hands by destroying the looms and knitting frames in their factories. The English government had failed to protect thousands of jobs and dozens of trades from rapid modernization, so the craftspeople turned to hammers and torches to solve the problem.
Taking their inspiration from a young apprentice named Ned Ludd, rioters staged “Luddite” uprisings throughout England. His legacy continues today: “Luddite” remains a term for people who dislike, or adapt poorly to, new technology. From facsimile machines to automated teller machines, from video cassette recorders to online shopping, from The Clapper to telemedicine, we have been inundated with technological “breakthroughs” for decades, each seemingly arriving faster than the last — leading to the artificial intelligence (AI) revolution now underway.
Since the debut of ChatGPT last November, there have been prescient calls to slow the development of AI for fear of displacing entire industries and the loss of livelihoods that would attend it. The Museum of the Future in Dubai features on its ground floor a robot barista that serves up a mean latte. Bereft of tattoos, piercings, attitude and union membership, it may be the future of the coffeehouse workforce. Pundits have long predicted that robotics would displace low-skilled labor.
What we did not expect was that highly trained professional jobs also could be on the chopping block. Lawyers, accountants, physicians and bankers — for centuries, sought-after jobs that meant entry to the upper-middle class — are all prey for automation. That matters of national defense, health care provision and criminal justice are being outsourced to machines gives us pause — and rightly so.
We still have time to future-proof our professions and ensure that there are resources to reskill and retool, but we have to get moving. Our political class, however, has been unprepared for the predictable changes now underway. It was no secret that automation would wreak havoc on many professions and workplaces. But instead of providing clear rules on the ethical development and deployment of AI, our political leaders have been caught flat-footed, proffering “guidelines” rather than enforceable laws.
It was little surprise that Sam Altman, CEO of OpenAI, the company that birthed ChatGPT, urged Congress in May to regulate the AI industry. The White House has offered a Blueprint for an AI Bill of Rights, but it is not an actionable set of rules. Even the European Union, typically a bastion of innovation-stifling procedures, has moved ahead on the rule-making front. You know you are headed toward bureaucratic purgatory when the United Nations Secretary-General offers to create an international organization on AI to deal with its risks.
Self-regulation, like mob violence of the Luddite kind, occurs when governments fail to act. Just two weeks ago, the Frontier Model Forum, composed of top AI builders, announced that it was developing its own best practices for AI safety. It may not be long until workers in medicine, accounting and other high-skilled fields take action to protest the coming challenges to their ability to earn the wages they expect.
Back in 18th-century England, the textile and weaving jobs never returned once the industries’ processes were mechanized. Most of the craftspeople lost their livelihoods. For years, the United States has attempted to mitigate the damage from free trade agreements and trade wars by providing Trade Adjustment Assistance, funding that helps U.S. workers retrain for new industries. The results have been mixed, at best. We need such a retraining program coupled with a national strategy for ethical and sustainable AI development and deployment.
We have only a few years to ensure that artificial intelligence does not do to today’s industries and professions the kind of damage that mechanization did to the English textile trade. The digital clock is “ticking” indeed, but that sound is simulated. For those of us analog dead-enders and throwbacks with video cassette recorders still flashing “12:00,” though, it may be too late already.
James Cooper is a professor of law at California Western School of Law and a research fellow at Singapore University of Social Sciences.