When ChatGPT came out last November, Olivia Lipkin, a 25-year-old copywriter in San Francisco, didn’t think too much about it. Then, articles about how to use the chatbot on the job began appearing on internal Slack groups at the tech start-up where she worked as the company’s only writer.
Over the next few months, Lipkin’s assignments dwindled. Managers began referring to her as “Olivia/ChatGPT” on Slack. In April, she was let go without explanation, but when she found managers writing about how using ChatGPT was cheaper than paying a writer, the reason for her layoff seemed clear.
“Whenever people brought up ChatGPT, I felt insecure and anxious that it would replace me,” she said. “Now I actually had proof that it was true, that those anxieties were warranted and now I was actually out of a job because of AI.”
What is artificial intelligence?
AI is an umbrella term for any form of technology that can perform “intelligent” tasks. For decades, AI has been mostly used for analysis — trawling huge sets of data to find patterns. But a boom in generative AI, which uses this pattern-matching to create words, images and sounds, has opened up new possibilities.
What is generative AI?
The technology powers chatbots such as ChatGPT and image generators such as Dall-E, which can create words, sounds, images and video, sometimes at a level of sophistication that mimics human creativity. This technology can’t “think” like humans do; it can find patterns and imitate speech, but it can’t interpret meanings.
How does AI learn?
AI can “learn” without a programmer telling it each step, a process called machine learning. It uses neural networks, mathematical systems modeled after the human brain, to find connections in huge data sets. The poems or images it makes may seem creative, but it’s really pattern matching based on which word is most likely to come next.
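That next-word idea can be made concrete with a toy sketch. The snippet below is not a neural network; it is a minimal bigram counter that illustrates, at a tiny scale, the same statistical principle the explainer describes: counting which word most often follows another and predicting accordingly. All names and the training sentence are invented for illustration.

```python
from collections import Counter, defaultdict

# Tiny illustrative "training" text (made up for this example).
training_text = "the cat sat on the mat the cat ate the fish"

# Count, for each word, which words follow it and how often --
# pattern matching over the data, with no rules programmed in.
follow_counts = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follow_counts[current_word][next_word] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the training text."""
    return follow_counts[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- it followed "the" more often than "mat" or "fish"
```

A chatbot does the same kind of thing with billions of parameters and vastly more context, but the output is still a statistical guess at the likeliest continuation, not comprehension.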
Is AI dangerous?
The boom in generative AI brings many exciting possibilities — but also concerns that it might cause harm. Chatbots can sometimes spread misinformation or “hallucinate” by producing information that sounds plausible but is irrelevant, nonsensical or entirely false. Generative AI can also be used to make fake images of real people, called deepfakes.
Some economists predict artificial intelligence technology like ChatGPT could replace hundreds of millions of jobs, in a cataclysmic reorganization of the workforce mirroring the industrial revolution.
For some workers, this impact is already here. Those who write marketing and social media content are in the first wave of people being replaced by tools like chatbots, which seem able to produce plausible alternatives to their work.
Experts say that even advanced AI doesn’t match the writing skills of a human: It lacks personal voice and style, and it often churns out wrong, nonsensical or biased answers. But for many companies, the cost-cutting is worth a drop in quality.
“We’re really in a crisis point,” said Sarah T. Roberts, an associate professor at the University of California at Los Angeles specializing in digital labor. “[AI] is coming for the jobs that were supposed to be automation-proof.”
Artificial intelligence has rapidly increased in quality over the past year, giving rise to chatbots that can hold fluid conversations, write songs and produce computer code. In a rush to mainstream the technology, Silicon Valley companies are pushing these products to millions of users and — for now — often offering them free.
AI and algorithms have been a part of the working world for decades. For years, consumer-product companies, grocery stores and warehouse logistics firms have used predictive algorithms and robots with AI-fueled vision systems to help make business decisions, automate some rote tasks and manage inventory. Robots have worked in industrial plants and factories since the latter half of the 20th century, and countless office tasks have been replaced by software.
But the recent wave of generative artificial intelligence — which uses complex algorithms trained on billions of words and images from the open internet to produce text, images and audio — has the potential for a new stage of disruption. The technology’s ability to churn out human-sounding prose puts highly paid knowledge workers in the crosshairs for replacement, experts said.
“In every previous automation threat, the automation was about automating the hard, dirty, repetitive jobs,” said Ethan Mollick, an associate professor at the University of Pennsylvania’s Wharton School of Business. “This time, the automation threat is aimed squarely at the highest-earning, most creative jobs that … require the most educational background.”
In March, Goldman Sachs predicted that 18 percent of work worldwide could be automated by AI, with white-collar workers such as lawyers at more risk than those in trades such as construction or maintenance. “Occupations for which a significant share of workers’ time is spent outdoors or performing physical labor cannot be automated by AI,” the report said.
The White House also sounded the alarm, saying in a December report that “AI has the potential to automate ‘nonroutine’ tasks, exposing large new swaths of the workforce to potential disruption.”
But Mollick said it’s too early to gauge how disruptive AI will be to the workforce. He noted that jobs such as copywriting, document translation and transcription, and paralegal work are particularly at risk, since they have tasks that are easily done by chatbots. High-level legal analysis, creative writing or art may not be as easily replaceable, he said, because humans still outperform AI in those areas.
“Think of AI as generally acting as a high-end intern,” he said. “Jobs that are mostly designed as entry-level jobs to break you into a field where you do something kind of useful, but it’s also sort of a steppingstone to the next level — those are the kinds of jobs under threat.”
Eric Fein ran his content-writing business for 10 years, charging $60 an hour to write everything from 150-word descriptions of bath mats to website copy for cannabis companies. The 34-year-old from Bloomingdale, Ill., built a steady business with 10 ongoing contracts, which made up half of his annual income and provided a comfortable life for his wife and 2-year-old son.
But in March, Fein received a note from his largest client: His services would no longer be needed because the company would be transitioning to ChatGPT. One by one, Fein’s nine other contracts were canceled for the same reason. His entire copywriting business was gone nearly overnight.
“It wiped me out,” Fein said. He urged his clients to reconsider, warning that ChatGPT couldn’t write content with his level of creativity, technical precision and originality. He said his clients understood that, but they told him it was far cheaper to use ChatGPT than to pay him his hourly wage.
Fein was rehired by one of his clients, who wasn’t pleased with ChatGPT’s work. But it isn’t enough to sustain him and his family, who have a little over six months of financial runway left.
Now, Fein has decided to pursue a job that AI can’t do, and he has enrolled in courses to become an HVAC technician. Next year, he plans to train to become a plumber.
“A trade is more future-proof,” he said.
Companies that replaced workers with chatbots have faced high-profile stumbles. When the technology news site CNET used artificial intelligence to write articles, the results were riddled with errors and resulted in lengthy corrections. A lawyer who relied on ChatGPT for a legal brief cited numerous fictitious cases. And the National Eating Disorders Association, which laid off people staffing its helpline and reportedly replaced them with a chatbot, suspended its use of the technology after it doled out insensitive and harmful advice.
Roberts said that chatbots can produce costly errors and that companies rushing to incorporate ChatGPT into operations are “jumping the gun.” Because chatbots work by predicting the most statistically likely word in a sentence, they churn out average content by design. That presents companies with a tough decision, she said: quality vs. cost.
“We have to ask: Is a facsimile good enough? Is imitation good enough? Is that all we care about?” she said. “We’re going to lower the measure of quality, and to what end? So the company owners and shareholders can take a bigger piece of the pie?”
Lipkin, the copywriter who discovered she’d been replaced by ChatGPT, is reconsidering office work altogether. She initially got into content marketing so that she could support herself while she pursued her own creative writing. But she found the job burned her out and made it hard to write for herself. Now, she’s starting a job as a dog walker.
“I’m totally taking a break from the office world,” Lipkin said. “People are looking for the cheapest solution, and that’s not a person — that’s a robot.”