
AI Won’t Make You Obsolete. Your Fear of It Might.

  • Writer: Erin Sedor
  • 6 days ago
  • 9 min read

By Erin Sedor | Black Fox Strategy


Every time I open my computer, I see stories about AI and what it is — or mostly is not — doing to advance humankind. It’s like a drumbeat that just keeps getting louder. AI is coming for your job. AI will replace you. AI is the beginning of the end of human relevance in the workplace.


The latest drumbeat came from Microsoft’s AI CEO, Mustafa Suleyman, who told the Financial Times this month that AI will achieve “human-level performance on most, if not all professional tasks” within the next twelve to eighteen months. Lawyers, accountants, project managers, marketers — anyone who sits at a computer for a living, effectively automated by 2027. His words, not mine.


It's in boardroom conversations, where people worry about how the competition is getting leaner with it. It's in breakroom conversations, where people wonder whether they should be worried about the latest tech initiative. It's in the quiet anxiety of people wondering if the role they've spent twenty years building is about to be automated into irrelevance.


I get the fear. It’s real. But it’s also misdirected.


AI is a tool. A powerful one. A genuinely remarkable one. It processes data at speeds we can’t comprehend, identifies patterns we’d never spot, and automates tasks that used to eat entire workdays. That’s not trivial. It’s valuable. But AI doesn’t think. It doesn’t feel. It doesn’t choose. It doesn’t care about the outcome. Calling it “intelligence” is convenient shorthand, not literal truth. The moment we start treating the tool like the craftsman, we’ve lost the plot entirely.


This Is Not a New Story

Here’s the thing about technology advancement: we’ve been here before. Many times. And we keep forgetting how the story actually ends.


The printing press was supposed to eliminate the need for scribes and scholars. The mechanical loom was supposed to destroy the textile industry’s workforce. The assembly line, the calculator, the spreadsheet, the internet—every single one of these triggered the same primal fear: this will make us obsolete.


None of them did. What they did was eliminate certain tasks—the repetitive, the mechanical, the time-consuming—and in doing so, they freed humans to do something more complex, more creative, more valuable. The weavers who lost work to the loom weren’t ultimately replaced by the loom. They were replaced by a failure of imagination about what weavers could become next.


That failure of imagination is the real threat right now. Not the technology.


The pattern isn’t replacement — it’s elevation. Every major tool humanity has built has ultimately pushed us toward higher-order work. Work that requires judgment, creativity, empathy, moral reasoning—the things that make us irreplaceably human. AI is no different. It’s the latest in a long line of tools built to handle the mechanical so we can focus on the meaningful.


And the research is already confirming it. A 2024 study published in Science Advances found that generative AI enhances individual creativity—but reduces the collective diversity of novel content. In other words, AI makes each person’s output better while making everyone’s output more similar. It raises the floor and lowers the ceiling. That’s exactly what a tool does. It handles the craft. It doesn’t replace the artist.


The question isn’t whether the pattern will hold. The question is whether we’ll have the wisdom to see it this time, or whether we’ll let fear drive us into the same short-sighted reactions that have slowed down every previous transition.


Where the Real Risk Lives

I’m not dismissing the disruption. It’s real and it demands attention. The World Economic Forum’s Future of Jobs Report projects that 170 million new roles will be created globally by 2030, while 92 million existing roles will be displaced. That’s a net gain of 78 million jobs—but the transition between those two numbers is where lives get disrupted, careers get upended, and communities feel the pain.


The risk isn’t AI itself. The risk is how we deploy it.


And here’s where the Suleyman prediction actually undermines itself. In the same interview where he declared white-collar work all but finished, he described how Microsoft’s own software engineers have shifted from writing code to “debugging, scrutinizing, doing the strategic stuff like architecting.” That’s not automation replacing people. That’s elevation. The very pattern he ignores in his headline claim is already playing out inside his own company. Tasks get automated. Humans move to higher-order work. The job doesn’t disappear. It transforms.


But when you’ve invested thirteen billion dollars in AI technology and your business model depends on enterprises buying AI-powered tools, the incentive is to sell the revolution, not describe the nuance. And that’s where the conversation gets dangerous. Because conflating task automation with job elimination isn’t just inaccurate—it creates the very fear that drives organizations toward the wrong deployment decisions.


When organizations treat AI as a people-replacement strategy rather than a people-amplification strategy, the results are predictable and ugly. Gartner's 2025 research found that 72% of CIOs report their organizations are breaking even or losing money on AI investments, presumably not because the tech doesn't work, but because the strategy (and intent) behind the deployment is wrong. You can't strip the humans out of the system and expect the system to perform.


BCG’s 2025 research tells the same story from a different angle. When leadership actively supports AI integration—meaning they invest in the humans alongside the technology—employee positivity toward AI jumps from 15% to 55%. But only one in four frontline employees says they actually receive that support. There’s your failure point. It’s not a technology gap. It’s a leadership gap.


Meanwhile, MIT Sloan’s 2025 EPOCH research—one of the most rigorous studies on AI and labor to date—found something that should reframe this entire conversation: new tasks emerging across all U.S. occupations carry significantly higher scores in empathy, presence, judgment, creativity, and hope than the tasks they’re replacing. The economy is already reorganizing around human capabilities. The work that’s growing isn’t the work machines do. It’s the work only humans can do.


The data isn't ambiguous. The organizations that succeed with AI are the ones that invest in people first and technology second. The ones doing it backwards face a significant risk of failure.


The Difference Between AI and Human Innovation

Here’s what I keep coming back to, and it’s the thing most of the AI conversation ignores:

AI doesn’t ask why. It doesn’t wonder about purpose. It has no stake in whether the work it produces matters to anyone, serves anyone, or contributes to anything beyond the task parameters it was given.


That’s not a flaw. It’s a feature—of being a tool—and it’s the difference between AI and human innovation. Hammers don’t ask why either. But the person holding the hammer had better know.


There are questions that every individual, every organization, and every society has to answer for itself. Questions no algorithm will ever generate: What are we here for? How do we grow in ways that are intentional and sustainable? How do we evolve with what’s coming instead of being consumed by it?



These are human questions. Purpose, growth, evolution. They’re the operating principles of anything alive. They apply to your career. Your organization. Your community. The way you raise your kids and the future you’re building for them.


AI doesn’t grapple with any of that. And the day we stop grappling with it ourselves—because we’ve outsourced so much thinking to machines that we’ve forgotten how to do the thinking that matters—that’s the day we should worry.


Forced Action vs. Inspired Action

There’s an ancient wisdom principle that keeps proving itself in modern contexts: the Law of Inspired Action. Action aligned with purpose creates momentum and flow. Forced action—action misaligned with the natural order of things—creates resistance.


Watch what happens when organizations deploy AI to eliminate people. The resistance is immediate and visceral. Morale craters. Institutional knowledge walks out the door. The remaining workforce disengages because they’ve just been told, in the clearest possible terms, that they’re a cost to be minimized rather than a capability to be developed. That’s forced action. And it produces exactly what forced action always produces: diminishing returns.


Now watch what happens when organizations deploy AI to free people up. The bookkeeper who used to spend forty hours a week on data entry becomes the financial strategist who spots trends and advises on resource allocation. The data analyst buried in spreadsheets becomes the insight architect who translates patterns into decisions. The manager drowning in administrative tasks finally has time to actually lead—to coach, to develop talent, to build the kind of team cohesion that no software can manufacture.


That’s inspired action. It works because it’s aligned with what humans are actually for: creating meaning, building relationships, exercising judgment, and doing the kind of complex, ambiguous, emotionally intelligent work that machines fundamentally cannot do.


The difference between these two approaches isn’t philosophical. It’s measurable. And every piece of current research confirms it.


What We Should Be Championing

We should be louder about this. All of us. Not just business leaders—everyone. Parents, educators, employees, community leaders. We should be championing the idea that AI is the thing that frees people to do better, harder, more meaningful work. Not the thing that makes people disposable.


Marcus Buckingham’s work on human performance identifies something he calls “red threads”—the specific activities that strengthen you, that make you lose track of time, that leave you more energized after doing them than before. The tragedy of most modern work isn’t that it’s hard. It’s that people spend the vast majority of their time on tasks that have nothing to do with their red threads. The administrative. The repetitive. The mechanical.


AI can take those tasks off the table. Not to eliminate jobs, but to make room for the work that actually matters—both to the organization and to the human doing it. That’s not a threat. That’s a gift. And it’s one we should be talking about with a lot more conviction and a lot less hand-wringing.


The numbers support the conviction. That net increase of 78 million jobs by 2030 also shows that the fastest-growing roles aren’t all in tech—they include nurses, teachers, care workers, and farmworkers. Roles that require presence, empathy, judgment, and human connection. The economy isn’t just creating new jobs. It’s creating more human jobs. Meanwhile, 91% of learning and development professionals say human skills are more important than ever, not despite AI, but because of it.


We should be teaching people to work alongside AI the way we taught them to work alongside spreadsheets and email and every other tool that once seemed threatening: by investing in the capabilities that matter more, not less, when the routine work is handled. Critical thinking. Relational intelligence. Creative problem-solving. The ability to sit with ambiguity and make a judgment call when no algorithm can tell you what's right.


The alternative is not appealing. It leads to organizations hoarding AI capabilities while cutting headcount. It leads to a generation of workers convinced their skills are worthless before they’ve had the chance to discover what those skills become when the busywork is gone. It leads to societies that define human value by productivity metrics that a machine can always beat, instead of by the qualities that make us irreplaceable.


That’s not a future worth accepting. And we don’t have to.


Fear Is the Enemy. AI Is Not.

A universal truth that bears repeating is this: fear is the real enemy.


Fear makes us reactive instead of intentional. Fear makes organizations reach for the cost-cutting play instead of the investment play. Fear makes people cling to tasks they’ve outgrown because the known feels safer than the unknown, even when the unknown is where all the growth lives. Fear narrows our vision at the exact moment we need it to be expansive.


And fear is exactly what predictions like Suleyman’s are designed to produce. When an AI CEO tells the world that your profession will be automated in eighteen months, he’s not describing an inevitability. He’s creating urgency for a product. That’s not insight. That’s a sales cycle. And it joins a growing chorus—Anthropic’s CEO warning that half of entry-level white-collar jobs could vanish, Ford’s CEO making similar claims, Elon Musk announcing artificial general intelligence by year’s end—each prediction more breathless than the last. Meanwhile, as Fortune itself observed, the technology has so far made only a small splash in professional services. The gap between the prophecy and the reality should tell us something.


I'm not saying we should ignore the implications and impacts of AI advancement. But at the end of the day, it's change, and as humans, we know how to change. Navigating it means embracing the uncertainty for all the risks AND opportunities it presents.


We managed our way through the printing press and the loom and the assembly line and the internet. Perhaps not as gracefully as we could have, but transitions are messy and people are too.


The question was never whether machines would get smarter. It was always whether we'd get wiser about what to do with them, and whether we'd do it from a place of fear or clarity.


Erin Sedor is an executive advisor and strategic performance expert with 30+ years helping organizations build strategy that works. She writes about the intersection of new science, ancient wisdom, and better business at ErinSedor.com.



About Erin Sedor

With more than three decades of experience navigating high-growth organizational environments to manage strategic risk and organizational change, there's not much I haven't seen. My practice has put me alongside executives in organizations of all sizes, types, and industries: vision alignment, risk visibility, and strategic performance are always the topics at hand. Leaders who hire me are confident and excited about the journey they are on and recognize the value of thought diversity and independent perspective. They are looking for the insight they need to make meaningful and effective strategic decisions that will move the organization forward.

Erin Sedor, Black Fox Strategy