Under Pressure, Every Strategy Reveals What It’s Built On
Mar 5
By Erin Sedor | Black Fox Strategy
You can’t tell what an organization is actually built on by reading its website. You can’t tell from the annual report, the values statement posted in the lobby, or the strategic plan sitting in a three-ring binder on the CEO’s shelf. You find out when something tests it. When a contract is on the table that would require compromising something essential. When an entity with significant leverage demands you change your terms or lose the relationship. When the safe choice and the principled choice are clearly not the same thing.
That test happened very publicly last week. Two of the most prominent companies in the world faced the same external pressure and responded in ways that revealed exactly what their organizations are built on — not through a crisis of character or scandal, but through a governance decision that forced both to answer a question most organizations never have to confront directly: when growth and purpose collide, which one wins?
The answer depends entirely on what you built before the pressure arrived.
Three Forces Running Through Every Strategy
Every organization has a reason it exists. Every organization has an agenda for expanding its reach and capability. And every organization is either anticipating the future or slowly falling behind it — usually without knowing which.
Purpose. Growth. Evolution.
These three forces don’t wait for you to design them into your strategy. They’re already operating, already shaping every major decision, already in tension with each other in ways that surface most clearly under pressure. The organizations that hold under that pressure aren’t the ones with the best-looking strategy documents. They’re the ones that have made those forces explicit — named them, examined how they’re weighted, and built the architecture of their decision-making around one question: when purpose, growth, and evolution pull in different directions, what holds them in balance?
Most organizations haven’t answered that question. They’ve assumed the answer, and the assumption holds — right up until the moment it doesn’t.
This is more common than most leaders want to believe. Research from Harvard Business Review found that executives report feeling 82% aligned with their company’s strategy — but actual measured alignment sits at 23%. Nearly a fourfold gap between confidence and reality. That gap doesn’t exist because leaders are careless or incompetent.
It exists because when those three forces aren't made explicit, they cannot be held in relationship with each other. Purpose drifts from growth. Growth outpaces evolution. Evolution becomes aspirational language rather than active anticipation. The organization doesn't collapse — it just slowly loses coherence.
Last week, two companies gave us a rare and unusually visible case study in all three forces playing out simultaneously. Neither company is the villain in this story. Both are operating from the logic of what they built. That’s precisely what makes it worth studying.
The details continue to evolve — but what unfolded in that moment is already instructive.
What Happened — And What It Reveals
Anthropic, one of the leading AI development companies in the world, had been contracted by the U.S. Department of Defense to deploy its AI model in classified settings, a contract valued at up to $200 million. The dispute that ended it came down to two specific restrictions Anthropic refused to remove: no use of its technology for fully autonomous weapons systems, and no use for mass domestic surveillance of American citizens. In their own public statement, Anthropic said “we do not believe that today’s frontier AI models are reliable enough to be used in fully autonomous weapons” and that “mass domestic surveillance of Americans constitutes a violation of fundamental rights.” (NPR, Feb 27 2026)
The Pentagon’s position was equally clear. It required access to AI tools for all lawful purposes and would not accept company-imposed restrictions beyond existing law. Negotiations broke down, Anthropic walked away from the table, and the contract was pulled. The supply chain risk designation that followed — something historically reserved for foreign adversaries — was applied here for the first time to an American company.
Hours later, OpenAI announced it had reached its own agreement with the Pentagon, moving quickly after watching its competitor walk away from the table. OpenAI’s CEO later acknowledged the deal had been rushed and, in his own words, “looked opportunistic and sloppy.” (CNBC, Mar 3 2026)
And then today the story turned again: Anthropic and the Pentagon returned to the negotiating table. (CNBC, Mar 5 2026)
Two companies. Same external pressure. Different outcomes. The difference wasn’t about stated values — both companies share, at least in language, the same red lines around autonomous weapons and surveillance. The difference was about architecture. About what each company had built, structurally and deliberately, before this moment arrived.
The Layer Most Commentary Is Missing
The public conversation about this situation has focused almost entirely on the principled stand — the contract loss, the regulatory designation, the politics. But there is a layer underneath that carries equal strategic weight, and it connects directly to the dimension of organizational strategy that is most consistently overlooked: the relationship between the present decisions an organization makes and the future it is actually preparing for.
Anthropic’s argument wasn’t simply that they wouldn’t allow their technology to be used for harmful purposes. It was more precise than that. Current law has not kept pace with what AI can actually do. Under existing U.S. statutes, the government can legally purchase commercially available data from brokers and deploy AI to analyze it at scale — which could, in effect, constitute mass surveillance without technically violating any law on the books today. Legal and policy experts have named this gap directly: the distance between what is technically lawful and what is functionally surveillance has collapsed, and the regulatory frameworks haven’t caught up. (Fortune, Mar 2 2026)
On autonomous weapons, Anthropic’s CEO made a point that deserves careful attention: “The constitutional protections in our military structures depend on the idea that there are humans who would, we hope, disobey illegal orders. With fully autonomous weapons, we don’t necessarily have those protections.” (Dario Amodei, NYT podcast via EA Forum, Feb 27 2026) Fully autonomous systems remove that safeguard by design. The concern wasn’t that the Pentagon has bad intentions. The concern was that the governance structures required to deploy this capability responsibly don’t exist yet — and that proceeding as if they do isn’t pragmatism, but rather a failure of foresight.
This is forward-looking strategic thinking at its most consequential: anticipating where a gap between current conditions and future reality will eventually become a crisis and refusing to act as though that gap has already been resolved. It is the opposite of reactive strategy. It is the decision made now that determines whether the organization — and in this case, the broader systems it operates within — will still be coherent when the future arrives.
OpenAI’s counter-position was to trust the law as it currently stands — reference existing statutes and policies in the contract, rely on institutional good faith, defer to legal frameworks rather than imposing additional restrictions. That’s a defensible approach. It’s also one that assumes the future will look enough like today that today’s rules will hold, or at least that today’s rules adapt in real time to bridge the gaps. History on precisely this question — surveillance programs later ruled unlawful, capabilities that technically complied while substantively circumventing the intent of legal protections — suggests that’s a reasonable bet only if you’re not the one accountable when the gap becomes visible. (MIT Technology Review, Mar 2 2026)
Two Organizations, Two Architectures, One Pressure
Anthropic built its organizational structure as a Public Benefit Corporation whose stated purpose is described not as a values statement but as “the final arbiter” in organizational decisions — their own language for what governs when competing interests collide. (anthropic.com/company) They created a Long-Term Benefit Trust with genuine governance authority: an independent body with the power to select and ultimately elect a majority of board members, meaning major investors cannot override the purpose architecture even as the company scales. (anthropic.com/news/the-long-term-benefit-trust) When growth and purpose collided last week, the architecture held. A $200 million government relationship walked out the door. That isn’t a PR decision. It’s a design decision made years earlier, expressing itself exactly as intended.
OpenAI built something different, and it’s worth tracing the arc without judgment, because the trajectory itself is instructive. Their original mission was to advance AI unconstrained by a need to generate financial return. Over nine years, that mission was revised six times. The word ‘safely,’ present in every IRS filing through 2023, was removed when the company restructured into a for-profit in 2024. The nonprofit oversight board, which once held near-total governance authority, now controls 26% of the company. (Fortune, Feb 23 2026) Each of these changes was individually defensible. Taken together, they trace the pattern of an organization whose growth imperative has been progressively elevated — with purpose language quietly redefined to remain compatible with that growth agenda rather than to govern it.
When the Pentagon pressure arrived, each organization’s architecture responded exactly as built. This is what Jim Collins documented across decades of research in Built to Last: the companies that endure aren’t the ones with the most compelling mission language. They’re the ones that have built their core ideology into the structure of how decisions actually get made — and held it there even when holding it cost them something commercially. The visionary companies in Collins’s research weren’t more profitable in the short run. They were more coherent over time. That coherence is a strategic asset. But only if you build it before you need it.
What Anthropic demonstrated is an organization whose purpose is load-bearing — not decorative. What OpenAI demonstrated is the ability to move quickly and pragmatically under pressure, to find a workable path when a harder line would cost the company a significant opportunity. Both capabilities have real organizational value. But only one of them tells you where the organization will stand fifteen years from now.
Building a Strategy Design Framework That Holds
Most organizations will never face a test this visible. The equivalent pressure usually arrives in quieter forms. A revenue opportunity that pulls the organization away from its core contribution. A growth initiative that requires moving faster than the culture can adapt. A partnership contingent on softening a principle that seemed abstract until it suddenly wasn’t. A board conversation in which the financial case for a decision is overwhelming and the strategic instinct against it is hard to articulate.
Every one of those moments is a test of the same thing: whether the organization’s reason for existing, its approach to growth, and its awareness of where it’s headed are held in conscious relationship with each other — or whether one of them is secretly running the organization while the others serve as useful language for the annual report.
Playing it safe, in the conventional sense, is not a strategy. The organization that protects near-term revenue by quietly softening its purpose isn’t being cautious. It’s making one of the riskiest long-term bets available: that the load-bearing wall it’s gradually thinning isn’t actually load-bearing. You don’t find out you’re wrong about that in a planning session. You find out when the pressure comes.
The organizations that hold under that pressure — that can absorb a significant contract loss, walk away from a powerful partnership, or refuse to let short-term growth logic override long-term organizational coherence — didn’t develop that capacity when the test arrived. They built it in advance, deliberately, into the structure of how decisions get made and what those decisions answer to.
That’s the work. Not the values statement. Not the mission language. The architecture. The question worth sitting with isn’t what your organization says it stands for. It’s what your organization is actually built to do when standing for it costs something.
It means rethinking your strategy design framework — adopting one that explicitly asks the organization to identify its strategic imperatives around purpose, growth, and evolution, and to manage those forces in conscious equilibrium.
Building this kind of context into your strategic planning is the starting point for that reckoning — and it goes deeper than a values audit. What is your organization actually designed to protect when growth and purpose pull in opposite directions? Not in the mission statement, but in the structure of how decisions get made. Where is your evolution strategy genuinely active in today's work, and where has it become aspirational language without structural support? And are purpose, growth, and evolution held in real equilibrium across your strategic priorities — or has one of them been quietly carrying the others while the rest serve as useful cover in the annual report?
These aren't abstract questions. They're the ones that determine whether your strategy holds when the pressure arrives, and the time to answer them is before the test, not during it. And it takes a radical disruption of the business-as-usual mindset to get there.

Erin Sedor is a CEO strategy coach and executive advisor with 30+ years in enterprise risk management and strategic planning. She is the founder of Black Fox Strategy and creator of the Essential Strategy Formula, a quantum-intelligent approach for designing strategy around purpose, growth, and evolution.