The Hot Potato
Let's start with email, because every organization has it and nobody wants to own the dependency map for it.
Email feels simple. It's just a service. Outlook connects to Exchange, messages flow. But the moment someone actually sits down to map the dependencies underneath email, they discover something uncomfortable: five different business units depend on it, each with a different tolerance for downtime. HR can live without email for a day. The trading desk needs it within an hour. Legal can't specify a tolerance because "it depends on what's in flight." And all five business units share the same authentication infrastructure, the same VPN, the same DNS.
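To make the shape of the problem concrete, here is a minimal sketch of that email dependency picture as data. Every name, edge, and tolerance below is illustrative, not drawn from any real inventory:

```python
# Illustrative sketch: email's dependency fan-in. All names and figures
# are hypothetical examples, not a real org's inventory.
from dataclasses import dataclass, field

@dataclass
class Service:
    name: str
    depends_on: list = field(default_factory=list)

# Shared infrastructure: one instance each, referenced by every consumer.
dns = Service("dns")
vpn = Service("vpn")
auth = Service("authentication", depends_on=[dns])
email = Service("email", depends_on=[auth, vpn, dns])

# Business units with very different downtime tolerances (hours).
# None stands in for legal's "it depends on what's in flight".
tolerances = {"hr": 24, "trading_desk": 1, "legal": None}

def transitive_deps(svc, seen=None):
    """Everything a service ultimately depends on."""
    seen = seen if seen is not None else set()
    for dep in svc.depends_on:
        if dep.name not in seen:
            seen.add(dep.name)
            transitive_deps(dep, seen)
    return seen

# Every business unit's email tolerance lands on this same shared set.
print(sorted(transitive_deps(email)))  # → ['authentication', 'dns', 'vpn']
```

The point of the sketch is the fan-in: three conflicting tolerances, one shared set of infrastructure underneath all of them.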
Now the person who drew the map owns a problem. Not a technical problem — a political one. They've surfaced five conflicting recovery time objectives that all point back to the same infrastructure, and resolving that conflict requires budget negotiations between teams that don't report to each other. So what happens? The map gets drawn once, presented at an annual review, filed in SharePoint, and never touched again. Nobody updates it because updating it means re-opening the argument nobody resolved the first time.
This is the hot potato. Owning the dependency map means owning everything the map reveals. And what it reveals is almost always a conversation nobody wants to have.
The Mapping Itself Is Easy (And That's the Surprising Part)
Here's what's counterintuitive: the technical work of building a dependency map is not that hard.
You dedicate one person to it. They go talk to the infrastructure teams, the application teams, the platform teams. They document what each group says. They draw the diagram. The quality improves with practice — each iteration catches dependencies the previous one missed, each conversation surfaces connections that nobody had written down before. It takes time and patience, but it's not a technology gap. It's an interview process.
This reframes the entire problem. If mapping is straightforward, why is every map wrong? Why does Forrester report that 56% of enterprises lack a complete view of their dependencies? Why does Gartner flag "missing data owners and unclear success metrics" as the top reasons mapping initiatives fail?
Because the problem was never technical. It's governance. It's ownership. It's the organizational reality that nobody wants to be the person holding the map when the questions start.
Three Political Problems That Kill Every Dependency Map
1. Who owns it?
Ask around your organization and watch the finger-pointing start. Infrastructure teams say it's the application team's job — they're the ones who know what their services depend on. Application teams say it's the platform team's job — they're the ones who provision the infrastructure. The resilience team sometimes draws the map because nobody else will, but they don't have the authority to enforce updates when things change.
The real question isn't just "who owns the map." It's "who deals with the impact of the map." Those are two different questions, and most organizations can't answer either one. The person who draws the map inherits accountability for every gap it surfaces. That's not a role anyone volunteers for — it's a role someone gets assigned and then quietly deprioritizes until audit season.
Gartner's 2025 dependency mapping report puts it bluntly: vague goals, weak sponsorship, and insufficient testing undermine most mapping deployments. The tool works in the demo. The governance falls apart in production. Because governance means someone has to own the uncomfortable truths the map reveals, and ownership without authority is just accountability without power.
2. Who pays to fix what it reveals?
The moment a dependency map exists, it surfaces problems that cost money to solve.
Authentication has a four-hour recovery time, but three critical business services need it back within one hour. Closing that gap requires upgrading the authentication infrastructure — redundant identity providers, faster failover, maybe a complete architecture change. That costs real money. But the authentication team didn't set those business requirements. The business units that need faster recovery don't control the authentication budget. And the CTO is looking at the gap thinking "who's going to bring this to the board?"
So the gap sits there. Documented. Acknowledged. Unfunded.
This is the dirty secret of operational resilience: it's much easier to get funding when there's already an incident to point to. After an outage, budget appears overnight. Before the outage, it's "we should look at this" in every steering committee for eighteen months. Prevention doesn't have a champion because prevention doesn't have a deadline. The organizations with the most accurate dependency maps are, perversely, the ones that have already suffered a major incident. Everyone else operates on faith.
This isn't unique to resilience. Cybersecurity has the same problem. Research shows enterprises routinely overfund failure response and underfund prevention — because prevention costs are invisible and failure costs create immediate executive pressure to "do something." The entire discipline of operational resilience is fighting the same battle that security teams have been losing for twenty years.
3. Who maintains it?
This is the long-term killer, and it's the reason even good maps become bad maps.
Someone — maybe a consultant, maybe an internal resilience analyst — creates a thorough dependency map. It's accurate. It's useful. It surfaces real risks. People reference it in planning meetings. For about six months, it's a living document.
Then three new microservices get deployed. A vendor gets swapped for a competitor. An architecture migration moves half the services from on-prem to cloud. A team reorganization changes who owns what. None of it is reflected in the map. The person who drew it has moved to a different team, or a different company. The map is now wrong, but it still sits in the BCP document, creating false confidence — which is worse than having no map at all, because at least with no map people know they're guessing.
Unless the map is owned by the business itself — embedded in operational workflows, maintained by the people who actually change the infrastructure — it's not going to be sustainable. A consultant can create it. A resilience team can champion it. But the moment the effort depends on someone manually updating a diagram every time a deployment happens, entropy wins. It always wins.
The Different-RTOs-Same-System Problem
This deserves its own section because it's the single most common reason organizations avoid doing the detailed mapping in the first place.
HR accepts one-day email downtime. The trading desk needs one hour. Both depend on the same authentication infrastructure. Whose RTO determines how much gets invested in authentication resilience?
This looks like a technical question. It's actually a budget negotiation. The trading desk's one-hour requirement means authentication needs redundancy, fast failover, and tested recovery procedures. That costs significantly more than what HR's one-day tolerance would justify. So who pays the difference? The trading desk? They didn't build the authentication system. The IT department? They'll push back — their budget is already committed. The enterprise risk function? They don't control infrastructure spending.
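The arithmetic behind "whose RTO wins" is trivial, which is exactly why the argument isn't about math. A shared dependency inherits the strictest requirement of anything built on top of it (figures below are illustrative):

```python
# Illustrative: the strictest consumer sets the bar for the shared dependency.
# Figures are hypothetical, matching the HR / trading desk example above.
email_rtos_hours = {"hr": 24, "trading_desk": 1}

# Authentication must recover at least as fast as the fastest consumer
# of email needs email back, so the binding RTO is the minimum.
binding_rto = min(email_rtos_hours.values())
print(binding_rto)  # → 1; the trading desk's number drives the auth budget
```

One line of min() determines the requirement; what it can't determine is who funds the gap between a one-hour and a one-day authentication architecture.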
The dependency map forces this conversation into the open. And that's exactly why it doesn't get drawn: the map is a mirror, and nobody likes what it shows. The blocker is never the technical difficulty of mapping; it's the conversation the map makes unavoidable.
Why Incidents Are the Only Thing That Works (And Why That's a Problem)
There's a grim pattern in how organizations actually end up with accurate dependency maps: they suffer an outage bad enough that the board demands answers.
After Rogers took down Canada's Interac payment network for fourteen hours in 2022 — because of a single-provider dependency with no redundancy — Interac invested in carrier diversity, private network backup, and improved business continuity practices. The dependency map got accurate real fast. But it took a nationwide payment outage to make it happen.
This creates a perverse incentive. The only organizations with truly current, funded, and maintained dependency maps are the ones that have already been burned. Everyone else is running on assumptions, partial documentation, and the institutional knowledge of engineers who might leave next quarter.
It's the same dynamic that plays out in cybersecurity, in disaster preparedness, in every preventive discipline: the money appears after the disaster, never before. And that means the dependency map — the one artifact that could prevent cascading failures — stays underfunded and unmaintained until it's too late to matter.
What Actually Fixes This
The fix isn't more discipline. It's not a better spreadsheet template. It's not a more strongly worded governance policy. Those have all been tried and they all decay for the same reason: they require humans to do extra work with no immediate reward.
What actually works is making the dependency map a byproduct of the infrastructure itself, not a separate artifact maintained by a separate team on a separate schedule.
When someone deploys a new service, the dependency model updates. Not because they filled out a form — because the deployment pipeline carries dependency information and the model ingests it automatically. The map reflects the infrastructure because it's derived from the infrastructure.
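A deploy-time hook for this can be sketched in a few lines. The manifest format, field names, and in-memory store below are assumptions for illustration, not any particular CI/CD product's API:

```python
# Sketch of a deploy-time hook: the service's own manifest declares its
# dependencies, and the pipeline folds them into the shared model on every
# deploy. Manifest schema and store are illustrative assumptions.
import json

dependency_model = {}  # service name -> set of dependency names (the "map")

def on_deploy(manifest_json: str) -> None:
    """Called by the deployment pipeline after a successful deploy."""
    manifest = json.loads(manifest_json)
    dependency_model[manifest["service"]] = set(manifest.get("depends_on", []))

# A new microservice ships; nobody fills out a form, the map just updates.
on_deploy(json.dumps({"service": "notifications",
                      "depends_on": ["email", "authentication"]}))
print(sorted(dependency_model["notifications"]))  # → ['authentication', 'email']
```

The design choice that matters is that the declaration lives next to the deployment artifact, so the people who change the infrastructure are the ones who keep the map current, without doing any extra work to do so.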
When someone changes a vendor, the recovery chain gets re-evaluated. Not six months later at the annual review — immediately, because the change itself triggers the re-evaluation. The gap between "infrastructure changed" and "the map reflects the change" shrinks from months to minutes.
When someone sets an RTO, the system tells them whether their dependencies can actually support it. No more setting numbers in a vacuum. No more discovering during a tabletop exercise that the four-hour authentication RTO makes the one-hour email RTO impossible. The math is done at the point of decision, not during the post-incident review.
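That point-of-decision check is simple to express: a service cannot come back before its slowest dependency does. A minimal sketch, with illustrative recovery figures matching the email example:

```python
# Sketch of an RTO feasibility check at the point of decision.
# Recovery figures and the dependency table are illustrative assumptions.
recovery_hours = {"authentication": 4, "dns": 0.5, "vpn": 2}
depends_on = {"email": ["authentication", "vpn", "dns"]}

def rto_feasible(service: str, proposed_rto: float):
    """Return (feasible, blockers): the dependencies that break the target."""
    blockers = [d for d in depends_on.get(service, [])
                if recovery_hours[d] > proposed_rto]
    return (not blockers, blockers)

ok, blockers = rto_feasible("email", 1.0)
print(ok, blockers)  # → False ['authentication', 'vpn']
```

Run at the moment someone types in a one-hour email RTO, this flags immediately that the four-hour authentication recovery makes the number fiction, instead of letting a tabletop exercise discover it a year later.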
The map has to be a living thing maintained by the people who change the infrastructure — not a static artifact maintained by the resilience team once a year before an audit. The technology to do this exists. The organizational willingness to adopt it is the only thing lagging behind.
The Bottom Line
The dependency map isn't wrong because mapping is hard. It's wrong because owning it is painful. It surfaces budget conflicts nobody wants to resolve. It reveals gaps between documentation and reality that nobody wants to explain. It creates accountability that nobody wants to carry.
So it gets drawn once, filed away, and forgotten — until the outage that proves it was wrong. Then everybody cares, briefly, until the crisis passes and the cycle starts again.
The organizations that break this cycle don't do it by hiring more resilience analysts or writing better governance policies. They do it by making the map inseparable from the infrastructure it describes — so that maintaining the map isn't a separate task that requires human discipline, but an automatic consequence of building and operating the systems themselves.
Related Reading
- Your RTO Is a Lie: Recovery Time Objectives Are Chains, Not Numbers
- What Is Infrastructure Dependency Mapping? A Complete Guide
- How Companies Actually Maintain Business Continuity Plans (And Why Most Don't)
- How to Keep BCP Documentation in Sync with Infrastructure (And Why It Never Is)
- What Is Operational Resilience Modeling? From Compliance to Continuous Confidence
- Why Most BCP Software Still Can't Tell You What Breaks When Something Fails
- The Email Dependency Test: Map One Service, Find Twenty Problems
- The Rogers Outage: What It Taught Canada About Operational Resilience
- Gartner 2025 — IT Dependency Mapping Report Key Takeaways
- DataDrivenInvestor — Why Enterprises Overfund Failure and Underfund Prevention