Here's a question worth sitting with for a moment: if ransomware hit your servers tonight, a fire took out your office tomorrow morning, or an employee accidentally deleted the last three years of client files — how long before you'd be back to normal? An hour? A day? A week?
If you don't have a confident, specific answer, you're in the majority. According to Infrascale's SMB disaster recovery survey, only 5% of small businesses have documented both a recovery time objective and a recovery point objective. The other 95% are operating on assumption.
The gap between what people assume ("we have backups, we'll be fine") and what recovery actually looks like is where disasters turn into closures. This post walks through the math, the failure modes, and — most importantly — how to calculate your own real recovery window before you need to find out the hard way.
What RTO and RPO Actually Mean
Most IT documentation throws these terms around without translating them into business terms. Here's what they mean in practice:
RPO (Recovery Point Objective) is about data loss. It answers: if something goes wrong right now, how far back does your data go? If your last backup ran yesterday at midnight and your server fails at 4 PM today, you lose 16 hours of work. That 16-hour window is your current RPO. For some businesses, losing a day of records is survivable. For others — think medical practices, law firms, or financial services — even a few hours of lost data creates legal and compliance exposure.
RTO (Recovery Time Objective) is about downtime. It answers: once something breaks, how quickly must you be operational again? RTO isn't how long recovery takes — it's the maximum your business can afford. A restaurant POS going down for 30 minutes is painful. A logistics company's dispatch system going down for three days is potentially fatal.
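Both definitions reduce to simple time arithmetic. A minimal sketch of the RPO calculation, using the midnight-backup example from above (the dates are illustrative):

```python
from datetime import datetime

def current_rpo_hours(last_backup: datetime, failure_time: datetime) -> float:
    """Data-loss window: hours of work done since the last successful backup."""
    return (failure_time - last_backup).total_seconds() / 3600

# The example from the text: backup ran at midnight, server fails at 4 PM.
last_backup = datetime(2024, 5, 1, 0, 0)
failure = datetime(2024, 5, 1, 16, 0)
print(current_rpo_hours(last_backup, failure))  # 16.0 hours of lost work
```

The same comparison works for RTO: measure how long a drill takes from failure to verified operation, and check it against the maximum your business can afford.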
The dangerous part isn't that businesses have bad RTOs. It's that they have untested RTOs — or none at all. Confidence without verification is the setup for the worst day of your professional life.
What Recovery Actually Looks Like by Scenario
Here's the honest timeline breakdown. These aren't worst-case numbers — they're representative outcomes based on real incident data.
| Scenario | Likely RTO | What you're dealing with |
|---|---|---|
| Ransomware — no backup | 2–6 weeks (or never) | Pay ransom (only 32% chance of full recovery), or rebuild from scratch. 60% of businesses in this situation close within 6 months. |
| Ransomware — cloud-only backup, no immutable copy | 1–3 weeks | 94% of ransomware attacks attempt to destroy backups first. 66% succeed in the US. If your backups were also encrypted: same as above. If they survived: bandwidth-constrained cloud restore takes days. |
| Ransomware — 3-2-1-1 + tested immutable backup | 4–48 hours | Isolate affected systems, validate a clean restore point, begin recovery. Well-tested plans consistently hit 4–8 hours for critical systems. |
| Hardware failure — no backup | Days to weeks | Data recovery lab ($300–$1,500+, no guarantee). New hardware: 1–5 days. Rebuilding from scratch: 1–3 weeks per server or workstation. |
| Hardware failure — local backup | 2–8 hours | Restore to replacement hardware. Speed depends on data volume and whether backups were recently tested. |
| Hardware failure — cloud backup only | 4–24 hours | Speed is limited by your internet connection. Restoring 500 GB over a 100 Mbps connection takes ~11 hours of pure transfer — before any configuration work. |
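The bandwidth math behind that last row is worth being able to reproduce for your own data volume. A sketch that assumes pure transfer time at the full link rate, with no protocol overhead or throttling:

```python
def restore_hours(data_gb: float, link_mbps: float) -> float:
    """Pure transfer time: GB -> gigabits -> megabits, divided by link rate, in hours."""
    megabits = data_gb * 8 * 1000      # decimal units: 1 GB = 8 Gb = 8,000 Mb
    seconds = megabits / link_mbps
    return seconds / 3600

print(round(restore_hours(500, 100), 1))   # ~11.1 hours, before any configuration work
```

Real restores run slower than this floor: overhead, retries, and the restore target's disk speed all add time, which is why the table's range extends to 24 hours.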
The widely cited 24-day average for ransomware recovery isn't because the technology to do better doesn't exist. It's because most businesses hit that scenario without a tested plan. The gap between "we have backups" and "we can recover in 4 hours" is a tested, documented, verified recovery process.
The Backup Paradox: Why "We Have Backups" Isn't Enough
This is the part most IT vendors would rather skip: 30–40% of backup restores fail when actually attempted. Not "might fail someday." Fail when the business needed them most.
Backups can show green checkmarks in the dashboard — jobs completing, no errors reported — while being completely unrestorable. The causes are mundane and preventable:
- Files in use at backup time were silently skipped — no error logged
- Database transaction logs weren't captured, leaving the database in a corrupt state
- Encryption keys rotated after the backup, making old data unreadable
- Backup software version mismatches between when data was backed up and when it's being restored
- Storage media degradation discovered only when trying to pull data off it
A backup that has never been restored is not a backup. It's a hope. The only way to know your backup works is to restore from it — not to a temporary location, but through a full recovery drill that proves your critical systems actually come back online correctly.
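The verification half of a drill, proving the restored files actually match what was backed up, can be sketched by hashing both directory trees. This is an illustrative helper, not any vendor's tool; a real drill would also verify services and databases come online, not just files:

```python
import hashlib
from pathlib import Path

def tree_digest(root: str) -> dict:
    """Map each file's relative path to its SHA-256 digest."""
    digests = {}
    root_path = Path(root)
    for f in sorted(root_path.rglob("*")):
        if f.is_file():
            digests[str(f.relative_to(root_path))] = hashlib.sha256(f.read_bytes()).hexdigest()
    return digests

def verify_restore(source: str, restored: str) -> list:
    """Return relative paths that are missing or differ in the restored copy."""
    src, dst = tree_digest(source), tree_digest(restored)
    return sorted(p for p in src if dst.get(p) != src[p])
```

An empty result means every source file came back byte-for-byte; anything in the list is a silent restore failure of exactly the kind described above.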
The 3-2-1-1 Rule: What Proper Backup Architecture Looks Like
The 3-2-1 rule has been the industry standard for years: keep 3 copies of your data, on 2 different types of media, with 1 copy stored offsite. The ransomware era made that insufficient. The updated 3-2-1-1 framework adds one more requirement: 1 immutable copy that cannot be altered or deleted once written.
The fourth element — the immutable copy — is what changed with ransomware. The 3-2-1 rule assumes your backups are intact when you need them. Modern ransomware attacks specifically target and destroy backup infrastructure before triggering encryption. 94% of ransomware attacks attempt to compromise backup systems first. 66% succeed in the US.
An immutable backup uses write-once, retention-locked storage: once written, that data cannot be modified or deleted for the retention period, regardless of who has administrator access. Ransomware can't touch it. Neither can a disgruntled IT admin. Neither can human error.
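On AWS, for example, this maps to S3 Object Lock. A minimal sketch, assuming a bucket that was created with Object Lock enabled; the bucket and key names are placeholders, and the S3 client is passed in rather than constructed here:

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

def retain_until(days: int, now: Optional[datetime] = None) -> datetime:
    """Retention-lock expiry: the object cannot be modified or deleted before this."""
    return (now or datetime.now(timezone.utc)) + timedelta(days=days)

def upload_immutable(s3_client, bucket: str, key: str, body: bytes, days: int) -> None:
    """Write one backup object under Object Lock COMPLIANCE mode.

    COMPLIANCE mode blocks deletion by everyone for the retention period,
    including the root account -- which is the property that defeats both
    ransomware and a compromised admin credential."""
    s3_client.put_object(
        Bucket=bucket,
        Key=key,
        Body=body,
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=retain_until(days),
    )
```

The retention period is the design decision: long enough to cover how late a ransomware infection is typically discovered, short enough that storage costs stay sane.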
Why Paying the Ransom Is Not a Recovery Plan
Every year, more businesses pay ransoms hoping to shortcut the recovery process. The data on how that works out is grim.
- Only 8% of businesses that paid a ransom got all their data back (Sophos/Cybereason).
- 1 in 3 organizations that paid the ransom still couldn't recover their data — even after payment (Veeam 2024 Ransomware Trends Report).
- Organizations that paid got back only 61% of their data on average.
- 97% of organizations that had tested, working backups recovered their data. The contrast is stark.
Paying the ransom doesn't eliminate recovery costs — you still have to rebuild affected systems, patch the vulnerability that allowed the attack, and deal with the downtime during negotiation. The ransom payment is often the smallest part of total incident cost, which averages $120,000–$1.24 million for US SMBs when all costs are included.
How to Calculate Your Own Recovery Time
This is a framework you can run through this week. It takes about an hour and produces a number that should change how you think about IT risk.
1. List every system your business depends on. Include accounting software, CRM, email, file shares, line-of-business applications, POS, phone system, and SaaS tools. Don't skip the ones you use daily but rarely think about.
2. For each system, define the impact of an outage at 1 hour, 4 hours, 1 day, and 1 week. What specifically stops working? Who can't do their job? Which clients get impacted? Are there manual workarounds, and how long can those realistically hold?
3. Classify each system into three tiers based on how long you can actually survive without it:
   - Tier 1 (Critical, zero tolerance): the business stops without it. Target RTO: under 4 hours.
   - Tier 2 (Important, painful but survivable): disrupts operations, but short-term workarounds exist. Target RTO: under 24 hours.
   - Tier 3 (Supporting, inconvenient): slows work but doesn't stop it. Target RTO: 24–72 hours.
4. Calculate your hourly downtime cost. Add your average hourly revenue to the cost of staff time lost. For a 20-person professional services firm billing $200K/month: hourly revenue ≈ $1,136 (assuming 176 working hours per month), plus 20 employees at a $35/hr average = $700/hour in idle staff cost. Total: ~$1,836/hour. At 576 hours (24 days around the clock), that's $1.06 million in lost productivity alone, before ransom, recovery labor, or legal costs.
5. Set your RTO target based on what you can actually afford to lose. Not "it would be nice to recover in 4 hours," but the maximum downtime that doesn't threaten the business. Then ask your IT provider what it takes to hit that number reliably.
6. Schedule a recovery drill, once per quarter for Tier 1 systems. Simulate a failure, restore from backup, bring systems online, verify data integrity. If it takes longer than your RTO target, you have work to do before an actual incident.
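The classification and hourly-cost parts of the framework reduce to a few lines of arithmetic. A sketch using the 20-person firm from the example; the 176 working hours per month is the assumption behind the $1,136 figure:

```python
def hourly_downtime_cost(monthly_revenue: float, headcount: int,
                         avg_wage: float, working_hours_per_month: float = 176) -> float:
    """Lost revenue per working hour plus idle staff cost per hour."""
    return monthly_revenue / working_hours_per_month + headcount * avg_wage

def rto_tier(max_tolerable_hours: float) -> str:
    """Map a system's tolerable downtime onto the three tiers."""
    if max_tolerable_hours < 4:
        return "Tier 1 - Critical"
    if max_tolerable_hours < 24:
        return "Tier 2 - Important"
    return "Tier 3 - Supporting"

cost = hourly_downtime_cost(200_000, 20, 35)
print(round(cost))        # 1836 per hour
print(round(cost * 576))  # ~$1.06M over a 24-day outage
```

Swap in your own revenue, headcount, and wage numbers; the output is the figure to put in front of whoever approves the backup budget.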
What Proper Backup Actually Costs
The ROI on backup infrastructure is one of the clearest in IT. Here are current market rates:
| Option | What's included | Approx. cost |
|---|---|---|
| Backblaze Business Backup | Cloud backup per device, self-managed | $99/device/year (~$8/mo) |
| MSP basic managed backup | Cloud backup, daily jobs, monitoring | $10–$15/endpoint/month |
| MSP standard managed backup | Cloud + local, monitoring, monthly test | $15–$25/endpoint/month |
| Full BDR (Business Continuity) | Immutable + local + cloud, instant virtualization, quarterly DR test | $30–$75/endpoint/month |
Full BDR-level managed backup at the low end of that range ($30/endpoint/month for 20 seats) runs $7,200 per year. One ransomware incident costs $120,000 at the low end — and that's before counting the 60% probability of closure within six months if you can't recover quickly. The protection-to-risk ratio doesn't require a spreadsheet.
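If you do want the spreadsheet version, the standard framing is expected annual loss. A sketch; the 10% annual incident probability is an illustrative assumption, not a figure from the data above:

```python
def expected_annual_loss(incident_cost: float, annual_probability: float) -> float:
    """Classic expected-loss estimate: cost of one incident times its yearly odds."""
    return incident_cost * annual_probability

backup_cost = 30 * 20 * 12                  # full BDR at $30/endpoint, 20 seats: $7,200/yr
risk = expected_annual_loss(120_000, 0.10)  # low-end incident cost, assumed 10% odds
print(backup_cost, round(risk))             # 7200 vs 12000
```

Even at the low-end incident cost and a modest assumed probability, the protection costs less than the annualized risk, and that's before weighting for the chance of outright closure.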
The Question Worth Answering Now
The businesses that recover from data disasters aren't the ones with better luck. They're the ones that ran the calculation before the incident — who knew their RTO, had tested their backups, and built the infrastructure to hit their target.
If you don't have a confident answer to "how long to recover," that's the starting point. Run the framework above, find out what your current backup actually covers, and test whether it works. An hour of preparation now is worth more than six months of trying to rebuild afterward.
If you want a second set of eyes on your current backup setup, we run free backup assessments for businesses in the Philadelphia area — no commitment, no sales pitch. We'll tell you what you have, what it covers, and what gaps exist.