Thailand News and Discussion Forum | ASEANNOW


This is why you need a proper Backup / Disaster Recovery System

Featured Replies

What The Actual Flip!!

 

NOT Thailand but one of the world's most technologically advanced countries ...

 

G-Drive Fire Destroys 125,000 Officials' Data

No Backup Available for Government Cloud System, Recovery Uncertain

 

Published 2025.10.02 00:55 · Updated 2025.10.02 06:50

 

It was confirmed on the 1st that the ‘G-Drive’, the online work-cloud storage system used by central government officials, was completely destroyed in the fire at the National Information Resources Service in Daejeon on the 26th of last month. Unlike other online administrative systems, the G-Drive had no backup copies, so the damage is expected to be severe. The Ministry of Personnel Management, where all affiliated officials use the G-Drive, is particularly hard hit. A source from the ministry said, “It’s daunting, as eight years’ worth of work materials have completely disappeared.”

 

https://www.chosun.com/english/national-en/2025/10/02/FPWGFSXMLNCFPIEGWKZF3BOQ3M/

 


"I don't want to know why you can't. I want to know how you can!"

Just now, Crossy said:

NOT Thailand but one of the world's most technologically advanced countries ... G-Drive Fire Destroys 125,000 Officials' Data

Never had that happen. At the enterprise level, there are offsite backups and RAID setups, which protect against hardware (drive) failures by mirroring data, or striping it with parity, across multiple drives in a single system, so that no single drive is a point of failure.

Great link, didn't realise Chosun had an English website. I'll add this to the daily digest.

Reaching for my Time Machine drive (macOS) now.... 😓

  • Author

Even my lowly work files reside on my lappie, mirrored real-time to the cloud (when there's an internet connection of course), the cloud is then copied daily onto my NAS at home.

 

The cloud also syncs to my desktop if it's turned on (most of the time if I'm in Thailand).

 

Most of my files (or recent versions of them) are also attached to emails.

 

It's not perfect particularly against a ransomware attack but it will do.

 


Surprised they didn't back up to government servers housed on a secure military base, or in a secure undisclosed location. This makes me suspicious that people in the government wanted the data destroyed. Perhaps they are covering their tracks.

  • Author

30 of these at $800 a pop with some ancillary gubbins could have provided an off-site mirror.

 

Hardly bank breaking for a government department.

 

 

[image attachment]

 

 


  • Author

I asked you-know-who for some ideas.

 

Great question. At ~858 TB primary, you’re in “enterprise-scale” territory where the right answer is a layered strategy, not a single product. Here’s a pragmatic blueprint that balances RPO/RTO, cost, and ransomware resilience.

1) Core principles (aim for this)

  • 3-2-1-1-0 rule: 3 copies, on 2 different media, 1 off-site, 1 immutable/air-gapped, and 0 backup verification errors (automated testing).

  • Segment by criticality: Fastest recovery for the 10–20% of data that drives most of the business; economical retention for the rest.

  • Ransomware-first: Immutable snapshots + immutable object storage/tape + isolated management network + MFA.
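The 3-2-1-1-0 rule above can be expressed as a simple predicate for auditing a backup posture. This is an illustrative sketch, not from any product; the function name and parameters are made up for the example:

```python
def satisfies_3_2_1_1_0(copies, media_types, offsite_copies,
                        immutable_copies, verification_errors):
    """True if a backup posture meets the 3-2-1-1-0 rule:
    3 copies, 2 media, 1 off-site, 1 immutable/air-gapped, 0 verify errors."""
    return (copies >= 3 and media_types >= 2 and offsite_copies >= 1
            and immutable_copies >= 1 and verification_errors == 0)

# Local repo + immutable object copy + tape, on disk/object/tape media:
print(satisfies_3_2_1_1_0(3, 3, 2, 2, 0))   # True
# Two copies on one media type fails the rule:
print(satisfies_3_2_1_1_0(2, 1, 0, 0, 0))   # False
```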

2) Reference architecture (Good → Better → Best)

Good (cost-efficient, solid RPO/RTO for most workloads)

  • On-array snapshots (hourly to daily) retained short-term (24–72 h) for fast rollbacks.

  • Scale-out backup to a deduped repository on-prem (e.g., commodity storage with software that supports global dedupe).

  • Off-site copy to immutable object storage (S3/Object Lock, Azure Immutable Blob, GCS Bucket Lock) with 30–90 days retention.

  • Periodic tape (LTO-9) for quarterly/annual archives (air-gap, cheapest long-term).

RPO: 24 h (backups) / 1–4 h (snaps)
RTO: hours for most datasets; longer for very large restores.

Better (adds quicker DR)

  • Everything in Good, plus:

  • Backup replication to a second site (or a different cloud region) with object immutability.

  • Synthetic fulls weekly + daily incrementals to speed up restores and reduce impact on primaries.

RPO: 4–12 h typical
RTO: sub-hour for key VMs/apps via storage/VM snapshots; hours for bulk.

Best (low RPO/RTO for crown jewels)

  • Everything in Better, plus:

  • Array-native async replication for the top-tier workloads to a warm standby array in DR.

  • Journal-based replication (e.g., CDP) for sub-minute RPO on a small tier of ultra-critical apps.

  • Automated recovery testing (sandbox restores, app checks) and a clean-room recovery environment.

RPO: minutes for Tier-1, 4–12 h for others
RTO: minutes to low hours for Tier-1

3) Capacity & bandwidth sizing (sanity checks)

You’ll tune these to your environment, but here are useful ballparks.

Assumptions (adjust as needed):

  • Primary dataset: 858 TB

  • Daily change rate: 1–5% typical (use your metrics)

  • Effective dedupe/compression in backups: 3:1 (varies 2–10:1)

Daily offsite data movement (approx.)

  • 1% change ⇒ 8.58 TB/day raw ⇒ ~0.79 Gbps continuous; with 3:1 ⇒ 0.26 Gbps

  • 2% change ⇒ 17.16 TB/day raw ⇒ ~1.59 Gbps; with 3:1 ⇒ 0.53 Gbps

  • 5% change ⇒ 42.90 TB/day raw ⇒ ~3.97 Gbps; with 3:1 ⇒ 1.32 Gbps

If you target finishing the copy in ~12 hours instead of 24, double those bandwidths. Plan WAN accordingly (or seed and use cloud gateways).
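The bandwidth figures above can be reproduced with a small calculator. A sketch, assuming the 3:1 reduction and the change rates stated in the text (decimal TB throughout):

```python
def offsite_gbps(primary_tb, change_rate, dedupe_ratio=3.0, window_hours=24.0):
    """Sustained throughput (Gbps) to ship one day's changed data off-site.

    primary_tb  : primary dataset size in TB (1 TB = 10**12 bytes)
    change_rate : daily change as a fraction, e.g. 0.02 for 2%
    dedupe_ratio: effective dedupe+compression reduction, e.g. 3.0 for 3:1
    window_hours: hours available to finish the daily copy
    """
    daily_tb = primary_tb * change_rate / dedupe_ratio   # TB actually sent
    bits = daily_tb * 1e12 * 8                            # bits per day
    return bits / (window_hours * 3600) / 1e9             # Gbps sustained

# 2% change, 3:1 reduction, 24 h window -> ~0.53 Gbps, matching the text
print(round(offsite_gbps(858, 0.02), 2))
# Halving the window to 12 h doubles the requirement -> ~1.06 Gbps
print(round(offsite_gbps(858, 0.02, window_hours=12), 2))
```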

Local backup repository sizing (example policy):

  • Retention: 30 daily incrementals + 4 weekly synthetic fulls.

  • Weekly full (logical 858 TB) @ 3:1 ⇒ ~286 TB each → 4 copies ⇒ ~1.14 PB.

  • Dailies @ 2% change (17.16 TB/day) @ 3:1 ⇒ ~172 TB for 30 days.

  • Working total: ~1.3 PB usable (add 20–30% headroom ⇒ ~1.6–1.8 PB).
    If change rate is 1% and/or better dedupe, you’ll land closer to 0.9–1.2 PB; at 5% change, plan 2.2–2.6 PB.
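Plugging that retention policy into a quick sizing function reproduces the totals. A sketch under the stated assumptions (4 weekly synthetic fulls, 30 dailies, 3:1 reduction):

```python
def repo_usable_pb(primary_tb, change_rate, dedupe_ratio=3.0,
                   weekly_fulls=4, daily_incrementals=30, headroom=0.25):
    """Usable local-repo capacity (PB) for the retention policy above.

    Stores `weekly_fulls` synthetic fulls plus `daily_incrementals` daily
    incrementals at `dedupe_ratio` reduction, with `headroom` (fraction)
    for growth. All defaults are the assumptions from the text.
    """
    full_tb = primary_tb / dedupe_ratio                          # one full
    dailies_tb = primary_tb * change_rate / dedupe_ratio * daily_incrementals
    working_tb = full_tb * weekly_fulls + dailies_tb
    return working_tb * (1 + headroom) / 1000                    # TB -> PB

# 2% change: working set ~1.32 PB; ~1.64 PB with 25% headroom
print(round(repo_usable_pb(858, 0.02, headroom=0.0), 2))
print(round(repo_usable_pb(858, 0.02), 2))
```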

Off-site/object storage: mirror the above retention or keep shorter locally (fast) and longer in object/tape (cheap).

4) Technology choices (pick what your team can operate well)

  • Backup platforms: mature enterprise players support scale-out repos, immutability, synthetic fulls, and instant/near-instant recovery. Shortlist a couple and POC with your data for dedupe ratios and restore speeds.

  • Repositories: scale-out, erasure-coded object or dedupe appliances; ensure WORM/immutability support end-to-end (repo + target).

  • Cloud object: S3/Object Lock (compliance or governance mode), Azure Immutable, GCS Bucket Lock. Use lifecycle to tier to colder storage for cost (e.g., Glacier Deep Archive) while keeping the first 30–90 days in a hotter tier for restores.

  • Tape: LTO-9 (18 TB native) for true air-gap; consider a vaulting cadence that matches compliance needs.

  • Array-native replication: use if you already run dual sites or need low RTO for a subset of workloads.

  • Security: MFA/just-enough-admin, separate backup credentials/PKI, immutability + isolation for backup infra, signed backups, anomaly/change-rate detection, and network egress controls.

5) Operational playbook

  • Classify data: Tier-1 (minutes RTO/RPO), Tier-2 (hours), Tier-3 (days). Apply the Best/Better/Good accordingly.

  • Test restores: monthly sample restores; quarterly app-level DR tests; annual full DR exercise.

  • Ransomware drills: practice clean-room restores with immutable copies.

  • Runbooks: codify who does what, in what order; store offline copies.

  • Observability: backup success SLOs, restore SLOs, change-rate alerts.

6) What I’d recommend for 858 TB (typical enterprise mix)

  1. Primary: hourly snapshots (24–72 h).

  2. Local backups: scale-out dedupe repo sized ~1.6–1.8 PB usable (as above).

  3. Off-site: daily copy to immutable object storage with 90 days retention; lifecycle to cold tier for older sets.

  4. Archive: quarterly and annual LTO-9 vaulting for 7+ years (if needed).

  5. DR for top 10–20%: array or journal-based replication to secondary site; runbook-tested failover.

  6. Security: immutable everywhere, MFA, isolated mgmt plane, quarterly recovery tests.

If you share your daily change rate, dedupe expectations, retention policy, and RPO/RTO targets, I can size the exact repository, off-site storage, and WAN bandwidth and suggest 2–3 product stacks that fit your budget and ops model.

You said:
please do, budget is government level so not unlimited
 
ChatGPT said:
 
 

Awesome — I’ll give you a concrete, “government-grade” plan with right-sized numbers and a few product stacks that teams actually run at this scale. I’ll assume conservative-but-realistic defaults, then show how to tweak if your environment is noisier/quieter.

Assumptions (easy to swap later)

  • Primary dataset: 858 TB

  • Daily change rate (logical): ~2% (plan for 1–5% range)

  • Backup effective reduction (global dedupe+compression): ~3:1

  • Retention (operational): 30 dailies + 4 weeklies kept on fast local repo

  • Off-site copy: 90 days in immutable object storage

  • Archive: quarterly + annual tape (air-gapped)

  • Targets: RPO 4–12 h (most workloads), RTO minutes–hours for Tier-1 via snapshots/instant recovery


Sizing you can take to procurement

1) Local backup repository (fast restore tier)

  • Weekly synthetic full (858 TB ÷ 3) ≈ 286 TB each → 4 copies ≈ 1.144 PB

  • Dailies: 2% of 858 TB = 17.16 TB/day raw → ÷3 ≈ 5.72 TB/day
    30 days ≈ 171.6 TB

  • Working total ≈ 1.316 PB usable → add 20–30% headroom ⇒ ~1.6–1.8 PB usable

If your change rate is 1% → repo ~0.9–1.2 PB; if 5% → ~2.2–2.6 PB usable.

2) Off-site immutable object (90-day window)

  • 4 weeklies ≈ 1.144 PB + 90 dailies (5.72 TB × 90) ≈ 515 TB

  • Total ≈ 1.66 PB logical stored.
    For on-prem object with erasure coding (e.g., 10+2), plan ~1.2–1.4× raw overhead.
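The erasure-coding overhead is straightforward to compute for any k+m scheme. A sketch; the 10+2 default is the example quoted above:

```python
def raw_capacity_pb(logical_pb, data_shards=10, parity_shards=2):
    """Raw disk needed for `logical_pb` of logical data under k+m erasure
    coding: (k + m) / k overhead. A 10+2 scheme stores 12 shards for every
    10 data shards -> 1.2x, at the low end of the ~1.2-1.4x range above."""
    return logical_pb * (data_shards + parity_shards) / data_shards

# 1.66 PB logical at 10+2 -> ~1.99 PB raw
print(round(raw_capacity_pb(1.66), 2))
```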

3) WAN for daily off-site copy

  • 2% change (17.16 TB raw/day) → after 3:1 ≈ 5.72 TB/day
    Continuous average over 24h ≈ 0.53 Gbps
    If you want it done in 12h, target ~1.06 Gbps sustained.

  • 1% / 5% change quick refs: 0.26 / 1.32 Gbps (24h window).

4) Tape (air-gap archives, LTO-9 recommended)

  • LTO-9 native 18 TB: an annual full of 858 TB needs ~48 cartridges (single copy).
    Double-copy for vaulting parity ⇒ ~96 per year.

  • Throughput planning: to write 858 TB in ≤48 h, at ~1 GB/s/drive (best-case with compressible data), you need ~5 drives writing in parallel; at native ~400 MB/s/drive, ~12–13 drives.
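The tape numbers above follow from the native LTO-9 figures (18 TB/cartridge, ~400 MB/s/drive). A sketch; `copies=2` reflects the dual-copy vaulting mentioned in the text:

```python
import math

def tape_plan(dataset_tb, cartridge_tb=18, copies=2,
              window_hours=48, drive_mb_s=400):
    """Cartridges and parallel drives for a full export to LTO-9.

    Returns (total cartridges across all copies, drives needed to write
    one copy within `window_hours` at `drive_mb_s` per drive).
    """
    cartridges = math.ceil(dataset_tb / cartridge_tb) * copies
    bytes_per_s = dataset_tb * 1e12 / (window_hours * 3600)
    drives = math.ceil(bytes_per_s / (drive_mb_s * 1e6))
    return cartridges, drives

# 858 TB -> 96 cartridges (dual copy), 13 drives at native speed in 48 h
print(tape_plan(858))
```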


Reference architecture (what to build)

  1. On-array snapshots (hourly→kept 24–72 h) for rollbacks + instant recoveries.

  2. Scale-out backup to a deduped local repo sized ~1.6–1.8 PB usable.

  3. Daily off-site copy to immutable object (S3/Object Lock or on-prem object with WORM) with 90-day retention; lifecycle older sets to colder tiers.

  4. Tape quarterly + annual for 7–10 yr compliance (true air-gap).

  5. Tier-1 DR (~10–20% of data): storage-native async replication or journal-based CDP to a warm DR array/cluster; prebuilt clean-room network for ransomware recovery.

  6. Security & ops: MFA, separate backup admin domain, isolated mgmt network, immutable everywhere, anomaly/change-rate alerts, regular restore testing (monthly samples, quarterly app DR, annual full).


Product stacks that fit (pick one approach and POC)

A) Value / Modular (widely used in gov)

  • Backup software: Veeam Backup & Replication (or Commvault if you prefer a single pane for more workloads).

  • Local repo: scale-out Linux/XFS repositories or Veeam Hardened Repository nodes; or a dedupe appliance.

  • Immutable object:

    • Cloud: AWS S3 with Object Lock (Compliance mode) / Azure Immutable / GCS Bucket Lock, or

    • On-prem: Cloudian HyperStore / Scality RING with Object Lock-compatible WORM.

  • Tape: IBM TS4500 or HPE StoreEver LTO-9 library (WORM media for compliance).

  • Tier-1 DR/CDP (optional): Zerto or Veeam CDP for minute-level RPO on a subset.

Why: Excellent cost/performance, strong immutability story end-to-end, easy staffing.


B) Integrated “ransomware-first” platform

  • Rubrik Security Cloud (appliance or software) or Cohesity DataProtect + SmartFiles.

  • Immutable snapshots, anomaly detection, instant recovery to scale.

  • Off-site: cloud or on-prem object with immutability; both vendors support air-gapped/cyber-vault patterns.

  • Tape: attach TS4500/StoreEver for quarterly/annual exports.

Why: Deep security posture (MFA, SLA domains, threat hunting), streamlined operations. Higher list price, but strong outcomes and support in public sector.


C) Vendor-aligned (if your arrays are Dell or NetApp)

  • Dell: PowerProtect Data Manager + DD Series (Retention Lock) for local repo; ECS object for immutable off-site; Cyber Recovery vault for an isolated clean room.

  • NetApp: On-array SnapMirror/SnapVault for Tier-1 + BlueXP Backup & Recovery to StorageGRID (WORM) or S3; tape via NDMP for archives.

Why: Tight integration with array snapshots/replication; great for low RTO on Tier-1.


DR runbook (how this actually plays out)

  1. Small blast (user error/ransomware <72 h): roll back via storage snapshots or instant-mount from backup repo; verify clean with EDR before re-exposing.

  2. Bigger blast: recover into clean-room (isolated cluster + identity) from immutable repo/object; promote back after forensics.

  3. Site loss: fail over Tier-1 via async replication/CDP; bulk restore Tier-2 from off-site object to DR site; rehydrate cold archives later.

  4. Testing: monthly synthetic full verification; quarterly app-level failovers scripted; annual tape restore audit.


What to buy (quick cut list)

Hardware / storage

  • Local repo: 1.6–1.8 PB usable scale-out (12–16× high-capacity nodes or a dedupe appliance cluster with equivalent usable).

  • Object off-site: ~1.7 PB logical (size raw per platform’s erasure coding).

  • Tape library: initial ~200–400 LTO-9 slots (covers a few years of quarterlies+annuals with dual copy) and 6–12 drives depending on write windows.

Network

  • WAN: ≥1.1 Gbps sustained if you want a 12-hour off-site window at 2% change; shape/burst to 2–3 Gbps if change spikes.

  • Isolated management network for backup infra + vault.

Software

  • One of the stacks above (A/B/C), with immutability enabled end-to-end, MFA, RBAC, and anomaly/change-rate alerts.


  • Author

Then I asked the acid question "Owmuchizzit?"

 

12 million US over 5 years - Nothing to a government!

Sum & ballpark estimate

Let’s put it all together (5-year view):

Component                              | Estimated 5-year cost (USD)
---------------------------------------|----------------------------
Backup repo cluster (capex + refresh)  | 2,500,000
Off-site immutable object storage      | 1,660,000
Tape library & media                   | 300,000
WAN / connectivity                     | 300,000
Software licenses & support            | 2,500,000
Staff & operations                     | 2,500,000
Total                                  | ≈ 9,760,000

So roughly USD 8–12 million over five years (or ~USD 2–3 million per year fully loaded) is plausible for a robust, government-grade backup / disaster recovery solution for ~858 TB of primary data, with full immutability, local fast restore repository, off-site object immutability, and tape archiving.

 


  • Author

The real question is what scandal is the South Korean government trying to pull where they needed the data to go missing?


Jeff Geerling has an answer to the data storage dilemma 😋

 

The Petabyte Pi Project

 

2 hours ago, Crossy said:

"Surprised" doesn't even come close, "totally flabbergasted with bells on" is probably closer 🙂  

Nothing "totally flabbergasts" me in this world anymore.

 

Surprised, yes; flabbergasted, no. A story like this surprises me to the point I simply become suspicious.

1 hour ago, Crossy said:

The real question is what scandal is the South Korean government trying to pull where they needed the data to go missing?

Welcome to the dark side. :smile:

 

Perhaps they were hacked and too embarrassed to go public and had no choice but to erase. 

1 hour ago, Crossy said:

The real question is what scandal is the South Korean government trying to pull where they needed the data to go missing?

 

Wow....how innocent am I?............That never crossed my mind.

 

Like a fire sale of the contents of Bondi's desk???

  • Author
1 minute ago, KhunHeineken said:

Perhaps they were hacked and too embarrassed to go public and had no choice but to erase. 

 

Knowing the Korean psyche this could actually ring true!


5 minutes ago, Crossy said:

 

Knowing the Korean psyche this could actually ring true!

It would be a real loss of face for them if it was North Korea who did the hack.  :smile:
