Windows Backup Cluster

    Case study: first PBS production backup of a 1 TB Windows server

    Honest case study from the first 876 GB full backup of a Windows bare metal server to Proxmox Backup Server using our open source GUI client: 20h30 transfer time, 75% deduplicated chunks, 57% end-to-end savings.

    9 min read

    TL;DR — the four numbers that matter

    876 GB
    source volume
    20h30
    transfer duration
    75 %
    deduplicated chunks
    57 %
    end-to-end savings

    Context: first real production run of the GUI client

    After several months of development and internal testing, our Proxmox Backup Server Windows client (open source GUI) has just completed its first full production backup on a meaningful volume. The target datastore is hosted with NimbusBackup, a sovereign cloud PBS operated in France — no dedicated line, no special VPN: just HTTPS to a managed Proxmox Backup Server.

    Machine profile: a Windows bare metal server of roughly 1 TB, in an SMB / mid-market environment. No identifying information is disclosed — the goal is to capitalize on the actual run figures, not on the end customer.

    Why publish these numbers

    The market is full of promises ("10× deduplication", "line-rate throughput") and very few concrete measurements. This article documents one real run: what it shows, where its limits are, and what it does not yet prove.

    Timeline: 20h30 of uninterrupted transfer

    The run was triggered overnight by the client's internal scheduler and ran without incident until the successfully finished backup message on the PBS side.

    Event                                Timestamp (Paris)
    Client scheduler starts              2026-04-15 01:30:38
    Auto-Split analysis ends             2026-04-15 02:01:00
    PBS index creation                   2026-04-16 01:40:49
    PBS finish (successfully finished)   2026-04-16 22:13:08
    PBS transfer duration                20 h 32 min 19 s
    The run is an initial full backup: no previous snapshot existed for this group (GET /previous: 400 no valid previous backup). This is the most expensive scenario — everything is transferred, with no benefit from a delta against a previous run.

    Volumes processed

    Metric                     Value
    Source size (pxar Size)    876,602,075,764 bytes ≈ 876.6 GB (816.4 GiB)
    Total chunks               286,616
    Average chunk size         ~3.06 MB
    Catalog (pcat1)            55.4 MB across 17 chunks

    Deduplication and compression: 57 % end-to-end savings

    This is where the PBS pipeline pays off, even on a first backup.

    Proxmox Backup Server Windows client: 75 % of chunks already known or duplicated

    • Deduplicated chunks: 215,784 out of 286,616 → 75.3 % of chunks were either already present on the datastore or identified as internal duplicates within the volume.
    • Unique chunks transmitted: 70,832 (24.7 %)
    • Unique bytes to transmit: 624.1 GB (71 % of the source) — meaning 252.5 GB saved by dedup (29 %)

    Honest reading: on a first backup, the 75 % dedup figure does not come from comparison against a previous backup — there is none. It comes essentially from internal duplicates within the volume (identical files replicated across folders, zero blocks, common headers…). Subsequent runs will additionally hit chunks already present on the datastore.
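    To make in-volume dedup concrete, here is a minimal, hypothetical sketch of content-addressed deduplication: split a byte stream into chunks, hash each with SHA-256, and count repeated digests. It uses fixed-size chunks for brevity; the real client uses content-defined boundaries (averaging ~3 MB on this run), so this illustrates the principle only, not the client's code.

```python
import hashlib

def dedup_stats(data: bytes, chunk_size: int = 4) -> tuple[int, int]:
    """Count total vs unique chunks in a byte stream.

    Fixed-size chunking keeps the sketch short; PBS-style clients use
    content-defined (rolling-hash) boundaries so inserted bytes do not
    shift every subsequent chunk.
    """
    seen: set[bytes] = set()
    total = unique = 0
    for i in range(0, len(data), chunk_size):
        digest = hashlib.sha256(data[i:i + chunk_size]).digest()
        total += 1
        if digest not in seen:
            seen.add(digest)
            unique += 1
    return total, unique

# "AAAA" appears twice: 4 chunks, 3 unique -> 25 % in-volume dedup
total, unique = dedup_stats(b"AAAABBBBAAAACCCC", chunk_size=4)
```

    Identical files replicated across folders, zero-filled regions and common headers all produce repeated digests exactly like the duplicated "AAAA" block here.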

    Compression: 60 % of post-dedup size

    On the 624.1 GB of unique bytes to transmit, PBS applies client-side compression before upload (logged ratio: Compression: 60%).

    Result on the wire: 624.1 GB × 60 % ≈ 374.5 GB actually transmitted over the network.

    Dedup + compression summary

    876.6 GB
    source
    ~374.5 GB
    on the wire
    57.3 %
    savings
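    As a sanity check, the pipeline arithmetic above can be replayed in a few lines (the figures are the run's own; the script is just the arithmetic):

```python
source_gb = 876.6        # pxar source size
unique_gb = 624.1        # unique bytes remaining after in-volume dedup
compression = 0.60       # PBS-logged client-side compression ratio

wire_gb = unique_gb * compression    # bytes actually sent over the network
savings = 1 - wire_gb / source_gb    # end-to-end savings vs. source size

print(f"{wire_gb:.1f} GB on the wire, {savings:.1%} savings")
# 374.5 GB on the wire, 57.3% savings
```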

    Average throughput

    • Source-side read: 876.6 GB / 20.54 h ≈ 42.7 GB/h • 11.85 MB/s
    • Effective network traffic (after dedup + compression): ≈ 18.2 GB/h • 5.06 MB/s • 40.5 Mbps

    These figures are averages over the entire run. Peaks are likely higher, and individual phases (large contiguous files vs. millions of small files) boost or drag instantaneous throughput. Without fine-grained profiling, we cannot isolate those variations — see "What we cannot yet claim" below.
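    For reference, the averages quoted above come straight from the run totals (a minimal computation, nothing more):

```python
# 20 h 32 min 19 s of transfer, per the PBS log
duration_s = 20 * 3600 + 32 * 60 + 19

source_gb, wire_gb = 876.6, 374.5

read_mb_s = source_gb * 1000 / duration_s   # source-side read: ~11.85 MB/s
wire_mb_s = wire_gb * 1000 / duration_s     # on the wire: ~5.06 MB/s
wire_mbps = wire_mb_s * 8                   # ~40.5 Mbps
```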

    Resilience proven: 20h30 without interruption

    This run was the 3rd attempt. The two previous ones had failed:

    • Attempt 1 (Apr 13): Windows VSS broken, snapshot impossible. Client-side fix applied.
    • Attempt 2 (Apr 14): extended network outage + lost HTTP/2 session on the PBS side. The client failed cleanly, leaving no orphan snapshot.
    • Attempt 3 (Apr 15-16): successfully finished backup after 20 h 32 min of continuous transfer.

    Several recent GUI client fixes were exercised under real conditions during this run:

    Session-lost retry (25 min)

    On a lost HTTP/2 session against PBS, the client waits and retries instead of giving up.

    H2 keepalive

    Keeps the HTTP/2 connection alive during long analysis phases, to avoid silent drops.

    Per-directory Finish()

    Per-folder validation rather than a single final block, to limit the cost of a late-run failure.

    These fixes existed before this run, but had never been exercised on a 20 h continuous window. They have now.
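    The session-lost retry can be pictured as a wrapper around the upload call. This is a hypothetical sketch with illustrative names; the actual client is not shown here and its retry policy may differ in detail.

```python
import time

def with_session_retry(upload, retry_window_s=25 * 60, backoff_s=30):
    """Retry a chunk upload for up to retry_window_s after a lost
    HTTP/2 session, instead of failing the whole backup run."""
    def wrapper(chunk):
        deadline = time.monotonic() + retry_window_s
        while True:
            try:
                return upload(chunk)
            except ConnectionError:
                if time.monotonic() >= deadline:
                    raise  # window exhausted: fail cleanly, no orphan snapshot
                time.sleep(backoff_s)
    return wrapper
```

    A short network blip is absorbed inside the window; an extended outage like the Apr 14 one still ends in a clean failure rather than a half-written snapshot.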

    Mechanical bonus: the incremental demonstration

    In parallel, another folder on the same machine — much smaller and already backed up over the previous days — was reprocessed on Apr 17 at 01:18. Result:

    • pxar: 472 bytes (a single chunk, 100 % upload size)
    • catalog: 72 bytes
    • Total duration: under one second

    Important: this folder is not the large-volume one described above. It is on the same machine but a different backup group. This measurement therefore proves nothing about the future incremental behavior of the large volume — it merely illustrates the PBS mechanism: when a previous snapshot exists and nothing changed, the client re-registers the previous snapshot's chunks and closes the index, with zero unnecessary upload.
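    The mechanism it illustrates is essentially a set difference against the previous snapshot's chunk index. A hypothetical sketch (function and variable names are illustrative):

```python
def plan_upload(previous_index: set[str], current_digests: list[str]):
    """Given the chunk digests of the previous snapshot and of the
    current scan, only digests absent from the datastore need
    uploading; known chunks are re-registered in the new index."""
    to_upload = [d for d in current_digests if d not in previous_index]
    reused = len(current_digests) - len(to_upload)
    return to_upload, reused

# Nothing changed: every chunk is re-registered, zero bytes uploaded
to_upload, reused = plan_upload({"a1", "b2", "c3"}, ["a1", "b2", "c3"])
```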

    Update Apr 19 — the 2nd run confirms the pipeline (and surfaces one last bug)

    Overnight from Apr 18 to Apr 19, a 2nd run was launched against the same group with version v0.2.69 of the GUI client (fixes on NTFS ACL upload and on the finalization handshake). The numbers are what you would hope for from an incremental against a near-stable volume — and they surface one last client-side bug.

    On an 876 GB Windows dataset, the 2nd daily backup pushed ≈ 1.2 GB over the wire (vs. 374 GB on day one) — ≈ 310× less traffic for exactly the same volume protected. The PBS dedup promise, verified in production against a real client.

    Metric                          Run 1 (Apr 15-16)    Run 2 (Apr 18-19)
    Source                          876.6 GB             876.9 GB
    Chunks                          286,616              286,695
    Deduplicated chunks             215,784 (75.3 %)     286,620 (99.97 %)
    Network upload (uncompressed)   624 GB (71 %)        1.73 GB (~0 %)
    Duration                        20 h 30              14 h 10
    NTFS ACLs uploaded              no (regex 400 bug)   yes, 10 MB

    The pipeline holds: 99.97 % of chunks are recognized as already present on the datastore, real network upload drops to under 2 GB for an 877 GB source volume. The expected incremental ratio is confirmed on this dataset — at least on the chunk / dedup / transfer side.

    The last bug: a 400 from PBS on the final /finish call prevented the snapshot from being sealed. PBS will most likely roll back this snapshot, as it did during the 2nd attempt of the initial run.

    What did go through: every piece of content was chunked and deduplicated correctly server-side, the NTFS ACL blob arrived, the catalog and manifest are consistent. Only the final validation is missing. The 876 GB scanned and 14 h of compute are not wasted: the chunks already live on the datastore. The next run, with the corrected /finish, will only replay the source scan and the final call — and will again benefit from the 99.97 % dedup. No full retransmission required.

    Another signal: duration falls from 20h30 to 14h10 with the source essentially unchanged. The delta comes mostly from avoided upload time (1.73 GB vs 624 GB); Auto-Split analysis and client-side disk reads remain irreducible costs on this volume.

    What we cannot (yet) claim

    Three points deserve to be stated openly:

    1. A formally sealed incremental snapshot is still pending. The 2nd run measured the dedup ratio (99.97 %) and effective upload (1.73 GB), but the /finish 400 prevented PBS from validating the snapshot server-side. The dedup + upload pipeline is confirmed, but an end-to-end restore of an incremental on this group will only be proved on the next run.
    2. Per-variable bottlenecks (client CPU, source disk I/O, PBS client version, TLS negotiation, network latency) are not isolated here. Average throughput is measured but not attributed.
    3. Representativeness: a run on a "business data" volume says little about a run on an active SQL database, on a folder of millions of small files, or on highly entropic data (already-compressed video, application-encrypted archives).

    Planned follow-up

    Detailed profiling (CPU, I/O, PBS client version) could not be performed in the context of this test. An internal lab will be the subject of a follow-up article to isolate bottlenecks per variable on controlled datasets (large files, small files, entropic data, open databases).

    Takeaways

    • ✅ The Windows GUI client sustains a 20h30 continuous transfer window without interruption on a near-1 TB volume.
    • ✅ On a first backup, we already observe 57 % end-to-end savings thanks to in-volume dedup + PBS compression.
    • ✅ The recent network fixes (session-lost retry, H2 keepalive, per-directory Finish) proved themselves in production on a long run.
    • ⚠️ Average throughput (~5 MB/s effective on the wire, ~12 MB/s source-side read) is measurable but not yet attributed — bottlenecks will be isolated in a dedicated lab.
    • ✅ The 2nd run (Apr 18-19, client v0.2.69) measures 99.97 % deduplicated chunks and 1.73 GB of effective upload on 877 GB of source — the expected incremental ratio is confirmed on this dataset.
    • ⚠️ This 2nd run did not seal the snapshot (/finish 400 from PBS) — the snapshot will likely be rolled back. The first restorable incremental snapshot is expected on the next run, once the client fix is deployed.

    What about Veeam?

    Compared to a standard Veeam agent on the same volume profile, the PBS pipeline competes on different terms: no ingest into a proprietary repository, no synthetic full to replay, no per-socket licensing. The traffic savings observed here (57 % on run 1, ~310× less on run 2) come from client-side dedup and stable PBS chunking. Veeam reports comparable ratios on forever-forward jobs, but behind a very different commercial model and restore chain.

    The detailed breakdown is in our Proxmox Backup Server vs Veeam comparison — compression, dedup, cost per TB, granular restore.

    Going further

    Your Windows volumes deserve the same guarantees

    NimbusBackup is a sovereign cloud PBS: a Proxmox Backup Server hosted in France, with dedicated datastore, native deduplication and an open source Windows client. The same pipeline described in this article, packaged and operated for you.