Can the new Immich release truly replace Google Photos for families who demand both a sleek UI and rock‑solid data survival?
Immich’s 2.7.0 rollout brings a cleaner gallery, automatic duplicate cleanup, stricter CSP rules, and a mobile backup client that finally feels “production‑ready.” On the surface, the platform now mirrors Google Photos well enough to tempt the growing legion of “Google Photos refugees” looking for a self‑hosted alternative. But the real test isn’t how pretty the app looks—it’s whether the underlying backup architecture can survive the very failures that drove users away from Google’s cloud in the first place: disk crashes, database corruption, botched upgrades, and single‑site disasters. In this piece I argue that Immich’s own end‑to‑end (E2E) backup plan, combined with its current hardware and operational requirements, makes the migration a high‑stakes gamble for home‑lab builders, NAS buyers, and privacy‑first families.
Below is a reality check that separates UI polish from data resilience, using the facts from Immich’s documentation, real‑world migration stories, and the broader context of self‑hosted cloud storage.
Does Immich’s new UI actually solve the backup pain points that drove users away from Google Photos?
Immich’s 2.7.0 release introduced a gallery that automatically removes exact duplicates, a feature that previously required manual scripts or third‑party tools. The Content‑Security‑Policy hardening also reduces the attack surface for browsers that access the web UI, aligning Immich with modern security best practices. For many home‑lab admins, these changes feel like the final polish that makes Immich look “ready for the family.”
However, the UI upgrades do not address the core limitation that prompted the migration: Google’s built‑in, geographically redundant backup. Immich’s mobile apps for Android and iOS now support continuous photo and video backup, but they still push data only to the local server you run — no automatic off‑site replication is baked into the mobile backup capability. The app’s “backup” button is essentially a sync client; if the server disappears, the synced copies vanish with it. The UI may be slick, but the underlying storage remains a single point of failure.
What does Immich’s built‑in end‑to‑end backup actually protect?
Immich advertises an E2E backup workflow that snapshots the PostgreSQL database and copies the media folder to a secondary location. The workflow is optional, requires manual configuration, and assumes you have another storage node (e.g., a second NAS, an external USB drive, or a cloud bucket) ready to receive the backup.
In practice, this means you must:
- Schedule regular PostgreSQL dumps (using `pg_dump` or Immich’s built‑in script).
- Rsync the `uploads/` directory to a separate volume.
- Verify integrity after each run, because Immich does not provide built‑in checksum validation.
If any of these steps are missed, the backup is incomplete. Moreover, Immich does not automatically rotate old snapshots or enforce retention policies, leaving you to script cleanup yourself. For a family that expects “set it and forget it” reliability, the E2E backup is a manual safety net, not a guarantee.
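The manual steps above can be sketched as a single script. This is a minimal illustration, not Immich’s own tooling: the paths, database name, and the `verify_media` checksum helper are assumptions you would adapt to your deployment.

```shell
#!/bin/sh
# Sketch of a nightly Immich backup: dump the database, mirror the media
# folder, then verify the copy with checksums. The paths and database
# name below are assumptions -- adjust them to your deployment.
set -eu

SRC="/var/lib/immich/uploads"
DST="/backups/immich_uploads"
DUMP_DIR="/backups"

dump_db() {
    # Custom-format dump so pg_restore can do selective restores later.
    pg_dump -U immich -Fc immich > "$DUMP_DIR/immich_$(date +%F).dump"
}

sync_media() {
    rsync -a --delete "$SRC/" "$DST/"
}

verify_media() {
    # Immich does not checksum backups for you: hash the source tree,
    # then re-check those hashes against the destination copy.
    ( cd "$1" && find . -type f -exec sha256sum {} + | sort ) > /tmp/src.sums
    ( cd "$2" && sha256sum --quiet -c /tmp/src.sums )
}

# Uncomment to run all three steps:
# dump_db
# sync_media
# verify_media "$SRC" "$DST"
```

The checksum pass is the piece most setups skip: `rsync` alone will happily mirror a silently corrupted file, and only an independent hash comparison catches it.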
Contrast this with Google Photos, which stores originals redundantly across multiple data centers and automatically restores them after a service outage. Immich’s approach shifts the responsibility entirely onto the user: you must design, test, and maintain the backup pipeline.
How do disk failures and site disasters expose the limits of a self‑hosted Immich archive?
A single‑site deployment is vulnerable to the classic trio of hardware catastrophes:
- Disk failure – If the drive that houses the Immich media folder dies, all synced photos disappear unless a recent rsync copy exists elsewhere.
- Database corruption – PostgreSQL can become unrecoverable after an abrupt power loss, and Immich offers no built‑in point‑in‑time recovery.
- Failed upgrade – Immich’s migration scripts have been known to lock the database when a required intermediate version is skipped, potentially leaving the whole catalog inaccessible.
Because Immich’s backup system does not automatically replicate to an off‑site location, a fire, flood, or theft that destroys the entire server also destroys the photo archive. The only way to mitigate this is to maintain a separate disaster‑recovery node, which many home‑lab users consider “extra work” and often skip.
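One low‑effort mitigation is to layer an off‑site copy on top of the local backup, for example by pushing the nightly artifacts to a cloud bucket with rclone. A sketch, assuming you have already configured an rclone remote (the remote name `offsite` and the bucket paths are illustrative):

```shell
# crontab fragment: after the local dump (02:00) and rsync (03:00) finish,
# push both to an off-site bucket. "offsite" is an rclone remote you must
# set up first with `rclone config`; the paths are examples.
0 4 * * * rclone sync /backups/immich_uploads offsite:immich/uploads
0 5 * * * rclone copy /backups offsite:immich/dumps --include "immich_*.dump"
```

Even a cheap object-storage bucket turns the “fire, flood, or theft” scenario from total loss into an inconvenience, at the cost of one more credential and transfer job to monitor.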
Real‑world accounts illustrate the gap. A user who exported their Google Photos library via Google Takeout, then imported the files into Immich, later discovered that a faulty SSD rendered the server unbootable. Without a pre‑existing off‑site copy, the migration effort was lost — the very scenario Google Photos was designed to prevent. The lesson is clear: the migration itself does not create resilience; it merely moves the data to a new, potentially more fragile location.
Can a home‑lab admin meet Immich’s backup requirements without adding complexity?
The answer depends on how much operational overhead you’re willing to accept. Immich’s documentation suggests a simple cron job that runs `pg_dump` and `rsync` nightly. For a single‑user NAS, that may be straightforward:

```shell
# crontab: nightly database dump at 02:00, media sync at 03:00
0 2 * * * pg_dump -U immich -Fc immich > /backups/immich_$(date +%F).dump
0 3 * * * rsync -a /var/lib/immich/uploads/ /backups/immich_uploads/
```
But a robust solution should also:
- Verify checksums after each rsync to catch silent corruption.
- Rotate backups to keep a 30‑day history without filling the disk.
- Test restores regularly, ensuring the dump can be re‑imported and the media folder re‑linked.
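The rotation and restore‑test chores can be sketched as follows. Again, this is an illustration of the pattern, not Immich tooling: the 30‑day window, the paths, and the throwaway `immich_restore_test` database name are assumptions.

```shell
#!/bin/sh
# Sketch of the retention and restore-test chores Immich leaves to the
# operator. The paths, 30-day window, and scratch database name are
# illustrative assumptions.
set -eu

rotate_backups() {
    # Keep a rolling window: delete dumps older than $2 days.
    find "$1" -name 'immich_*.dump' -type f -mtime +"$2" -delete
}

test_restore() {
    # Restore the newest dump into a scratch database to prove it is usable,
    # then throw the scratch database away.
    latest=$(ls -t "$1"/immich_*.dump | head -n 1)
    createdb immich_restore_test
    pg_restore -d immich_restore_test "$latest"
    dropdb immich_restore_test
}

# Example: rotate_backups /backups 30
# Example: test_restore /backups
```

A backup that has never been restored is a hope, not a backup; scheduling `test_restore` monthly is the cheapest way to find out a dump is broken before you need it.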
These steps quickly turn a “single‑node” setup into a mini‑backup infrastructure, which defeats the appeal of a “plug‑and‑play” self‑hosted photo library. For families that lack the time or expertise to script and monitor these tasks, the risk of data loss remains high.
In contrast, the self‑hosted cloud storage market positions its solutions as alternatives to Dropbox or Google Drive, emphasizing that the user must handle redundancy themselves — a point made in a recent Kindalame analysis of ProjectSend. Immich follows the same model: it provides the UI and sync client, but the backup guarantees are left to the operator.
Is the trade‑off worth it for privacy‑first families?
Privacy‑first families often cite data ownership as the primary motivator for leaving Google Photos. One user wrote that they chose Immich because it “offers the same commonality with Google Photos while giving me more control of the ownership of my data.” The sentiment is valid: you no longer hand your images to a corporation that can repurpose them.
But ownership without durability is hollow. If the only place your family’s memories live is a single NAS that could fail tomorrow, the privacy gain is outweighed by the risk of total loss. The decision therefore hinges on whether you can afford to invest in a secondary storage site (e.g., a cheap cloud bucket, a second NAS at a relative’s house, or a periodic offline backup to tape).
For many hobbyists, the cost of a second site is modest compared to a monthly Google Photos subscription, but the operational discipline required is not. The trade‑off is essentially: you gain control, but you also inherit the responsibility of disaster recovery.
If you already run a home‑lab with redundant storage pools, regular snapshots, and a tested restore process, Immich’s 2.7.0 UI upgrades may be the final piece you needed to retire Google Photos. If you are still building that foundation, the migration is premature; you’d be swapping one single‑point‑of‑failure for another, just with a prettier front‑end.
What’s your experience? Have you already migrated your Google Photos library to Immich and built a reliable backup pipeline, or are you still weighing the cost of a second storage node? Share your successes, setbacks, or any scripts you’ve found useful for automating Immich’s E2E backup. Let’s discuss how the community can make self‑hosted photo archives both private and resilient.
