pyratelog

personal blog
git clone git://git.pyratebeard.net/pyratelog.git

commit 3069e2490d5586a75206b208853a7ef40c76f319
parent 958d88619427d680802c82b151ab3c2adf5c5eb8
Author: pyratebeard <root@pyratebeard.net>
Date:   Tue,  1 Nov 2022 23:47:29 +0000

smoke_me_a_kipper

Diffstat:
M entry/smoke_me_a_kipper.md | 15 +++++++++------
1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/entry/smoke_me_a_kipper.md b/entry/smoke_me_a_kipper.md
@@ -2,18 +2,18 @@
 
 Earlier this year I wrote about my [backup setup](20220414-speak_of_the_dedup.html) and recently I had to put it to the test.
 
-My PC is a tower that I have on a small stand next to my desk. In the past I had kept the case on my desk but it is rather large and dominates the space a bit too much. The other day my 1 year old toddled into the study and started pushing the power button on my PC power cycling the machine a few times in quick succession. This was unknown to me until the next morning when I booted up my PC but noticed it was very sluggish and it crashed trying to open my browser. After it happened again I started digging through the logs and noticed some filesystem corruption.
+My PC is a tower that I have on a small stand next to my desk. In the past I had kept the case on my desk but it is rather large and dominates the space a bit too much. The other day my 1 year old toddled into the study and started pushing the power button on my PC, power cycling the machine a few times in quick succession. This was unknown to me until the next morning when I booted up my PC and noticed it was very sluggish and it crashed trying to open my browser. After it happened again I started digging through the logs and noticed some filesystem corruption.
 
 As I described in my backup setup post, I have a 3 disk RAID 5 array as my $HOME. Because of the size I only nightly backup important documents, etc. A full backup is done periodically to an external drive I keep in my bug out bag. Unfortunately I had not done a full back in a while, but I knew my nightly backups were good so nothing too important was lost.
 
-I had used xfs on my $HOME, so I unmounted the device and started an `xfs_repair`. The repair tool very quickly got to Phase 3, showing the output
+I had used [xfs](https://en.wikipedia.org/wiki/XFS){target="_blank" rel="noreferrer"} on my $HOME, so I unmounted the device and started an `xfs_repair`. The repair tool very quickly got to Phase 3, showing the output
 
 ```
 Phase 3 - for each AG...
         - scan and clear agi unlinked lists
         - 09:50:01: scanning agi unlinked lists - 0 of 32 allocation groups done
 ```
-The last line was repeated every 15 minutes, for over 36 hours, never changing from 0 allocation groups done. I don't think it was doing anything. Eventually I stopped it and ran the repair in check mode. This caused a segmentation fault at Phase 3. I tried again but got the same segfault.
+The last line was repeated every 15 minutes, for over 36 hours, never changing from "0 allocation groups done". I don't think it was doing anything. Eventually I stopped it and ran the repair in check mode. This caused a segmentation fault at Phase 3. I tried again but got the same segfault.
 
 After a few days of digging around and trying different things I decided the effort wasn't worth it. Reluctantly I accepted my losses and started the recovery.
 
@@ -21,7 +21,7 @@ Once the RAID array was reformatted I began the data copy from my external drive
 
 This got me to a relatively good position. Okay I had lost some random downloads and a little bit of code that hadn't been pushed to my git server, but nothing serious. It is a little disappointing though, my backup setup is not good enough.
 
-I would like to give [zfs](https://en.wikipedia.org/wiki/ZFS){target="_blank" rel="noreferrer"} a try, or even attempt a mini [ceph](https://ceph.io/en/){target="_blank" rel="noreferrer"} setup, but that would need some planning and some equipment purchases. I need something in the interim.
+I would like to give [zfs](https://en.wikipedia.org/wiki/ZFS){target="_blank" rel="noreferrer"} a try, or even attempt a mini [ceph](https://ceph.io/en/){target="_blank" rel="noreferrer"} setup, but that would need some planning and some equipment purchases. I needed something in the interim.
 
 An external drive was purchased, which now sits permanently plugged into my PC. Instead of using `dedup` again I opted for an alternative tool, in this case I went with [BorgBackup](https://www.borgbackup.org/){target="_blank" rel="noreferrer"}.
 
@@ -61,7 +61,7 @@ Chunk index: 1620226 2214299
 ------------------------------------------------------------------------------
 ```
 
-Nine and a half hours was quicker than I was expecting. Over the next few days I ran backups each evening after I finished work.
+Nine and a half hours was quicker than I was expecting. Over the next couple of days I ran backups each evening after I finished work.
 
 ```
 ──── ─ borg create -v --stats /media/backup/borg-kinakuta::$(date +%Y%m%d) $HOME
@@ -109,6 +109,7 @@ Chunk index: 1634630 6477362
 ------------------------------------------------------------------------------
 ```
 
+Fourteen and twelve minutes to backup changes is great. I decided to leave it for two days and observe the time again.
 ```
 ------------------------------------------------------------------------------
 Repository: /media/backup/borg-kinakuta
@@ -129,7 +130,9 @@ Chunk index: 1647966 8540968
 ------------------------------------------------------------------------------
 ```
 
-I am really happy with these results from `borg`. When I get chance the next step is to play around with [borgmatic](TK){target="_blank" rel="noreferrer"} to automate the backups.
+Almost thirteen minutes for two days of changes, pretty good.
+
+I am really happy with these results from `borg`. When I get chance the next step is to play around with [borgmatic](https://torsion.org/borgmatic/){target="_blank" rel="noreferrer"} to automate the backups.
 
 Another full backup will still be done to the drive in my bug out bag, I just have to be better at doing it more regularly. At least now if I need to restore I will be able to recover all of $HOME and not only the important things.
 
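
The entry ends by naming borgmatic as the next step. Until that is set up, a minimal sketch of automating the nightly run might look like the following; the repository path and archive naming are taken from the borg commands in the diff above, while the script name, schedule, and prune policy are purely illustrative assumptions.

```
#!/bin/sh
# Sketch only: repeat the nightly borg backup shown in the post,
# then thin out old archives. The prune policy is an example, not the
# author's actual setup.
REPO=/media/backup/borg-kinakuta

# create a new archive named after today's date, as in the manual runs
borg create -v --stats "$REPO::$(date +%Y%m%d)" "$HOME"

# keep a rolling window of archives so the external drive doesn't fill up
borg prune -v --list --keep-daily 7 --keep-weekly 4 --keep-monthly 6 "$REPO"
```

Dropped into cron (for example `0 22 * * * /path/to/borg-nightly.sh`, path hypothetical) this would cover the evening runs until borgmatic takes over.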