

Huh, I think you’re right.
Before discovering ZFS, my previous backup solution was rdiff-backup. I have memories of it being problematic for me, but I may be misremembering why it caused problems.


Thanks! I was not aware of these options, along with what the other poster mentioned about --link-dest. These do turn rsync into a backup program, which is something the root article should explain!
(Both are limited in some respects compared to other backup software, but they might still be a simpler yet effective solution. And sometimes simple is best!)
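For reference, a minimal sketch of the --link-dest approach (the directory layout is hypothetical): unchanged files become hard links into the previous snapshot, so every snapshot looks complete but only changed files take new space.

    today=$(date +%F)
    rsync -a --delete \
      --link-dest=/backups/latest \
      /home/user/ "/backups/$today/"
    # Repoint "latest" at the snapshot we just made.
    ln -sfn "/backups/$today" /backups/latest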


Ah, I didn’t know of this. It should be in the linked article, because it’s one of the ways to turn rsync into a real backup! (I didn’t know this flag; I thought this was the main point of rdiff-backup.)
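If the flag in question is --backup/--backup-dir (an assumption on my part), the idea is that rsync updates the mirror but parks any file it would overwrite or delete in a dated directory, instead of losing it:

    # Paths are made up for the example.
    rsync -a --delete \
      --backup --backup-dir="/backups/changed-$(date +%F)" \
      /home/user/ /mirror/home/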


Beware rdiff-backup. It certainly does turn rsync (not a backup program) into a backup program.
However, I used rdiff-backup in the past and it can be a bit problematic. If I remember correctly, every “snapshot” you keep in rdiff-backup uses as many inodes as the thing you are backing up. (Because every “file” in the snapshot is either a file or a hard link to an identical version of that file in another snapshot.) So this can be a problem if you store many snapshots of many files.
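If you want to check whether inode exhaustion is actually a risk on your filesystem, df can report inode usage (the mount point here is hypothetical):

    df -i /backups
    # IUse% near 100% means new snapshots will start failing
    # even with free disk space left.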
But it does make rsync a backup solution; a plain snapshot or a redundant copy is very useful, but it’s not a backup.
(OTOH, rsync is still wonderful for large transfers.)


I run mbsync/isync to keep a maildir copy of my email (hosted by someone else).
You can run it periodically with cron or systemd timers; it connects to an IMAP server and downloads all emails to a directory (in maildir format) for backup. You can also use this to migrate to another IMAP server.
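As a rough sketch (account names, paths, and credentials are made up, and some option names changed between isync versions, e.g. TLSType used to be SSLType), a minimal ~/.mbsyncrc plus a cron entry could look like:

    IMAPAccount remote
    Host imap.example.com
    User me@example.com
    # Fetch the password from a password manager; avoid plaintext Pass.
    PassCmd "pass show mail/example"
    TLSType IMAPS

    IMAPStore remote-store
    Account remote

    MaildirStore local-store
    Path ~/Mail/
    Inbox ~/Mail/INBOX
    SubFolders Verbatim

    Channel backup
    Far :remote-store:
    Near :local-store:
    Patterns *
    Create Near
    SyncState *

Then a crontab entry like */30 * * * * mbsync -a pulls everything down every half hour.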
If the webmail sucks, I wouldn’t run my own. I would consider using Thunderbird instead. It is a desktop/Android application that syncs mail to your desktop/phone, so most of the time it’s working with local storage, which makes it much faster than most webmails.


https://charity.wtf/2021/08/09/notes-on-the-perfidy-of-dashboards/
Graphs and stuff might be useful for doing capacity planning or observing some trends, but most likely you don’t need either.
If you want to know when something is down (and you might not need to know), set up alerts. (And do it well: you should only receive “actionable” alerts. And after setting up alerts, you should work on reducing how many actionable things you have to do.)
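In Nagios (which is what I use), a first step towards “actionable” is alerting only on hard states, for things someone would actually fix; a hypothetical service definition:

    define service {
        use                  generic-service
        host_name            homeserver
        service_description  HTTP
        check_command        check_http
        max_check_attempts   5          ; only page on a hard state
        notification_options c,r        ; CRITICAL and recovery only
    }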
(I did set up Nagios to ship metrics to ClickHouse, plotted with Grafana. But mostly because I wanted to learn a few things and… I was curious about network latencies and wanted to plan storage a bit longer term. But I could live perfectly well without those.)


Not sure how it handles video, but I’ve been meaning to take a look at https://getbananas.net/


How much storage do you want? Do you want any specific features beyond file sharing?
How much experience do you have self-hosting stuff? What is the purpose of this project? (E.g. do you want a learning experience, to avoid commercial services, or do you just need file sharing?)


To be fair, if you want to sync your work across two machines, Git is not ideal because, well, you must always remember to push. If you don’t push before switching to the other machine, you’re out of luck.
Syncthing has no such problem, because it’s real time.
However, it’s true that you cannot combine Syncthing and Git. There are solutions like https://github.com/tkellogg/dura, but I have not tested it.
Options are somewhat lacking in this space. For some people, it might be nicer to run an online IDE.
…
To add something, I second the “just use Git over ssh without installing any additional server” advice. A variation is using something like Gitolite on top of raw Git if you need to support multiple users and permissions; it’s still lighter than running Forgejo.
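For anyone who hasn’t done it, the no-server setup is just a bare repository reachable over ssh (host and path are placeholders):

    # On the server: create a bare repository.
    ssh user@server 'git init --bare ~/repos/project.git'

    # On each machine: use it as a normal remote.
    git remote add origin user@server:repos/project.git
    git push -u origin main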


Reminder that you can go for hybrid approaches; receive email and host IMAP/webmail yourself, and send emails through someone like AWS. I am not saying you can’t do SMTP yourself, but if you want to just dip your toes, it’s an option.
You get many of the advantages; you control your email addresses, you store all of the email and control backups, etc.
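To illustrate the sending half (assuming AWS SES and Postfix; the region, hostnames, and credentials are placeholders), the relay part boils down to a few lines of main.cf:

    # /etc/postfix/main.cf (excerpt): relay outbound mail through SES.
    relayhost = [email-smtp.us-east-1.amazonaws.com]:587
    smtp_sasl_auth_enable = yes
    smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
    smtp_sasl_security_options = noanonymous
    smtp_tls_security_level = encrypt

(/etc/postfix/sasl_passwd maps the relayhost to your SMTP credentials; run postmap on it after editing.)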
…
And another thing: you could also play with https://chatmail.at/relays, which is pretty cool. I had read about Delta Chat, but only decided to play with it recently and… it blew my mind.


If you are going to run Jellyfin or some other media sharing, the key question is whether you need to transcode media (recompress it because the playback device cannot handle the original format). Likely not, nowadays, but research that. If you do need transcoding, do your homework: you might get by with an old CPU, or you may need hardware transcoding support, but it’s the difficult case.
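If transcoding does come into play, you can at least check what a box supports before committing to it (Intel/VAAPI example; the device path may vary):

    # List the codecs the GPU can decode/encode via VAAPI.
    vainfo --display drm --device /dev/dri/renderD128
    # Check that your ffmpeg build has hardware acceleration at all.
    ffmpeg -hide_banner -hwaccels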
Outside transcoding, for file sharing/streaming, every simultaneous client requires additional horsepower and disk bandwidth. If you are the sole client, you can likely get by with an old CPU. But if you and three more people in your household are going to be using the system at the same time, it might be a bit more complex.
One of my home servers has 4 GB of RAM and an “Intel® Celeron® CPU G1610T @ 2.30GHz”. It’s very old and low end, but it works quite well for file sharing; then again, it rarely has more than a single simultaneous user.


Yep, I do that on Debian hosts; EL (RHEL/Rocky/etc.) has a similar feature.
However, you need to keep an eye out for updates that require a reboot. I use my own Nagios agent that (among other things) warns me when hosts require a reboot (both apt and dnf make this easy to check).
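The checks themselves are one-liners on both families:

    # Debian/Ubuntu: apt drops a flag file when a reboot is needed.
    [ -f /var/run/reboot-required ] && echo "reboot needed"

    # EL: needs-restarting exits non-zero when a reboot is needed.
    dnf needs-restarting -r || echo "reboot needed"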
I wouldn’t care about last online/reboots; I just do some basic monitoring to get an alert if a host is down. Spontaneous reboots would be a sign of an underlying issue.


Remember that Google News has RSS feeds! They are very well hidden, but they are there.
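For reference, the feed URL patterns I know of (there are no visible links to these in the UI, as far as I can tell):

    Top stories:  https://news.google.com/rss?hl=en-US&gl=US&ceid=US:en
    Search feed:  https://news.google.com/rss/search?q=self-hosting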
However, they are also a bit bad.
I started https://github.com/las-noticias/news-rss to postprocess Google News RSS feeds a bit and also to play with categorization. I found spaCy worked well for finding “topics”, but unfortunately I lost steam.


I think Cloudflare Tunnels will require a different setup on k8s than on regular Linux hosts, but it’s such a popular service among self-hosters that I have little doubt that you’ll find a workable process.
(And likely you could cheat, and set up a small Linux VM to “bridge” k8s and Cloudflare Tunnels.)
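The “bridge” cheat is just the standard cloudflared flow on a plain Linux box (tunnel name, hostname, and backend address are made up):

    # One-time: authenticate and create a named tunnel.
    cloudflared tunnel login
    cloudflared tunnel create homelab
    cloudflared tunnel route dns homelab app.example.com

    # Run it, forwarding to a service reachable from the VM.
    cloudflared tunnel run --url http://10.0.0.5:8080 homelab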
Kubernetes is different, but it’s learnable. In my opinion, K8S only comes into its own in a few scenarios:
- Really elastic workloads. If you have stuff that scales horizontally (uncommon), you really can tell Amazon to give you more Kubernetes nodes when load grows, and destroy them when load goes down. But this is not really applicable to self-hosting, IMHO.
- Really clustered software. Setting up, say, a PostgreSQL cluster is a ton of work. But people create K8S operators that you feed a declarative configuration (I want so many replicas, I want backups at this rate, etc.) and that work everything out for you… in a way that works on all K8S implementations! This is also very cool, but I suspect there’s not a lot of this in self-hosting. (See the sketch after this list.)
- Building SaaS platforms, etc. This is something that might be more reasonable to do in a self-hosting situation.
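To make the operator point concrete, here is roughly what such a declarative config looks like, using CloudNativePG as one example (names and sizes are invented):

    # A hypothetical CloudNativePG cluster: three instances, 10Gi each.
    apiVersion: postgresql.cnpg.io/v1
    kind: Cluster
    metadata:
      name: pg-main
    spec:
      instances: 3
      storage:
        size: 10Gi

The operator watches this object and handles replication, failover, and so on across the instances.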
Like the person you’re replying to, I also run Talos (as a VM in Proxmox). It’s pretty cool. But in the end, I only run 4 apps I’ve written myself there, so I’m using K8S as a kind of SaaS… plus one more application, https://github.com/avaraline/incarnator, which is basically distributed as container images and which I was too lazy to deploy in a more conventional way.
I also do this for learning. Although I’m not a fan of how Docker Compose is becoming dominant in the self-hosting space, I have to admit it makes more sense than K8S for self-hosting. But K8S is cool and might get you a cool job, so by all means play with it; maybe you’ll have fun!


I haven’t tested this, but I would expect there to be ways to do it, especially for VMs, if they are not LXC containers.
(I try to automate provisioning as much as possible, so I don’t do this kind of stuff often.)
The Incus forum is not huge, but it’s friendly, and the authors are quite active.


Came in here to mention Incus if no one had.
I love it. I have three “home production” servers running Proxmox, but mostly because Proxmox is one of the very few LTS/commercially-supported ways to run Linux with root (and everything else) on ZFS. And while its web UI is still a bit clunky in places, it comes in handy sometimes.
However, Incus automation is just… superior. incus launch --vm images:debian/13 foo, wait a few seconds, then incus exec foo -- bash and I’m root on a console of a ready-to-go Debian VM. Without --vm, it’s a lightweight LXC container. And Ansible supports running commands through incus exec, so you can provision stuff WITHOUT BOTHERING TO SET UP ANYTHING.
AND it works remotely without fuss, so I can set up an Incus remote on a beefy server and spawn VMs nearly transparently. Plus incus file pull|push to transfer files.
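Putting those pieces together, a full create-provision-copy cycle is just (instance name and file paths are made up):

    # Create a VM, run a command in it, push a file into it.
    incus launch --vm images:debian/13 foo
    incus exec foo -- apt-get -y install nginx
    incus file push ./site.conf foo/etc/nginx/conf.d/site.conf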
I’m kinda pondering scripting removal of the Proxmox bits from a Proxmox install, so that I just keep their ZFS support and run Incus on top.


If you speak Spanish, a month or so ago I was pointed at https://foro.autoalojado.es/, which might be interesting for discussing the in-person stuff, although it doesn’t seem to be reaching a critical mass of activity :(


When I learned Git I think there were no decent tools, so I got used to the command line.
I occasionally use gitk for reviewing my commits; it’s nicer to see the modified files and be able to jump back and forth, although I suppose I could use git log -p instead.
I’m an Emacs user, but I don’t use magit (!)
I like some of the graphical tools- some colleagues use Fork and I like it… but as I’ve already learned the CLI, I don’t see the point for me.
I could benefit from learning some jj, because it automates some of the most tedious parts of my workflow, but I’m getting too old.


Incus has a great selection of images that are ready to go, plus it gives scripted access to VMs (and LXC containers) very easily; after incus launch creates a VM, incus exec can immediately run commands as root for provisioning.


And also, you learn to make programs of a given difficulty by making programs of a smaller difficulty first.