- 1 Post
- 16 Comments
MinIO is really gutting the open source version. I also found it confusing that all of their docs are now for AIStor, which I guess is the same product, rebranded. I suppose open source is not immune from enshittification.
melfie@lemy.lol to Programming@programming.dev•Do you guys use AI when programming? If so, how?
7·2 days ago
I use Copilot with mostly Claude Sonnet 4.5. I don’t use the autocomplete because it’s useless and annoying. I mostly chat with it: I give it specific instructions for how to implement small changes, carefully review its code, and make it fix anything I don’t like. Then I have it write test scripts that call the APIs with curl and otherwise exercise the system in a staging environment, outputting data so I can manually verify that all of its changes work as expected, in case I overlooked something in the automated tests.
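A minimal sketch of that kind of test script, in Python rather than curl so it is self-contained here (the endpoint, payload, and stub server are all hypothetical; in practice you would point it at your real staging host):

```python
# Hypothetical staging smoke test. A tiny stand-in server is started
# locally so the sketch runs on its own; replace the base URL with your
# real staging host.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubHandler(BaseHTTPRequestHandler):
    # Minimal stand-in for the staging API under test.
    def do_GET(self):
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep output quiet
        pass

def smoke_test(base_url: str) -> dict:
    # Exercise an endpoint and return the parsed payload for manual review.
    with urllib.request.urlopen(f"{base_url}/health", timeout=5) as resp:
        assert resp.status == 200, f"unexpected status {resp.status}"
        return json.load(resp)

server = HTTPServer(("127.0.0.1", 0), StubHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
payload = smoke_test(f"http://127.0.0.1:{server.server_port}")
print(payload)  # inspect the output manually, as described above
```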
As far as environmental impact goes, training is where most of it occurs; inference, RAG, querying vector databases, etc. are fairly minimal AFAIK.
melfie@lemy.lol to Selfhosted@lemmy.world•Can I do VPN + Plex on my hosting server? Do I need it under this circumstance?
2·4 days ago
I have OpenWRT on my router running a VPN that my servers and various other devices use. I’d rather do that than have to set up a “kill switch” with iptables that might accidentally conflict with IPAM for k3s.
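For reference, the kind of kill switch I’d rather avoid looks something like this (interface names and port are assumptions, not my actual setup): only traffic through the VPN tunnel, plus the VPN handshake itself, may leave the machine.

```shell
# Hypothetical kill switch: wg0 is the VPN tunnel, eth0 the physical NIC,
# 51820 the WireGuard endpoint port. Everything else leaving eth0 is dropped.
iptables -A OUTPUT -o wg0 -j ACCEPT
iptables -A OUTPUT -o eth0 -p udp --dport 51820 -j ACCEPT
iptables -A OUTPUT -o eth0 -j DROP
```

Rules like these are exactly the kind that can collide with the addresses k3s hands out.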
Local, private, no subscriptions, ONVIF, and no need to actually self-host anything. I haven’t found any other options with that combination.
I use k3s with Calico so I can have k8s network policies for each service I’m running.
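As a concrete illustration (the namespace and name are placeholders, not my actual config), a standard Kubernetes NetworkPolicy that Calico enforces might look like this: all cluster-internal egress is allowed, anything bound for the internet is dropped.

```yaml
# Hypothetical policy: pods in the namespace may only talk to other
# pods in the cluster, so nothing can phone home.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-external-egress
  namespace: media
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector: {}   # any namespace in the cluster, nothing outside
```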
I use Restic and also use Backrest to have a UI to browse my repos. I would use Backrest for everything, but I’d rather have my backup config completely source controlled.
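A sketch of what a source-controlled Restic setup can look like (repository URL, paths, and retention values are placeholders): the script lives in git, and Backrest is simply pointed at the same repository for browsing.

```shell
# Hypothetical backup script kept in version control.
export RESTIC_REPOSITORY="s3:s3.example.com/backups"
restic backup /srv/data --tag daily
# Retention lives in the script, so it is source-controlled too.
restic forget --tag daily --keep-daily 7 --keep-weekly 4 --prune
```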
melfie@lemy.lol to Programming@programming.dev•Code comments should apply to the state of the system at the point the comment “executes”
1·18 days ago
Agreed. That’s why comments exist, IMO, but they should be used sparingly.
melfie@lemy.lol to Programming@programming.dev•DidMySettingsChange - A python script that checks if windows changed your settings behind your back
7·18 days ago
I set a static IP for my Windows partition and block it from the internet with my firewall. It’s hostile malware that must be quarantined. My Linux partition has a different IP that is not blocked.
On a related note, I jack up my Mint install a few times a year with nobody to blame but myself. I recently reinstalled it with btrfs, Timeshift with automatic snapshots, and grub-btrfs so I can boot from a snapshot instead of troubleshooting or reinstalling. I realize other distros like openSUSE come more or less set up like this out of the box or offer full immutability, but I like Mint.
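For anyone curious, the setup amounts to something like this (a sketch of the commands; Timeshift must be switched to btrfs mode first, and grub-btrfs ships a hook that regenerates the boot menu):

```shell
# Take a manual snapshot before risky changes; automatic ones are
# scheduled in Timeshift's own settings.
sudo timeshift --create --comments "pre-upgrade" --tags D
# grub-btrfs adds a snapshots submenu when the GRUB config is regenerated.
sudo grub-mkconfig -o /boot/grub/grub.cfg
```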
melfie@lemy.lol to Programming@programming.dev•Code comments should apply to the state of the system at the point the comment “executes”
42·19 days ago
Well-structured code with clear naming > comments. For example, a pet peeve of mine is a long function with comments preceding each section of code, instead of each section being moved into a smaller function with a name that clearly describes what it does. The best comments are no comments.
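A toy illustration of what I mean (not code from any real project): instead of one long function with `# parse the scores` and `# find the winner` comments, each section becomes a small function whose name says the same thing.

```python
# Illustrative only: the commented "sections" of a long report function,
# extracted into small, self-describing functions.

def parse_scores(lines):
    # Each line looks like "name,score".
    return {name: int(score) for name, score in (l.split(",") for l in lines)}

def top_scorer(scores):
    return max(scores, key=scores.get)

def format_summary(scores):
    winner = top_scorer(scores)
    return f"{winner} leads with {scores[winner]} points"

print(format_summary(parse_scores(["ada,42", "gra,37"])))
```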
Discord 😬
Edit:
DuckDuckGo’s AI says this, which sounds interesting if true, though it doesn’t provide a source to confirm:

> Chaptarr is an upcoming project that is a heavily revamped fork of Readarr, currently in closed Alpha phase, and aims to improve interoperability with Readarr. You can find more information and updates on its development on GitHub.
melfie@lemy.lol to Selfhosted@lemmy.world•Using rsync for backups, because it's not shiny and new
1·21 days ago
I originally thought one of the drives in my RAID1 array was failing, but I noticed that copying data yielded btrfs corruption errors on both drives that could not be fixed with a scrub, and I was getting btrfs corruption errors on the root volume as well. It would have been quite an odd coincidence for my main SSD and both hard disks to go bad at once, and I happened upon an article about how bad RAM can also corrupt data. I also ran SMART tests, and everything came back with a clean bill of health. So, I booted into MemTest86+, and it immediately started showing errors on the single 16 GiB stick I was using. I happened to have a spare stick of a different brand, and that one passed the memory test with flying colors. After swapping it in, all the corruption errors went away, and everything has worked perfectly ever since.
I will also say that legacy file systems like ext4, which have no data checksums, won’t even complain about corrupt data. I originally had ext4 on my main drive and at one point thought my OS install had gone bad, so I reinstalled with btrfs on top of LUKS and saw I was getting corruption errors on the main drive too. That’s when it occurred to me that three different drives could not plausibly all have hardware failures and something else must be going on. I had also been using ext4 and mdadm for my RAID1 and migrated it to btrfs a while back. As far back as a year ago, I was noticing that certain installers and the like that previously worked no longer did. It happened infrequently and didn’t really register as a potential hardware problem at the time, but I think the RAM had actually been progressively going bad for quite a while. btrfs with regular scrubs would’ve made it abundantly clear much sooner that files were getting corrupted and something was wrong.
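The mechanism that caught this is checksum-on-read. A rough sketch of the idea in Python (plain crc32 for illustration; btrfs actually defaults to crc32c and stores the checksums in its metadata, this is not its real on-disk format):

```python
# Sketch of checksum-on-read: a checksum is stored alongside each block
# when written, and verified on every read, so silent corruption (e.g.
# from bad RAM) is reported instead of returned as valid data.
import zlib

def store(block: bytes):
    # Write path: keep a checksum next to the data.
    return block, zlib.crc32(block)

def read(block: bytes, checksum: int) -> bytes:
    # Read path: verify before handing data back.
    if zlib.crc32(block) != checksum:
        raise IOError("checksum mismatch: block is corrupt")
    return block

data, csum = store(b"important bytes")
read(data, csum)                               # clean read passes silently
flipped = bytes([data[0] ^ 0x01]) + data[1:]   # simulate a single bit flip
try:
    read(flipped, csum)
except IOError as e:
    print(e)
```

A plain ext4 read path has no such check, which is why the corruption went unnoticed for so long.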
So, I’m quite convinced at this point that RAID is not a backup, even with btrfs’s ability to self-heal, and that simply copying data elsewhere is not a backup either, because in both cases something like bad RAM can destroy data during the copying process, whereas older snapshots in the cloud will survive such a hardware failure. Older data that was backed up before the RAM went bad may be fine as well, but you’re taking the chance that a recent update overwrites good data with bad. I was previously using Rclone for most backups while testing Restic, with daily, weekly, and monthly snapshots, on a small subset of important data over the last few months. After finding some data that was only recoverable from a previous Restic snapshot, I’ve since switched to Restic exclusively for anything important enough for cloud backups. I was mainly concerned about the space requirements of keeping historical snapshots, and I’m still tweaking retention policies, taking separate snapshots of different directories with a retention policy matched to each directory’s risk tolerance. For some things, I think even local btrfs snapshots would suffice, with the understanding that they reduce recovery time but aren’t really a backup. However, any irreplaceable data really needs monthly Restic snapshots in the cloud. And without something like btrfs scrubs to alert you that you have a problem, even snapshots from months ago may be silently corrupt.
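Per-directory retention ends up looking something like this (tags, paths, and policy numbers are illustrative, not my actual values):

```shell
# Hypothetical per-directory policies: irreplaceable data keeps long
# history, easily regenerated data keeps only recent snapshots.
restic backup /srv/photos  --tag photos
restic backup /srv/configs --tag configs
restic forget --tag photos  --keep-monthly 12 --prune
restic forget --tag configs --keep-daily 7 --keep-weekly 8 --prune
```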
melfie@lemy.lol to Selfhosted@lemmy.world•Using rsync for backups, because it's not shiny and new
3·22 days ago
I don’t understand the downvotes. This is the type of lesson people have learned by losing data, and there’s no sense in learning it the hard way yourself.
melfie@lemy.lol to Selfhosted@lemmy.world•Using rsync for backups, because it's not shiny and new
92·23 days ago
Having a synced copy elsewhere is not an adequate backup; snapshots are pretty important. I recently had RAM go bad and my most recent backups contained corrupt data, but having previous snapshots saved the day.
melfie@lemy.lol to Selfhosted@lemmy.world•Those who are hosting on bare metal: What is stopping you from using Containers or VM's? What are you self hosting?
5·1 month ago
I use k3s and enjoy benefits like the following over bare metal:
- Configuration as code where my whole setup is version controlled in git
- Containers and avoiding dependency hell
- Built-in reverse proxy with the Traefik ingress controller. Combined with DNS in my OpenWRT router, all of my self-hosted apps can be accessed via appname.lan (e.g., jellyfin.lan, forgejo.lan)
- Declarative network policies with Calico, mainly to make sure nothing phones home
- Managing secrets securely in git with Bitnami Sealed Secrets
- Liveness probes that automatically “turn it off and on again” when something goes wrong
These are just some of the benefits for a single server. Add more servers and the benefits multiply.
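As an example of the last point, a liveness probe is just a few lines of manifest (the image, path, and port here are placeholders):

```yaml
# Illustrative liveness probe: the kubelet restarts the container
# automatically when the probe keeps failing.
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: app
      image: example/app:latest
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 30
```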
Edit:
Sorry, I realize this post is asking why go bare metal, not why k3s and containers are great. 😬
melfie@lemy.lol to Selfhosted@lemmy.world•Someone finally made a "Sonarr for YouTube"
3·1 month ago
It’s based on yt-dlp, which I can’t seem to get working reliably with my VPN, even with manual intervention like using cookies from a browser, switching servers, etc. I guess VPN IPs hit the rate limits pretty regularly, though I don’t want to risk my real IP getting banned. I’ve seen some people suggest using a VPS, but that sounds like a lot of effort. In my experience, running something like this on a server and expecting it to reliably download videos in the background isn’t going to work that well.
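For anyone wanting to try the same manual interventions, these are real yt-dlp flags, though in my experience they only help so much behind a VPN (the URL is a placeholder):

```shell
# Reuse browser cookies, pace the requests, and cap bandwidth to look
# less like a bot; effectiveness varies behind shared VPN IPs.
yt-dlp --cookies-from-browser firefox \
       --sleep-requests 2 --limit-rate 2M \
       "https://www.youtube.com/watch?v=..."
```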

Quite true, and to that point, here’s the fork for the missing open source admin UI: https://github.com/OpenMaxIO/openmaxio-object-browser