How about Dawarich?
https://github.com/Freika/dawarich
I haven’t used it myself, but I have it in the backlog of things to try out
To be fair, I don’t know either. I mean he’s supposed to, and he swore an oath to, but if nobody is going to enforce that then must he really? What happens if/when he doesn’t?
I haven’t finished it yet, but Exodus: The Archimedes Engine by Peter F. Hamilton has been really interesting so far
Edit: and if you haven’t read them yet, Dragon’s Egg by Robert L. Forward and Project Hail Mary by Andy Weir are both really good
They likely streamed from some other Plex server in the past, and that’s why they’re getting the email. The email specifically states that if the server owner has a Plex Pass, you don’t need one.
I got the email earlier today and it couldn’t be clearer:
As a server owner, if you elect to upgrade to a Plex Pass, anyone with access to your server can continue streaming your server content remotely as part of your subscription benefits.
I run all of my Docker containers in a VM (well, 4 different VMs, split according to the network/firewall needs of the containers each runs). That VM is given about double the RAM needed for everything it runs, and enough cores that it never (or very, very rarely) gets throttled. I then allow the containers to use whatever they need, unrestricted, while monitoring the overall resource utilization of the VM itself (cAdvisor + node_exporter + Prometheus + Grafana + Alertmanager). If I find that the VM is creeping up on its load or memory limits, I’ll investigate which container is driving the usage and then either bump the VM limits up or address the service itself and modify its settings to drop back down.
Theoretically I could implement per-container resource limits, but I’ve never found the need. I have heard some people complain about some containers leaking memory and creeping up over time, but I have an automated backup script which stops all containers and rsyncs their mapped volumes to an incremental backup system every night, so none of my containers stay running for longer than 24 hours continuous anyway.
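For what it’s worth, that kind of nightly stop/rsync/restart job is only a few lines of shell. A minimal sketch, assuming all compose files and bind-mounted volumes live under ~/docker and the backup destination (`backup-host:/backups/docker/`) is a placeholder:

```shell
#!/bin/sh
# Nightly cold backup sketch: stop all containers, rsync their
# bind-mounted volumes, then restart everything.
set -eu

cd "$HOME/docker"

# Stop every compose stack so the files on disk are consistent
for dir in */; do
    (cd "$dir" && docker compose stop)
done

# Incremental copy to the backup target (hypothetical destination)
rsync -aH --delete "$HOME/docker/" backup-host:/backups/docker/

# Bring everything back up
for dir in */; do
    (cd "$dir" && docker compose start)
done
```

Since every container is stopped and started once per night, this also acts as the daily restart mentioned above.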
People always say to let the system manage memory and don’t interfere with it as it’ll always make the best decisions, but personally, on my systems, whenever it starts to move significant data into swap the system starts getting laggy, jittery, and slow to respond. Every time I try to use a system that’s been sitting idle for a bit and it feels sluggish, I go check the stats and find that, sure enough, it’s decided to move some of its memory into swap, and responsiveness doesn’t pick up until I manually empty the swap so it’s operating fully out of RAM again.
So, with that in mind, I always give systems plenty of RAM to work with and set vm.swappiness=0. Whenever I forget to do that, I will inevitably find the system is running sluggishly at some point, see that a bunch of data is sitting in swap for some reason, clear it out, set vm.swappiness=0, and then it never happens again. Other people will probably recommend differently, but that’s been my experience after ~25 years of using Linux daily.
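For reference, persisting that setting is a one-line sysctl fragment, and the “manually empty the swap” step is just cycling swap off and on (both need root):

```
# /etc/sysctl.d/99-swappiness.conf -- persists across reboots
vm.swappiness = 0

# One-off, without rebooting:
#   sysctl vm.swappiness=0
# Force everything in swap back into RAM
# (needs enough free RAM to hold it all):
#   swapoff -a && swapon -a
```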
You’re assuming the Democrats and this new party would vote the same for Speaker of the House and all important bills, in which case what’s the point of this new party? Also, most states use FPTP for congressional elections as well, so while the Democrats and this third party would likely still win some seats, in most locations they would again split the vote and you’d end up with even more GOP congressional representatives than you have now. So it wouldn’t be 45/35/20 GOP/Dem/X, it would be more like 80/15/5. That’s just the nature of how FPTP works.
We need a new party, not a new democrat
Never going to happen while we still have FPTP, at least not the way you think it will. What would really happen is that this new party would split the Democratic vote, and the Republicans would win even harder. You have to get rid of FPTP before any third party will be a realistic possibility. Until then it would just make things even worse.
No names? On what? People just go around saying “no names”?
It says “no mames”. I’m not sure what on earth that means, but I suspect it isn’t a typo (writeo?)
I self-host Bitwarden, hidden behind my firewall and only accessible through a VPN. It’s perfect for me. If you’re going to expose your password manager to the internet, you might as well just use the official cloud version IMO since they’ll likely be better at monitoring logs than you will. But if you hide it behind a VPN, self-hosting can add an additional layer of security that you don’t get with the official cloud-hosted version.
Downtime isn’t an issue as clients will just cache the database. Unless your server goes down for days at a time you’ll never even notice, and even then it’ll only be an issue if you try to create or modify an entry while the server is down. Just make sure you make and maintain good backups. Every night I stop and rsync all containers (including Bitwarden) to a daily incremental backup server, as well as making nightly snapshots of the VM it lives in. I also periodically make encrypted exports of my Bitwarden vault which are synced to all devices - those are useful because they can be natively imported into KeePassXC, allowing you to access your password vault from any machine even if your entire infrastructure goes down. Note that even if you go with the cloud-hosted version, you should still be making these encrypted exports to protect against vault corruption, deletion, etc.
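If you want to script those encrypted exports, the Bitwarden CLI can produce them directly. A sketch, assuming `bw` is installed and logged in, with hypothetical paths for the password files; double-check the flag names against your CLI version:

```shell
#!/bin/sh
# Scheduled encrypted vault export via the Bitwarden CLI (sketch).
set -eu

# Unlock using the master password stored in a root-only file
# (hypothetical path)
export BW_SESSION="$(bw unlock --raw --passwordfile /root/.bw-pass)"

# Password-protected encrypted JSON export -- this is the format
# that KeePassXC can import natively
bw export --format encrypted_json \
    --password "$(cat /root/.bw-export-pass)" \
    --output "/backups/bitwarden/vault-$(date +%F).json"

bw lock
```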
A lot of it comes down to the Just World Fallacy
They believe that, fundamentally, the world is just and good (mostly that stems from religion and a just “god”, but not always). This means that when something bad happens, they assume the person must have deserved it, because bad things don’t happen to good people. They also believe they are a good person, and therefore bad things won’t happen to them. When something bad DOES happen to them, they start screaming from the rooftops that some radical injustice has occurred and somebody needs to do something to make it right! They’re completely unaware that nobody from their “tribe” will believe them, because the fact that something bad happened to them means they must have been a bad person who deserved it.
I don’t like the fact that I could delete every copy using only the mouse and keyboard from my main PC. I want something that can’t be ransomwared and that I can’t screw up once created.
Lots of ways to get around that without having to go the route of burning a hundred blu-rays with complicated (and risky) archive splitting and merging. Just a handful of external HDDs that you “zfs send” to and cycle on some regular schedule would handle that. So buy 3 drives, back up your data to all 3 of them, then unplug 2 and put them somewhere safe (desk at work, friend or family member’s house, etc.). Continue backing up to the one you keep local for the next ~month and then rotate the drives. So at any given time you have an on-site copy that’s up-to-date, and two off-site copies that are no more than 1 and 2 months old respectively. Immune to ransomware, accidental deletion, fire, flood, etc., and super easy to maintain and restore from.
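The rotation itself is just an incremental `zfs send` to a pool living on whichever external drive is plugged in. A sketch, with made-up pool and dataset names:

```shell
#!/bin/sh
# Incremental ZFS backup to an external drive (sketch).
# "tank/data" and "backupdrive" are placeholder names.
set -eu

TODAY="$(date +%F)"

zpool import backupdrive                  # attach the external pool
zfs snapshot -r tank/data@"$TODAY"        # snapshot the source

# Find the newest snapshot the drive already has, then send
# everything created since then
LAST="$(zfs list -H -t snapshot -o name -s creation -r backupdrive/data \
        | tail -1 | cut -d@ -f2)"
zfs send -R -i @"$LAST" tank/data@"$TODAY" | zfs recv -F backupdrive/data

zpool export backupdrive                  # detach cleanly before unplugging
```

Because the sends are incremental, each monthly rotation only transfers what changed since that drive last saw the pool.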
The main reason is that if you don’t already have the right key, the VPN doesn’t even respond; it’s just a black hole where all packets get dropped. SSH, on the other hand, will respond whether or not you have a password or a key, which lets the attacker know that there’s something there listening.
That’s not to say SSH is insecure, I think it’s fine to expose once you take some basic steps to lock it down, just answering the question.
Some people move the port to a nonstandard one, but that only helps with automated scanners, not determined attackers.
While true, cleaning up your logs such that you can actually see a determined attacker rather than it just getting buried in the noise is still worthwhile.
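For the “basic steps to lock it down” part, the usual sshd_config knobs look something like this (option names per recent OpenSSH; the username is a placeholder):

```
# /etc/ssh/sshd_config.d/hardening.conf (sketch)
PermitRootLogin no
PasswordAuthentication no
KbdInteractiveAuthentication no
PubkeyAuthentication yes
# Optionally restrict who can log in at all ("myuser" is an example):
AllowUsers myuser
```

With password auth off, the noise in the logs drops to connection attempts rather than endless credential guesses, which makes a determined attacker much easier to spot.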
Reverse proxy + DNS-challenge wildcard cert for your domain. The end. Super easy to set up and zero maintenance. Adding a new service is just a couple clicks in your reverse proxy and you’re done.
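As a concrete example, in Caddy that whole setup is a few lines. This sketch assumes a Cloudflare-managed domain and the caddy-dns/cloudflare plugin; the domain, service names, and ports are all placeholders:

```
# Caddyfile (sketch) -- one wildcard cert via DNS challenge,
# with per-service routing
*.home.example.com {
	tls {
		dns cloudflare {env.CF_API_TOKEN}
	}

	@jellyfin host jellyfin.home.example.com
	handle @jellyfin {
		reverse_proxy jellyfin:8096
	}

	# Unknown subdomains get dropped
	handle {
		abort
	}
}
```

Adding a new service is then just another matcher + `reverse_proxy` pair; the wildcard cert already covers it.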
Yes, by staying privately funded and not throwing everything away chasing quarterly profits
Same, I don’t let Docker manage volumes for anything. If I need it to be persistent I bind mount it to a subdirectory next to the container’s compose file. It makes backups so much easier as well, since you can just stop all containers, back up everything in ~/docker or wherever you put all of your compose files and volumes, and then restart them all.
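In compose terms that’s just bind mounts with relative paths beside the compose file (the service here is an example):

```
# ~/docker/jellyfin/docker-compose.yml (sketch)
services:
  jellyfin:
    image: jellyfin/jellyfin
    volumes:
      - ./config:/config   # bind mount: lives beside this file, easy to rsync
      - ./cache:/cache
    # No named volumes anywhere, so volume pruning can never
    # touch persistent data.
```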
It also means you can go hog wild with docker system prune -af --volumes
and there’s no risk of losing any of your data.
I would separate the media and the Jellyfin image into different pools. Media would be a normal ZFS pool full of media files that gets mounted into any VM that needs it, like Jellyfin, sonarr, radarr, qbittorrent, etc. (preferably read-only mounted in Jellyfin if you’re going to expose Jellyfin to the internet).
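Assuming the media pool is shared out over NFS, the read-only mount inside the Jellyfin VM is a single fstab line (host and paths are placeholders):

```
# /etc/fstab in the Jellyfin VM (sketch)
# Media pool exported from the storage box, mounted read-only:
nas.lan:/tank/media  /mnt/media  nfs  ro,noatime  0  0
```

With `ro` at the mount level, even a fully compromised Jellyfin can’t modify or delete the media.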
As far as networking goes, from what I could see the only real change CasaOS was making was mapping its dashboard to port 80, but not much more. Is there anything more I should be aware of in general?
It depends on how you have things set up. If you’re just doing normal docker compose networking with port forwards then there shouldn’t be much to change, but if you’re doing anything more advanced like macvlan then you might have to set up taps on the host to be able to communicate with the container (not sure if CasaOS handles that automatically).
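For reference, the macvlan quirk is that the host can’t talk to its own macvlan containers directly; the usual workaround is a small macvlan shim interface on the host. A sketch, where the interface name, subnet, and addresses are all examples:

```shell
# Host-side macvlan shim so the host can reach macvlan containers (sketch).
# "eth0" and the 192.168.1.x addresses are placeholders for your network.
ip link add macvlan-shim link eth0 type macvlan mode bridge
ip addr add 192.168.1.250/32 dev macvlan-shim
ip link set macvlan-shim up
# Route the containers' address range through the shim instead of eth0
ip route add 192.168.1.224/27 dev macvlan-shim
```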
Just FYI - you’re going to spend far, FAR more time and effort reading release notes and manually upgrading containers than you will letting them run :latest and auto-update and fixing the occasional thing when it breaks. Like, it’s not even remotely close.
Pinning major versions makes sense for certain containers that need specific versions, or containers that regularly have breaking changes that require manual steps to upgrade, or absolutely mission-critical services that can’t handle a little downtime from a failed update a couple times a decade, but for everything else it’s a waste of time.
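Concretely, in a compose file the difference is just the tag (image names here are examples):

```
services:
  db:
    image: postgres:16    # pinned major: upgrades need a manual migration
  webapp:
    image: nginx:latest   # everything else: let auto-updates ride :latest
```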