• 0 Posts
  • 62 Comments
Joined 2 years ago
Cake day: June 21st, 2024

  • I was using Veeam when my stack was on VMware, but after moving to Proxmox I’ve been unable to get the Veeam agent working properly for VM recovery.

    I tried Proxmox Backup at one point, and while it did work for base VM backup, the interface and capabilities of it just don’t stack up to Veeam in my opinion, and I’m more concerned about file backup than VM recovery as I can easily recreate anything in my stack through my documentation.

    I’m actually glad you mentioned that, because I do need to revisit it. The few times I had to recover a VM from backup, I was able to do so when my backup process was working, but thankfully I haven’t had any recovery situations in the roughly 2 years since moving to Proxmox. And recovery doesn’t help when your cert is expired, which has historically been my usual issue.

    As for past email recovery, Mailcow does have documentation on recovering from a failed server/database, but I consider my personal deployment volatile since I’m only using it for alerting and mostly internal-only services.

    I would fully switch over to it if I had more personal time, and if I knew I could make my family comfortable with accessing it. But right now I feel the risk is too great to move anything personally or financially important over. If something bad were to happen to me, I’m the only one with knowledge of how to recover the environment, and I don’t need my family to take on that burden if I were to become incapacitated or, God forbid, pass away suddenly.


  • Mailcow internal on Debian VM.

    SMTP2Go free external relay.

    I’ve had the occasional issue where, after an upgrade or reboot, it can’t find my Let’s Encrypt cert and borks the system until I manually fix it. Perhaps my latest script update finally resolved that.

    Otherwise, not that bad. Been running my own email for about 5 years or so. I don’t sign up for many outside services with it. It’s mainly for internal alerting or testing purposes but still works very well.
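The kind of post-reboot check my script does could be sketched like this. To be clear, this is a hypothetical sketch, not my actual script, and the cert path and container name are assumptions based on a default mailcow-dockerized install:

```shell
#!/bin/sh
# Hedged sketch: check whether the cert mailcow serves is present and
# unexpired, and flag it for a restart if not. The CERT path and the
# nginx-mailcow container name are assumptions, not a confirmed setup.
CERT="${CERT:-/opt/mailcow-dockerized/data/assets/ssl/cert.pem}"

# Print the number of whole days until a PEM cert expires (negative = expired).
cert_days_left() {
    end=$(openssl x509 -noout -enddate -in "$1" | cut -d= -f2)
    end_s=$(date -d "$end" +%s)
    now_s=$(date +%s)
    echo $(( (end_s - now_s) / 86400 ))
}

if [ ! -s "$CERT" ] || [ "$(cert_days_left "$CERT")" -lt 1 ]; then
    echo "cert missing or expired; nginx-mailcow needs a restart" >&2
    # docker restart "$(docker ps -qf name=nginx-mailcow)"   # the actual fix step
fi
```

Run from cron or a systemd timer shortly after boot, something like this catches the "cert went missing after reboot" case before users notice.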



  • One such app I can think of has a client-side issue: if the public cert doesn’t match the back-end private cert, it will sever the connection and mark it as insecure. Hopefully I won’t need to deal with it much longer though.

    I just heard back from my other team that “this project sounds great for your team” even though they manage many of their own apps and certificates. Perhaps I should just let them burn then!


  • Unfortunately, some apps require the certificate to be bound to the internal application, and that has to be done through the CLI or other methods that aren’t easily automated. We could front it with a reverse proxy, but we would still need to take the proxy cert and bind it to the internal service for communication to work properly. Thankfully that’s for my other team to figure out, as I already have a migration plan for the systems I manage.
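As a purely hypothetical illustration of what that manual CLI binding step looks like for a Java-backed service (the hostname, truststore path, alias, and password below are all made up, not from any real environment):

```shell
# Hypothetical sketch of a manual cert-binding step: grab the cert the
# reverse proxy is currently presenting and import it into a Java app's
# truststore. Every value here is a placeholder.
bind_proxy_cert() {
    host="$1"     # e.g. proxy.internal.example
    store="$2"    # e.g. /opt/app/conf/truststore.jks
    pem=$(mktemp)
    # Fetch the certificate the proxy presents on its TLS listener.
    openssl s_client -connect "$host:443" -servername "$host" </dev/null 2>/dev/null \
        | openssl x509 -outform PEM > "$pem"
    # Import it into the app's truststore (keytool ships with the JDK).
    keytool -importcert -noprompt -alias lb-proxy \
        -file "$pem" -keystore "$store" -storepass changeit
    rm -f "$pem"
}
```

The pain point is that something like this has to be re-run on every renewal, which is exactly why fronting with a proxy alone doesn’t make the problem go away.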




  • While I agree for my personal use, it’s not so easy in an enterprise environment. I’m currently working to migrate services OFF my servers that use public certificates, to avoid the headache of manual intervention every 45 days.

    While this is possible for servers and services I manage, it’s not so easy for other software stacks we have in our environment. Thankfully I don’t manage them, but I’m sure I’ll be pulled into them at some point or another to help figure out the best path forward.

    The easy path is obviously a load-balanced front end that holds the certificate, but many of these services are specialized and have very elaborate ways of binding certificates to services outside of IIS or Apache, and they would need to trust the newly issued load-balancer certificate every 47 days.



  • Welcoming the incoming downvotes for correcting your comment, just like the many similar comments and posts I’ve seen on Reddit, but this is purely a configuration issue.

    Transcoding on the local network is allowed without a subscription. If you’re running your own DNS server (like Pi-hole or Unbound), you need to configure an internal “plex.direct” record. You also need to uncheck the “treat your WAN IP as internal” option, which corrects double-NAT issues.

    I have yet to see a need to move away from Plex. I paid for the cheap lifetime sub over a decade ago at this point and everyone I invite to my server has no complaints and has not had to pay Plex a dime. I don’t use their plex.tv proxy, I direct connect to my own IP and leave their remote proxy option off in the server and everything works great.

    I will check out Jellyfin at some point if Plex makes things more difficult in time, but for now these articles are literally just rage bait in the homelab ecosystem. They enacted this back in April of 2025 already!
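For anyone hitting the plex.direct issue above: the common fix is exempting plex.direct from DNS-rebinding protection, since the *.plex.direct names resolve to private IPs and get blocked by default. A rough sketch (the exact file locations vary by setup, so treat these as starting points):

```
# Pi-hole / dnsmasq (e.g. a drop-in file under /etc/dnsmasq.d/):
rebind-domain-ok=/plex.direct/

# Unbound (inside the server: section):
private-domain: "plex.direct"
```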

  • Same recommendation here. I went through two QNAP units before getting fed up and building my own 12-bay for about $1,200. My first QNAP died shortly after the 3-year warranty expired, and the second died shortly before. I was able to RMA the second and sell it to recoup some money toward building my own TrueNAS system that I can now fix myself, without relying on anything proprietary.