• 0 Posts
  • 6 Comments
Joined 1 year ago
Cake day: July 2nd, 2024

  • Your phrasing of the question implies a poor understanding. There’s nothing preventing you from running containers on bare metal.

    My colo setup is a mix of classic systemd units and Podman systemd units running on bare metal, with a little nginx in front for the domain and TLS termination.

    I think you’re actually asking why folks would use bare metal instead of the cloud, and here’s the truth: you’re paying for that resiliency even if you don’t need it, which makes renting cloud infrastructure incredibly expensive. Most people can probably get away with a $10 VPS, but the AWS meme of needing five app servers, an RDS instance, and a load balancer to run WordPress has rotted people’s expectations. The server I paid a few grand for on eBay would cost me about as much per month to rent from AWS. I’ve stuffed it full of flash with enough redundancy to lose half of it before I have to go into the colo for replacements. I paid a bit upfront, but I’m set on capacity for another half decade plus, and my costs are otherwise fixed.



  • I’ve actually been moving away from Kubernetes for this kind of deployment, and I’m a big fan of using Ansible to deploy containers as Podman systemd units: you have a series of systemd .container files like the one below.

    [Unit]
    Description=Loki
    
    [Container]
    Image=docker.io/grafana/loki:3.4.1
    
    # Bind-mount the config directory and keep scratch data in a named volume
    Volume=/mnt/loki-config:/mnt/config
    Volume=loki-tmp:/tmp/loki
    PublishPort=3100:3100
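    # Let 'podman auto-update' pull newer images from the registry and restart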
    AutoUpdate=registry
    
    [Service]
    Restart=always
    TimeoutStartSec=900
    
    [Install]
    # Start by default on boot
    WantedBy=multi-user.target default.target
    

    You use Ansible to write these into your /etc/containers/systemd/ folder. For example, the file above gets written as /etc/containers/systemd/loki.container.

    Your Ansible play then calls systemctl daemon-reload, and then you can systemctl start loki to finish the example.
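
    To make that concrete, here’s a rough sketch of what those tasks could look like, using the stock ansible.builtin.copy and ansible.builtin.systemd modules. The paths and the loki name just mirror the example above; swap in your own units.

    # Sketch: push the quadlet file, reload systemd, start the generated service
    - name: Write the loki quadlet file
      ansible.builtin.copy:
        src: files/loki.container          # the [Container] file shown above
        dest: /etc/containers/systemd/loki.container
        mode: "0644"

    - name: Reload systemd so quadlet (re)generates loki.service
      ansible.builtin.systemd:
        daemon_reload: true

    - name: Start the container via its generated service
      ansible.builtin.systemd:
        name: loki
        state: started

    If you change the .container file later, use state: restarted (or a handler) so the running container picks up the new definition.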


  • I think the gap you have is in understanding that Podman Compose was meant to line up with the limitations of Docker Compose, even though Podman itself is technically more capable.

    Quadlet files let you do more complex workflows that regular Compose doesn’t, like deploying multiple copies of a service, without running full Kubernetes.

    My own use case is that I have something deployed in Compose right now that I’d like to scale up on the box, since I have the capacity for it, but I don’t want to deal with a full Kubernetes setup or the politics (there’s a rough sketch of that kind of scale-out below).

    Personally I’ve converted most of my single-node k3s setup to quadlet files instead, as it’s less fragile. I absolutely deploy single containers as quadlets. They show up in journalctl and the ergonomics are great.
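
    As a rough sketch of that kind of scale-out, sticking with the Ansible-plus-quadlets approach from my other comment: loop one template out into several .container files, one per instance, each publishing its own host port, and let your reverse proxy balance across them. Every name, path, and port here is hypothetical.

    # templates/web.container.j2 would be a Jinja2 version of a [Container] file
    # like the loki example, using {{ item }} for anything per-instance,
    # e.g. PublishPort=808{{ item }}:8080
    - name: Write one quadlet file per instance
      ansible.builtin.template:
        src: templates/web.container.j2
        dest: "/etc/containers/systemd/web-{{ item }}.container"
        mode: "0644"
      loop: [1, 2, 3]

    - name: Reload systemd so the web-N services are generated
      ansible.builtin.systemd:
        daemon_reload: true

    - name: Start every instance
      ansible.builtin.systemd:
        name: "web-{{ item }}"
        state: started
      loop: [1, 2, 3]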