

That’s a great point. Yes, most of my podcast listening happened on my longer drive before COVID, when I was WFH for a while but also moved closer to my eventual office.
I was planning something similar, but just for me instead of allowing registrations at all; not sure how viable that actually is, though.
In general I’ve heard the podcast ecosystem is in a dire state; numbers seemingly dropped a lot post-COVID.
It’s funny, because I used to listen to podcasts way more frequently before COVID, but I’ve sorta transitioned to long-form videos instead of straight podcasts.
Yep, I was 12-13 when I discovered b
Right, I guess I meant: if it necessarily requires internet access to notify, how does it send the notification when it can’t reach the internet?
I have Uptime Kuma and use ntfy to alert myself about various things, but if I can’t access my server for any reason, the likelihood I’d be alerted first is very low.
How do you get alerted by Uptime Kuma if you can’t access the site, though?
Completely agree, but be prepared to get dog-piled for stating it in some places lol
A vast majority of people on Lemmy can’t separate the art from the artist (or from the people who enjoy the art while acknowledging the shittiness of the artist).
It’s funny you say a child; as a kid at that time on 4chan (yeah, yeah), it didn’t come across as a “child,” since they were older than me at the time.
Dark humor be like that though.
Probably too old a reference for most lol
Y’all think fascism is unique to white people?
Whoops, wrong comment to reply to
Depends on the situation; initially it’s usually anxiety or panic, but again it depends on the severity and how equipped I am to respond to the situation in question.
The GPU is already running because it’s in the device. By this logic I shouldn’t have a GPU in my homelab until I want to use it for something; RIP Jellyfin and Immich, I guess.
I get the impression you don’t really understand how local LLMs work. You likely wouldn’t need a very large model to run basic scraping; it would just depend on what OP has in mind, or what kind of schedule it runs on. You should consider the difference between a megacorp’s server farm and some rando running this locally on consumer hardware (which seems to be OP’s intent).
Does it? I haven’t seen that in my instance settings; I’ll have to take another gander.
You realize the GPU sits idle when not actively being used, right?
It’d be cheaper if you host it locally, essentially just your normal electricity bill, which is the entire point of what OP is saying lol.
Seems nifty. Bake in stuff like selecting your AI provider (support local llama, a local OpenAI-compatible API, and third-party providers if you have to, I guess lol), and make sure it’s dockerized (or is relatively easy to dockerize; bonus points for including a compose file).
Oh, being able to hook into a self-hosted engine like SearXNG would be nice too; you can do that with the Oobabooga web-search plugin currently, as an example.
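For what it’s worth, the “dockerized with a compose file, pointed at a local API and SearXNG” setup could look something like this. This is just a sketch; the `scraper` service name, image, and env var names are hypothetical placeholders (the endpoint shown assumes something like Ollama’s OpenAI-compatible API on port 11434), only the SearXNG image is a real published one:

```yaml
# Hypothetical compose sketch, not a real project's published config.
services:
  scraper:
    image: example/ai-scraper:latest   # placeholder image
    environment:
      # point at a local OpenAI-compatible endpoint instead of a paid API
      - OPENAI_API_BASE=http://host.docker.internal:11434/v1
      - SEARXNG_URL=http://searxng:8080
    depends_on:
      - searxng
  searxng:
    image: searxng/searxng:latest      # real image; self-hosted metasearch
    ports:
      - "8080:8080"
```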
WG-Easy worked fine for me? Basically just VPNs me right into my LAN.
Oh, I’m an idiot, I forgot I connect to my domain for the WireGuard connection lmao
Though I did mean just tunneling into the LAN; the VPN is then applied to outbound connections on the LAN using something like Gluetun or w/e.
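The Gluetun pattern I mean is roughly this: one container holds the VPN tunnel, and other containers route through it via `network_mode`. A minimal sketch, assuming WireGuard and with the provider, key, and address as placeholders you’d fill from your own VPN account:

```yaml
# Sketch of the usual Gluetun setup; credentials are placeholders.
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN                        # required for the tunnel
    environment:
      - VPN_SERVICE_PROVIDER=mullvad     # or your provider of choice
      - VPN_TYPE=wireguard
      - WIREGUARD_PRIVATE_KEY=changeme   # placeholder
      - WIREGUARD_ADDRESSES=10.64.0.2/32 # placeholder
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    # all of this container's traffic exits through the gluetun container
    network_mode: "service:gluetun"
```

The nice part of the design is that if Gluetun’s tunnel drops, the attached container loses connectivity entirely instead of leaking over your normal LAN route.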
Correct, trackers will work but DHT (or whatever it’s called) won’t; you end up with a lot of dead torrents trying to run it through Mullvad. But I paid a bit in advance, so I can’t swap yet.
NZBs work most of the time anyway.
Look, fuck Google and all that, but is it really the search engine’s job to do this? What happens then is they’ll require age verification of some fashion, or attempt to block any sexual content entirely.