

DuckDNS pretty often has problems and fails to propagate properly. It’s not very good, especially with frequent IP changes.
I’d appreciate it very much!
Great suggestion to secure the backups themselves, but I’m more concerned about the impact an attacker on my network might have on the external network and vice versa.
That’d be the gold standard. Unfortunately, the external network utilizes infrastructure that doesn’t support specifying firewall rules on the existing separate VLAN, so all rules would have to be applied on the Pi itself or on yet another device in between, which is something I’d like to avoid. Great general advice, though!
While this is a great approach for any business hosting mission-critical or user-facing resources, it is WAY overkill for a basic self-hosted setup involving family and friends.
For this to make sense, you need access to 3 different physical locations with their own ISPs, or you have to rent 3 different VPSes.
Assuming one would use only 1 data drive plus an equal parity drive per node, we’re now talking about 6 drives (3 nodes × 2 drives) with the total usable capacity of one. If one instead uses fewer drives and links the nodes to one or two (remote) data drives, I/O and latency become an issue, and you’ve effectively introduced more points of failure than before.
Never mind the massive increase in initial and running costs as well as the administrative headaches; this isn’t worth it for basically anyone.
I’ve been tempted by Tailscale a few times before, but I don’t want to depend on their proprietary clients and control server. The latter could be solved by selfhosting Headscale, but at this point I figure that going for a basic Wireguard setup is probably easier to maintain.
I’d like to have a look at your rules setup. I’m especially curious if/how you handled the case of the commercial VPN Wireguard tunnel(s) on your exit node(s) going down, which, depending on the setup, may send requests meant for the commercial VPN through your VPS exit node instead.
Personally, I ended up with two Wireguard containers in the target LAN, a wireguard-server and a **wireguard-client** container.
They both share a docker network with a specific subnet {DOCKER_SUBNET}, and wireguard-client has a static IP {WG_CLIENT_IP} in that subnet.
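For illustration, a minimal sketch of that layout using the docker CLI; the linuxserver/wireguard image and the concrete subnet/IP values are just example assumptions, substitute your own:
docker network create --subnet=172.20.0.0/24 wg-net                 # {DOCKER_SUBNET}
docker run -d --name wireguard-server --cap-add NET_ADMIN \
  --sysctl net.ipv4.conf.all.src_valid_mark=1 \
  --network wg-net -p 51820:51820/udp \
  -v "$PWD/wg-server:/config" linuxserver/wireguard
docker run -d --name wireguard-client --cap-add NET_ADMIN \
  --sysctl net.ipv4.conf.all.src_valid_mark=1 \
  --network wg-net --ip 172.20.0.50 \
  -v "$PWD/wg-client:/config" linuxserver/wireguard                 # {WG_CLIENT_IP}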
The wireguard-client has a slightly altered standard config to establish a tunnel to an external endpoint, a commercial VPN in this case:
[Interface]
PrivateKey = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Address = XXXXXXXXXXXXXXXXXXX
PostUp = iptables -t nat -A POSTROUTING -o wg+ -j MASQUERADE
PreDown = iptables -t nat -D POSTROUTING -o wg+ -j MASQUERADE
PostUp = iptables -I OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT && ip6tables -I OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT
PreDown = iptables -D OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT && ip6tables -D OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT
[Peer]
PublicKey = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
AllowedIPs = 0.0.0.0/0,::0/0
Endpoint = XXXXXXXXXXXXXXXXXXXX
where
PostUp = iptables -t nat -A POSTROUTING -o wg+ -j MASQUERADE
PreDown = iptables -t nat -D POSTROUTING -o wg+ -j MASQUERADE
are responsible for properly routing traffic coming in from outside the container and
PostUp = iptables -I OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT && ip6tables -I OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT
PreDown = iptables -D OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT && ip6tables -D OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT
is your standard kill-switch meant to block traffic going out of any network interface except the tunnel interface in the event of the tunnel going down.
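A quick way to confirm the kill switch is actually in place (container name and mark value assume the sketch above and wg-quick’s default fwmark):
docker exec wireguard-client iptables -S OUTPUT
# expect a line like:
# -A OUTPUT ! -o wg0 -m mark ! --mark 0xca6c -m addrtype ! --dst-type LOCAL -j REJECT --reject-with icmp-port-unreachable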
The wireguard-server container has these PostUps and PostDowns:
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -A FORWARD -o %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
default rules that come with the template and allow for routing packets through the server tunnel
PostUp = wg set wg0 fwmark 51820
traffic going out of the tunnel interface gets marked
PostUp = ip -4 route add 0.0.0.0/0 via {WG_CLIENT_IP} table 51820
add a rule to routing table 51820 for routing all packets through the wireguard-client container
PostUp = ip -4 rule add not fwmark 51820 table 51820
packets not marked with 51820 should use routing table 51820
PostUp = ip -4 rule add table main suppress_prefixlength 0
respect manual rules added to main routing table
PostUp = ip route add {LAN_SUBNET} via {DOCKER_SUBNET_GATEWAY_IP} dev eth0
route packets with a destination in {LAN_SUBNET} to the actual {LAN_SUBNET} of the host
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -D FORWARD -o %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE; ip route del {LAN_SUBNET} via {DOCKER_SUBNET_GATEWAY_IP} dev eth0
delete those rules after the tunnel goes down
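With the container names assumed above, the resulting policy routing can be sanity-checked like this (rule priorities are whatever iproute2 assigned, shown here as the typical defaults):
docker exec wireguard-server ip -4 rule show
# 32764: from all lookup main suppress_prefixlength 0
# 32765: not from all fwmark 0xca6c lookup 51820
docker exec wireguard-server ip -4 route show table 51820
# default via {WG_CLIENT_IP} dev eth0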
PostUp = iptables -I OUTPUT ! -o %i -m mark ! --mark 0xca6c -m addrtype ! --dst-type LOCAL -j REJECT && ip6tables -I OUTPUT ! -o %i -m mark ! --mark 0xca6c -m addrtype ! --dst-type LOCAL -j REJECT
PreDown = iptables -D OUTPUT ! -o %i -m mark ! --mark 0xca6c -m addrtype ! --dst-type LOCAL -j REJECT && ip6tables -D OUTPUT ! -o %i -m mark ! --mark 0xca6c -m addrtype ! --dst-type LOCAL -j REJECT
Basically the same kill-switch as in wireguard-client, but with the mark manually substituted, since the command it relied on didn’t work in my server container for some reason, and AFAIK the mark doesn’t actually change.
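For reference, 0xca6c is simply the hexadecimal form of wg-quick’s default fwmark 51820, which is why hardcoding it works:
printf '0x%x\n' 51820   # prints 0xca6c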
Now do I actually need the kill-switch in wireguard-server? Is the kill-switch in wireguard-client sufficient? I’m not even sure anymore.
Oh I’m fully aware. I personally don’t care, but one could add a capable VPS and deploy the Wireguard Host Container + two Client Containers, one for the LAN and one for the commercial VPN (like so), if the internet connection of the LAN in question isn’t sufficient.
Oh, neat! Never noticed that option in the Wireguard app before. That’s very helpful already. Regarding your opnsense setup:
I’ve dabbled in some (simple) routing before, but I’m far from anything one could call competent in that regard, and even if I’d read up properly before writing my own routes/rules, I probably still wouldn’t trust that I hadn’t forgotten something to e.g. prevent IP/DNS leaks.
I’m mainly relying on Docker and was hoping for pointers on how to configure a Wireguard host container to route only internet traffic through another Wireguard client container.
I found this example, which is pretty close to my ideal setup. I’ll read up on that.
Indeed it does. I was talking about adding a checkbox labeled “Only transfer blocked users” instead of having to click through some menus.
Sure, the code is completely client-side, simply clone it. If you run into CORS problems due to the file:// scheme Origin of opening a local file, simply host it with a temporary local server, e.g. python -m http.server.
This is due to the two ways most instances validate Cross-Origin requests: file:// URLs will result in a null or file:// Origin, which can’t be authorized via the second option, hence the need to sometimes host the application via a (local) web server.
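For example, a minimal way to serve the cloned app locally (the repository path is a placeholder, the port is arbitrary):
git clone <repository-url> && cd <repository-directory>
python3 -m http.server 8000
# then open http://localhost:8000 instead of the file:// path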
The whole point of this being a web app is to make it as easy as possible for the user to download/modify/transfer their user data. LASIM is a traditional app the user has to download and install, similar to the script this web app was developed to replace, which was too difficult for some users to use.
The import functionality targeted by this API is additive, and my app features a built-in editor to add, modify, or remove information as the user sees fit. To achieve your stated goal, you’d have to remove everything except the blocked_users entries before importing, which my app supports; I added a wiki entry explaining the workflow in more detail.
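If you’d rather script that step than use the built-in editor, one possible way to strip an export down to only the blocked users with jq (assuming an export file myFedditUserData.json and the blocked_users key from the Lemmy export format):
jq '{blocked_users: .blocked_users}' myFedditUserData.json > blockedUsersOnly.json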
I may add options to modify the exported data in some ways via a simple checkbox in the future, but I wouldn’t count on it. I’m always open to pull requests!
The export/import functionality is, yes. This implementation uses the same API endpoints, but here’s the main reason this exists:
An instance I was on slowly died, starting with the frontend (the default web UI). At least at the time, no client implemented the export/import functionality, so I wrote a simple Bash script to download the user data as long as the backend still worked. Running a script can still be a challenge for some users, so I wrote a web application with the same functionality. It’s a bit redundant for instances that are working normally, but it can be of use if the frontend isn’t available for some reason.
At this point, I’m reposting two simple ways to move your user (settings and subscribed/blocked communities) from one Lemmy instance to another, for example from feddit.de to feddit.org, taken from my original post on feddit.de/c/main (https://alexandrite.app/feddit.de/post/11325409).
Way 1, if you still have a browser with an active session on feddit.de:
Since version 0.19, Lemmy offers a function to export and import your user data. Normally this is done via a button in the settings of the web interface, which currently doesn’t work on feddit.de.
But the underlying API call still works as long as you’re still logged in to feddit.de in a browser:
https://feddit.de/api/v3/user/export_settings
This works with any instance >=0.19, you just have to replace the “feddit.de” in the URL. And if the web interface works, you can also use the export button in the settings.
Way 2:
For those who no longer have an open browser session, here is a small but functional Bash script that creates a myFedditUserData.json in the directory it is run from, which can then be imported on other instances.
Requirements:
jq (e.g. sudo apt install -y jq)
Instructions:
1. Save the script with a .sh extension, e.g. getMyFedditUserData.sh
2. Run chmod +x getMyFedditUserData.sh (adjust the name if necessary)
3. Enter ./getMyFedditUserData.sh in the terminal
4. myFedditUserData.json is created in the working directory
Note: The script is quite simple; it requests a JWT bearer token and passes it as a header with the GET call to https://feddit.de/api/v3/user/export_settings. If you don’t have Linux/Mac OS X available, you can recreate the flow with other tools.
The script:
#!/bin/bash
# Basic login script for Lemmy API
# CHANGE THESE VALUES
my_instance="https://feddit.de" # e.g. https://feddit.nl
my_username="" # e.g. freamon
my_password="" # e.g. hunter2
########################################################
# Lemmy API version
API="api/v3"
########################################################
# Turn off history substitution (avoid errors with ! usage)
set +H
########################################################
# Login
login() {
end_point="user/login"
json_data="{\"username_or_email\":\"$my_username\",\"password\":\"$my_password\"}"
url="$my_instance/$API/$end_point"
curl -H "Content-Type: application/json" -d "$json_data" "$url"
}
# Get userdata as JSON
getUserData() {
end_point="user/export_settings"
url="$my_instance/$API/$end_point"
curl -H "Authorization: Bearer ${JWT}" "$url"
}
JWT=$(login | jq -r '.jwt')
printf 'JWT Token: %s\n' "$JWT"
getUserData | jq > myFedditUserData.json
@elvith@feddit.org has also rebuilt my script in PowerShell, which works on Windows without WSL: https://gist.github.com/elvith-de/89107061661e001df659d7a7d413092b
# CHANGE THESE VALUES
$my_instance="https://feddit.de" # e.g. https://feddit.nl
$target_file = "C:\Temp\export.json"
########################################################
#Ask user for username and password
$credentials = Get-Credential -Message "Logindata for $my_instance" -Title "Login"
$my_username= $credentials.UserName
$my_password= $credentials.GetNetworkCredential().Password
# Lemmy API version
$API="api/v3"
# Login
function Get-AuthToken() {
$end_point="user/login"
$json_data= @{
"username_or_email" = $my_username;
"password" = $my_password
} | ConvertTo-Json
$url="$my_instance/$API/$end_point"
(Invoke-RestMethod -Headers @{"Content-Type" = "application/json"} -Body $json_data -Method Post -Uri $url).JWT
}
# Get userdata as JSON
function Get-UserData() {
$end_point="user/export_settings"
$url="$my_instance/$API/$end_point"
Invoke-RestMethod -Headers @{"Authorization"="Bearer $($JWT)"} -Method Get -Uri $url
}
$JWT= Get-AuthToken
Write-Host "Got JWT Token: $JWT"
Write-Host "Exporting data to $target_file"
Get-UserData | ConvertTo-Json | Out-File -FilePath $target_file
I prefer Lemmy for:
I prefer Reddit for:
Lemmy’s got some problems, and I can’t stand the inter-instance drama. Also, due to the decentralized nature, some instances can’t keep up or the admins no longer care, so whole communities can essentially be held hostage or simply die until a toolset becomes available to move a community from one instance to another (and propagate the change properly to the Fediverse).
The problem isn’t necessarily “stuff not sent over vpn isn’t encrypted”. Everyone uses TLS.
Never said it was. It’s a noteworthy detail, though, since some (rare) unencrypted HTTP traffic, as well as LAN traffic in general, is a bit more concerning content-wise than your standard TLS traffic, apart from the IP.
For this to be practical you first need a botnet of compromised home routers
This is more of a café/hotel Wi-Fi thing IMO. While it may take some effort to get control over some shitty IoT device in your typical home environment, pretty much every script kiddie can at least spoof the DHCP server in an open network.
Interesting read.
So, in short:
DHCP option 121 is still used for a reason, especially in business networks. At least on Linux, using network namespaces will fix this. Firewall mitigations can also work, but create other (very theoretical) attack surfaces.
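As a rough sketch of that namespace approach (following the common WireGuard-in-a-netns pattern; interface name, address, and config path are placeholders):
ip netns add vpn
ip link add wg0 type wireguard              # created in the init namespace first...
ip link set wg0 netns vpn                   # ...so its UDP socket keeps using the real uplink
ip netns exec vpn wg setconf wg0 /etc/wireguard/wg0.conf   # plain wg config, no wg-quick extras
ip -n vpn addr add 10.0.0.2/32 dev wg0
ip -n vpn link set lo up
ip -n vpn link set wg0 up
ip -n vpn route add default dev wg0
ip netns exec vpn curl https://example.com  # apps in the namespace can only leave via wg0; option 121 routes never reach them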
Buying a domain. There might be some free services that, like DuckDNS in its early days, work reliably for now, but IMHO they’re not worth the potential headaches.