L2TP and Tunneldigger
This page is still in draft status.
Help improve it!
These are the first tentative attempts to switch from fastd to L2TP, or to run both in parallel. This is only a brief summary of what has been done so far, so that no work gets done twice. Proper documentation will follow if it works out.
Advantages:
* Less CPU load on the gateway servers
Disadvantages:
* Gateways and firmware have to be converted
Status (19 March): The tunnel broker is running on all 4 gateways in the Hasberge and Hasberge-Süd hoods.
That means you can switch over to l2tp completely, since all gateways can be reached directly.
On the router
A firmware image with Tunneldigger for testing is available at: http://10.50.60.3/firmware/ or http://78.47.36.148/firmware/
// Tested on a WR841N v9 and it works. After flashing, SSH into the router and run "l2tunnel conf", then "/etc/init.d/tunneldigger start" and "l2tunnel on", and the tunnel is up (tested in Fürth). The web UI says (correctly, strictly speaking) that no VPN tunnel is established. My suggestion: add the three commands to the web UI and show there whether an l2tp tunnel is up.
Now create the config:
l2tunnel conf
and start it (the router does this automatically on the next reboot):
/etc/init.d/tunneldigger start
Everything is controlled with the l2tunnel script:
Use l2tp only:
l2tunnel on
Use fastd only:
l2tunnel off
Use fastd & l2tp at the same time (not really tested yet and not included in the software mentioned above!):
l2tunnel dual
Have fun with it!
When the router restarts, both fastd and tunneldigger are running. Everything stays as before, except that both tunnels are now up.
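To check on the router whether both tunnels are actually up, something like this can be used over SSH (just a sketch; the exact l2tp interface names are firmware-specific and an assumption here):
ps | grep fastd           # is the fastd process still running?
ip link | grep -i l2tp    # does tunneldigger have an L2TP interface up?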
How to build Tunneldigger into the firmware can be found here
On the gateway
Now we also need gateways running the broker. From the documentation at https://tunneldigger.readthedocs.org/en/latest/server.html we have pulled out the important lines:
Fetch the required packages:
sudo apt-get install iproute bridge-utils libnetfilter-conntrack3 python-dev libevent-dev ebtables python-virtualenv
Installation:
mkdir /srv/tunneldigger
cd /srv/tunneldigger
virtualenv env_tunneldigger
cd /srv/tunneldigger
git clone https://github.com/wlanslovenija/tunneldigger.git
. env_tunneldigger/bin/activate
pip install -r tunneldigger/broker/requirements.txt
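Before wiring up systemd (see below), the broker can also be started by hand for a first test; this assumes one of the configuration files from the next section already exists:
cd /srv/tunneldigger/tunneldigger
/srv/tunneldigger/env_tunneldigger/bin/python -m broker.main /srv/tunneldigger/tunneldigger/broker/l2tp_broker-has.cfg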
Configuration:
The first tests showed that batman does not like interfaces that come and go. So we cannot attach the L2 tunnels directly to batman. A pity, because otherwise no_rebroadcast could be enabled.
The Slovenians write:
Example hook scripts present in the scripts/ subdirectory are set up to create one bridge device per MTU and attach L2TP interfaces to these bridges. They also configure a default IP address to newly created tunnels, set up ebtables to isolate bridge ports and update the routing policy via ip rule so traffic from these interfaces is routed via the mesh routing table. Each tunnel established with the broker will create its own interface. Because we are using OLSRv1, we cannot dynamically add interfaces to it, so we group tunnel interfaces into bridges. We could put all tunnel interfaces into the same bridge, but this would actually create a performance problem. Different tunnels can have different MTU values -- but there is only one MTU value for the bridge, the minimum of all interfaces that are attached to that bridge. To avoid this problem, we create multiple bridges, one for each MTU value -- this is what the example scripts do. We also configure some ip policy rules to ensure that traffic coming in from the bridges gets routed via our mesh routing table and not the main one (see bridge_functions.sh). Traffic between bridge ports is not forwarded (this is achieved via ebtables), otherwise the routing daemons at the nodes would think that all of them are directly connected -- which would cause them to incorrectly see a very large 1-hop neighbourhood. This file also contains broker-side IP configuration for the bridge which should really be changed.
They had similar problems with OLSR. So we adapted the configuration along the lines of their suggestions.
For every MTU that shows up, a bridge is created, and all interfaces with that MTU are then put into it.
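Roughly, the resulting layering looks like this (only a sketch with made-up names; MTU 1364 and the tunnel interface name are just examples, the real work is done by the hook scripts below):
# One bridge per MTU; the bridge is the stable interface that batman-adv sees
brctl addbr l2-br-has1364
batctl -m bat4 if add l2-br-has1364
# Tunnel interfaces join and leave the bridge as clients connect and disconnect
brctl addif l2-br-has1364 l2tpXXXX
brctl delif l2-br-has1364 l2tpXXXX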
Example config for the has and has-sued hoods on Max's gateway (fff-has 78.47.36.148)
The l2tp_broker-xxxxx.cfg files go into /srv/tunneldigger/tunneldigger/broker/
l2tp_broker-has.cfg:
[broker]
; IP address the broker will listen and accept tunnels on
address=78.47.36.148
; Ports where the broker will listen on
port=20004
; Interface with that IP address
interface=eth0
; Maximum number of tunnels that will be allowed by the broker
max_tunnels=1024
; Tunnel port base
port_base=24000
; Tunnel id base
tunnel_id_base=4000
; Namespace (for running multiple brokers); note that you must also
; configure disjunct ports, and tunnel identifiers in order for
; namespacing to work
namespace=has
; check if all kernel module are loaded. Do not check for built-ins.
check_modules=true

[log]
; Log filename
filename=tunneldigger-broker.log
; Verbosity
verbosity=DEBUG
; Should IP addresses be logged or not
log_ip_addresses=false

[hooks]
; Arguments to the session.{up,pre-down,down} hooks are as follows:
;
;   <tunnel_id> <session_id> <interface> <mtu> <endpoint_ip> <endpoint_port> <local_port>
;
; Arguments to the session.mtu-changed hook are as follows:
;
;   <tunnel_id> <session_id> <interface> <old_mtu> <new_mtu>
;
; Called after the tunnel interface goes up
session.up=/srv/tunneldigger/tunneldigger/broker/scripts/has.up
; Called just before the tunnel interface goes down
session.pre-down=/srv/tunneldigger/tunneldigger/broker/scripts/has.down
; Called after the tunnel interface goes down
session.down=
; Called after the tunnel MTU gets changed because of PMTU discovery
session.mtu-changed=/srv/tunneldigger/tunneldigger/broker/scripts/mtu_changed-has.sh
l2tp_broker-hassued.cfg:
[broker]
; IP address the broker will listen and accept tunnels on
address=78.47.36.148
; Ports where the broker will listen on
port=20005
; Interface with that IP address
interface=eth0
; Maximum number of tunnels that will be allowed by the broker
max_tunnels=1024
; Tunnel port base
port_base=21000
; Tunnel id base
tunnel_id_base=1000
; Namespace (for running multiple brokers); note that you must also
; configure disjunct ports, and tunnel identifiers in order for
; namespacing to work
namespace=hassued
; check if all kernel module are loaded. Do not check for built-ins.
check_modules=true

[log]
; Log filename
filename=tunneldigger-broker.log
; Verbosity
verbosity=DEBUG
; Should IP addresses be logged or not
log_ip_addresses=false

[hooks]
; Arguments to the session.{up,pre-down,down} hooks are as follows:
;
;   <tunnel_id> <session_id> <interface> <mtu> <endpoint_ip> <endpoint_port> <local_port>
;
; Arguments to the session.mtu-changed hook are as follows:
;
;   <tunnel_id> <session_id> <interface> <old_mtu> <new_mtu>
;
; Called after the tunnel interface goes up
session.up=/srv/tunneldigger/tunneldigger/broker/scripts/has-sued.up
; Called just before the tunnel interface goes down
session.pre-down=/srv/tunneldigger/tunneldigger/broker/scripts/has-sued.down
; Called after the tunnel interface goes down
session.down=
; Called after the tunnel MTU gets changed because of PMTU discovery
session.mtu-changed=/srv/tunneldigger/tunneldigger/broker/scripts/mtu_changed-has-sued.sh
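Whether both brokers are actually listening on their control ports (20004 and 20005 in the configs above) can be checked later with, for example:
ss -ulpn | grep -E ':2000[45]'    # tunneldigger uses UDP for its control traffic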
Scripts in scripts/
bridge_functions.sh:
ensure_policy()
{
    ip rule del $*
    ip rule add $*
}

ensure_bridge()
{
    # Generate the MAC from the MTU (change one digit on each GW)
    mac=$(echo $MTU | sed -E "s/^(..)/0a:00:03:01:\1:/")
    local brname="$1"
    brctl addbr $brname 2>/dev/null

    if [[ "$?" == "0" ]]; then
        # Bridge did not exist before, we have to initialize it
        ip link set dev $brname address $mac up
        # Add the new bridge to batman
        batctl -m $BATDEV if add $brname
        # Disable forwarding between bridge ports
        ebtables -A FORWARD --logical-in $brname -j DROP
    fi
}
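Note that ensure_bridge takes the bridge name as its argument and reads MTU and BATDEV from the caller's environment. A hypothetical stand-alone call (normally this is done by the hook scripts below) would look like this:
MTU=1364
BATDEV=bat4
. scripts/bridge_functions.sh
ensure_bridge l2-br-has${MTU}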
has.up:
#!/bin/bash
TUNNEL_ID="$1"
INTERFACE="$3"
MTU="$4"
BATDEV=bat4

. scripts/bridge_functions.sh

# Set the interface to UP state
ip link set dev $INTERFACE up mtu $MTU

# Add the interface to our bridge
ensure_bridge l2-br-has${MTU}
brctl addif l2-br-has${MTU} $INTERFACE
has-sued.up:
#!/bin/bash
TUNNEL_ID="$1"
INTERFACE="$3"
MTU="$4"
BATDEV=bat7

. scripts/bridge_functions.sh

# Set the interface to UP state
ip link set dev $INTERFACE up mtu $MTU

# Add the interface to our bridge
ensure_bridge l2-br-sued${MTU}
brctl addif l2-br-sued${MTU} $INTERFACE
mtu_changed-has.sh:
#!/bin/bash
TUNNEL_ID="$1"
INTERFACE="$3"
OLD_MTU="$4"
NEW_MTU="$5"
BATDEV=bat4
MTU=$NEW_MTU

. scripts/bridge_functions.sh

# Remove interface from old bridge
brctl delif l2-br-has${OLD_MTU} $INTERFACE

# Change interface MTU
ip link set dev $INTERFACE mtu $NEW_MTU

# Add interface to new bridge
ensure_bridge l2-br-has${NEW_MTU}
brctl addif l2-br-has${NEW_MTU} $INTERFACE
mtu_changed-has-sued.sh:
#!/bin/bash
TUNNEL_ID="$1"
INTERFACE="$3"
OLD_MTU="$4"
NEW_MTU="$5"
BATDEV=bat7
MTU=$NEW_MTU

. scripts/bridge_functions.sh

# Remove interface from old bridge
brctl delif l2-br-sued${OLD_MTU} $INTERFACE

# Change interface MTU
ip link set dev $INTERFACE mtu $NEW_MTU

# Add interface to new bridge
ensure_bridge l2-br-sued${NEW_MTU}
brctl addif l2-br-sued${NEW_MTU} $INTERFACE
has.down:
#!/bin/bash
TUNNEL_ID="$1"
INTERFACE="$3"
MTU="$4"

# Remove the interface from our bridge
brctl delif l2-br-has${MTU} $INTERFACE
has-sued.down:
#!/bin/bash
TUNNEL_ID="$1"
INTERFACE="$3"
MTU="$4"

# Remove the interface from our bridge
brctl delif l2-br-sued${MTU} $INTERFACE
Now add the required kernel modules
vi /etc/modules
batman-adv
loop
l2tp_core
l2tp_eth
l2tp_netlink
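To load the modules right away without a reboot (modprobe resolves the l2tp dependencies on its own, listing them all is just being explicit):
modprobe batman-adv
modprobe loop
modprobe l2tp_core
modprobe l2tp_eth
modprobe l2tp_netlink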
Setting up the service start:
Create the units in /etc/systemd/system/
tunneldigger-broker-sued.service:
[Unit]
Description=tunneldigger tunnelling network daemon using l2tpv3
After=network.target auditd.service

[Service]
Type=simple
WorkingDirectory=/srv/tunneldigger/tunneldigger
ExecStart=/srv/tunneldigger/env_tunneldigger/bin/python -m broker.main /srv/tunneldigger/tunneldigger/broker/l2tp_broker-hassued.cfg
KillMode=process
Restart=on-failure

[Install]
WantedBy=multi-user.target
tunneldigger-broker-has.service:
[Unit]
Description=tunneldigger tunnelling network daemon using l2tpv3
After=network.target auditd.service

[Service]
Type=simple
WorkingDirectory=/srv/tunneldigger/tunneldigger
ExecStart=/srv/tunneldigger/env_tunneldigger/bin/python -m broker.main /srv/tunneldigger/tunneldigger/broker/l2tp_broker-has.cfg
KillMode=process
Restart=on-failure

[Install]
WantedBy=multi-user.target
Now register the services, enable them at boot, and start them manually:
systemctl daemon-reload
systemctl enable tunneldigger-broker-sued.service tunneldigger-broker-has.service
systemctl start tunneldigger-broker-sued.service tunneldigger-broker-has.service
This can be repeated several times with different configurations for different hoods.
über "systemctl status tunneldigger-broker-xxx.service" kann man den Status erfragen.