02 Aug 2021 - tsp
Last update 04 Aug 2021
19 mins
Another common problem: One has different networks or nodes at different locations and no ability to establish a direct physical wired or wireless link, but wants to join the networks. As long as there is any kind of network connection the solution is simple: A virtual private network (VPN). The basic idea is to tunnel the internal traffic over any external or public network like the internet and bridge the gap between hosts and networks that way. This might be done for many different reasons.
There are many different approaches to this problem. Just to mention a few of them:

- The generic tunnel interface (gif)
- Generic routing encapsulation (gre)
- A VPN daemon that is specifically designed for this use such as tinc
- ipsec tunnels

Note that all settings and services described below have to be performed on a router machine that is passed by all traffic - or the routing of the default gateway has to be modified (for example by running OLSR or interior BGP to pick up announcements by the VPN endpoint machine) so that clients which do not speak a routing protocol but only use the default route can still reach the tunneled networks. For small networks: Run them on the gateway machine. For large networks: You should already know how to do this without reading this blog post …
The presented unencrypted solutions - generic routing encapsulation and the generic tunnel interface - are mostly not found in private home deployments or at small companies but usually at larger telecommunication systems, internet network operators, internet exchange points and mobile network operators - and, in the case of gif, also as backhaul for IPv6 tunnel brokers. In these scenarios they're also often used to carry protocols such as MPLS that in turn carry stuff like telephony protocols or IP packets.
An unencrypted static tunnel is set up pretty simply - it requires two endpoints with static public IP addresses and the choice between gif and gre. The main difference lies in the type of packets that can be encapsulated: The generic tunnel interface is capable of encapsulating layer 2 frames while generic routing encapsulation only encapsulates IP traffic - but support for GRE is more widespread.

Basically a GRE interface only requires some basic configuration. Let's assume the following:

- Router A is reachable at the public IP address 128.66.0.1, uses 128.66.1.1 on the tunnel and serves the internal network 128.66.1.0/24
- Router B is reachable at the public IP address 128.66.0.2, uses 128.66.2.1 on the tunnel and serves the internal network 128.66.2.0/24
To establish the tunnel one just has to setup the GRE interface on both ends.
ifconfig gre0 create
ifconfig gre0 inet 128.66.1.1 128.66.2.1
ifconfig gre0 inet tunnel 128.66.0.1 128.66.0.2
ifconfig gre0 up
One can as usual persist the settings in /etc/rc.conf:
cloned_interfaces="gre0"
ifconfig_gre0="inet 128.66.1.1 128.66.2.1 tunnel 128.66.0.1 128.66.0.2 up"
On the other end (router B) the same has to be done with mirrored configuration:
ifconfig gre0 create
ifconfig gre0 inet 128.66.2.1 128.66.1.1
ifconfig gre0 inet tunnel 128.66.0.2 128.66.0.1
ifconfig gre0 up
Again this can be persisted in /etc/rc.conf:
cloned_interfaces="gre0"
ifconfig_gre0="inet 128.66.2.1 128.66.1.1 tunnel 128.66.0.2 128.66.0.1 up"
In case one wants to route traffic one might either use a routing daemon such as olsrd or even bgpd - which is highly advisable - or configure the routes statically, which only makes sense for really small scale deployments.
To establish the static route on router A:
route add -net 128.66.2.0/24 128.66.2.1
And on router B:
route add -net 128.66.1.0/24 128.66.1.1
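One can then verify that the new route really points at the tunnel, for example on router A:
netstat -rn | grep 128.66.2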
The gif interface works similarly to the gre interface. Assuming the same configuration as above the steps are pretty similar on both ends. Again first for router A:
ifconfig gif0 create
ifconfig gif0 inet 128.66.1.1 128.66.2.1
ifconfig gif0 inet tunnel 128.66.0.1 128.66.0.2
ifconfig gif0 up
This also can be persisted in /etc/rc.conf:
cloned_interfaces="gif0"
ifconfig_gif0="inet 128.66.1.1 128.66.2.1 tunnel 128.66.0.1 128.66.0.2 up"
And on router B:
ifconfig gif0 create
ifconfig gif0 inet 128.66.2.1 128.66.1.1
ifconfig gif0 inet tunnel 128.66.0.2 128.66.0.1
ifconfig gif0 up
This also can be persisted in /etc/rc.conf:
cloned_interfaces="gif0"
ifconfig_gif0="inet 128.66.2.1 128.66.1.1 tunnel 128.66.0.2 128.66.0.1 up"
Again one can either use a routing daemon such as olsrd or bgpd or configure the routes statically for really small scale deployments.
To establish the static route on router A:
route add -net 128.66.2.0/24 128.66.2.1
And on router B:
route add -net 128.66.1.0/24 128.66.1.1
The main difference between gre and gif is that gif is capable of carrying layer 2 frames - though it can carry either IPv4 or IPv6 but not both at the same time - while gre carries both IP families but no layer 2 frames. The generic tunnel interface is also the one often seen at IPv6 tunnel brokers.
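To illustrate the layer 2 capability: on FreeBSD a gif tunnel can be added to an if_bridge together with a LAN interface, which makes the bridge carry ethernet frames over the tunnel using EtherIP encapsulation. This is only a minimal sketch using the addresses from above - em0 is just an assumed LAN NIC:
ifconfig gif0 create
ifconfig gif0 tunnel 128.66.0.1 128.66.0.2
ifconfig gif0 up
ifconfig bridge0 create
ifconfig bridge0 addm gif0 addm em0 up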
Since tinc is my favorite VPN daemon I'll base my description on it. Tinc is somewhat special in the way it works and allows easy bridging of full networks as well as acting as backhaul for single clients. The basic design pattern of tinc is a mesh VPN - each node connects to each other node if possible. Of course a VPN might also not be fully meshed - in case one doesn't want to synchronize keysets to all clients, has different administrative domains, etc. If one configures a routing protocol later on this is no problem as long as at least one known path between each pair of nodes exists.
Tinc can be installed on many different platforms, including FreeBSD (security/tinc). On FreeBSD installation again is pretty simple. Either from packages:
pkg install security/tinc
or from ports:
cd /usr/ports/security/tinc
make install clean
Basically each VPN mesh that one wants to configure gets its own configuration file and configuration directory. These are stored inside /usr/local/etc/tinc/ under a locally unique identification of the VPN network that one's configuring. For this example I'll use the name examplenet - thus configuration files will be at /usr/local/etc/tinc/examplenet.
The main configuration file is tinc.conf - in this example /usr/local/etc/tinc/examplenet/tinc.conf:
Name = anyexamplenodenamea
Mode = switch
DecrementTTL = yes
Device = /dev/tap1
DeviceType = tap
Forwarding = internal
ConnectTo = anyexamplenodenameb
As one can see the node gets assigned a name - this will also be used as the filename for its key files and has to be unique in the given mesh. In this case the local node name is anyexamplenodenamea, the name of the only configured reachable remote will be anyexamplenodenameb (more on how to configure that later on).
The mode has been set to switch to allow all ethernet frames to be passed, in contrast to router mode. This doesn't really matter here since FreeBSD's native routing capabilities will be used - one can then imagine the tinc instances just substituting a switch that all routers are attached to. In the example tinc has also been configured to decrement the TTL, which is usually helpful to prevent traffic loops.
More interestingly there is a tap device configured and fixed to /dev/tap1. This device will later be configured using ifconfig inside startup scripts or cloned in rc.conf depending on the setup.
A list of optional ConnectTo statements tells tinc which nodes to connect to - the required IP addresses and keys are contained in host description files that are later kept in /usr/local/etc/tinc/examplenet/hosts.
One should create a host configuration for the local node. To do this first create the hosts directory
mkdir -p /usr/local/etc/tinc/examplenet/hosts/
and then edit /usr/local/etc/tinc/examplenet/hosts/anyexamplenodenamea (use the same node name as in tinc.conf). This includes some basic settings:
Address = 128.66.10.20
Compression = 9
Port = 656
Subnet = 128.66.0.1/32
Compression and port specifications are optional - compression just enables traffic compression which is a trade-off between processing power and bandwidth. The port specification selects the port that will be used by this node - it's a good idea to use the same port on all nodes though.
The Subnet declaration is more crucial. It tells tinc for which subnet this node is responsible. In the example above I've set it to a single IP address by specifying a prefix length of 32 bits.
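In case a node routes a whole network into the mesh one would announce the corresponding prefix instead - for example for the 128.66.1.0/24 network used in the earlier examples:
Subnet = 128.66.1.0/24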
The most important configuration is Address which is the publicly reachable IP address of this node. Other nodes will try to use this IP whenever a ConnectTo statement is found in tinc.conf. Of course only nodes on static IP addresses can supply this information - mobile nodes have to actively dial into the static ones.
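As a hypothetical example a roaming notebook anyexamplenodenamec would get a host file without any Address line - such a node cannot be dialed into and only ever initiates connections itself:
Compression = 9
Port = 656
Subnet = 128.66.0.3/32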
There are two additional really useful files: tinc-up and tinc-down. These are simple shell scripts that are executed whenever the tinc daemon starts up or is stopped. In case one wants to configure the interface in tinc-up one could create a simple /usr/local/etc/tinc/examplenet/tinc-up with the executable bit set (chmod 755 /usr/local/etc/tinc/examplenet/tinc-up):
#!/bin/sh
ifconfig tap1 create
ifconfig tap1 inet 128.66.0.1
# One might even configure some static routes here:
# route add -net 128.66.2.0/24 128.66.0.2
The counterpart is the tinc-down script that should then perform the cleanup:
#!/bin/sh
route delete -net 128.66.2.0/24 128.66.0.2
ifconfig tap1 destroy
Before one can really launch the VPN daemon one has to create the keyset by launching tinc in key creation mode:
tincd -n examplenet -K
This will create the private keyfile /usr/local/etc/tinc/examplenet/rsa_key.priv that contains the node's private key - this should never ever be shared or leave the host except for backup purposes. It will also append the generated public key to /usr/local/etc/tinc/examplenet/hosts/anyexamplenodenamea (the filename is generated from the node name set in tinc.conf).
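The host file for anyexamplenodenamea will then look roughly like the following - the key block shown here is of course just a shortened placeholder:
Address = 128.66.10.20
Compression = 9
Port = 656
Subnet = 128.66.0.1/32
-----BEGIN RSA PUBLIC KEY-----
MIIBCgKCAQEA...
-----END RSA PUBLIC KEY-----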
The last step that has to be taken on the node is to enable tinc in /etc/rc.conf and tell the startup script which network to launch:
tincd_enable="YES"
tincd_cfg="examplenet"
The same steps have to be taken on all other nodes that should join the VPN - and their respective hosts files have to be copied to all other nodes that they should connect to. Then one can list external nodes that the daemon should connect to via ConnectTo statements on the given machines. Personally I've built a small shell script to automate that process: It periodically fetches a set of host files from a central location, verifies a GPG signature of the file, extracts the host files, scans them for Address lines and - if they're present - lists them with ConnectTo in the tinc.conf files.
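Such a script might look roughly like the following sketch - the URL, the signed tarball layout and the locally trusted GPG key are of course just assumptions for illustration:
#!/bin/sh
# Hypothetical host file synchronization for the examplenet mesh
NET=examplenet
BASE=/usr/local/etc/tinc/${NET}
URL=https://example.com/tinc/hosts.tar.gz

# Fetch the host file archive and its detached signature
fetch -q -o /tmp/hosts.tar.gz "${URL}" || exit 1
fetch -q -o /tmp/hosts.tar.gz.sig "${URL}.sig" || exit 1
# Refuse to use the archive unless the signature verifies
gpg --verify /tmp/hosts.tar.gz.sig /tmp/hosts.tar.gz || exit 1

tar -x -f /tmp/hosts.tar.gz -C "${BASE}/hosts"

# Rebuild the ConnectTo list from all host files that carry an Address line
grep -v '^ConnectTo' "${BASE}/tinc.conf" > "${BASE}/tinc.conf.new"
for f in "${BASE}/hosts/"*; do
    node=$(basename "${f}")
    [ "${node}" = "anyexamplenodenamea" ] && continue  # skip the local node
    grep -q '^Address' "${f}" && echo "ConnectTo = ${node}" >> "${BASE}/tinc.conf.new"
done
mv "${BASE}/tinc.conf.new" "${BASE}/tinc.conf"
service tincd restart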
Of course sometimes it might be interesting to hide the metadata of nodes. This is entirely possible for TCP based tinc by using the TOR daemon's SOCKS5 proxy server and setting up hidden services.
This can be done by pointing to the proxy using Proxy = socks5 127.0.0.1 9050 - but beware that DNS resolution might leak hostnames. Since tincd uses the system's DNS resolver one has to use the DNSPort option of tor inside torrc (e.g. DNSPort 53) and redirect all local DNS resolution towards the TOR daemon in /etc/resolv.conf. Since machines that host TOR hidden services should usually be isolated from any other public internet access to prevent data leakage this should not be a real problem anyways.
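As a rough sketch of the pieces involved - the torrc path is the one used by the security/tor port on FreeBSD, the hidden service configuration itself is omitted:
# /usr/local/etc/tinc/examplenet/tinc.conf
Proxy = socks5 127.0.0.1 9050

# /usr/local/etc/tor/torrc
SOCKSPort 9050
DNSPort 53

# /etc/resolv.conf
nameserver 127.0.0.1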
Of course using tinc on top of TOR can usually be considered safe since the VPN's traffic is encrypted and already expected to travel over an untrusted network.
Since it’s nice to have when bridging networks - especially when extending them later on - here a short introduction on how to setup olsrd for this simple scenario. The optimized link state routing protocol basically is a proactive route advisory protocol that allows all other routers on the same mesh to discover new routes and automatically configure their routing tables. One often sees OLSR being employed in wireless LAN mesh networks such as FunkFeuer. One could of course also use some exterior protocol such as BGP - they’re somewhat equivalent for the given problem with BGP being currently the only routing protocol that can handle networks of the size of the Internet.
Installing olsrd is pretty easy. Either from packages:
pkg install net/olsrd
or from ports:
cd /usr/ports/net/olsrd
make install clean
After that one just has to configure olsrd by editing /usr/local/etc/olsrd/olsrd.conf.
First there is a set of configurations specific to this daemons instance and operation:
DebugLevel 0
AllowNoInt yes
FIBMetric "flat"
TcRedundancy 2
MprCoverage 1
LinkQualityAlgorithm "etx_ff"
LinkQualityFishEye 1
UseHysteresis no
This is just an example of my usual configuration when bridging networks via VPN links:
- DebugLevel 0 basically disables debug output and daemonizes the process
- AllowNoInt yes allows olsrd to stay up and running even when it sees no network interfaces. Since the daemon allows dynamic attachment and detachment of interfaces - and interfaces might vanish whenever links go down or are reconfigured - it's a good idea to keep it running in such a situation
- FIBMetric "flat" sets all metrics to the constant value FIBMetricDefault (2) and thus performs no weighting of the links. Other really useful metrics are "correct" which uses the hop count (i.e. the topologically shortest route will be selected) or "approx" which also uses the hop count but only updates if the next hop changes too. When not building a VPN mesh "correct" or "approx" are of course the way to go
- TcRedundancy specifies how much neighbor information should be sent in each TC message. This value must be set to 2 (send all neighbors)
- LinkQualityAlgorithm selects the algorithm used to determine link quality in case link quality should be included in the metric calculation
- LinkQualityFishEye 1 enables the fisheye mechanism for TC messages
- UseHysteresis no disables the hysteresis for link sensing which would improve the robustness of link sensing but delay neighbor registration - it makes not much sense for VPN meshes

The next blocks configure host network associations. There is one block for IPv4
and one for IPv6. This basically includes the routes or subnets that are available
through this node and that it will initially advertise (in addition to learned
routes). In case this is a border router one should include 0.0.0.0 0.0.0.0
or 0:: 0
to allow routing towards the public internet (i.e. a default route).
Hna4
{
0.0.0.0 0.0.0.0
128.66.1.0 255.255.255.0
}
Hna6
{
0:: 0
fec0:2200:1:0:0:0:0:0 48
}
Then one just has to specify the interfaces that olsrd should listen on, setting the operation mode as well as optionally the broadcast addresses:
Interface "tap1"
{
Mode "mesh"
Ip4Broadcast 128.66.0.255
}
Just in case one wants to monitor the node via some basic mechanism one might also want to enable the txtinfo plugin - one has to check the current version number of the plugin though:
LoadPlugin "olsrd_txtinfo.so.1.1"
{
PlParam "Accept" "127.0.0.1"
}
This plugin exposes neighbor tables, HNAs and learned routes as well as the discovered topology.
After that one can simply enable the service in /etc/rc.conf:
olsrd_enable="YES"
and launch it using the rc script:
/usr/local/etc/rc.d/olsrd start
In case one has loaded the txtinfo plugin successfully one might query the status of the node using netcat:
echo "/all" | nc 127.0.0.1 2006
This produces output comparable to the following (for a small mesh consisting of 7 OLSR capable endpoints):
Table: Neighbors
IP address SYM MPR MPRS Will. 2-hop count
10.0.3.15 YES NO NO 3 5
10.0.3.12 YES NO NO 3 5
10.0.3.17 YES NO NO 3 5
10.0.3.21 YES NO NO 3 5
10.0.3.9 YES NO NO 3 5
10.0.3.6 YES NO NO 3 5
Table: Links
Local IP Remote IP Hyst. LQ NLQ Cost
10.0.3.2 10.0.3.12 0.000 1.000 1.000 1.000
10.0.3.2 10.0.3.21 0.000 1.000 1.000 1.000
10.0.3.2 10.0.3.17 0.000 1.000 1.000 1.000
10.0.3.2 10.0.3.6 0.000 1.000 1.000 1.000
10.0.3.2 10.0.3.15 0.000 1.000 1.000 1.000
10.0.3.2 10.0.3.9 0.000 1.000 1.000 1.000
Table: Routes
Destination Gateway IP Metric ETX Interface
10.0.3.6/32 10.0.3.6 1 1.000 tap1
10.0.3.9/32 10.0.3.9 1 1.000 tap1
10.0.3.12/32 10.0.3.12 1 1.000 tap1
10.0.3.15/32 10.0.3.15 1 1.000 tap1
10.0.3.17/32 10.0.3.17 1 1.000 tap1
10.0.3.21/32 10.0.3.21 1 1.000 tap1
10.2.1.0/24 10.0.3.15 1 1.000 tap1
10.2.2.0/24 10.0.3.15 1 1.000 tap1
10.2.4.0/24 10.0.3.17 1 1.000 tap1
10.2.5.0/24 10.0.3.17 1 1.000 tap1
10.2.6.0/24 10.0.3.12 1 1.000 tap1
10.3.0.0/16 10.0.3.9 1 1.000 tap1
Table: HNA
Destination Gateway
10.0.10.0/24 10.0.3.2
10.2.1.0/24 10.0.3.15
10.2.2.0/24 10.0.3.15
10.0.3.12/32 10.0.3.12
10.2.6.0/24 10.0.3.12
10.0.3.17/32 10.0.3.17
10.2.4.0/24 10.0.3.17
10.2.5.0/24 10.0.3.17
10.0.3.21/32 10.0.3.21
10.0.3.9/32 10.0.3.9
10.3.0.0/16 10.0.3.9
10.0.3.6/32 10.0.3.6
Table: MID
IP address (Alias)+
Table: Topology
Dest. IP Last hop IP LQ NLQ Cost
10.0.3.6 10.0.3.2 1.000 1.000 1.000
10.0.3.9 10.0.3.2 1.000 1.000 1.000
10.0.3.12 10.0.3.2 1.000 1.000 1.000
10.0.3.15 10.0.3.2 1.000 1.000 1.000
10.0.3.17 10.0.3.2 1.000 1.000 1.000
10.0.3.21 10.0.3.2 1.000 1.000 1.000
10.0.3.2 10.0.3.6 1.000 1.000 1.000
10.0.3.9 10.0.3.6 1.000 1.000 1.000
10.0.3.12 10.0.3.6 1.000 1.000 1.000
10.0.3.15 10.0.3.6 1.000 1.000 1.000
10.0.3.17 10.0.3.6 1.000 1.000 1.000
10.0.3.21 10.0.3.6 1.000 1.000 1.000
10.0.3.2 10.0.3.9 1.000 1.000 1.000
10.0.3.6 10.0.3.9 1.000 1.000 1.000
10.0.3.12 10.0.3.9 1.000 1.000 1.000
10.0.3.15 10.0.3.9 1.000 1.000 1.000
10.0.3.17 10.0.3.9 1.000 1.000 1.000
10.0.3.21 10.0.3.9 1.000 1.000 1.000
10.0.3.2 10.0.3.12 1.000 1.000 1.000
10.0.3.6 10.0.3.12 1.000 1.000 1.000
10.0.3.9 10.0.3.12 1.000 1.000 1.000
10.0.3.15 10.0.3.12 1.000 1.000 1.000
10.0.3.17 10.0.3.12 1.000 1.000 1.000
10.0.3.21 10.0.3.12 1.000 1.000 1.000
10.0.3.2 10.0.3.15 1.000 1.000 1.000
10.0.3.6 10.0.3.15 1.000 1.000 1.000
10.0.3.9 10.0.3.15 1.000 1.000 1.000
10.0.3.12 10.0.3.15 1.000 1.000 1.000
10.0.3.17 10.0.3.15 1.000 1.000 1.000
10.0.3.21 10.0.3.15 1.000 1.000 1.000
10.0.3.2 10.0.3.17 1.000 1.000 1.000
10.0.3.6 10.0.3.17 1.000 1.000 1.000
10.0.3.9 10.0.3.17 1.000 1.000 1.000
10.0.3.12 10.0.3.17 1.000 1.000 1.000
10.0.3.15 10.0.3.17 1.000 1.000 1.000
10.0.3.21 10.0.3.17 1.000 1.000 1.000
10.0.3.2 10.0.3.21 1.000 1.000 1.000
10.0.3.6 10.0.3.21 1.000 1.000 1.000
10.0.3.9 10.0.3.21 1.000 1.000 1.000
10.0.3.12 10.0.3.21 1.000 1.000 1.000
10.0.3.15 10.0.3.21 1.000 1.000 1.000
10.0.3.17 10.0.3.21 1.000 1.000 1.000
Table: Interfaces
Name State MTU WLAN Src-Adress Mask Dst-Adress
tap1 UP 1472 No 10.0.3.2 255.255.255.0 10.0.3.255
Table: Neighbors
IP address SYM MPR MPRS Will. (2-hop address)+
10.0.3.15 YES NO NO 3 10.0.3.9 10.0.3.21 10.0.3.12 10.0.3.6 10.0.3.17
10.0.3.12 YES NO NO 3 10.0.3.9 10.0.3.21 10.0.3.6 10.0.3.15 10.0.3.17
10.0.3.17 YES NO NO 3 10.0.3.9 10.0.3.21 10.0.3.15 10.0.3.12 10.0.3.6
10.0.3.21 YES NO NO 3 10.0.3.9 10.0.3.15 10.0.3.12 10.0.3.17 10.0.3.6
10.0.3.9 YES NO NO 3 10.0.3.12 10.0.3.17 10.0.3.6 10.0.3.15 10.0.3.21
10.0.3.6 YES NO NO 3 10.0.3.9 10.0.3.21 10.0.3.15 10.0.3.17 10.0.3.12