I'll show you how to set up a Hetzner network and configure it correctly to achieve targeted gains in performance, security and scalability. I take a practical approach: from the cloud panel and routing variants to failover IPs, virtual MACs, IPv6, security, troubleshooting and monitoring.
Key points
- Choose the address space: use RFC 1918 ranges and plan subnets cleanly.
- Pick the mode: routed, bridge or vSwitch, depending on the application.
- Keep the Linux setup consistent: ifupdown, netplan or systemd-networkd.
- Failover & MACs: assign virtual MACs correctly and set host routes.
- Security & monitoring: establish segmentation, firewalls, logs and checks.
Basics of the Hetzner network configuration
Proper planning prevents later rework and delivers tangible performance benefits. I separate internal systems into their own cloud network, isolate sensitive components and keep the public attack surface small, which significantly increases security. Private networks in the Hetzner Cloud give me granular control, clear paths for data flows and less broadcast noise. I define early on which servers need public addresses and which only communicate internally, so that routing, firewalls and IP allocation remain logical. This clarity pays off as soon as failover, load balancers or container orchestration come into play and I need to keep multiple servers and subnets clearly organized.
Hetzner Cloud-Panel: Create network and select address space
In the cloud panel, I create a new network, assign a unique name per project and select an RFC 1918 address range such as 10.0.0.0/8, 172.16.0.0/12 or 192.168.0.0/16 as the IP block. I plan subnets early on, such as /24 for app layers, /28 for administration access or /26 for databases, so that growth is mapped cleanly. I then integrate servers, load balancers and additional services so that communication is established immediately. For newcomers to the platform, I recommend the compact cloud server overview, which summarizes the most important options. As soon as the network is ready, I test basic reachability and check security groups so that no unnecessary ports are left open and my firewall rules apply.
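The same steps can also be scripted. A minimal sketch with the hcloud CLI, assuming an API token is already configured; all names, server types and IP ranges here are example values, not fixed recommendations:

```shell
# Create the project network with an RFC 1918 supernet
hcloud network create --name prod-net --ip-range 10.0.0.0/16

# One subnet per zone, sized with room to grow
hcloud network add-subnet prod-net --network-zone eu-central \
  --type cloud --ip-range 10.0.1.0/24    # app layer

hcloud network add-subnet prod-net --network-zone eu-central \
  --type cloud --ip-range 10.0.8.0/26    # databases

# Attach an existing server so it receives a private IP
hcloud server attach-to-network app-01 --network prod-net --ip 10.0.1.10
```

Scripting this keeps staging and production topologies identical and makes the address plan reviewable in version control.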
Subnet design and IP planning in detail
I work with clear naming and numbering conventions so that I can intuitively recognize subnets later. Each zone (e.g. app, db, mgmt, edge) is given a fixed number range and a documented standard size. I deliberately reserve buffer areas between subnets to enable extensions without renumbering. Where services scale horizontally, I prefer to plan several /25 or /26 instead of one large /22; this keeps ACLs and routes lean. For admin access, I keep a separate management /28, which I harden consistently and make accessible via VPN or bastion hosts. When I connect external locations, I define clear, non-overlapping areas from the outset and set static routes specifically so that there are no conflicts.
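To illustrate, a hypothetical plan following these conventions, plus a quick overlap check; all ranges are example values:

```shell
# Example IP plan (assumed values) with reserved buffers between zones:
# 10.0.0.0/16      project supernet
#   10.0.1.0/24    app   (10.0.2.0/24 reserved as buffer)
#   10.0.8.0/26    db
#   10.0.15.240/28 mgmt  (reachable via VPN/bastion only)

# Sanity check that the planned subnets do not overlap
python3 - <<'EOF'
import ipaddress
nets = [ipaddress.ip_network(n)
        for n in ["10.0.1.0/24", "10.0.8.0/26", "10.0.15.240/28"]]
for a in nets:
    for b in nets:
        assert a is b or not a.overlaps(b), (a, b)
print("no overlaps")
EOF
```

Running the check before every extension of the plan catches conflicts long before a route or ACL misbehaves.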
Routed, Bridge or vSwitch: the right mode
I rely on three main variants: routed for additional subnets and failover addresses, bridge if guests are to act like servers of their own, and vSwitch for flexible setups and NAT. With the routed model, additional addresses are attached to the main MAC of the host; I activate IP forwarding for IPv4 and IPv6 and set host routes to the gateway. With bridge, guests need a MAC that is visible in the network; I request a virtual MAC for each assigned IP and link it to the guest. I combine vSwitch with masquerading so that VMs with private addresses can reach the Internet while internal services remain shielded. This choice determines the later effort, transparency and fault tolerance of my platform.
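For the routed model, the host-side steps can be sketched roughly as follows; the addresses come from documentation ranges and stand in for real values, and the commands need root:

```shell
# Enable forwarding for both address families
sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv6.conf.all.forwarding=1

# Host route to the gateway, then the default route via it
# (typical pattern when the public address is configured as a /32)
ip route add 203.0.113.1 dev eth0
ip route add default via 203.0.113.1 dev eth0

# Route an additional subnet to the guest's internal address
ip route add 198.51.100.64/27 via 192.168.100.2
```

For persistence, the same routes belong in the distribution's network configuration rather than only in the running kernel.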
| Mode | Use | Prerequisites | Advantages | Tripping hazards |
|---|---|---|---|---|
| Routed | Additional subnets, failover IPs | IP forwarding, host route | Clear routing, good scaling | Maintain gateway/host route cleanly |
| Bridge | Guests as "own servers" | Virtual MAC per IP | Real visibility per guest | MAC management in the Robot required |
| vSwitch + NAT | Private VMs with Internet access | Masquerading, firewall | High internal isolation | Maintain NAT rules properly |
Hybrid setups: cloud, dedicated and transitions
In hybrid environments, I connect cloud networks with dedicated servers via explicit router instances. A clearly defined transit subnet and static routes ensure that both sides only see the required prefixes. Depending on the security requirements, I allow traffic to pass through an edge instance via NAT or route subnets transparently. It is important that the gateway is designed for high availability - for example, with two router VMs that check each other's status and take over seamlessly in the event of a failure. I also have a checklist ready: routes in the cloud panel correct, forwarding active, firewall states consistent, and health checks that not only check ICMP but also relevant ports.
Linux setup: using ifupdown, netplan and systemd-networkd correctly
Under Debian/Ubuntu with ifupdown, I store the configuration in /etc/network/interfaces or under /etc/network/interfaces.d and keep the host route correct. For host IP addressing I use a /32 (255.255.255.255) and set the gateway as pointopoint so that the kernel knows its neighbor. Under netplan (Ubuntu 18.04, 20.04, 22.04) I define addresses, routes and on-link so that the default route matches immediately. If I change hardware, I check the interface name and adapt it, for example from eth0 to enp7s0, so that the network card comes up again. With systemd-networkd, I manage .network and .netdev files, reload the service and then always test DNS, routing and connectivity.
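A minimal netplan sketch for the /32-plus-on-link pattern described above; the interface name, addresses and nameservers are placeholders that must match the actual system:

```shell
# Write a netplan file for a dedicated host with a /32 public address
cat > /etc/netplan/01-public.yaml <<'EOF'
network:
  version: 2
  ethernets:
    enp7s0:
      addresses:
        - 203.0.113.10/32
      routes:
        - to: default
          via: 203.0.113.1
          on-link: true          # gateway is outside the /32, so mark it on-link
      nameservers:
        addresses: [203.0.113.53]   # example resolver
EOF

# Apply with automatic rollback if connectivity breaks
netplan try
```

`netplan try` is the safer choice over `netplan apply` on remote machines, because a broken configuration reverts automatically after the timeout.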
Network tuning: MTU, offloading, policy routing
I check the MTU end-to-end, especially when VPNs, overlays or tunnels come into play. If the values are not correct, fragmentation or drops occur. I activate TCP MTU probing on gateways and set MSS clamps in suitable places to keep connections robust. I use offloading features (GRO/LRO/TSO) deliberately: I partially deactivate them on hypervisors or for packet captures, but leave them on for pure data paths - depending on the measured values. If I have several upstreams or differentiated egress policies, I use policy-based routing with my own routing tables and ip rules. I document every special rule so that later changes do not trigger unnoticed side effects.
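Policy-based routing with its own table might look like this sketch; the table name, second uplink and source range are assumptions:

```shell
# One-time: give the extra routing table a readable name
echo "100 uplink2" >> /etc/iproute2/rt_tables

# Default route for the second uplink lives in its own table
ip route add default via 203.0.113.129 dev eth1 table uplink2

# Rule: traffic sourced from this subnet uses the second uplink
ip rule add from 10.0.5.0/24 lookup uplink2

# MSS clamp on the forward path, useful behind tunnels with reduced MTU
iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
  -j TCPMSS --clamp-mss-to-pmtu
```

Documenting each `ip rule` alongside its reason is what keeps such setups maintainable; rules are invisible in the main routing table and easy to forget.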
Failover IPs, virtual MACs and load balancers in practice
For additional IPs, I request a virtual MAC per address in the Hetzner Robot and assign it to the guest so that ARP works properly. In the routed setup, the main MAC remains on the host and I route subnets explicitly to the guest. In bridge scenarios, the guest receives its own visible MAC, which makes it act like an independent server. For failover, I define which machine currently announces the IP; when switching, I adjust routing and, if necessary, send gratuitous ARP so that traffic arrives immediately. I use load balancers to decouple front-end traffic from back-end systems, ensure even distribution and thus increase availability.
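The host-side part of a takeover can be sketched like this; rerouting the failover IP itself via the Robot or its API is a separate step, and the address and interface are example values:

```shell
FAILOVER_IP=203.0.113.50   # example failover address

# Bring the failover IP up on the new active host
ip addr add "$FAILOVER_IP"/32 dev eth0

# Gratuitous ARP so neighboring caches update quickly
# (arping from the iputils package; -U sends unsolicited ARP)
arping -c 3 -U -I eth0 "$FAILOVER_IP"
```

On the previously active host, the mirror image applies: remove the address so that two machines never answer ARP for the same IP.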
Clean design of IP switchovers
I rely on clear mechanisms for active switchovers: either the active instance announces an IP via ARP/NDP and the passive one remains silent, or I specifically pull the default route to the new active machine. Tools such as VRRP implementations help, but I always test the entire interaction, including firewalls, neighbor caches and ARP cache timing. Important: after the switch, I check reachability both from the internal network and from external test points. For services with many TCP connections, I schedule short grace periods so that open sessions expire cleanly or are quickly re-established.
Set up IPv6: Implement dual stack cleanly
I activate IPv6 in parallel with IPv4 so that clients can use modern connectivity. For each interface, I set the assigned prefixes and the gateway route, and I verify neighbor discovery as well as SLAAC or static assignment. I check whether services should listen on :: and 0.0.0.0 or whether separate bindings make sense. Tests with ping6, tracepath6 and curl via AAAA records show me whether DNS and routing are correct. In firewalls, I mirror IPv4 rules to IPv6 so that no gaps remain open and I reach the same security level.
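A short check sequence for the dual-stack tests mentioned above; interface, target address and hostname are example values:

```shell
ip -6 addr show dev eth0         # are the assigned prefixes present?
ip -6 route show                 # default route, usually via a fe80:: gateway
ping -6 -c 3 2001:db8::1         # reachability of the gateway/neighbor (example)
tracepath6 example.org           # path and MTU along the IPv6 route
curl -6 -sI https://example.org  # does the service answer via its AAAA record?
```

Running the same sequence from an external test point as well distinguishes a local routing problem from a missing AAAA record or a firewall gap.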
Security: segmentation, rules, hardening
I segment networks by function, such as app, data and management, and secure the transitions with clear ACLs. Each segment only gets the access it needs, while admin access runs via VPN or bastion hosts. Firewalls block all incoming traffic by default; then I allow specific ports for services. I secure SSH with keys, port controls, rate limits and optional port knocking so that scans come to nothing. I test changes in a controlled manner, document them immediately and roll back quickly in the event of problems so that operational reliability remains high.
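A default-deny host firewall along these lines can be sketched with nftables; the allowed ports are examples, and ICMPv6 rules would be mirrored in the same way for dual stack:

```shell
nft add table inet filter
nft add chain inet filter input \
  '{ type filter hook input priority 0; policy drop; }'

# Allow return traffic and loopback
nft add rule inet filter input ct state established,related accept
nft add rule inet filter input iif lo accept

# SSH with a rate limit against brute-force scans
nft add rule inet filter input tcp dport 22 ct state new \
  limit rate 10/minute accept

# Public services (example ports)
nft add rule inet filter input tcp dport '{ 80, 443 }' accept

# Basic diagnostics
nft add rule inet filter input icmp type echo-request accept
```

Persisting the ruleset in /etc/nftables.conf and loading it at boot keeps the running state and the documented state identical.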
Orchestrate cloud and host firewalls
I combine cloud firewalls with host-based rules. The former give me a central layer that reliably restricts basic access, while the latter protect workloads granularly and can be templated. Consistency is important: standard ports and management access get identical rules in all zones. I keep egress policies restrictive so that only defined targets can be reached. For sensitive environments, I also use jump hosts with short-lived access and multi-factor protection. I correlate logs centrally in order to quickly understand blocked traffic and reduce false alarms.
Troubleshooting: quickly recognize typical errors
If a server has no network after a hardware swap, I first check the interface name and adjust the configuration. If routing fails, I re-enable IP forwarding and check host routes and the default gateway. Typos in addresses, netmasks or on-link settings often lead to unreachability; I compare the config against the actual kernel routes. In the case of bridge problems, I check virtual MACs and ARP tables to ensure that the mappings are correct. Logs under /var/log/syslog, journalctl and dmesg give me information about drivers, DHCP errors or blocked packets.
Systematic troubleshooting and packet diagnostics
- Layer check: Link up, speed/duplex, VLAN/bridge status, then IP/route, then services.
- Comparison actual/target: ip addr/route/rule vs. config files, record deviations in writing.
- Packet capture: target a specific interface and host, account for offloading effects, check TLS SNI/ALPN.
- Cross-check: Tests from multiple sources (internal/external) to detect asymmetric problems.
- Rollback capability: Plan a defined return path before each change.
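The layer check and actual/target comparison above map to a handful of read-only commands; interface names and filter values are examples:

```shell
ip -br addr                      # interfaces and addresses at a glance
ip route; ip -6 route            # actual kernel routes for both families
ip rule                          # any policy routing in effect?
ss -tulpen                       # which services listen on which ports
ethtool eth0                     # link state, speed, duplex (example interface)

# Targeted capture instead of recording everything
tcpdump -ni eth0 host 203.0.113.10 and port 443 -c 20
```

Comparing this snapshot line by line against the configuration files is usually faster than guessing, and the deviations are worth recording in writing.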
Targeted monitoring, documentation and scaling
I monitor latency, packet loss and jitter with ICMP checks, port checks and flow analyses so that I spot anomalies and trends early. I keep configuration states under version control, describe changes precisely and keep playbooks ready. For DNS records and clean naming conventions, I use the compact DNS guide, so that services remain consistently resolvable. As the platform grows, I expand subnets, add more load balancers and standardize security groups. This allows me to scale securely, keep outages to a minimum and maintain clear structures.
Automation: Terraform, Ansible and consistent rollouts
I build reproducible networks: Naming, subnets, routes, firewalls and server assignments are mapped as code. This allows me to create identical staging and production topologies, test changes in advance and reduce typing errors. At host level, I generate configuration files from templates and inject parameters such as IP, gateway, routes and MTU per role. I use Cloud-init to set the network and SSH basics directly during server provisioning. When making changes, I first validate them in staging, then go live in small batches and keep a close eye on the telemetry.
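A cloud-init sketch for the provisioning step described here; the SSH key, package list and server parameters are placeholder values:

```shell
cat > user-data.yaml <<'EOF'
#cloud-config
ssh_authorized_keys:
  - ssh-ed25519 AAAA... admin@example     # placeholder key
package_update: true
packages: [nftables]
write_files:
  - path: /etc/sysctl.d/99-forwarding.conf
    content: |
      net.ipv4.ip_forward = 1
      net.ipv6.conf.all.forwarding = 1
runcmd:
  - sysctl --system
EOF

# Provision the server with the network and user data in one step
hcloud server create --name app-01 --type cx22 --image ubuntu-24.04 \
  --network prod-net --user-data-from-file user-data.yaml
```

Because the user data lives in the repository, every server of a role comes up with identical network and SSH basics, which is exactly what makes small-batch rollouts predictable.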
Change and capacity management
I plan maintenance windows and define fallback levels. Each network change is given a small test plan with measurement points before/after the change. For capacity, I look at throughput per zone, connection loads at gateways and the development of connections/minute. I add additional gateways early on or separate traffic routes (east/west vs. north/south) before bottlenecks occur. I keep documentation up to date: IP plans, routing sketches, firewall policies and responsibilities are up to date and easy for the team to find.
Provider comparison for network-intensive projects
I evaluate providers according to connectivity, range of functions, usability and flexibility. For projects with high network requirements, I put webhoster.de at the top because of its dedicated networks and versatile customization. Hetzner provides powerful cloud and dedicated servers that are very well suited to many scenarios. Strato covers standard use cases, while IONOS offers good options in some cases but provides less leeway for special setups. This ranking helps me choose the right foundation and avoid bottlenecks later on.
| Place | Provider | Network features | Performance |
|---|---|---|---|
| 1 | webhoster.de | Dedicated networks, fast connection, high customizability | Outstanding |
| 2 | Hetzner | Powerful cloud and dedicated servers | Very good |
| 3 | Strato | Standard network functions | Good |
| 4 | IONOS | Upscale options, limited scope for custom setups | Good |
Kubernetes and container networks in practice
For container orchestration, I lay the foundation in the network. The workers are given interfaces in the private network, the control plane is clearly segmented, and egress-heavy pods are given a defined NAT path. I choose a CNI that suits the setup: Routing-based variants make troubleshooting easier for me and save overlay overhead, while overlays often provide more flexibility in terms of isolation. Load balancers decouple Ingress from backends; health checks are identical to those of the app, not just simple TCP checks. I also run dual stacks in the cluster so that services can be reached cleanly via AAAA records. For stateful services, I define clear network policies (east/west) so that only the required ports between pods are open. I always test updates of CNI and Kube components in a staging cluster, including throughput, latency and failure scenarios.
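An east/west restriction as described can be expressed as a NetworkPolicy; the namespace, labels and port here are assumptions:

```shell
# Only app-tier pods may reach the database pods, and only on one port
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-app-only
  namespace: prod
spec:
  podSelector:
    matchLabels: { tier: db }
  policyTypes: [Ingress]
  ingress:
    - from:
        - podSelector:
            matchLabels: { tier: app }
      ports:
        - protocol: TCP
          port: 5432
EOF
```

Note that NetworkPolicies only take effect if the chosen CNI enforces them, which is one more reason to test CNI updates in staging first.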
Performance under load: measurable optimization
I regularly measure: baseline latency within zones, latency to public endpoints, port-to-port throughput, and RTO/RPO requirements for critical services. Bottlenecks often occur at a few points: NAT gateways, overloaded stateful firewalls, connection tracking tables that are too small, or simply too little CPU on routers. I systematically increase capacity, distribute flows, activate multi-queue on NICs and pay attention to pinning/IRQ balancing where appropriate. It is critical to avoid unnecessary stateful inspection on pure east/west backbones and to set clear ACLs there instead. For TLS offloading, I separate data traffic from control traffic so that L7 workloads do not compete with management connections. I document all of this with initial and target values - optimizations are only "finished" when they bring measurable benefits.
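Two of the levers mentioned, conntrack sizing and NIC multi-queue, as a sketch; the sizes are examples and should follow measurements, not the other way around:

```shell
# Connection tracking table, often too small on busy gateways
sysctl -w net.netfilter.nf_conntrack_max=1048576
cat /proc/sys/net/netfilter/nf_conntrack_count   # current usage vs. the limit

# Multi-queue on the NIC where the driver supports it,
# roughly one queue per CPU core handling network IRQs
ethtool -L eth0 combined 8

# Check how interrupts are actually distributed across cores
grep eth0 /proc/interrupts
```

Recording the conntrack count and interrupt distribution before and after the change is what turns this from guesswork into a measurable optimization.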
Brief summary: How to set up Hetzner networks effectively
I start with a clear plan, define address spaces, select the appropriate mode and document every step. I then set up Linux networking consistently, activate IP forwarding where required and test routing and DNS thoroughly. I integrate failover IPs and virtual MACs in a structured way so that switchovers work smoothly. Security remains high thanks to segmentation, strong firewalls and consistent patching, while monitoring reveals irregularities at an early stage. This is how the Hetzner network setup reliably delivers performance while keeping the platform flexible for growth.


