View Full Version : F10: Creating A Static IP Bridge to VM
4th November 2008, 04:49 PM
I've been working with F10's virtualization suite with KVM, QEMU, and Xenner. All my VMs run, but as NAT clients. I am having problems configuring a bridge network interface to allow a client to have a publicly available, static IP address. Primarily, once the network is bridged, the physical interface fails to respond on the physical network.
If I build a vnetX, and attach it to ethX, host and VM can talk, but the VM can not get to the outside network, and the outside network can not get to the VM. If I leave the VM on the NAT, it gets out, but no one gets in. I've seen a few kluge hacks that use iptables to redirect into the NAT'ed VM, but as public, static, clients have worked with earlier versions, I was hoping for a more standard solution.
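For reference, the iptables redirect kluge usually looks something like the following. This is only a sketch: the guest address and ports are made-up placeholder values, and the printed rules would need to be run as root on the host.

```shell
# Hypothetical values -- substitute your guest's NAT address and ports.
GUEST_IP=192.168.122.10   # guest's address on the NAT'ed virbr0 network
PUB_PORT=2222             # port exposed on the host's public interface
GUEST_PORT=22             # service port inside the guest

# Print the redirect rules; run them as root to apply.
cat <<EOF
iptables -t nat -A PREROUTING -p tcp --dport $PUB_PORT -j DNAT --to-destination $GUEST_IP:$GUEST_PORT
iptables -I FORWARD -d $GUEST_IP -p tcp --dport $GUEST_PORT -j ACCEPT
EOF
```

The DNAT rule rewrites inbound connections to the guest's private address, and the FORWARD rule lets the redirected traffic through the host's firewall, but it has to be repeated per port per guest, which is why it feels like a hack.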
Anyone else working with virtualizing servers? Suggestions on bridging?
4th November 2008, 07:43 PM
works fine with virtualbox or vmware.
kvm and xen are not exactly user-friendly.....
4th November 2008, 09:44 PM
You're correct on all counts. I've worked extensively with VMware's stuff, but am heavily invested in RedHat. With Citrix controlling Xen, there will most likely be some changes to RedHat's model. If Fedora is blazing this path, we may find RedHat right behind it.
I did find a link about qemu networking (http://alien.slackbook.org/dokuwiki/doku.php?id=slackware:qemu) that states:
"QEMU will act as a firewall between guest OS and the host computer, so that no network communication is possible from any host program to the guest OS."
"There is actually no proper network connection between the guest and the world outside the Virtual Machine."
Maybe host-based routing is the future. In a weird kind of way, it's similar to ESX... if it works.
4th November 2008, 11:00 PM
See if this one helps at all. It's a page I did on bridged networking with VirtualBox, and it should be valid for KVM and Qemu as well.
There's also an article of mine on KVM on the CentOS wiki, which covers bridged networking, I think.
6th November 2008, 03:29 AM
Good work, scottro. That was just the resource I needed.
By default, F10 creates a virbr0 that serves DHCP addresses to guests and NATs them to the physical network. This allows outbound traffic, but firewalls inbound. The trick is to slave a physical NIC to a bridge that will enable routing.
Adding the following script to /etc/sysconfig/network-scripts/ as ifcfg-vnet1 does just that:
# Placeholder values -- adjust for your NIC and subnet:
BR=virbr1      # bridge to create
IF=eth1        # physical NIC to slave to the bridge
NET=192.168.1  # first three octets of the subnet
HOST=10        # host's address on that subnet
brctl addbr $BR
ifconfig $IF 0.0.0.0
brctl addif $BR $IF
ifconfig $BR $NET.$HOST netmask 255.255.255.0 up
route add -net $NET.0 netmask 255.255.255.0 dev $BR
route add default gw $NET.1 $BR
Placing the file in that directory with that name brings the bridge up at boot time. This allows VMs on the host to use addresses on the specified subnet. Since the interface is slaved to the bridge, ifconfig shows no IP for eth1, but instead for virbr1.
The assumption is that this card is the system's primary interface.
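For what it's worth, the more standard way to persist a bridge on Fedora is a pair of ifcfg files rather than a script -- something like the pair below. The device names and addresses here are examples, not taken from this thread, so adjust them for your network.

```
# /etc/sysconfig/network-scripts/ifcfg-br0  (the bridge itself)
DEVICE=br0
TYPE=Bridge
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth1  (slave the NIC to the bridge)
DEVICE=eth1
BRIDGE=br0
ONBOOT=yes
```

With this layout the initscripts create the bridge and enslave the NIC at boot, so no brctl/route commands are needed in a custom script.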
6th November 2008, 04:13 AM
Good webpage, scottro, but the OP should try virt-manager (it needs the libvirt service).
This does the bridge setup by default.
Redhat is releasing oVirt soon - another virt manager approach.
6th November 2008, 11:52 AM
Thanks to both of you. Dougbunger, I'm going to add a link to your post to my page in the next day or so, when I have time. @stevea, I'll have to add a mention of virt-manager as well. I haven't played with it.
6th November 2008, 03:56 PM
The virt-manager included in the F10 beta does not support inbound bridging, only NAT bridging. Needless to say, that was a surprise. As long as the virbr is operational, it will recognize it. It doesn't seem to be a bug, as the only options in the wizard are "Isolated" and "Forwarding".
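Until the wizard grows a bridge option, a guest can still be attached to an existing bridge from the command line with virt-install's --network bridge= option. A sketch, assuming a bridge named virbr1 as earlier in the thread; the guest name, disk path, size, and ISO path are all hypothetical placeholders.

```shell
# Hypothetical values -- adjust for your host.
BR=virbr1     # bridge slaved to the physical NIC
NAME=webvm    # guest name

# Print the command; run it as root to create the guest on the bridge.
cat <<EOF
virt-install --name $NAME --ram 512 --disk path=/var/lib/libvirt/images/$NAME.img,size=8 --network bridge=$BR --cdrom /path/to/install.iso
EOF
```

A guest created this way gets its tap interface enslaved to the bridge, so it picks up an address on the physical subnet instead of the NAT'ed one.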
I'll look into oVirt. Thanks for the input.
vBulletin® v3.8.7, Copyright ©2000-2013, vBulletin Solutions, Inc.