Recently I was given three virtual machines running Oracle Enterprise Linux 5 and Oracle 11gR2 RAC on Oracle VM 2.2.1, copied straight from /OVS/running_pool/. I had to get these machines up and running in my lab environment, but I found it hard to set up the network. I spent half a day debugging without success, but finally found a workaround, which I’ll explain here.
A few technical notes first – Oracle VM (Xen) has three main network configurations within /etc/xen/xend-config.sxp:
Bridged Networking – this is the default configuration and the simplest to set up. With this type of networking the VM guest gets an IP address from the same network as the VM host, and the guest can also take advantage of DHCP, if available. The following lines should be uncommented in /etc/xen/xend-config.sxp:
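In a stock Oracle VM 2.2 install, the relevant xend-config.sxp lines are:

```
(network-script network-bridge)
(vif-script vif-bridge)
```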
Routed Networking with NAT – this configuration is most common where a private LAN must be used; for example, the VM host runs on your notebook and you can’t get another IP address from the corporate or lab network. In this case you set up a private LAN and NAT the VM guests so they can reach the rest of the network. The following lines should be uncommented in /etc/xen/xend-config.sxp:
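For NAT, the corresponding xend-config.sxp lines are:

```
(network-script network-nat)
(vif-script vif-nat)
```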
Two-way Routed Network – this configuration requires more manual steps, but offers greater flexibility. It is the same as the second one, except that the VM guests are exposed on the external network: when a VM guest makes a connection to an external machine, its original IP address is seen. The following lines should be uncommented in /etc/xen/xend-config.sxp:
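For routed networking, the corresponding xend-config.sxp lines are:

```
(network-script network-route)
(vif-script vif-route)
```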
Typically only one of the above can be used at a time, and the choice depends on the network setup. For the second and third configurations to work, a route must be added on the default gateway. For example, if my Oracle VM host has IP address 192.168.143.10, then on the default gateway (192.168.143.1) a route has to be added to explicitly send all connection requests for my VM guests through my VM host. Something like this:
route add -net 10.0.1.0 netmask 255.255.255.0 gw 192.168.143.10
Now back to the case itself. Each of the RAC nodes had two NICs – one for public connections and one for the private interconnect used by GI and RAC. The public network was 10.0.1.X and the private one 192.168.1.X. What I wanted was to run the VM guests in my lab and access them directly with IP addresses from the lab network, which was 192.168.143.X. Since the default network configuration is bridged networking, I went with that first. Having the VM guest config files, all I had to do was change the first (public) address of every guest:
from:
vif = ['mac=00:16:3e:22:0d:04, ip=10.0.1.11, bridge=xenbr0', 'mac=00:16:3e:22:0d:14, ip=192.168.1.11',]
to:
vif = ['mac=00:16:3e:22:0d:04, ip=192.168.143.151, bridge=xenbr0', 'mac=00:16:3e:22:0d:14, ip=192.168.1.11',]
This turned out to be a real nightmare. I spent half a day trying to figure out why my VM guests didn’t have access to the lab network: they could reach the VM host, but not the outside world. Maybe it was because I was running Oracle VM on top of VMware, but I finally gave up on this configuration.
That left me with one of the other two network configurations – Routed Networking with NAT or Two-way Routed Network. In either case I didn’t have access to the default gateway, so I could not add a static route to my VM guests there.
Here is how I solved it: run the three-node RAC on Oracle VM Server 2.2.1, keep the nodes’ original network configuration, and access them with IP addresses from my lab network (192.168.143.X). I put logical IPs for the VM guests on the VM host using ip (ifconfig could also be used), and then used iptables to change the packet destination to the VM guests’ original addresses (10.0.1.X).
Change the Oracle VM configuration to Two-way Routed Network: comment out the lines for the default bridge configuration and uncomment the ones for routed networking:
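In /etc/xen/xend-config.sxp that means:

```
# default bridged networking, now commented out:
# (network-script network-bridge)
# (vif-script vif-bridge)

# routed networking, now active:
(network-script network-route)
(vif-script vif-route)
```

The change takes effect after restarting xend (service xend restart), since the network script runs at xend startup.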
Configure the VM host itself for proxy ARP and masquerading:
echo 1 > /proc/sys/net/ipv4/conf/all/proxy_arp
iptables -t nat -A POSTROUTING -s 10.0.1.0/24 -j MASQUERADE
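Note that the proxy_arp setting is runtime-only and is lost on reboot. On Enterprise Linux 5 it can be persisted in /etc/sysctl.conf (the network-route script normally takes care of enabling IP forwarding itself when xend starts):

```
# /etc/sysctl.conf
net.ipv4.conf.all.proxy_arp = 1
```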
Set network aliases with the IP addresses that you want to use for the VM guests:
ip addr add 192.168.143.151/32 dev eth0:1
ip addr add 192.168.143.152/32 dev eth0:2
ip addr add 192.168.143.153/32 dev eth0:3
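These aliases also disappear on reboot. If you want them to survive, EL5 supports alias interface files; a hypothetical file for the first alias would look like this:

```
# /etc/sysconfig/network-scripts/ifcfg-eth0:1
DEVICE=eth0:1
IPADDR=192.168.143.151
NETMASK=255.255.255.255
ONBOOT=yes
```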
Create iptables rules in the PREROUTING chain that redirect requests arriving on the lab network IPs to the VM guests’ original IPs:
iptables -t nat -A PREROUTING -d 192.168.143.151 -i eth0 -j DNAT --to-destination 10.0.1.11
iptables -t nat -A PREROUTING -d 192.168.143.152 -i eth0 -j DNAT --to-destination 10.0.1.12
iptables -t nat -A PREROUTING -d 192.168.143.153 -i eth0 -j DNAT --to-destination 10.0.1.13
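Like the other runtime settings, these rules are not persistent across reboots. On EL5 they can be saved to /etc/sysconfig/iptables, and the current NAT table can be inspected at any time:

```
service iptables save
iptables -t nat -L -n
```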
Then just untar the VM guests in /OVS/running_pool/:
[root@ovm22 running_pool]# ls -al /OVS/running_pool/dbnode1/
drwxr-xr-x 2 root root 3896 Aug 6 17:27 .
drwxrwxrwx 6 root root 3896 Aug 3 11:18 ..
-rw-r--r-- 1 root root 2294367596 May 16 17:27 swap.img
-rw-r--r-- 1 root root 4589434792 May 16 17:27 system.img
-rw-r--r-- 1 root root 20107128360 May 16 17:27 u01.img
-rw-r--r-- 1 root root 436 Aug 6 11:20 vm.cfg
Run the guest:
xm create /OVS/running_pool/dbnode1/vm.cfg
Now I have a three-node RAC; the nodes have their original public IPs and I can access them using my lab network IPs. The mapping works like this:
A request arrives for 192.168.143.151 -> the IP address is up on the VM host -> iptables on the VM host takes action -> the packet’s destination IP address is changed to 10.0.1.11 -> the static route already in place on the VM host routes the packet to the vif interface of the VM guest.
Now I can access my dbnode1 (10.0.1.11) directly with its lab network IP 192.168.143.151.