Notes on setting up a FreeBSD Xen HVM DomU
These notes are heavily influenced by another guide to FreeBSD Xen DomUs and they were streamlined to fit my particular systems.
PLEASE NOTE: This content contains outdated and possibly irrelevant information.
You might draw some insight from this information, and that’s great, but please be aware that it does not constitute a comprehensive guide; it is more a set of notes for my own reference.
It’s good practice to back Xen DomU disks with LVM logical volumes when better alternatives are not available, so that is what I use for new FreeBSD DomU instances.
Assuming there is free space available in a volume group (vg1 in this example), create logical volumes along these lines:
lvcreate -L 20G -n example-main vg1
lvcreate -L 8G -n example-swap vg1
lvcreate -L 200G -n example-data vg1
This creates a 20GB volume for the root partition, an 8GB volume for swap space, and a 200GB volume for user data.
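To confirm the volumes were created (and to check the remaining free space in the volume group), the standard LVM reporting tools can be used:

vgs vg1
lvs vg1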
Loop Device for ISO
These notes are applicable to the Xen 3.x series, which is able to use an ISO image directly as a CDROM-style boot device, but in the event that there are issues, a loop device can be arranged on the host machine to provide a bootable CDROM from the ISO.
If you have problems specifying the ISO image directly for CDROM usage, then you can mount a loop device on the Xen Dom0:
# attach the ISO to the first available loop device
losetup -f /root/FreeBSD-8.2-RELEASE-amd64-disc1.iso
# verify the attachment
losetup /dev/loop0
Once you’ve created the loop device, update the disk list in your Xen configuration to include an entry for the “cdrom drive”.
Here is a complete example Xen DomU configuration:
# Example FreeBSD 8.2 Xen DomU
name = "example"
kernel = "/usr/lib/xen/boot/hvmloader"
builder = 'hvm'
memory = 2048
shadow_memory = 8
cpus = "1"   # pin to physical CPU 1
vcpus = 1    # one virtual CPU
vif = [ 'mac=00:23:3e:55:73:78, ip=10.10.1.76, vifname=examplewan, bridge=xenbr0',
        'mac=00:23:3e:77:73:78, ip=192.168.1.76, vifname=examplelan, bridge=xenbr1' ]
disk = [
        # CDROM loop device
        'phy:/dev/loop0,ioemu:hdd:cdrom,r',
        'phy:/dev/vg1/example-main,ioemu:hda,w',
        'phy:/dev/vg1/example-swap,ioemu:hdb,w',
        'phy:/dev/vg1/example-data,ioemu:hdc,w' ]
boot = 'dc'   # cdrom then disk
# boot = 'cd' # disk then cdrom
serial = 'pty'
# VNC console for installation only
sdl = 0
vnc = 1
vnclisten = '127.0.0.1'
vncconsole = 1
vncpassword = ''
stdvga = 1
on_poweroff = 'destroy'
on_reboot = 'restart'
on_crash = 'restart'
You should change certain values as appropriate to match your own system configuration before trying to use the example.
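Once the values are adjusted, the DomU can be created and verified with the xm toolstack. The configuration path below is just an illustration; use wherever you saved the file:

xm create /etc/xen/example
xm list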
If you look closely, you’ll notice some VNC-specific directives in the example Xen DomU configuration above. These are required for initial console connections (e.g., to install FreeBSD), which must be made via the VNC framebuffer until the system is aware of its actual virtual console device.
This is typically accomplished by enabling the appropriate configuration settings and then using an SSH tunnel to allow for remote VNC to localhost.
A typical ssh invocation looks like this:
ssh -l username -L 5900:localhost:5900 remote_hostname
You’ll want to follow these basic steps to make this kind of connection work:
- Establish a proper SSH tunnel as in the above example
- Start the Xen DomU
- Use a VNC client on your local machine to connect to localhost (see the example below)
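For instance, with a typical vncviewer client (the exact client is up to you), VNC display :0 corresponds to the tunneled port 5900:

vncviewer localhost:0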
Once you’ve established a console connection, you can proceed with the installation.
You can remove the loop0 entry from your configuration when you complete the installation, and change the boot order (boot= in the Xen config) to boot from the hard disk first. There is a commented version of this sequence in the example DomU configuration above.
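Once the DomU configuration no longer references it, the loop device can also be detached on the Dom0 (assuming /dev/loop0 as above):

losetup -d /dev/loop0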
FreeBSD now ships with a standard Xen HVM kernel configuration (XENHVM) that, among other things, builds paravirtual drivers, which increase network and disk performance in the DomU instance.
You can enable the Xen xm console to work properly from this kernel with some configuration changes.
You should build a fully Xen HVM aware custom kernel to take full advantage of your particular hardware environment, desired performance characteristics, capabilities, and so on.
Here’s an example of an AMD-64 XENHVM kernel configuration:
ident		POTRZEBIE
machine		amd64
cpu		HAMMER

options		VESA
options		SC_PIXEL_MODE
options		VGA_WIDTH90
options		SC_DISABLE_REBOOT
options		ATA_STATIC_ID
options		SMP
options		KDB_TRACE
options		KDB
options		INCLUDE_CONFIG_FILE
options		FLOWTABLE
options		MAC
options		AUDIT
options		HWPMC_HOOKS
options		KBD_INSTALL_CDEV
options		PRINTF_BUFR_SIZE=128
options		_KPOSIX_PRIORITY_SCHEDULING
options		P1003_1B_SEMAPHORES
options		SYSVSEM
options		SYSVMSG
options		SYSVSHM
options		STACK
options		KTRACE
options		SCSI_DELAY=5000
options		COMPAT_FREEBSD7
options		COMPAT_FREEBSD6
options		COMPAT_FREEBSD5
options		COMPAT_FREEBSD4
options		COMPAT_FREEBSD32
options		COMPAT_43TTY
options		GEOM_LABEL
options		GEOM_PART_GPT
options		PSEUDOFS
options		PROCFS
options		CD9660
options		MD_ROOT
options		UFS_GJOURNAL
options		UFS_DIRHASH
options		UFS_ACL
options		SOFTUPDATES
options		FFS
options		SCTP
options		INET6
options		INET
options		PREEMPTION
options		SCHED_ULE
options		XENHVM
options		NO_ADAPTIVE_RWLOCKS
options		NO_ADAPTIVE_MUTEXES
options		GEOM_PART_MBR
options		GEOM_PART_EBR_COMPAT
options		GEOM_PART_EBR
options		GEOM_PART_BSD

device		isa
device		mem
device		io
device		uart_ns8250
device		xenpci
device		cpufreq
device		acpi
device		pci
device		ata
device		atadisk
device		ataraid
device		atapicd
device		atapifd
device		atapist
device		scbus
device		da
device		atkbdc
device		atkbd
device		psm
device		kbdmux
device		vga
device		splash
device		sc
device		agp
device		uart
device		miibus
device		re
device		loop
device		random
device		ether
device		vlan
device		tun
device		pty
device		md
device		gif
device		faith
device		firmware
device		bpf
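Here is a minimal sketch of building and installing this kernel, assuming the configuration is saved as /usr/src/sys/amd64/conf/POTRZEBIE and a matching source tree is present in /usr/src:

cd /usr/src
make buildkernel KERNCONF=POTRZEBIE
make installkernel KERNCONF=POTRZEBIE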
To get the console working, edit /boot/loader.conf and add the following:
boot_multicons="YES"
boot_serial="YES"
comconsole_speed="115200"
console="comconsole,vidconsole"
Then edit /etc/ttys and activate the serial terminal:
ttyu0 "/usr/libexec/getty std.115200" dialup on secure
Restart and you’ll see output (and login terminals) via both xm console and vncviewer.
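From the Dom0, the serial console is reached through xm, using the DomU name from the configuration above:

xm console example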
Now you’re ready to finish installing and configuring the new DomU as you’d like: setting up the public network interface, enabling SSH access, and so on.