The following tips work both under VMWare ESX (2 & 3) and VMWare Server 1.0 and 2.0. See also information about VMWare Server 2 and FreeBSD. I've successfully run dozens of VMWare virtual machines with FreeBSD 7 and 8 guests on the free ESXi product, which I recommend because it performs better than VMWare Server.

1. General tips

Don't use a virtual machine for network-heavy workloads. VMware and other full-hardware virtualization environments (MS Virtual PC, QEmu, etc.) impose a heavy penalty on I/O, especially network I/O. Expect to get only around 30%-40% out of a gigabit interface (which still amounts to ~40 MB/s). For example, don't use virtual machines as network routers or for similar tasks. Also, don't use them for tasks that require exact timing (e.g. multimedia processing, industrial machine control, etc.). These points apply to any combination of virtualization software and guest OS. For a detailed discussion, see this Slashdot thread on jails vs VMWare.

2. Don't use lnc

This tip is no longer current, as lnc doesn't exist in FreeBSD 7. It's still valid for FreeBSD 6.x versions.

Although it's the default, the lnc driver is the worst network driver for your virtual machine. It's GIANT-locked (meaning it doesn't allow much parallelism in the OS), it's deprecated, and it will be dropped in FreeBSD 7. The replacement for lnc is le, which is present at least in FreeBSD 6.2 and newer, but it's not included in the default GENERIC kernel. Thus, you'll have to configure and compile a custom kernel with device lnc replaced by device le, as sketched below. (Just loading the if_le kernel module won't work because the lnc driver present in the kernel at boot time will detect and claim the hardware first.)
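A rough sketch of the switch, assuming a custom kernel config copied from GENERIC and named VMGUEST (the name is just an example):

    # /usr/src/sys/i386/conf/VMGUEST -- copy of GENERIC with one change:
    #   remove:  device  lnc
    #   add:     device  le

    cd /usr/src
    make buildkernel KERNCONF=VMGUEST
    make installkernel KERNCONF=VMGUEST
    # after a reboot the interface should show up as le0 instead of lnc0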

There's an undocumented configuration option for virtual machines that makes VMWare Server emulate Intel E1000 hardware instead of the AMD Lance. To use it, edit the .vmx file and add ethernet0.virtualDev="e1000" (anywhere in the file). The emulated device also has TSO support (which is usable in FreeBSD 7, though I don't know what performance can be achieved with the emulated hardware). The em driver is faster and not GIANT-locked, so it should give the best performance.
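Roughly, the relevant part of the .vmx file would then look like this (the connection type is just an example taken from a typical bridged setup; only the virtualDev line is the point here):

    ethernet0.present = "TRUE"
    ethernet0.connectionType = "bridged"
    ethernet0.virtualDev = "e1000"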

3. Reduce kern.hz

The kernel's timer frequency ("HZ") in FreeBSD 6.x and above is set to a relatively large value: 1000 Hz. While beneficial on real hardware, a high HZ setting hurts a virtual machine's performance because the VM host software spends too much time handling timer interrupts, which causes context switches, cache flushes and other performance-hindering operations.

You can change the HZ setting by adding a line like kern.hz=50 to /boot/loader.conf. You can also try very low values like 10, but test first!
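For example, in /boot/loader.conf (50 is just the conservative starting value suggested above):

    # /boot/loader.conf
    kern.hz="50"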

4. Disable internal VMWare swapping

Consider disabling VMWare's internal memory swapping and making the virtual machine fit into the physical memory of the host for best performance. Of course, learn about the impact of VMWare's memory management before committing to this.
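A rough sketch of the relevant .vmx tuning on a hosted product such as VMWare Server: mainMem.useNamedFile keeps guest memory in host RAM rather than a host-side file, MemTrimRate=0 stops the host from reclaiming guest memory while the VM runs, and sched.mem.pshare.enable=FALSE turns off page sharing between VMs. These option names come from VMWare's hosted products and may differ between versions, so verify them against your product's documentation before relying on them.

    mainMem.useNamedFile = "FALSE"
    MemTrimRate = "0"
    sched.mem.pshare.enable = "FALSE"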

5. VMWare Tools not necessary

It would be nice to have VMWare Tools working 100% on FreeBSD, but apparently the company doesn't want to support it properly. Currently, the only features VMWare Tools brings to FreeBSD are GUI conveniences like clipboard sharing and automatic mouse focus grab in X11. VMWare Tools are not needed at all for the following things to work: networking, the timer, and the X.Org GUI.

Networking is handled by the le driver or the em driver. These two work without any special configuration of FreeBSD. To use the em driver, you might need to modify the VM configuration to include ethernet0.virtualDev = "e1000" or a similar appropriate line, as described in section 2. To use the VMWare vmxnet driver (which, as far as I can see, isn't much different from the le driver), you need to build a kernel without the le driver first.

Timer issues can be lessened (never resolved, even with VMWare Tools) by reducing kern.hz to something like 50 or 100 Hz (in loader.conf) and installing ntpd.
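For example, assuming the stock ntpd from the base system (the loader.conf line repeats the tuning from section 3):

    # /boot/loader.conf
    kern.hz="100"

    # /etc/rc.conf
    ntpd_enable="YES"
    ntpd_sync_on_start="YES"   # step the clock at boot if it has drifted far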

X.Org can use the generic "vmware" display driver which is included in the default X.Org collection of drivers. Mouse, etc. are also handled generically.
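A minimal sketch of the corresponding xorg.conf entry (the Identifier string is just a placeholder; only the Driver line matters):

    Section "Device"
        Identifier "Card0"
        Driver     "vmware"
    EndSection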

The only remaining functionalities are the ability to "shrink" drives and the ability to soft-shutdown guest machines. For the first one there is a substitute if you're running VMWare Server on a Windows host: third-party utilities for VMWare that can do the same thing. Soft shutdown is still best handled by proper VMWare Tools.

The emulators/open-vm-tools port builds and works fine in 7.x and 8.x on both i386 and amd64. This port builds working kernel modules: vmmemctl, vmblock and vmhgfs. It also builds vmxnet, but the network is usually better handled by the em driver. As far as I can tell, the Open VMWare Tools (those that work in FreeBSD) are stable, and at least there is no downside to using them.
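If you want to try it, the usual port dance applies. The rc.conf knobs below are the ones the port's rc scripts install; check /usr/local/etc/rc.d after installing to confirm the exact names for your version:

    cd /usr/ports/emulators/open-vm-tools && make install clean

    # /etc/rc.conf
    vmware_guestd_enable="YES"
    vmware_guest_vmmemctl_enable="YES"
    vmware_guest_vmblock_enable="YES"
    vmware_guest_vmhgfs_enable="YES"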

6. SMP

SMP can be useful, at least for certain workloads and in recent versions of VMWare products. Virtualized I/O is slow, and it seems slightly slower with SMP, so there is no benefit in enabling SMP for I/O-driven workloads (disk, network or anything else). On the other hand, it helps with CPU-driven workloads. For example, make buildworld -j2 makes fairly good use of a real 2-CPU system, but when running virtualized, I/O wait is so pronounced that it takes at least -j3 to avoid noticeable idle time.
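For instance, on a 2-vCPU guest something like this keeps both CPUs busy despite the I/O wait (adjust -j to taste):

    cd /usr/src && make -j3 buildworld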