Hacker News

Just deployed a FreeBSD droplet, and I'm not sure if it's just because the host network is busier than on my other droplets, but I seem to be getting about half the network performance that I get in a default Linux droplet. They are using virtio, which is good since it doesn't require hardware emulation like the e1000-family devices on KVM. I should probably use a better test than Cachefly, but I'm just wondering if there are any known tweaks/tips for FreeBSD on KVM with virtio devices.
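For what it's worth, one commonly suggested tweak for FreeBSD guests on KVM with virtio NICs is to disable the hardware-offload features on the vtnet interface, since some hosts handle them badly. A sketch only, assuming the interface is named vtnet0 (run as root on the droplet):

```shell
# Sketch: disable checksum/TSO/LRO offloads on the virtio NIC.
# "vtnet0" is an assumption; check your interface name with ifconfig.
ifconfig vtnet0 -txcsum -rxcsum -tso -lro

# To make it persistent, something like this in /etc/rc.conf:
# ifconfig_vtnet0="DHCP -txcsum -rxcsum -tso -lro"
```

Whether this helps depends on the host's virtio implementation, so benchmark before and after.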

Disk performance is also lacking in comparison to the Ubuntu droplet, as shown in the pastebin. Could just be because everyone's spinning up FreeBSD boxes on this host? :)


I am also seeing really poor write speeds compared to my Linux droplets: 11 MB/sec on FreeBSD vs. 216 MB/sec on Linux.

With FreeBSD, in order for automatic backups to work, we were required to disable journaling on the disk. This will cause disk speed to be much slower than on our Linux distributions.

If you wish to have higher disk speeds, but not use backups, we recommend remounting your disk with journaling enabled. https://www.freebsd.org/doc/en/articles/gjournal-desktop/con...
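Note that the linked article covers gjournal; the lighter-weight alternative on UFS is soft-updates journaling (SU+J), which is toggled with tunefs(8) rather than a mount option. A sketch, assuming the root filesystem lives on /dev/gpt/rootfs (the device name is a guess) and that you can get to single-user mode:

```shell
# Sketch: from single-user mode (root filesystem mounted read-only),
# enable soft-updates journaling (SU+J) on the root filesystem.
tunefs -j enable /dev/gpt/rootfs   # device name is an assumption
reboot
# Afterwards, mount(8) should report "journaled soft-updates" for /.
```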

DO disabling journaling is a big deal.

What else has DO disabled and/or modified from a vanilla install?

DO modifies a lot of stuff on vanilla installs. They're one of the only providers I know of that removes the swap partition.

Where does DO document all of the modifications they make to a vanilla FreeBSD install?

I just use Vultr and install from a downloaded FreeBSD ISO. No modifications to the base install at all, and it works as expected.

DO must have screwed something up with their configuration if they need to make so many changes to get it working. No other VPS provider I have used that supports FreeBSD (or OpenBSD, for that matter) requires any changes to the default install.

On UFS, soft updates have lower overhead than soft updates with metadata journaling. The downside of soft updates without journaling is that they require a background fsck to recover leaked space after an unclean shutdown. I prefer UFS2 SU (not SU+J) on FreeBSD for small systems, because SU+J is incompatible with UFS snapshots.
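The current state is easy to inspect and toggle with tunefs(8); a sketch, where the device name is an assumption and the changes require the filesystem to be unmounted or mounted read-only:

```shell
# Print current UFS tuning, including soft updates and journaling state:
tunefs -p /dev/gpt/rootfs

# Plain soft updates without journaling (keeps UFS snapshots working):
tunefs -n enable -j disable /dev/gpt/rootfs
```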

> required to disable journaling on the disk

Shouldn't this make it much faster as the FS no longer maintains a journal?

Why did journaling interfere with automatic backups?

Journaling generally interferes with dump(8). It seems odd that they would be using dump to do backups, though, since they are virtualizing the OS.
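Concretely, dump(8) on a live filesystem is normally run with -L, which takes a UFS snapshot first, and snapshots are exactly what SU+J breaks. A hypothetical invocation:

```shell
# Sketch: level-0 dump of / via a snapshot (-L).
# On an SU+J filesystem the snapshot step would fail.
dump -0 -L -a -u -f /backup/root.dump /   # output path is an assumption
```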

The parent link to gjournal is weird too, since gjournal is not the same as soft updates with journaling (SU+J). It makes me wonder if they actually disabled soft updates too.

I found a random post[1] about issues (unmapped I/O?) running i386 FreeBSD with SU+J in VirtualBox (apparently a VirtualBox bug[2]).

[1]: https://forums.freebsd.org/threads/freebsd-10-i386-data-corr...

[2]: https://lists.freebsd.org/pipermail/freebsd-current/2013-Nov...

That deserves an explanation. You can back up non-ext2 Linux partitions, surely? What have I misunderstood?

So I enabled journaling, and that brought it up to 30 MB/sec. Still a substantial difference vs. 216 MB/sec on Linux. Something in the DO KVM host setup needs to be tweaked a bit; 30 MB/sec is pretty poor disk throughput for SSD-backed storage.

There is no way to boot into single-user mode for now (or at least the instructions provided aren't working).

And the reason was an invalid /boot/loader.conf:

    console="vidconsole,comconsole"
    autoboot_delay="10"
    console="comconsole,vidconsole"
    autoboot_delay="1"

The second console line actually disables web console access, which is fatal.

The autoboot delay is not critical, although a default of 3 seconds would give a much better opportunity to change boot options from the web console.
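Along those lines, a minimal /boot/loader.conf might look like this, assuming the web console attaches to the video console as the comment above suggests (the values are a sketch, not DO's official configuration):

```shell
console="vidconsole,comconsole"   # video console first so the web console works
autoboot_delay="3"                # short, but enough time to interrupt boot
```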

This apparently got fixed; on my "ams3" instance I've had no issues getting the web console to work (I did need to "tunefs -n enable /").

This should be done by default on droplets which do not have backups enabled.

Are these in the same datacenter? The same rack? I suspect that degraded instances and overly busy disks are not unique to AWS.

Mine are both in NYC3, so same datacenter. No clue regarding rack/host. I can't comment on AWS...

FreeBSD is plenty fast. I expect Digital Ocean simply needs to work out some kinks. It will be fast soon enough.

It's not only DO. I had to create a Linux VPS in order to run a Sinatra application, because when deployed on FreeBSD it took more than 60 seconds to send a response to the remote API and the connection timed out!

After performing some tests[2] I figured out that the problem was not FreeBSD per se, but the FreeBSD deployment on the specific virtual server... I think that *BSDs should be avoided because they tend to be a lot slower than Linux deployments on virtual machines.

[1] http://www.transip.eu

[2] https://gist.github.com/atmosx/14efea27eb2c1e38af09/

> I think that *BSDs should be avoided because they tend to be a lot slower than Linux deployments on virtual machines.

Many virtualisation providers don't support it properly, but "should be avoided because my suppliers are stupid" is a terrible plan.

Something must have been horribly, horribly broken in your FreeBSD installation or hosting environment, because there's no way any Sinatra service takes 60 seconds to respond. And your gist doesn't show a significant performance difference between Linux and FreeBSD.

FreeBSD often has better performance than Linux when both are properly hosted. I think the lesson is more like "when a provider is new to a particular system, there will be problems; for production systems use a provider that's experienced at supporting the kind of system you want to use".

I disagree with this. FreeBSD runs fine and speedily at other providers on virtualized platforms.

    > time dd if=/dev/zero of=/tmp/test bs=64k count=16k
    16384+0 records in
    16384+0 records out
    1073741824 bytes transferred in 57.605991 secs (18639412 bytes/sec)
    0.023u 6.128s 0:57.61 10.6% 25+172k 7+81916io 3pf+0w

    > sudo mount -o nosync -u /
    > mount
    /dev/gpt/rootfs on / (ufs, local, soft-updates)
    devfs on /dev (devfs, local, multilabel)

    > time dd if=/dev/zero of=/tmp/test bs=64k count=16k
    16384+0 records in
    16384+0 records out
    1073741824 bytes transferred in 5.135908 secs (209065631 bytes/sec)
    0.016u 2.274s 0:05.16 44.1% 24+169k 8+8193io 0pf+0w
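The bytes/sec figures in the two runs line up with the 1 GiB transferred and the elapsed times; a quick check of the arithmetic (MB here meaning 10^6 bytes, as dd reports):

```shell
# 1 GiB in ~57.6 s is ~18 MB/s; the same write in ~5.1 s is ~209 MB/s.
awk 'BEGIN {
  b = 1073741824
  printf "%d %d\n", int(b / 57.605991 / 1e6), int(b / 5.135908 / 1e6)
}'
```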

Yes, it's really sluggish. Also, both power off and resize do not work for me; I wanted to upgrade it to compare the performance.

Could you open up a support ticket so our support team could help you out? Both of those should work without issue.
