Performance issue: CVS checkout to an NFS share on Ubuntu Desktop

Hi All,

We’re encountering issues checking out a CVS repository to an NFS share mounted on a local Ubuntu machine. A checkout to the Ubuntu machine’s local disk takes less than 2 minutes for 3.2 GB, while the same checkout to an NFS share mounted as a local folder takes approximately 5 minutes. In another test, it took almost 10 minutes to check out the repo.

Is this expected or is there something funky going on?

That sounds expected (IMHO).

If your network is 100 Mbps then realistically you will get about 10 MB per second of useful data. That’s much slower than any local disk, so the network is the only bottleneck you need to measure. 3.2 GB at 10 MB/s is about 5.3 minutes. In a perfect environment with gigabit ethernet, you would get it down to about 32 seconds.
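The arithmetic above is easy to verify. A back-of-the-envelope sketch using the figures from this thread (3.2 GB payload, ~10 MB/s usable on 100 Mbps, ~100 MB/s usable on gigabit once protocol overhead is subtracted):

```shell
# Estimated checkout times for a 3.2 GB working copy.
SIZE_MB=3200

T100=$(( SIZE_MB / 10 ))    # seconds at ~10 MB/s usable (100 Mbps)
T1000=$(( SIZE_MB / 100 ))  # seconds at ~100 MB/s usable (1 Gbps)

echo "100 Mbps: ${T100} s (~5.3 min)"
echo "1 Gbps:   ${T1000} s"
```

That lines up with the ~5-minute figure reported for the NFS checkout on a saturated link.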

However, if your target storage is NFS then writing to the disk is also going to suck up that network bandwidth. So absolutely yes it will be slower.

If you assume CVS and NFS traffic each get half the bandwidth, then halve the throughput figure: checking out to NFS on a 100 Mbps network could take 10 minutes. This is expected. Although it sounds like your actual network throughput (probably bottlenecked by CVS and NFS themselves rather than the wire) is around 200 Mbps.
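Plugging in the thread’s own numbers shows where both figures come from, assuming (as above) roughly 10 MB/s usable on 100 Mbps and that CVS reads and NFS writes split the link evenly:

```shell
SIZE_MB=3200

# Half of the ~10 MB/s usable on 100 Mbps, since the CVS download
# and the NFS upload share the same link.
HALF_RATE=5
SECS=$(( SIZE_MB / HALF_RATE ))
echo "100 Mbps link, shared: ${SECS} s (~$(( SECS / 60 )) min)"

# What the local-disk checkout (3.2 GB in ~2 min) implies about
# the actual CVS throughput on this network:
MBPS=$(( SIZE_MB / 120 * 8 ))
echo "observed: ~${MBPS} Mbps"
```

The first figure matches the “could take 10 minutes” estimate; the second matches the ~200 Mbps observation.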

The point is that a local disk is going to be so fast compared to the network that the network is always the bottleneck. If you’re both reading from and writing to the network then your performance will be half (or less) that of only reading from the network. And even fast networks are slow compared to any local disk.

Hi Vanvugt,

Thanks for the quick and very detailed response. We’re running on a gigabit network, which is why the user raised the performance issue.

I’m wondering, though, how such a process (checking out on one machine but writing to an NFS share) is implemented. Are the files first received by the local machine and then pushed by the local machine to the NFS share? If so, that round trip might be the actual bottleneck.
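One way to see where the time goes is to measure the raw NFS write path on its own, without CVS in the picture. A rough sketch — the `TARGET` mount point is a placeholder, so point it at your NFS share:

```shell
# Hypothetical mount point -- replace with your NFS share.
TARGET="${TARGET:-/tmp}"

# Write 512 MB and force it to storage (conv=fsync) so the
# reported rate reflects the NFS/disk path, not the page cache.
dd if=/dev/zero of="$TARGET/ddtest.bin" bs=1M count=512 conv=fsync
```

If the rate dd reports for the share is far below the local-disk figure, the NFS write leg (the second trip over the wire) is where the checkout time is going. Remember to delete `ddtest.bin` afterwards.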

I’ll run another test tomorrow using a different scenario (local machine is a VM on the same subnet as the NFS share and running 10G links) and see how it goes.

Kind regards,

Yes, that’s what I’m saying. The hardware/software is doing its best under the circumstances.

There is no bug or fault here. If you want better performance then you’d need to avoid saturating the network so much (avoid NFS, or add a second network card).

Remember that a fast network (gigabit) is still much slower than a slow hard disk, and is no substitute for a local drive.

You can use cachefilesd on the client to use the local disk as an NFS cache. You might want to test the different options to optimize for your use case. I’ve used this setup for a long time and it works; however, for things like version control (with tons of little files) it can be diminishing returns, and it might be worth investigating a move to a more traditional client/server setup instead of putting the repo on NFS.
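For reference, a minimal FS-Cache setup on an Ubuntu client looks roughly like this. This is a sketch, not a definitive recipe: the `server:/export` and `/mnt/share` names are placeholders, and cache sizing is tuned in `/etc/cachefilesd.conf`:

```shell
# Install the cache daemon; on Debian/Ubuntu it must be enabled
# by uncommenting RUN=yes in /etc/default/cachefilesd.
sudo apt install cachefilesd
sudo sed -i 's/^#RUN=yes/RUN=yes/' /etc/default/cachefilesd
sudo systemctl restart cachefilesd

# Mount the share with the 'fsc' option so the NFS client uses
# the cache; persistently, the /etc/fstab line would look like:
#   server:/export  /mnt/share  nfs  fsc,defaults  0  0
sudo mount -o fsc server:/export /mnt/share
```

The cache lives under `/var/cache/fscache` by default, so make sure that filesystem has room to spare.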

EDIT: Just to clarify, this won’t affect initial checkout performance, but it might help subsequent reads/writes on the client.