Learning to love my I/O: a Raspi NBD retrospective.


After struggling with I/O issues on my Raspi, I decided to try something a little less conventional: a network block device (NBD) served from a CentOS 7 box to my Pidora 21 Raspi 2 system.

I tested the NBD with FIO and found the following.

CPU (more thorough testing coming soon)

  • 100% = 100% of all cores, not 100% per core
  • FIO was run from a third server, with FIO listening in server mode on the client. Running FIO directly on the client added some overhead, about 5-8%
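Fio's built-in network mode handles this split: the system under test runs a bare fio daemon, and the test box pushes the job to it over the network. A rough sketch (the hostname and job file name are placeholders, not the ones actually used):

```shell
# On the Pi (system under test): listen for remote jobs
fio --server

# On the third box: run the job against the Pi. Results stream back
# to the controller, so the Pi only pays the I/O cost itself plus
# a small amount of fio overhead.
fio --client=raspi.example.com nbd-test.fio
```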

CPU Load with SD Card I/O

  • Server Load : N/A
  • Client Load : 30-40%

NBD Load

  • Server Load : (J1900 Celeron) approx. 3% (nbd-server 1025 /mnt/$FILE.NBD)
  • Client Load : (Raspi 2 – ARMv7 Processor rev 5) approx. 45-55% (nbd-client $NBD_SERVER 1025 /dev/nbd0); a decent part of this was IRQ load, which is harder to balance on a Pi


FIO Config



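The exact job file isn't shown here, but a fio job file along these lines (fio's ini job format; the sizes, block size, and paths are my assumptions, not necessarily the original values) would cover the four access patterns tested below:

```ini
; Hypothetical reconstruction -- sizes, block size, and path are guesses.
[global]
directory=/path/under/test   ; SD card mount or the mounted NBD
size=256m
bs=4k
direct=1
runtime=60

[seq-write]
rw=write
stonewall

[rand-write]
rw=randwrite
stonewall

[seq-read]
rw=read
stonewall

[rand-read]
rw=randread
stonewall
```

The `stonewall` lines serialize the jobs so each pattern runs alone rather than all four competing at once.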
I mostly focus on max bandwidth (maxb), but the other results have some minor significance, so I left them in.

Micro-SD card on the Pi (I had to space tests far apart to let the card “catch up”)

#SEQ Write
WRITE: io=61956KB, aggrb=1031KB/s, minb=1031KB/s, maxb=1031KB/s, mint=60070msec, maxt=60070msec
#Random Write
WRITE: io=140372KB, aggrb=2333KB/s, minb=2333KB/s, maxb=2333KB/s, mint=60148msec, maxt=60148msec
#SEQ Read
READ: io=967888KB, aggrb=16106KB/s, minb=16106KB/s, maxb=16106KB/s, mint=60093msec, maxt=60093msec
#Random Read
READ: io=315660KB, aggrb=5260KB/s, minb=5260KB/s, maxb=5260KB/s, mint=60002msec, maxt=60002msec

Pi 100Mb Ethernet

#SEQ Read
READ: io=262144KB, aggrb=9798KB/s, minb=9798KB/s, maxb=9798KB/s, mint=26753msec, maxt=26753msec
#Random Read
READ: io=262144KB, aggrb=3258KB/s, minb=3258KB/s, maxb=3258KB/s, mint=80437msec, maxt=80437msec
#SEQ Write
WRITE: io=262144KB, aggrb=19156KB/s, minb=19156KB/s, maxb=19156KB/s, mint=13684msec, maxt=13684msec
#Random Write
WRITE: io=262144KB, aggrb=12867KB/s, minb=12867KB/s, maxb=12867KB/s, mint=20372msec, maxt=20372msec

As you can see, aside from sequential read, the NBD kills the MicroSD card in the Pi. As with any benchmark, your results WILL vary.

Steps to set this up

  1. (Both) Make sure both boxes have the NBD kernel module built and the tools installed (CentOS dropped support for NBD a while back, but Fedora still has it)
    1. (Build module walkthrough here) https://www.misterx.org/2013/03/05/getting-nbd-network-block-device-back-in-rhel-6-x-and-centos-6-x/
    2. (Tools) yum install nbd
      1. RHEL/CentOS 7: available from EPEL
      2. Fedora: in the default repos
    3. (Verify the module is loaded)
      modprobe nbd && lsmod | grep nbd
      nbd 9421 1 
  2. (Server) Create a backing file using ‘dd’ via these steps:
    1. (Make file) : dd if=/dev/zero of=$FILE bs=1G count=$NUMBER_of_Gigs_You_Want (you could also argue for fallocate here, if you understand what that changes)
    2. (Make FS on file) : ‘(mkfs.xfs | mkfs.ext4 | $WHATEVER_YOU_PREFER) $FILE’
    3. (OPTIONAL – verify FS) : file $FILE (returns something like “$FILE_PATH_AND_NAME: SGI XFS filesystem data (blksz 4096, inosz 256, v2 dirs)”)
  3. (Server) Create a script to start this at boot, ‘nbd-server $PORT /path/to/file/to/export’, or use ‘/etc/sysconfig/nbd-server’ and start nbd-server however you prefer.
  4. (Client) Start an nbd-client with syntax similar to ‘nbd-client $NBD_SERVER_HOST $PORT_OF_NBD_SERVER /dev/nbd0’, for example ‘nbd-client NBDSERVER.EXAMPLE.COM 2929 /dev/nbd0’. UPDATE:

    Newer versions of NBD can use a config file and the default port of 10809:

    1. nbd-server -C /path/to/nbd-server/config
    2. nbd-client $NBD_SERVER /dev/nbd0 -N $SECTION_NAME_IN_SERVER_CONFIG
  5. (Client) Mount the NBD locally via ‘mount /dev/nbd$NUMBER_FROM_NBDCLIENT /PATH/TO/MOUNT’, for example ‘mount /dev/nbd0 /usr/local/NBD/’
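Pulling the steps above together, a minimal end-to-end session looks roughly like this (hostnames, the port, sizes, and paths are examples, using the older positional nbd-server/nbd-client syntax):

```shell
## Server (CentOS 7) ##
modprobe nbd                                       # needs the rebuilt module
dd if=/dev/zero of=/mnt/export.nbd bs=1G count=8   # 8 GiB backing file
mkfs.xfs /mnt/export.nbd                           # put a filesystem on it
nbd-server 1025 /mnt/export.nbd                    # export it on TCP port 1025

## Client (Raspi 2) ##
modprobe nbd
nbd-client nbdserver.example.com 1025 /dev/nbd0    # attach the export
mkdir -p /usr/local/NBD
mount /dev/nbd0 /usr/local/NBD                     # mount it like any block device
```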


  • If you use your Pi’s NIC for other network-related tasks, this could cause issues when the NBD is under high load. I used a USB Ethernet device on my Pi for other basic network needs, so the onboard Ethernet is mostly dedicated to the NBD and management.
  • This is not perfect and should not be used for mission-critical data! This was a way to get space/speed in a pinch for a project. For anything resembling production, a lot more work would need to be done, or a more robust solution should be considered!
  • The results off the SD card varied wildly compared to the NBD, but the card was never close in any area outside of sequential read.
  • NBD performance will vary based on what the server is using for storage/network and the load it is under. Your results WILL vary!
