How to: LetsEncrypt in standalone mode for UniFi on Ubuntu 20.04 LTS

This is an update of my previous post, now that certbot is more mature and I’ve rebuilt my server on Ubuntu 20.04 (from 18.04).

  1. Install certbot
  2. Install the script to update the UniFi certificate
  3. Test
  4. Issue the full certificate
  5. Install cron jobs to automate renewal

Install certbot

Certbot installation instructions are available online, of course, but here’s a summary:

  1. Update package list:
    sudo apt update
  2. Install it:
    sudo apt install -y certbot

Create a new certificate using LetsEncrypt

We’re going to use standalone mode, and first we’ll get a test certificate just to validate that everything’s working (so that we don’t trigger LetsEncrypt’s rate limits).

  1. Open port 80 in ufw:
sudo ufw allow http
  2. Test certificate issuance:
sudo certbot certonly --standalone -d <hostname> -n --test-cert --agree-tos -m <email>

You should see something like this:

Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator standalone, Installer None
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for <hostname>
Waiting for verification...
Cleaning up challenges

 - Congratulations! Your certificate and chain have been saved at:
   Your key file has been saved at:
   Your cert will expire on 2021-04-08. To obtain a new or tweaked
   version of this certificate in the future, simply run certbot
   again. To non-interactively renew *all* of your certificates, run
   "certbot renew"
 - Your account credentials have been saved in your Certbot
   configuration directory at /etc/letsencrypt. You should make a
   secure backup of this folder now. This configuration directory will
   also contain certificates and private keys obtained by Certbot so
   making regular backups of this folder is ideal.
  3. If that’s worked, close the firewall (sudo ufw deny http) and move on to the next step to install the certificate in UniFi. Later, we’ll come back and get a ‘real’ (not staging) certificate.
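To confirm what certbot wrote, you can inspect the issuer and expiry of the new certificate with openssl. A small sketch (the live path shown is certbot’s default; substitute your own hostname):

```shell
# Print the issuer and expiry date of a PEM certificate.
cert_info() {
  openssl x509 -noout -issuer -enddate -in "$1"
}

# For the staging certificate above, the issuer line mentions a fake
# intermediate (e.g. "Fake LE Intermediate X1"):
#   sudo cert_info /etc/letsencrypt/live/<hostname>/cert.pem
```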

Install the certificate in UniFi

I use an amazing certificate installation script from Steve Jenkins.

  1. Get the script:
  2. Edit the config settings in the script to add your hostname, switch from Fedora/RedHat/CentOS to Debian/Ubuntu, enable LE_MODE, and disable the key paths:

# Uncomment following three lines for Fedora/RedHat/CentOS
# UNIFI_DIR=/opt/UniFi
# KEYSTORE=${UNIFI_DIR}/data/keystore

# Uncomment following three lines for Debian/Ubuntu

# Uncomment following three lines for CloudKey

# Generate your Let's Encrypt key & cert with certbot before running this script

# PRIV_KEY=/etc/ssl/private/
# SIGNED_CRT=/etc/ssl/certs/
# CHAIN_FILE=/etc/ssl/certs/startssl-chain.crt
  3. Copy it to /usr/local/bin and make it executable:
sudo cp /usr/local/bin/
sudo chmod +x /usr/local/bin/
  4. Run the script to import the certificate. Look for any errors:
sudo /usr/local/bin/
  5. Navigate to your server (https://<hostname>:8443). If it worked, you’ll see a warning that the certificate isn’t trusted, but when you examine the cert, it’s issued by a ‘fake’ Let’s Encrypt issuer, for example:
Certificate showing a chain back to a root called 'Fake LE Intermediate X1'

Get the real LetsEncrypt certificate

Simply run the same certbot command as before, but leave off the --test-cert flag and add the --force-renewal flag to force it to replace the (unexpired) test certificate:

sudo certbot certonly --standalone -d <hostname> -n --force-renewal --agree-tos -m <email>

and rerun the installation script:

sudo /usr/local/bin/

Close the browser window and reopen it, then navigate to your server again. You should now see the valid certificate:

A trusted certificate chain for the host

Automate renewal and issuance

Set up a crontab to renew the cert. Pick a randomish time. It should run every day; if the certificate is still valid, certbot will simply skip the renewal.

  1. Load the crontab (you may be asked to pick an editor; I suggest nano):
    sudo crontab -e
  2. Add the schedule. Use crontab guru if you aren’t familiar with crontab schedule expressions, and set up tasks to:
    1. request a new certificate, and
    2. install the updated certificate. I chose a time just over an hour after certificate issuance.

It should look like this:

# renew any certificates due to expire soon at 05:20 each day
20 5 * * * /usr/bin/certbot renew --standalone -n --agree-tos -m <email> --pre-hook 'ufw allow http' --post-hook 'ufw deny http'
# install any updated certificates at 06:29 each day
29 6 * * * /usr/local/bin/

The --pre-hook and --post-hook commands tell UFW to open port 80 before renewal and close it again afterwards.
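Spelled out, the renewal line amounts to the following sequence. This is only an illustration of what certbot does with the hooks, not something you need to run separately:

```shell
# What the cron entry does each day (illustration only; certbot runs the
# hooks itself and skips renewal while the certificate is still valid):
renew_with_hooks() {
  ufw allow http                 # --pre-hook: open port 80 for the challenge
  certbot renew --standalone -n  # renews only certs close to expiry
  ufw deny http                  # --post-hook: close port 80 again
}
```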

Wiring a Yale Keyless Connected Smart Lock to the mains

For various reasons, not least because I wanted to play with it, we have a Yale Keyless Connected Smart Door Lock with a Z-Wave module (we have the v1 module which works fine). This lock has a couple of key features that we liked:

a hand places a round tag next to a lock which is lighting up
  • You can grant or revoke access using RFID tags or cards, or by entering a 6-8 digit code on the keypad.
  • With the Z-Wave module (and a compatible Z-Wave controller), you can programmatically add and remove codes, so you can enable codes at specific times or dates. For us, this meant we could create a code for the cleaner, but if they turned up at 2am on a Saturday, the door wouldn’t open for them.

It’s connected to our Samsung SmartThings hub, and I run the RBoy Apps custom device type and smart app to enable the scheduled key rotation etc. Overall, we’ve been fairly happy with it, but the thing really does eat up batteries, and I started to feel guilty about putting between four and eight AA batteries in the bin each month. Of course, I also got annoyed at constantly having to buy and change them, so I decided to try rechargeables.

We bought some Panasonic Eneloop Pro batteries. I’d read a very interesting piece of research showing how high performance NiMH batteries actually outperform alkaline batteries – delivering a stable ~1.2v for far longer. As it turns out, this is a problem.

With a regular battery, as the charge drops, the device detects this and fires off an alert reminding you to change them. As the research showed, however, NiMH batteries provide a fairly constant 1.2v until the “power” in the batteries is pretty much depleted, and then they just die. This isn’t a problem for a radio-controlled car. But of course, if the batteries go flat on your front door lock, you can’t get into your house, as there’s no key override on it, and with no alerts, we wouldn’t know to change them. Although you can power the lock from the outside in an emergency using a 9v battery, after a particularly embarrassing situation where I discovered that the 9v battery I had stored in the glove box had expired 2 years ago, I decided to figure out how to wire the lock up to a permanent power supply. The main challenge here, of course, is that I would like to be able to unlock my house even when the power is out. After a bit of thinking, I decided that I probably needed a battery in there somewhere too.

Picture from ebay seller random-bargains2009

The first challenge was working out how to wire up the device. Ideally, I didn’t want to be soldering connectors on. After a bit of research, I found a “4 X 6V AA MONEY SAVING BATTERY REPLACEMENT PLUG IN ADAPTER” on eBay (the item I bought was from “random-bargains2009”, but there were three or four similar ones from other sellers). This is basically an AA battery with a wire coming out of it connected to a mains adaptor, plus three “dummy” battery blanks. I cut off the AC adaptor, soldered on a USB A plug, connected it to a Belkin USB battery pack, then plugged that in to a charger to keep it constantly topped up and … nothing. It turns out that Belkin battery packs can’t provide power and be charged at the same time. Doh!

I tried another battery pack, and all was fine until my wife tried to come in about 5 minutes later. Apparently the “smart” charge controller built in to the battery didn’t detect sufficient current, and so switched off the battery pack. Brilliant.

After a bit of research, I was able to find a 3,000mAh battery pack with a USB plug (from Amazon) that was designed to provide backup power for 12v CCTV cameras. Made by Chinese company TalentCell, it claims CE compliance for both the batteries and the charger. Mine arrived from Germany with an EU plug, but I already had some fused, screw fixed adaptor plugs, so not a problem.

Finally, I ran the cable around the frame of the glass in the door to try and keep it discreet, and I’m now confident that we won’t ever be locked out again.

LetsEncrypt with DNS-01 validation for UniFi

Update 2021-01-08: this is now out of date. See my updated post with a much easier method.

I have a number of Ubiquiti UAPs, and I manage them with the UniFi app, installed on a Linode server. Like any publicly hosted server, I want it to use a trusted SSL certificate, and for that I chose LetsEncrypt with DNS-01 validation, as I found a useful helper script by thatsamguy on the UniFi forums. I use AWS Route53 to host the DNS zone.

UniFi doesn’t have built-in support for LetsEncrypt, so I put together a simple solution using the DNS-01 validation method. Here’s how I did it:

  1. Created a new, unprivileged user on the host:
    sudo adduser dehydrated
  2. Created the directory to store the certificates. I chose to use the default /etc/letsencrypt folder:
    sudo mkdir /etc/letsencrypt
  3. I granted the dehydrated user full access to this folder:
    sudo chown dehydrated:dehydrated /etc/letsencrypt
  4. Installed the dependencies that are available in the repos:
    sudo apt-get install --no-install-recommends jq sed findutils s-nail
  5. Installed the cli53 dependency, which is not in the repos, by following the instructions in its GitHub repo
  6. Logged in as the dehydrated user:
    su - dehydrated
  7. Fetched dehydrated and made it executable:
    chmod +x dehydrated
  8. Fetched the dehydrated route53 hook script and made it executable:
    chmod +x
  9. Created a new IAM access policy in AWS. I found that the sample policy given with the route53 hook didn’t work – here’s the policy that I added:
        "Version": "2012-10-17",
        "Statement": [
                "Effect": "Allow",
                "Action": [
                "Resource": "*"
                "Effect": "Allow",
                "Action": [
                "Resource": "arn:aws:route53:::hostedzone/*"
                "Effect": "Allow",
                "Action": [
                "Resource": "arn:aws:route53:::change/*"
  10. Created a new IAM user in AWS and assigned the policy to them (and only this policy), taking a note of the AWS Access Key and AWS Secret Access Key.
  11. Created a new file to store the key:
    mkdir ~/.aws
    nano ~/.aws/credentials

    The file looks like this (obviously, put your own data in there):

    [default]
    aws_access_key_id = AKIAIJFAKGII4MDS3KHA
    aws_secret_access_key = Q+XoOGa5J3AS39as593Ds1f5F91zRy0btkfW
  12. Created a new config file:
    nano ~/config

    The file looks like this (edited from the sample):

    # This is the main config file for dehydrated          #
    #                                                      #
    # This file is looked for in the following locations:  #
    # $SCRIPTDIR/config (next to this script)              #
    # /usr/local/etc/dehydrated/config                     #
    # /etc/dehydrated/config                               #
    # ${PWD}/config (in current working-directory)         #
    #                                                      #
    # Default values of this config are in comments        #
    # Resolve names to addresses of IP version only. (curl)
    # supported values: 4, 6
    # default: 
    # Path to certificate authority (default:
    # CA=""
    # Path to certificate authority license terms redirect (default:
    # CA_TERMS=""
    # Path to license agreement (default: )
    # Which challenge should be used? Currently http-01 and dns-01 are supported
    # Path to a directory containing additional config files, allowing to override
    # the defaults found in the main configuration file. Additional config files
    # in this directory needs to be named with a '.sh' ending.
    # default: 
    # Base directory for account key, generated certificates and list of domains (default: $SCRIPTDIR -- uses config directory if undefined)
    # File containing the list of domains to request certificates for (default: $BASEDIR/domains.txt)
    # Output directory for generated certificates
    # Directory for account keys and registration information
    # Output directory for challenge-tokens to be served by webserver or deployed in HOOK (default: /var/www/dehydrated)
    # Default keysize for private keys (default: 4096)
    # Path to openssl config file (default:  - tries to figure out system default)
    # Extra options passed to the curl binary (default: )
    # Program or function called in certain situations
    # After generating the challenge-response, or after failed challenge (in this case altname is empty)
    # Given arguments: clean_challenge|deploy_challenge altname token-filename token-content
    # After successfully signing certificate
    # Given arguments: deploy_cert domain path/to/privkey.pem path/to/cert.pem path/to/fullchain.pem
    # BASEDIR and WELLKNOWN variables are exported and can be used in an external program
    # default: 
    # Chain clean_challenge|deploy_challenge arguments together into one hook call per certificate (default: no)
    # Minimum days before expiration to automatically renew certificate (default: 30)
    # Regenerate private keys instead of just signing new certificates on renewal (default: yes)
    # Create an extra private key for rollover (default: no)
    # Which public key algorithm should be used? Supported: rsa, prime256v1 and secp384r1
    # E-mail to use during the registration (default: )
    [email protected]
    # Lockfile location, to prevent concurrent access (default: $BASEDIR/lock)
    # Option to add CSR-flag indicating OCSP stapling to be mandatory (default: no)
    # Fetch OCSP responses (default: no)
    # Issuer chain cache directory (default: $BASEDIR/chains)
    # Automatic cleanup (default: no)
  13. Created a new file with the list of domains to register:
    nano ~/domains.txt

    The file looks like this (obviously, put your own data in there):
  14. Checked that the certificate registers correctly – note that if you are having trouble, you should enable the “staging” CA / terms file in the config while you troubleshoot, to avoid hitting LetsEncrypt rate limits:
    [email protected]:~$ ./dehydrated --cron --accept-terms --out /etc/letsencrypt
    # INFO: Using main config file /home/dehydrated/config
    + Generating account key...
    + Registering account key with ACME server...
     + Signing domains...
     + Creating new directory /home/dehydrated/certs/ ...
     + Generating private key...
     + Generating signing request...
     + Requesting challenge for
    Creating challenge record for in zone
    Created record: ' 60 IN TXT "cpE1VF_xshMm1IVY1Y66Kk9Zb_7jA2VFkP65WuNgu3Q"'
    Waiting for sync...................................
     + Responding to challenge for
    Deleting challenge record for from zone
    1 record sets deleted
     + Challenge is valid!
     + Requesting certificate...
     + Checking certificate...
     + Done!
     + Creating fullchain.pem...
     + Using cached chain!
     + Done!
    + Running automatic cleanup
  15. Created the helper script, remembering to edit the domain (find/replace) and the certificate path:
    nano updateunificert
    My file looks like this:


    openssl pkcs12 -export -in /etc/letsencrypt/ -inkey /etc/letsencrypt/ -out /etc/letsencrypt/ -name tomcat -CAfile /etc/letsencrypt/ -caname root -password pass:aaa;
    rm -f /etc/letsencrypt/;
    keytool -importkeystore -srcstorepass aaa -deststorepass aircontrolenterprise -destkeypass aircontrolenterprise -srckeystore /etc/letsencrypt/ -srcstoretype PKCS12 -alias tomcat -keystore /etc/letsencrypt/;
    keytool -import -trustcacerts -alias unifi -deststorepass aircontrolenterprise -file /etc/letsencrypt/ -noprompt -keystore /etc/letsencrypt/;
    mv /var/lib/unifi/keystore /var/lib/unifi/keystore-backup;
    cp /etc/letsencrypt/ /var/lib/unifi/keystore;
    service unifi restart;
  16. Created cron jobs to trigger the cert update daily:
    crontab -e
    I added this line:


    20 5 * * * /home/dehydrated/dehydrated --cron --accept-terms --out /etc/letsencrypt >/dev/null 2>&1
  17. Returned to my previous shell:
    exit
  18. Created cron jobs to install the new cert daily:
    sudo crontab -e
    I added this line:


    29 5 * * * /home/dehydrated/updateunificert >/dev/null 2>&1
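For reference, the action lists in the IAM policy in step 9 were lost in transcription. A typical minimal Route53 policy for DNS-01 validation looks like the following; this is my reconstruction using standard Route53 actions, not necessarily the exact policy used here:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["route53:ListHostedZones"],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": ["route53:ChangeResourceRecordSets", "route53:ListResourceRecordSets"],
            "Resource": "arn:aws:route53:::hostedzone/*"
        },
        {
            "Effect": "Allow",
            "Action": ["route53:GetChange"],
            "Resource": "arn:aws:route53:::change/*"
        }
    ]
}
```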

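The first step of the updateunificert helper (step 15) can be sanity checked on its own. Here is a sketch of the PKCS12 bundling with hypothetical filenames (the real paths were elided above), with the subsequent keytool import shown as comments:

```shell
# Step 1 of the helper script: bundle key + cert into a PKCS12 file.
# Filenames are hypothetical; substitute your real certificate paths.
make_p12() {
  cert="$1"; key="$2"; out="$3"
  openssl pkcs12 -export -in "$cert" -inkey "$key" \
    -name tomcat -password pass:aaa -out "$out"
}

# Steps 2-3 (shown as comments; they need the UniFi host's Java keytool):
#   keytool -importkeystore -srckeystore bundle.p12 -srcstoretype PKCS12 \
#     -srcstorepass aaa -alias tomcat \
#     -deststorepass aircontrolenterprise -destkeypass aircontrolenterprise \
#     -keystore unifi.keystore
#   sudo cp unifi.keystore /var/lib/unifi/keystore && sudo service unifi restart
```
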
How to upgrade VMWare ESXi on HP Gen8 Microserver

  1. Go to the VMware ESXi Patch Tracker and check for the latest image profile, e.g. ESXi-6.5.0-20170404001-standard
  2. Shut down all VMs and turn on maintenance mode
  3. Allow outbound firewall requests:
    esxcli network firewall ruleset set -e true -r httpClient
  4. Execute the update:
    esxcli software profile update -p ESXi-6.5.0-20170404001-standard -d
  5. Disable firewall ports
    esxcli network firewall ruleset set -e false -r httpClient
  6. Disable maintenance mode
  7. Reboot
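The steps above can be collected into one sketch. The profile name is the example from step 1, and the depot URL is VMware’s standard online depot (verify it is still current before relying on it):

```shell
# Patch ESXi to a named image profile (run in an ESXi shell, e.g. over SSH).
patch_esxi() {
  profile="$1"  # e.g. ESXi-6.5.0-20170404001-standard
  esxcli system maintenanceMode set --enable true
  esxcli network firewall ruleset set -e true -r httpClient
  esxcli software profile update -p "$profile" \
    -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
  esxcli network firewall ruleset set -e false -r httpClient
  esxcli system maintenanceMode set --enable false
  # finally: reboot
}
```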

Creating a VM to test Docker – migrating from Unraid to Napp-It


  1. Downloaded the Ubuntu 16.04 Server LTS CD
  2. Created a new VM
  3. Connected the ISO to the virtual DVD in the console in the VMWare vSphere client
  4. Booted to the ISO
  5. Pressed F4 and selected “minimal”
  6. Pressed F6 and selected minimal VM
  7. Walked through the installer and added SSH at the right point
  8. SSHed to the server
  9. Installed Docker using these instructions, including configuring it to start on boot.
  10. Created /mnt/cups
  11. Ran a new Docker container…
    [email protected]:/mnt$ sudo docker run -d --name="cups-google-print" --net="host" --privileged="true" -e TZ="UTC" -e HOST_OS="ubuntu" -e "CUPS_USER_ADMIN"="admin" -e "CUPS_USER_PASSWORD"="pass" -e "TCP_PORT_631"="631" -v "/mnt/cups":"/config":rw -v /dev:/dev -v /etc/avahi/services:/avahi -v /var/run/dbus:/var/run/dbus mnbf9rca/cups-google-print --restart=unless-stopped
    Unable to find image 'mnbf9rca/cups-google-print:latest' locally
    latest: Pulling from mnbf9rca/cups-google-print
    a3ed95caeb02: Pull complete
    3b1d42cd9af9: Pull complete
    d2ff49536f4d: Pull complete
    f94adccdbb9c: Pull complete
    ae857e8dd13c: Pull complete
    327565847940: Pull complete
    83835dcb6373: Pull complete
    78b26d55dd43: Pull complete
    388ec0e358c7: Pull complete
    05dd908ba895: Pull complete
    87c9e1d25f3b: Pull complete
    75d49e6da022: Pull complete
    4b8ca4d5d690: Pull complete
    Digest: sha256:3c231589c941288c4541016944658ee5915e4d8761648e1d88254c90dea9beca
    Status: Downloaded newer image for mnbf9rca/cups-google-print:latest
    f33434ebccaf2b5644260e664014d7364d4d5ead45bf4374e931a0acedd06015
  12. checked it’s running with the docker ps -a command:
    [email protected]:/mnt$ sudo docker ps -a
    CONTAINER ID   IMAGE                        COMMAND           CREATED         STATUS         PORTS   NAMES
    f33434ebccaf   mnbf9rca/cups-google-print   "/sbin/my_init"   2 minutes ago   Up 2 minutes           cups-google-print
  13. Browsed to https://<server IP>:631/ to see that it’s running
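One thing worth noting about the docker run transcript in step 11: options placed after the image name are passed to the container as its command, so the --restart=unless-stopped there doesn’t actually set a restart policy. A corrected ordering, trimmed to the essential flags and wrapped in a function only so it’s easy to call:

```shell
# --restart must appear before the image name, because anything after the
# image becomes the container's command/arguments.
run_cups() {
  docker run -d --name=cups-google-print --net=host --privileged \
    --restart=unless-stopped \
    -e TZ=UTC -e CUPS_USER_ADMIN=admin -e CUPS_USER_PASSWORD=pass \
    -v /mnt/cups:/config:rw -v /dev:/dev \
    mnbf9rca/cups-google-print
}
```

For a container that is already running, `docker update --restart=unless-stopped cups-google-print` applies the policy without recreating it.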


Installing VMWare – migrating from Unraid to Napp-It

# To roll back to hpvsa build 88 (for MicroServer Gen8 compatibility)
esxcli system maintenanceMode set --enable true
esxcli network firewall ruleset set -e true -r httpClient
esxcli software vib remove -n scsi-hpvsa -f
esxcli software vib install -v
esxcli system shutdown reboot -r "Rolled back to scsi-hpvsa build 88"

How to flash IBM ServeRaid M1015 to LSI9211-IT for ZFS in HP Gen8 Microserver

First things first – you do this at your own risk. I take no responsibility for anything going wrong – and it can go wrong. If you are in doubt – don’t do it. And if it goes wrong – don’t blame me…

  1. Make a DOS USB boot stick
    1. Download the very useful Rufus from here.
    2. Select “Create a bootable disk using FreeDos”:
    3. Flash the USB.
  2. Download the firmware files from here: sas2008 (see footnote for original source)
  3. Extract the files and place them on the root of the USB stick.
  4. Download the latest LSI firmware from the Avago site. You’re looking for the firmware for a SAS 9211-8i Host Bus Adaptor. At the time of writing, this is version P20.
  5. Extract the LSI firmware to a folder on your machine.
  6. Create a subfolder on the USB called P20
  7. From the extracted LSI firmware, copy the following to the P20 folder on the USB:
    1. The 2118it.bin file from the <zip>\firmware\HBA_9211_8i_IT folder
    2. mptsas2.rom from sasbios_rel folder
    3. sas2flsh.exe from sas2flash_dos_rel folder
  8. Look at the back of the card and note down the SAS address – it’s something like 500605B0xxxxxxxx.
  9. Put the card in the machine and switch it on.
  10. Boot to the USB stick – press F11 during POST and select USB.
  11. Flash the firmware:
    1. Type the following:
      megarec -writesbr 0 sbrempty.bin
      megarec -cleanflash 0
    2. Reboot, again booting from the USB stick
    3. Next, install the P10 or P11 firmware – type the following:
      sas2flsh -o -f 2118it.bin -b mptsas2.rom
      sas2flsh -o -sasadd 500605bxxxxxxxxx (where the x’s are the digits of your SAS address)
    4. Reboot, again booting from the USB stick
    5. Finally, upgrade to the P20 firmware – type the following to change to the folder and execute flash of the new firmware:
      cd p20
      sas2flsh -o -f 2118it.bin -b mptsas2.rom
  12. Remove the USB stick
  13. Reboot.
  14. Some people recommend disabling loading of the Option ROM. On my machine, loading the Option ROM caused an NMI, so I ignored this step, but if you want to do it: load the Option ROM (press CTRL-C on boot) and set “Boot Support” to “Disabled”

The original instructions for this task are here, with my additions to update to the P20 firmware – I’ve archived them here for my own reference.

Preparing for migration – migrating from Unraid to Napp-It

This post is the second in my series describing how I migrated from Unraid to Napp-It; it describes how I prepared for the migration.

So – preparing for migration…

  1. First, I wanted to capture the Docker configuration for each of my existing containers. To do this, I forced the container to update, then, when Unraid presented me with the “success” screen, I captured the docker run command, like this (which captures the RUN command for my CUPS container with Google Print extensions):
    [email protected]:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name="cups-google-print" --net="host" --privileged="true" -e TZ="UTC" -e HOST_OS="unRAID" -e "CUPS_USER_ADMIN"="admin" -e "CUPS_USER_PASSWORD"="pass" -e "TCP_PORT_631"="631" -v "/mnt/user/appdata/cups-google-print":"/config":rw -v /dev:/dev -v /etc/avahi/services:/avahi -v /var/run/dbus:/var/run/dbus mnbf9rca/cups-google-print
  2. Next, I found two old 2TB disks to copy my data on to – personally, I’m not concerned about losing the data, but if you are, this is going to be a blocker for you. Anyway, I added these to my server, and they were assigned device names sdd and sde.
  3. I installed the Unraid Unassigned Devices plugin.
  4. I stopped the array then moved my 4TB device to “unassigned” and mounted it.
  5. Then I connected via SSH and created two new folders called “sdd” and “sde” (to match the device names) and moved the content so there was less than 2TB in each:
    [email protected]:~# cd /mnt/disks/WDC_WD40EFRX_68WT0N0_WD_WCC4E3AN2Y99
    [email protected]:/mnt/disks/WDC_WD40EFRX_68WT0N0_WD_WCC4E3AN2Y99# mkdir sdd
    [email protected]:/mnt/disks/WDC_WD40EFRX_68WT0N0_WD_WCC4E3AN2Y99# mkdir sde
    [email protected]:/mnt/disks/WDC_WD40EFRX_68WT0N0_WD_WCC4E3AN2Y99# mv  sdd
    [email protected]:/mnt/disks/WDC_WD40EFRX_68WT0N0_WD_WCC4E3AN2Y99# mv  sde

    … and eventually I had them roughly balanced…

    [email protected]:/mnt/disks/WDC_WD40EFRX_68WT0N0_WD_WCC4E3AN2Y99# du -sh sd*
    1.8T    sdd
    1.7T    sde
    [email protected]:/mnt/disks/WDC_WD40EFRX_68WT0N0_WD_WCC4E3AN2Y99#
  6. I used rsync to move the data from the 4TB disk to the 2TB disks:
    rsync -idIWpEAXogtlr --remove-source-files --numeric-ids --inplace /mnt/disks/WDC_WD40EFRX_68WT0N0_WD_WCC4E3AN2Y99/sdd/ /mnt/disks/WDC_WD20EARS_00MVWB0_WD_WCAZA4636167
  7. Physically removed the 2x2TB drives and put them somewhere safe, so that I didn’t accidentally erase them.

Now I was ready to move on to the next step: installing my new OS host, VMWare.

Hardware required – migrating from Unraid to Napp-It

I’ve been a user of Unraid since 2012, when I had to find a solution for my home storage after Windows Home Server was abandoned by Microsoft. Unraid has been very good for me, and the introduction of a Docker engine with Unraid 6 was very welcome. That said, I’ve recently encountered issues with bitrot, and the fact that Unraid can’t use ZFS as the disk format annoys me. LimeTech claim that their parity check process should detect bitrot; however, something doesn’t seem to be working, as using the Dynamix File Integrity plugin I can see it happening. In any case, knowing it’s happened isn’t the same as being able to correct it, which just isn’t possible on Unraid without using BTRFS – and many people simply don’t trust BTRFS – and besides, I fancy a change. So – over to VMWare ESXi and ZFS on Napp-It.

This blog post describes the hardware I needed.

The HP Gen8 Microserver is certified for use with VMWare and you can even install it to an internal USB or MicroSD card. In my case, I want a RAIDZ array of 3 x 4TB drives, plus two or more SSDs for VM and Docker hosts.

For ZFS to work properly, it needs access to the underlying host controller, not a virtualised version. VMWare is capable of direct device passthrough, but only on processors which support VT-d. Additionally, if I’m running a VM (the Napp-IT guest) on a drive attached to a storage controller, I can’t pass that same storage controller through to the guest.

When I bought my microserver, I got the base model, and with cashback I think it cost me £120. But the base model has a Celeron G1610T, and although this has the VT-x extensions, it doesn’t support VT-d.

The B120i controller in the Gen8 has two 6Gb/s SATA channels and two 3Gb/s channels. I wanted all four drive bays to run at 6Gb/s, plus two more 6Gb/s channels for my host SSDs.

My shopping list was therefore:

  1. a new processor supporting VT-d. I used the chart maintained by users of the homeservershow forum to find a suitable processor for sale on eBay. I ordered an E3-1260L from China. Estimated delivery – a few weeks.
  2. Some thermal paste
  3. A second storage controller. I went for the IBM ServeRaid M1015, as it can be flashed to an LSI 9211-8i in IT mode (meaning ZFS has direct access to the disks, without the controller being “smart” or “doing RAID” in the middle). See this post for instructions.
  4. a Mini SAS (SFF-8087) to SATA cable, again from eBay.

Some things I already had:

  1. a molex splitter
  2. a molex to two SATA HDD power splitter

How to ensure you can revert changes to function apps

As I’ve been playing around with Azure Functions, I’ve slowly outgrown the web-based editor. It’s not that it’s not useful, it’s just that I miss IntelliSense (I’ll come back to this in a later post), and I accidentally deployed a change which broke one of my functions. I’d made dozens of tiny changes, but I simply could not figure out which one it was. Not having a version history, I was kinda screwed.

I had seen the “Configure Continuous Integration” option before, but never really looked at it. I keep my source code in private GitHub repos, so it was relatively trivial to set up a new repo tied to this function app. After reading the setup instructions, however, I was a little confused about what exactly to do to put my existing functions into the repo, but it was actually much simpler than I thought. It turns out one of the best features is the ability to roll back to a previous commit with a single click:


First, I created a new private GitHub repo and cloned it to my local machine. I chose not to use branching – but I guess you could map different function apps to different branches to support a separation between “dev”, “test”, “production” etc. In the root of my repo, I created a folder for each of the functions I wished to deploy, named exactly the same as the existing functions (I assume they’re not case sensitive but I kept to the same case).

Then, I needed to put the actual code in there. Under the visual editor for each of the functions is a “view files” link. Clicking this, I was able to see the function.json and run.csx files within each function. I simply cut and pasted the code from there to a file of the same name in the relevant folder.
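Put together, the repo ends up mirroring the wwwroot layout on the function app; the function names here are placeholders for your own:

```
<repo root>                  (deployed to d:\home\site\wwwroot)
├── host.json
├── FunctionOne/             (folder name = function name in the portal)
│   ├── function.json
│   └── run.csx
└── FunctionTwo/
    ├── function.json
    └── run.csx
```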

Next, I needed to find the host.json file. That’s a bit more tricky. In the end, I figured the easiest way was to use the Dev Console. Navigate to Function App Settings, and select “Open dev console”. After a few seconds, the dev console appears:


This appears to be a Linux-style shell, though the file system underneath is a Windows one. You should start in the d:\home\site\wwwroot folder – that’s where host.json lives. Just type cat host.json to see the contents. It turns out mine was empty (just an open and close curly brace):


> ls
> cat host.json
{}

I created this file in the root of my repo, then committed the changes and pushed them back to GitHub. Within a few seconds, I was able to see the change by clicking “Configure continuous integration” in Function App Settings. My changes deployed immediately. And when I next screw up, because I’m forced to push changes via Git, I know I’ll be able to roll back to a known-good configuration.
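The whole loop can be rehearsed locally. A minimal sketch with placeholder names; the final push (commented out, since it needs a real remote) is what triggers the deployment:

```shell
# Recreate the repo layout locally and commit it. Pushing the commit to the
# GitHub repo wired up in "Configure Continuous Integration" deploys it.
cd "$(mktemp -d)"
git init -q .
echo '{}' > host.json                       # empty host.json, as above
mkdir HttpTriggerCSharp1                    # placeholder function name
cat > HttpTriggerCSharp1/function.json <<'EOF'
{ "bindings": [] }
EOF
: > HttpTriggerCSharp1/run.csx              # function code goes here
git add .
git -c user.email=me@example.com -c user.name=me \
  commit -q -m "import existing functions"
# git push origin master                    # <- triggers deployment
```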