Wiring a Yale Keyless Connected Smart Lock to the mains

For various reasons, not least because I wanted to play with it, we have a Yale Keyless Connected Smart Door Lock with a Z-Wave module (ours is the v1 module, which works fine). This lock has a couple of key features that we liked:

  • You can grant or revoke access using RFID tags or cards, or by entering a 6-8 digit code on the keypad.
  • With the Z-Wave module (and a compatible Z-Wave controller), you can programmatically add and remove codes, so you can enable codes at specific times or dates. For us, this meant we could create a code for the cleaner, but if they turned up at 2am on a Saturday, the door wouldn’t open for them.

It’s connected to our Samsung SmartThings hub, and I run the RBoy Apps custom device type and smart app to enable the scheduled key rotation etc. Overall, we’ve been fairly happy with it, but the thing really does eat batteries, and I started to feel guilty about putting four to eight AA batteries in the bin each month. Of course I also got annoyed at constantly having to buy them and change them, so I decided to try rechargeables.

We bought some Panasonic Eneloop Pro batteries. I’d read a very interesting piece of research showing that high-performance NiMH batteries actually outperform alkaline batteries, delivering a stable ~1.2V for far longer. As it turns out, this is a problem.

With a regular battery, the device detects the voltage dropping as the charge falls and fires off an alert reminding you to change them. As the research showed, however, NiMH batteries provide a fairly constant 1.2V until the power in them is pretty much depleted, and then they just die. This isn’t a problem for a radio-controlled car. But if the batteries go flat on your front door lock, you can’t get into your house, as there’s no key override on it – and with no alerts, we wouldn’t know to change them. Although you can power the lock from the outside in an emergency using a 9V battery, after a particularly embarrassing situation where I discovered that the 9V battery I had stored in the glove box had expired two years earlier, I decided to figure out how to wire the lock up to a permanent power supply. The main challenge, of course, is that I would like to be able to unlock my house even when the power is out, so after a bit of thinking I decided that I probably needed a battery in there somewhere too.

Picture from ebay seller random-bargains2009

The first challenge is working out how to wire up the device. Ideally, I didn’t want to be soldering connectors on. After a bit of research, I found a “4 X 6V AA MONEY SAVING BATTERY REPLACEMENT PLUG IN ADAPTER” on ebay (the item I bought was from “random-bargains2009”, but there were three or four similar listings from other sellers). This is basically an AA-battery-shaped shell with a wire coming out of it connected to a mains adaptor, plus three “dummy” battery blanks. I cut off the AC adaptor, soldered on a USB-A plug, connected it to a Belkin USB battery pack, then plugged that into a charger to keep it constantly topped up and … nothing. It turns out that Belkin battery packs can’t provide power and be charged at the same time. Doh!

I tried another battery pack, and all was fine until my wife tried to come in about five minutes later. Apparently the “smart” charge controller built in to the battery didn’t detect sufficient current draw, and so switched off the battery pack. Brilliant.

After a bit of research, I was able to find a 3,000mAh battery pack with a USB plug (from Amazon) that was designed to provide backup power for 12v CCTV cameras. Made by Chinese company TalentCell, it claims CE compliance for both the batteries and the charger. Mine arrived from Germany with an EU plug, but I already had some fused, screw fixed adaptor plugs, so not a problem.

Finally, I ran the cable around the frame of the glass in the door to keep it discreet, and I’m now confident that we won’t ever be locked out again.

Let’s Encrypt with DNS-01 validation for UniFi

I have a number of Ubiquiti UAPs, and I manage them with the UniFi controller, installed on a Linode server. Like any publicly hosted server, I want it to use a trusted SSL certificate, and for that I chose Let’s Encrypt with DNS-01 validation, as I’d found a useful helper script by thatsamguy on the UniFi forums. I use AWS Route 53 to host the DNS zone.

UniFi doesn’t have built-in support for Let’s Encrypt, so I put together a simple solution using the DNS-01 validation method. Here’s how I did it:

  1. Created a new, unprivileged user on the host:
    sudo adduser dehydrated
  2. Created the directory to store the certificates. I chose to use the default /etc/letsencrypt folder
    sudo mkdir /etc/letsencrypt
  3. I granted the dehydrated user full access to this folder:
    sudo chown dehydrated:dehydrated /etc/letsencrypt
  4. Installed the dependencies that are in the repos:
    sudo apt-get install --no-install-recommends jq sed findutils s-nail
  5. Installed the cli53 dependency, which is not in the repos, by following the instructions in its README.md
  6. Logged in as the dehydrated user:
    su - dehydrated
  7. Fetched dehydrated and made it executable:
    wget https://raw.githubusercontent.com/lukas2511/dehydrated/master/dehydrated
    chmod +x dehydrated
  8. Fetched the dehydrated Route 53 hook script and made it executable:
    wget https://raw.githubusercontent.com/whereisaaron/dehydrated-route53-hook-script/master/hook.sh
    chmod +x hook.sh
  9. Created a new IAM access policy in AWS. I found that the sample policy given in the Route 53 hook README.md didn’t work – here’s the policy that I added:
        {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Effect": "Allow",
              "Action": [
                "route53:ListHostedZones"
              ],
              "Resource": "*"
            },
            {
              "Effect": "Allow",
              "Action": [
                "route53:ChangeResourceRecordSets",
                "route53:ListResourceRecordSets"
              ],
              "Resource": "arn:aws:route53:::hostedzone/*"
            },
            {
              "Effect": "Allow",
              "Action": [
                "route53:GetChange"
              ],
              "Resource": "arn:aws:route53:::change/*"
            }
          ]
        }
  10. Created a new IAM user in AWS and assigned the policy to them (and only this policy), taking a note of the AWS Access Key and AWS Secret Access Key.
  11. Created a new file to store the key:
    mkdir ~/.aws
    nano ~/.aws/credentials

    The file looks like this (obviously, put your own keys in there – note the [default] profile header, which the file needs):

    [default]
    aws_access_key_id = AKIAIJFAKGII4MDS3KHA
    aws_secret_access_key = Q+XoOGa5J3AS39as593Ds1f5F91zRy0btkfW
  12. Created a new config file:
    nano ~/config

    The file looks like this (edited from the sample):

    # This is the main config file for dehydrated          #
    #                                                      #
    # This file is looked for in the following locations:  #
    # $SCRIPTDIR/config (next to this script)              #
    # /usr/local/etc/dehydrated/config                     #
    # /etc/dehydrated/config                               #
    # ${PWD}/config (in current working-directory)         #
    #                                                      #
    # Default values of this config are in comments        #
    # Resolve names to addresses of IP version only. (curl)
    # supported values: 4, 6
    # default: 
    # Path to certificate authority (default: https://acme-v01.api.letsencrypt.org/directory)
    # CA="https://acme-staging.api.letsencrypt.org/directory"
    # Path to certificate authority license terms redirect (default: https://acme-v01.api.letsencrypt.org/terms)
    # CA_TERMS="https://acme-staging.api.letsencrypt.org/terms"
    # Path to license agreement (default: )
    # Which challenge should be used? Currently http-01 and dns-01 are supported
    CHALLENGETYPE="dns-01"
    # Path to a directory containing additional config files, allowing to override
    # the defaults found in the main configuration file. Additional config files
    # in this directory needs to be named with a '.sh' ending.
    # default: 
    # Base directory for account key, generated certificates and list of domains (default: $SCRIPTDIR -- uses config directory if undefined)
    # File containing the list of domains to request certificates for (default: $BASEDIR/domains.txt)
    # Output directory for generated certificates
    # Directory for account keys and registration information
    # Output directory for challenge-tokens to be served by webserver or deployed in HOOK (default: /var/www/dehydrated)
    # Default keysize for private keys (default: 4096)
    # Path to openssl config file (default:  - tries to figure out system default)
    # Extra options passed to the curl binary (default: )
    # Program or function called in certain situations
    # After generating the challenge-response, or after failed challenge (in this case altname is empty)
    # Given arguments: clean_challenge|deploy_challenge altname token-filename token-content
    # After successfully signing certificate
    # Given arguments: deploy_cert domain path/to/privkey.pem path/to/cert.pem path/to/fullchain.pem
    # BASEDIR and WELLKNOWN variables are exported and can be used in an external program
    # default: 
    HOOK="${BASEDIR}/hook.sh"
    # Chain clean_challenge|deploy_challenge arguments together into one hook call per certificate (default: no)
    # Minimum days before expiration to automatically renew certificate (default: 30)
    # Regenerate private keys instead of just signing new certificates on renewal (default: yes)
    # Create an extra private key for rollover (default: no)
    # Which public key algorithm should be used? Supported: rsa, prime256v1 and secp384r1
    # E-mail to use during the registration (default: )
    CONTACT_EMAIL=me@mydomain.com
    # Lockfile location, to prevent concurrent access (default: $BASEDIR/lock)
    # Option to add CSR-flag indicating OCSP stapling to be mandatory (default: no)
    # Fetch OCSP responses (default: no)
    # Issuer chain cache directory (default: $BASEDIR/chains)
    # Automatic cleanup (default: no)
  13. Created a new file with the list of domains to register:
    nano ~/domains.txt

    The file looks like this (obviously, put your own data in there):

    myhost.mydomain.com
  14. Checked that the certificate registers correctly – note that if you are having trouble, you should enable the “staging” CA / CA_TERMS lines in the config while you troubleshoot, to avoid hitting Let’s Encrypt’s rate limits:
    dehydrated@myhost:~$ ./dehydrated --cron --accept-terms --out /etc/letsencrypt
    # INFO: Using main config file /home/dehydrated/config
    + Generating account key...
    + Registering account key with ACME server...
    Processing myhost.mydomain.com
     + Signing domains...
     + Creating new directory /home/dehydrated/certs/myhost.mydomain.com ...
     + Generating private key...
     + Generating signing request...
     + Requesting challenge for myhost.mydomain.com...
    Creating challenge record for myhost.mydomain.com in zone mydomain.com
    Created record: '_acme-challenge.myhost.mydomain.com. 60 IN TXT "cpE1VF_xshMm1IVY1Y66Kk9Zb_7jA2VFkP65WuNgu3Q"'
    Waiting for sync...................................
     + Responding to challenge for myhost.mydomain.com...
    Deleting challenge record for myhost.mydomain.com from zone mydomain.com
    1 record sets deleted
     + Challenge is valid!
     + Requesting certificate...
     + Checking certificate...
     + Done!
     + Creating fullchain.pem...
     + Using cached chain!
     + Done!
    + Running automatic cleanup
  15. Created the helper script, remembering to edit the domain (find/replace) and the certificate path, and made it executable:
    nano updateunificert
    chmod +x updateunificert

    My file looks like this:

    openssl pkcs12 -export -in /etc/letsencrypt/myhost.mydomain.com/fullchain.pem -inkey /etc/letsencrypt/myhost.mydomain.com/privkey.pem -out /etc/letsencrypt/myhost.mydomain.com/cert_and_key.p12 -name tomcat -CAfile /etc/letsencrypt/myhost.mydomain.com/chain.pem -caname root -password pass:aaa;
    rm -f /etc/letsencrypt/myhost.mydomain.com/keystore;
    keytool -importkeystore -srcstorepass aaa -deststorepass aircontrolenterprise -destkeypass aircontrolenterprise -srckeystore /etc/letsencrypt/myhost.mydomain.com/cert_and_key.p12 -srcstoretype PKCS12 -alias tomcat -keystore /etc/letsencrypt/myhost.mydomain.com/keystore;
    keytool -import -trustcacerts -alias unifi -deststorepass aircontrolenterprise -file /etc/letsencrypt/myhost.mydomain.com/chain.pem -noprompt -keystore /etc/letsencrypt/myhost.mydomain.com/keystore;
    mv /var/lib/unifi/keystore /var/lib/unifi/keystore-backup;
    cp /etc/letsencrypt/myhost.mydomain.com/keystore /var/lib/unifi/keystore;
    service unifi restart;
  16. Created a cron job (still as the dehydrated user) to renew the certificate daily:
    crontab -e
    I added this line:

    20 5 * * * /home/dehydrated/dehydrated --cron --accept-terms --out /etc/letsencrypt >/dev/null 2>&1
  17. Returned to my previous shell:
    exit
  18. Created a cron job (as root) to install the new certificate daily:
    sudo crontab -e
    I added this line:

    29 5 * * * /home/dehydrated/updateunificert >/dev/null 2>&1

How to upgrade VMware ESXi on an HP Gen8 MicroServer

  1. Go to the VMware ESXi Patch Tracker and check for the latest image profile, e.g. ESXi-6.5.0-20170404001-standard
  2. Shut down all VMs and turn on maintenance mode
  3. Allow outbound firewall requests:
    esxcli network firewall ruleset set -e true -r httpClient
  4. Execute the update:
    esxcli software profile update -p ESXi-6.5.0-20170404001-standard -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
  5. Disable the firewall rule again:
    esxcli network firewall ruleset set -e false -r httpClient
  6. Disable maintenance mode
  7. Reboot
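For reference, the steps above can be driven from a single SSH session. This is a sketch: the maintenance-mode esxcli commands are what I believe the CLI equivalents of steps 2 and 6 to be, and the image profile name is just the example from above.

```shell
# Sketch of the full upgrade over SSH. The image profile name is the
# example from above - check the Patch Tracker for the current one.
esxcli system maintenanceMode set --enable true
esxcli network firewall ruleset set -e true -r httpClient
esxcli software profile update -p ESXi-6.5.0-20170404001-standard \
  -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
esxcli network firewall ruleset set -e false -r httpClient
esxcli system maintenanceMode set --enable false
reboot
```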

/var/lib/docker/aufs/diff is consuming entire drive

Some of my Docker containers were complaining that they didn’t have enough drive space. This looked odd, so I logged in to the host and checked around:

user@host:/$ df -h
Filesystem                     Size  Used Avail Use% Mounted on
udev                           2.0G     0  2.0G   0% /dev
tmpfs                          394M   12M  382M   3% /run
/dev/sda1                       26G   25G     0 100% /
tmpfs                          2.0G  2.0M  2.0G   1% /dev/shm
tmpfs                          5.0M     0  5.0M   0% /run/lock
tmpfs                          2.0G     0  2.0G   0% /sys/fs/cgroup
//fs.cynexia.net/largeappdata  2.7T  285G  2.4T  11% /mnt/largeappdata
//fs.cynexia.net/video         5.4T  3.0T  2.4T  56% /mnt/video
//fs.cynexia.net/appdata       2.5T   52G  2.4T   3% /mnt/appdata

All space used up. Huh. Wonder why? I did a quick check to see what’s using most space:

user@host:/$ sudo du -xh / | grep '[0-9\.]\+G'
8.0K /var/lib/docker/aufs/diff/1dba1b90260105df03d0147c535c104cca0dd24fcc9273f0bc27b725c7cc676f/usr/local/crashplan/jre/lib/locale/zh.GBK/LC_MESSAGES
12K /var/lib/docker/aufs/diff/1dba1b90260105df03d0147c535c104cca0dd24fcc9273f0bc27b725c7cc676f/usr/local/crashplan/jre/lib/locale/zh.GBK
19G /var/lib/docker/aufs/diff
19G /var/lib/docker/aufs
2.2G /var/lib/docker/containers/8c5725f63f681e012fcc479e78133f31ab1c760b7d8d4e0a7e150d213face41f
2.3G /var/lib/docker/containers
21G /var/lib/docker
21G /var/lib
22G /var
25G /

Clearly /var/lib/docker/aufs/diff is what’s causing it. Let’s clean that up by removing dangling images:

user@host:/$ docker rmi $(docker images -aq --filter=dangling=true)
Untagged: mnbf9rca/<image>@sha256:9846b7570b5ba6d686be21623446cec8abd9db04cf55a39ce45cabfaa0d63f9f
Deleted: sha256:011bf974552570c536f8f98c73e0ed7d09ef9e2bfcbc7b3f3e02e19682b7480e
Deleted: sha256:a0637dd0588be6aee9f4655260176e6da802fcd92347cdf789ae84f3503322c3
Deleted: sha256:6e21a0999ad14a1cc0ccc8e31611b137793e3614338e01f920e13bfeb4128fdc
Deleted: sha256:b98c7813439119c3d2f859060fe11bf10151f69587f850a48448cae0fa4d9305
Untagged: mnbf9rca/<image>@sha256:ad493202d196dfae418769428ba6dea4d576ce1adec7ebe90837d0b965fe9b42
Deleted: sha256:b8df5a1ffa1eedd7be03d4a2a37549bf81699cc6fa1586c1d3510d90d4e9e562
Deleted: sha256:07c09e3cb65b3cec786933f882a08d5b0a34cd94f6922ada0d6f0cf779482ee0

Let’s check now…

user@host:/$ sudo du -xh / | grep '[0-9\.]\+G'
8.0K /var/lib/docker/aufs/diff/1dba1b90260105df03d0147c535c104cca0dd24fcc9273f0bc27b725c7cc676f/usr/local/crashplan/jre/lib/locale/zh.GBK/LC_MESSAGES
12K /var/lib/docker/aufs/diff/1dba1b90260105df03d0147c535c104cca0dd24fcc9273f0bc27b725c7cc676f/usr/local/crashplan/jre/lib/locale/zh.GBK
4.8G /var/lib/docker/aufs/diff
4.8G /var/lib/docker/aufs
2.2G /var/lib/docker/containers/8c5725f63f681e012fcc479e78133f31ab1c760b7d8d4e0a7e150d213face41f
2.3G /var/lib/docker/containers
7.4G /var/lib/docker
7.6G /var/lib
8.3G /var
11G /

Much better! It turns out there are a few great cleanup agents, e.g. docker-gc-cron, which will do the job for me automatically.
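If you’d rather not set up docker-gc-cron, a root cron entry gets you most of the way there. This is a sketch: `docker image prune -f` only exists from Docker 1.13 onwards, so on older versions substitute the `docker rmi` dangling-image command used above.

```shell
# /etc/cron.d/docker-cleanup (sketch) - remove dangling images nightly.
# On Docker < 1.13, use: docker rmi $(docker images -aq --filter=dangling=true)
0 3 * * * root /usr/bin/docker image prune -f >/dev/null 2>&1
```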

Flashing an M1015 – Error code = 64 Failed to flash the image. Please retry recovery

I purchased an IBM M1015 to use as an HBA in my server. As part of that, I wanted to flash it with the IT-mode firmware; however, I was getting errors, even when I used the original IBM firmware from their website:

C:\> megarec -cleanflash 0 M1000FW.ROM

MegaRAID HWR Controller Recovery tool. Version 01.01-004 February 05, 2010
Copyright (c) 2006-2008 LSI Corp.
Supports 1079 controller and its Successors

Erasing Flash Chip (8MB)....
 Completed: 100%
Flashing Image. Please wait...

Currently flashing component = BIOS
Programming Flash....
 Completed: 100%
Verifying the Flashed Data...

Currently flashing component = HIIM
Programming Flash....
 Completed: 100%
Verifying the Flashed Data...

Currently flashing component = APP
Error in downloading the image.
Error code = 64
Failed to flash the image. Please retry recovery

I’d never seen an M1015 before, at least not close up. Closer inspection of the card, however, revealed a code: FRU 46C8927. I know “FRU” means “Field Replaceable Unit”, i.e. something you can order as a replacement part. So I googled that code, and discovered that this was actually an IBM M5015, not an M1015. The M5015 cannot be flashed to IT mode, so I had to send it back.


How to flash IBM ServeRaid M1015 to LSI 9211-8i IT mode for ZFS in an HP Gen8 MicroServer

First things first – you do this at your own risk. I take no responsibility for anything going wrong – and it can go wrong. If you are in doubt – don’t do it. And if it goes wrong – don’t blame me…

  1. Make a DOS USB boot stick
    1. Download the very useful Rufus from here.
    2. Select “Create a bootable disk using FreeDOS”:
    3. Flash the USB.
  2. Download the firmware files from here: sas2008 (see footnote for original source)
  3. Extract the files and place them on the root of the USB stick.
  4. Download the latest LSI firmware from the Avago site. You’re looking for the firmware for a SAS 9211-8i Host Bus Adapter. At the time of writing, this is version P20.
  5. Extract the LSI firmware to a folder on your machine.
  6. Create a subfolder on the USB called P20
  7. From the extracted LSI firmware, copy the following to the P20 folder on the USB:
    1. The 2118it.bin file from the <zip>\firmware\HBA_9211_8i_IT folder
    2. mptsas2.rom from the sasbios_rel folder
    3. sas2flsh.exe from the sas2flash_dos_rel folder
  8. Look at the back of the card and note down the SAS address – it’s something like 500605B0xxxxxxxx.
  9. Put the card in the machine, and switch it on.
  10. Boot to the USB stick – press F11 during POST and select USB.
  11. Flash the firmware:
    1. Type the following:
      megarec -writesbr 0 sbrempty.bin
      megarec -cleanflash 0
    2. Reboot, again booting from the USB stick
    3. Next, install the P10 or P11 firmware – type the following:
      sas2flsh -o -f 2118it.bin -b mptsas2.rom
      sas2flsh -o -sasadd 500605bxxxxxxxxx (replace the x’s with the SAS address you noted down earlier)
    4. Reboot, again booting from the USB stick
    5. Finally, upgrade to the P20 firmware – type the following to change to the folder and execute flash of the new firmware:
      cd p20
      sas2flsh -o -f 2118it.bin -b mptsas2.rom
  12. Remove the USB stick
  13. Reboot.
  14. Some people recommend disabling loading of the Option ROM. On my machine, loading the Option ROM caused an NMI, so I skipped this, but if you want to do it: load the Option ROM (press CTRL-C during boot) and set “Boot Support” to “Disabled”.
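Once the machine is back up, you can confirm the flash took by booting the FreeDOS stick again; -listall and -list are standard sas2flash options for listing the adapters found and showing the firmware on adapter 0:

```shell
REM Back at the FreeDOS prompt, list the LSI SAS2 adapters found:
sas2flsh -listall
REM Show firmware and BIOS versions for adapter 0 - it should now
REM report the 9211-8i IT firmware at version P20:
sas2flsh -o -list
```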

The original instructions for this task are here, with my additions to update to the P20 firmware – I’ve archived them here for my own reference.

Fix: Emby Docker fails to start when config on mounted share – SQLite “database is locked”

I have a clean VM running Ubuntu 16.04 on VMware ESXi 6.5. I have a CIFS share mounted at /mnt/appdata with the noperm flag. The share is writeable.

I installed Docker using the instructions here: https://hub.docker.c…mby/embyserver/

docker run -it --rm -v /usr/local/bin:/target \
     -e "APP_USER=robert" \
     -e "APP_CONFIG=/mnt/appdata/emby" \
     emby/embyserver instl


docker run -it --rm -v /etc/systemd/system:/target \
    emby/embyserver instl services

The next command, sudo systemctl enable emby-server.service, didn’t work. Instead I had to do:

sudo systemctl enable emby-server@robert.service

Then I ran emby-server and configured it with the path /mnt/video (also a CIFS share mounted on my local machine). However, Emby doesn’t work, and I see an error in the attached log (“svc.txt”):

2016-12-04T09:49:04.572948238Z Error, Main, UnhandledException
2016-12-04T09:49:04.573027012Z  *** Error Report ***
2016-12-04T09:49:04.573039000Z  Version: 3.0.8500.0
2016-12-04T09:49:04.573049078Z  Command line: /usr/lib/emby-server/bin/MediaBrowser.Server.Mono.exe -programdata /config -ffmpeg /bin/ffmpeg -ffprobe /bin/ffprobe -restartpath /usr/lib/emby-server/restart.sh
2016-12-04T09:49:04.573081031Z  Operating system: Unix
2016-12-04T09:49:04.573090300Z  Processor count: 4
2016-12-04T09:49:04.573097909Z  64-Bit OS: True
2016-12-04T09:49:04.573105143Z  64-Bit Process: True
2016-12-04T09:49:04.573112588Z  Program data path: /config
2016-12-04T09:49:04.573119889Z  Mono: 4.6.2 (Stable Mon Nov 21 15:56:40 UTC 2016)
2016-12-04T09:49:04.573127634Z  Application Path: /usr/lib/emby-server/bin/MediaBrowser.Server.Mono.exe
2016-12-04T09:49:04.573135097Z  One or more errors occurred.
2016-12-04T09:49:04.573142348Z  System.AggregateException
2016-12-04T09:49:04.573151009Z    at System.Threading.Tasks.Task.WaitAll (System.Threading.Tasks.Task[] tasks, System.Int32 millisecondsTimeout, System.Threading.CancellationToken cancellationToken) [0x00242] in <8f2c484307284b51944a1a13a14c0266>:0 
2016-12-04T09:49:04.573161976Z    at System.Threading.Tasks.Task.WaitAll (System.Threading.Tasks.Task[] tasks, System.Int32 millisecondsTimeout) [0x00000] in <8f2c484307284b51944a1a13a14c0266>:0 
2016-12-04T09:49:04.573172331Z    at System.Threading.Tasks.Task.WaitAll (System.Threading.Tasks.Task[] tasks) [0x00000] in <8f2c484307284b51944a1a13a14c0266>:0 
2016-12-04T09:49:04.573181859Z    at MediaBrowser.Server.Mono.MainClass.RunApplication (MediaBrowser.Server.Implementations.ServerApplicationPaths appPaths, MediaBrowser.Model.Logging.ILogManager logManager, MediaBrowser.Server.Startup.Common.StartupOptions options) [0x000cf] in <8385af0cf454438f8df15fa62f41afa4>:0 
2016-12-04T09:49:04.573191220Z    at MediaBrowser.Server.Mono.MainClass.Main (System.String[] args) [0x0008a] in <8385af0cf454438f8df15fa62f41afa4>:0 
2016-12-04T09:49:04.573199399Z  InnerException: System.Data.SQLite.SQLiteException
2016-12-04T09:49:04.573206751Z  database is locked
2016-12-04T09:49:04.573213834Z  database is locked

I tried running the container directly:

docker run -d --name="EmbyServer" \
      --net="host" \
      -e TZ="UTC" \
      -e HOST_OS="ubuntu" \
      -e "TCP_PORT_8096"="8096" \
      -v "/mnt/appdata/emby/":"/config":rw \
      emby/embyserver

but I get the same error (“just run.txt”). I checked, and the /mnt/appdata/emby folder is being created:

user@host:~$ ls /mnt/appdata/emby
abc config data localization logs
user@host:~$ du -sh /mnt/appdata/emby
3.4M /mnt/appdata/emby

so clearly the share is writeable from within the container. If I run the container without the mapped config volume:

docker run -d --name="EmbyServer" \
      --net="host" \
      -e TZ="UTC" \
      -e HOST_OS="ubuntu" \
      -e "TCP_PORT_8096"="8096" \
      emby/embyserver

it’s reachable at http://host:8096 and works fine (“no map.txt”) – but obviously the configuration isn’t persistent.

It turns out that the root of the problem is the way CIFS handles byte-range locking, which is incompatible with SQLite. One way to fix this is to add the nobrl option to the mount in /etc/fstab, e.g.:

//fs.cynexia.net/appdata /mnt/appdata cifs iocharset=utf8,credentials=/root/.smbcredentials,nobrl,dir_mode=0775,nofail,gid=10,noperm 0 0
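To check whether SQLite is happy on a given mount, you can run a scratch database through a create/insert/select cycle. This is a sketch using python3’s built-in sqlite3 module (so there’s nothing extra to install); without nobrl it typically fails with “database is locked” or a disk I/O error:

```shell
#!/bin/sh
# Create, write and read a scratch SQLite database on the CIFS share.
# /mnt/appdata matches the mount in the fstab line above.
DB=/mnt/appdata/locktest.db
python3 - "$DB" <<'EOF'
import sqlite3, sys

con = sqlite3.connect(sys.argv[1])
con.execute("CREATE TABLE t(x INTEGER)")
con.execute("INSERT INTO t VALUES (1)")
print(con.execute("SELECT x FROM t").fetchone()[0])  # prints 1 on success
con.commit()
con.close()
EOF
rm -f "$DB"
```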

HP Gen8 MicroServer error “Embedded media manager failed initialization” – how to get HPQLOCFG

During the process of installing VMware on to my Gen8 MicroServer, I had trouble writing data to the internal SD card – in fact, I couldn’t even see it. Looking in the iLO event logs I saw this:

Embedded Flash/SD-CARD: Embedded media manager failed initialization.

Googling this didn’t get me much – just forum posts with people complaining about it – but then I found this HPE Customer Advisory, which lists the steps needed to reset the error. Basically:

  1. Create an XML file with the following content:
    <!-- RIBCL Sample Script for HP Lights-Out Products --> 
    <!--Copyright (c) 2016 Hewlett-Packard Enterprise Development Company,L.P. --> 
    <!-- Description: This is a sample XML script to force format all --> 
    <!-- the iLO partitions. --> 
    <!-- iLO resets automatically for this operation to take effect --> 
    <!-- Warning: This command erases all data on the partition(s) --> 
    <!-- External providers will need to be re-configured if --> 
    <!-- partition is formatted --> 
    <!-- Input: VALUE tag: all - format all available partitions --> 
    <!-- NOTE:You will need to replace the USER_LOGIN and PASSWORD values --> 
    <!-- with values that are appropriate for your environment --> 
    <!-- See "HP Integrated Lights-Out Management Processor Scripting --> 
    <!-- and Command Line Resource Guide" for more information on --> 
    <!-- scripting and the syntax of the RIBCL XML --> 
    <!-- Firmware support information for this script: --> 
    <!-- iLO 4 - Version 2.42 or later. --> 
    <!-- iLO 3 - None. --> 
    <!-- iLO 2 - None. -->
    <RIBCL VERSION="2.0"> 
    <LOGIN USER_LOGIN="Administrator" PASSWORD=""> 
    <RIB_INFO MODE="write"> 
    <FORCE_FORMAT VALUE="all" /> 
    </RIB_INFO> 
    </LOGIN> 
    </RIBCL>
  2. Run that file against the server using HPQLOCFG.exe:
    hpqlocfg -s <server IP> -l c:\hpqcfg.log -f c:\Force_Format.xml -v -t user=Administrator,password=<password>
  3. Follow some further steps to reinstall Intelligent Provisioning, if you use it.

All well and good – but where do you get HPQLOCFG from? If you follow the link in the advisory, it refuses to install because I don’t have the full PSP installed. So how can I apply the change?

Well, in my case, I installed VMware to an internal USB stick and then ran the command from there – you could even do this with all of your other existing drives removed so that they don’t get erased, then restart the install process. Problem solved!

error processing package apt-show-versions on Ubuntu 14.04 or Ubuntu 16.04

When installing Webmin, I’ve sometimes come across an error installing a dependency package, apt-show-versions:

Setting up apt-show-versions (0.22.7) ...
** initializing cache. This may take a while **
FATAL -> Failed to fork.
dpkg: error processing package apt-show-versions (--configure):
subprocess installed post-installation script returned error exit status 100
dpkg: dependency problems prevent configuration of webmin:FATAL -> Failed to fork.

This is caused by the fact that apt-show-versions can’t read compressed index files. Thankfully, the solution is quite simple:

First, we need to tell APT not to compress the index. To do this we create an entry in a file called /etc/apt/apt.conf.d/02compress-indexes:

sudo nano /etc/apt/apt.conf.d/02compress-indexes

If the file is empty (mine was), simply put this line in it:

Acquire::GzipIndexes "false";

If the file has some text, check whether this parameter is present and set to “true”; if so, change it to “false”. If it’s missing, just add it.

Then, we need to clear out the package’s partially configured state and re-download the indexes:

sudo rm /var/lib/dpkg/info/apt-show*

followed by

sudo apt-get update

Finally, we just need to complete the installation:

sudo apt-get -f install webmin

And job done.

How to ensure you can revert changes to function apps

As I’ve been playing around with Azure Functions, I’ve slowly outgrown the web-based editor. It’s not that it isn’t useful; it’s just that I miss IntelliSense (I’ll come back to this in a later post), and I accidentally deployed a change which broke one of my functions. I’d made dozens of tiny changes, but I simply could not figure out which one it was. With no version history, I was kinda screwed.

I had seen the “Configure Continuous Integration” option before, but never really looked at it. I keep my source code in private GitHub repos, so it was relatively trivial to set up a new repo tied to this function app. After reading the setup instructions, however, I was a little confused about exactly how to get my existing functions into the repo, but it was actually much simpler than I thought. It turns out one of the best features is the ability to roll back to a previous commit with a single click:


First, I created a new private GitHub repo and cloned it to my local machine. I chose not to use branching – but I guess you could map different function apps to different branches to support a separation between “dev”, “test”, “production” etc. In the root of my repo, I created a folder for each of the functions I wished to deploy, named exactly the same as the existing functions (I assume they’re not case sensitive but I kept to the same case).

Then, I needed to put the actual code in there. Under the visual editor for each of the functions is a “view files” link. Clicking this, I was able to see the function.json and run.csx files within each function. I simply cut and pasted the code from there into a file of the same name in the relevant folder.

Next, I needed to find the host.json file. That’s a bit more tricky. In the end, I figured the easiest way was to use the Dev Console. Navigate to Function App Settings, and select “Open dev console”. After a few seconds, the dev console appears:


This appears to be a Windows shell, despite the Unix-style commands. You should start in the d:\home\site\wwwroot folder – that’s where host.json lives. Just type cat host.json to see the contents. It turns out mine was empty (just an open and close curly brace):


> cat host.json
{}

I created this file in the root of my repo, then committed the changes and pushed them back to GitHub. Within a few seconds, I was able to see the change by clicking “Configure continuous integration” in Function App Settings – my changes deployed immediately. And when I next screw up, because I’m forced to push changes via Git, I know I’ll be able to roll back to a known-good configuration.
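The whole import boils down to a few git commands. This is a sketch – the repo URL and the MyFunction folder name are placeholders for your own:

```shell
# Sketch of importing existing functions into the new repo.
# The repo URL and "MyFunction" are placeholders.
git clone https://github.com/you/my-function-app.git
cd my-function-app
mkdir MyFunction                 # one folder per function, named as in the portal
# paste function.json and run.csx from the portal's "view files" pane, then:
printf '{}\n' > host.json        # mine was just an empty JSON object
git add -A
git commit -m "Import existing functions"
git push                         # the function app redeploys on push
```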