Musk Says Excessive Automation Was ‘My Mistake’

While I’m sure the process could eventually have been automated, anyone who’s tried this at even a trivial scale would tell you to get the process working first, then automate. So I don’t understand how Tesla ended up in this position.
https://rob.al/2IW0qCf
Tesla Inc.’s Elon Musk, who’s built up an aura around how automated his car assembly plant will be, has good news for humans: We still need your help.

LetsEncrypt with DNS-01 validation for Unifi

Update 2021-01-08: this is now out of date. See my updated post with a much easier method.

I have a number of Ubiquiti UAPs, and I manage them with the UniFi app, installed on a Linode server. Like any publicly hosted server, I want to use a trusted SSL certificate, and for that I chose LetsEncrypt with DNS-01 validation, as I found a useful helper script by thatsamguy on the UniFi forums. I use AWS Route53 to host the DNS zone.

UniFi doesn’t have built-in support for LetsEncrypt, so I put together a simple solution using the DNS-01 validation method. Here’s how I did it:

  1. Created a new, unprivileged user on the host:
    sudo adduser dehydrated
  2. Created the directory to store the certificates. I chose to use the default /etc/letsencrypt folder:
    sudo mkdir /etc/letsencrypt
  3. I granted the dehydrated user full access to this folder:
    sudo chown dehydrated:dehydrated /etc/letsencrypt
  4. Installed the dependencies which are in the repo:
    sudo apt-get install --no-install-recommends jq sed findutils s-nail
  5. Installed the cli53 dependency, which is not in the repo, by following the instructions in the project’s README.md file
  6. Logged in as the dehydrated user:
    su - dehydrated
  7. Fetched dehydrated and made it executable:
    wget https://raw.githubusercontent.com/lukas2511/dehydrated/master/dehydrated
    chmod +x dehydrated
  8. Fetched the dehydrated route53 hook script and made it executable:
    wget https://raw.githubusercontent.com/whereisaaron/dehydrated-route53-hook-script/master/hook.sh
    chmod +x hook.sh
  9. Created a new IAM access policy in AWS. I found that the sample policy given in the route53 hook README.md didn’t work – here’s the policy that I added:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "route53:ListHostedZones",
                    "route53:ListHostedZonesByName",
                    "route53:ListResourceRecordSets"
                ],
                "Resource": "*"
            },
            {
                "Effect": "Allow",
                "Action": [
                    "route53:ListResourceRecordSets",
                    "route53:ChangeResourceRecordSets"
                ],
                "Resource": "arn:aws:route53:::hostedzone/*"
            },
            {
                "Effect": "Allow",
                "Action": [
                    "route53:GetChange"
                ],
                "Resource": "arn:aws:route53:::change/*"
            }
        ]
    }
  10. Created a new IAM user in AWS and assigned the policy to them (and only this policy), taking a note of the AWS Access Key and AWS Secret Access Key.
  11. Created a new file to store the key:
    mkdir ~/.aws
    nano ~/.aws/credentials

    The file looks like this (obviously, put your own data in there):

    [default]
    aws_access_key_id = AKIAIJFAKGII4MDS3KHA
    aws_secret_access_key = Q+XoOGa5J3AS39as593Ds1f5F91zRy0btkfW
  12. Created a new config file:
    nano ~/config

    The file looks like this (edited from the sample):

    ########################################################
    # This is the main config file for dehydrated          #
    #                                                      #
    # This file is looked for in the following locations:  #
    # $SCRIPTDIR/config (next to this script)              #
    # /usr/local/etc/dehydrated/config                     #
    # /etc/dehydrated/config                               #
    # ${PWD}/config (in current working-directory)         #
    #                                                      #
    # Default values of this config are in comments        #
    ########################################################
    
    # Resolve names to addresses of IP version only. (curl)
    # supported values: 4, 6
    # default: 
    #IP_VERSION=
    
    # Path to certificate authority (default: https://acme-v01.api.letsencrypt.org/directory)
    #CA="https://acme-v01.api.letsencrypt.org/directory"
    # CA="https://acme-staging.api.letsencrypt.org/directory"
    
    # Path to certificate authority license terms redirect (default: https://acme-v01.api.letsencrypt.org/terms)
    #CA_TERMS="https://acme-v01.api.letsencrypt.org/terms"
    # CA_TERMS="https://acme-staging.api.letsencrypt.org/terms"
    
    # Path to license agreement (default: )
    #LICENSE=""
    
    # Which challenge should be used? Currently http-01 and dns-01 are supported
    CHALLENGETYPE="dns-01"
    
    # Path to a directory containing additional config files, allowing to override
    # the defaults found in the main configuration file. Additional config files
    # in this directory needs to be named with a '.sh' ending.
    # default: 
    #CONFIG_D=
    
    # Base directory for account key, generated certificates and list of domains (default: $SCRIPTDIR -- uses config directory if undefined)
    #BASEDIR=$SCRIPTDIR
    
    # File containing the list of domains to request certificates for (default: $BASEDIR/domains.txt)
    #DOMAINS_TXT="${BASEDIR}/domains.txt"
    
    # Output directory for generated certificates
    #CERTDIR="${BASEDIR}/certs"
    
    # Directory for account keys and registration information
    #ACCOUNTDIR="${BASEDIR}/accounts"
    
    # Output directory for challenge-tokens to be served by webserver or deployed in HOOK (default: /var/www/dehydrated)
    #WELLKNOWN="/var/www/dehydrated"
    
    # Default keysize for private keys (default: 4096)
    #KEYSIZE="4096"
    
    # Path to openssl config file (default:  - tries to figure out system default)
    #OPENSSL_CNF=
    
    # Extra options passed to the curl binary (default: )
    #CURL_OPTS=
    
    # Program or function called in certain situations
    #
    # After generating the challenge-response, or after failed challenge (in this case altname is empty)
    # Given arguments: clean_challenge|deploy_challenge altname token-filename token-content
    #
    # After successfully signing certificate
    # Given arguments: deploy_cert domain path/to/privkey.pem path/to/cert.pem path/to/fullchain.pem
    #
    # BASEDIR and WELLKNOWN variables are exported and can be used in an external program
    # default: 
    HOOK=${BASEDIR}/hook.sh
    
    # Chain clean_challenge|deploy_challenge arguments together into one hook call per certificate (default: no)
    HOOK_CHAIN="no"
    
    # Minimum days before expiration to automatically renew certificate (default: 30)
    #RENEW_DAYS="30"
    
    # Regenerate private keys instead of just signing new certificates on renewal (default: yes)
    #PRIVATE_KEY_RENEW="yes"
    
    # Create an extra private key for rollover (default: no)
    #PRIVATE_KEY_ROLLOVER="no"
    
    # Which public key algorithm should be used? Supported: rsa, prime256v1 and secp384r1
    #KEY_ALGO=rsa
    
    # E-mail to use during the registration (default: )
    CONTACT_EMAIL=myemail@mydomain.com
    
    # Lockfile location, to prevent concurrent access (default: $BASEDIR/lock)
    #LOCKFILE="${BASEDIR}/lock"
    
    # Option to add CSR-flag indicating OCSP stapling to be mandatory (default: no)
    #OCSP_MUST_STAPLE="no"
    
    # Fetch OCSP responses (default: no)
    #OCSP_FETCH="no"
    
    # Issuer chain cache directory (default: $BASEDIR/chains)
    #CHAINCACHE="${BASEDIR}/chains"
    
    # Automatic cleanup (default: no)
    AUTO_CLEANUP="yes"
  13. Created a new file with the list of domains to register:
    nano ~/domains.txt

    The file looks like this (obviously, put your own data in there):

    myhost.mydomain.com
  14. Checked that the certificate registers correctly – note that if you are having trouble, you should enable the “staging” CA / terms entries in the config while you troubleshoot, to avoid hitting LetsEncrypt rate limits:
    dehydrated@localhost:~$ ./dehydrated --cron --accept-terms --out /etc/letsencrypt
    # INFO: Using main config file /home/dehydrated/config
    + Generating account key...
    + Registering account key with ACME server...
    Processing myhost.mydomain.com
     + Signing domains...
     + Creating new directory /home/dehydrated/certs/myhost.mydomain.com ...
     + Generating private key...
     + Generating signing request...
     + Requesting challenge for myhost.mydomain.com...
    Creating challenge record for myhost.mydomain.com in zone mydomain.com
    Created record: '_acme-challenge.myhost.mydomain.com. 60 IN TXT "cpE1VF_xshMm1IVY1Y66Kk9Zb_7jA2VFkP65WuNgu3Q"'
    Waiting for sync...................................
    Completed
     + Responding to challenge for myhost.mydomain.com...
    Deleting challenge record for myhost.mydomain.com from zone mydomain.com
    1 record sets deleted
     + Challenge is valid!
     + Requesting certificate...
     + Checking certificate...
     + Done!
     + Creating fullchain.pem...
     + Using cached chain!
     + Done!
    + Running automatic cleanup
  15. Created the helper script, remembering to edit the domain (find/replace) and the certificate path:
    nano updateunificert
    My file looks like this:

    #!/bin/bash
    PATH='/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games';
    openssl pkcs12 -export -in /etc/letsencrypt/myhost.mydomain.com/fullchain.pem -inkey /etc/letsencrypt/myhost.mydomain.com/privkey.pem -out /etc/letsencrypt/myhost.mydomain.com/cert_and_key.p12 -name tomcat -CAfile /etc/letsencrypt/myhost.mydomain.com/chain.pem -caname root -password pass:aaa;
    rm -f /etc/letsencrypt/myhost.mydomain.com/keystore;
    keytool -importkeystore -srcstorepass aaa -deststorepass aircontrolenterprise -destkeypass aircontrolenterprise -srckeystore /etc/letsencrypt/myhost.mydomain.com/cert_and_key.p12 -srcstoretype PKCS12 -alias tomcat -keystore /etc/letsencrypt/myhost.mydomain.com/keystore;
    keytool -import -trustcacerts -alias unifi -deststorepass aircontrolenterprise -file /etc/letsencrypt/myhost.mydomain.com/chain.pem -noprompt -keystore /etc/letsencrypt/myhost.mydomain.com/keystore;
    mv /var/lib/unifi/keystore /var/lib/unifi/keystore-backup;
    cp /etc/letsencrypt/myhost.mydomain.com/keystore /var/lib/unifi/keystore;
    service unifi restart;
  16. Created a cron job to trigger the cert update daily:
    crontab -e
    I added this line:

    20 5 * * * /home/dehydrated/dehydrated --cron --accept-terms --out /etc/letsencrypt >/dev/null 2>&1
  17. Returned to my previous shell:
    exit
  18. Created a cron job to install the new cert daily:
    sudo crontab -e
    I added this line:

    29 5 * * * /home/dehydrated/updateunificert >/dev/null 2>&1
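
With the cron jobs in place, renewal should take care of itself, but it’s worth being able to confirm that it actually is. This is a sketch of a check I’d add rather than part of the setup above — it assumes GNU date (standard on Linux), and the certificate path follows the layout used in the helper script:

```shell
#!/bin/bash
# Report how many days remain before a PEM certificate expires.
# Usage: days_left /etc/letsencrypt/myhost.mydomain.com/cert.pem
days_left() {
  local pem="$1"
  # openssl prints e.g. "notAfter=Jun  1 12:00:00 2025 GMT"
  local end end_s now_s
  end=$(openssl x509 -enddate -noout -in "$pem" | cut -d= -f2)
  end_s=$(date -d "$end" +%s)   # GNU date
  now_s=$(date +%s)
  echo $(( (end_s - now_s) / 86400 ))
}
```

If the number ever drops below the 30-day renewal window, something in the cron chain has broken.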

How to upgrade VMWare ESXi on HP Gen8 Microserver

  1. Go to the VMware ESXi Patch Tracker and check for the latest image profile, e.g. ESXi-6.5.0-20170404001-standard
  2. Shut down all VMs and turn on maintenance mode
  3. Allow outbound firewall requests:
    esxcli network firewall ruleset set -e true -r httpClient
  4. Execute the update:
    esxcli software profile update -p ESXi-6.5.0-20170404001-standard -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
  5. Disable the outbound firewall rule again:
    esxcli network firewall ruleset set -e false -r httpClient
  6. Disable maintenance mode
  7. Reboot
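
The steps above can be wrapped into a small script so that the image profile name is the only thing that changes between upgrades. A sketch, untested in this exact form — the profile name in the usage comment is the example from step 1:

```shell
#!/bin/bash
# Sketch: the ESXi upgrade steps, parameterised on the image profile.
# VMware's public depot, as used in the steps above.
DEPOT="https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml"

upgrade() {
  local profile="$1"
  # open the firewall, apply the profile, close the firewall again
  esxcli network firewall ruleset set -e true -r httpClient
  esxcli software profile update -p "$profile" -d "$DEPOT"
  esxcli network firewall ruleset set -e false -r httpClient
}

# Usage (on the host, with VMs shut down and maintenance mode on):
# upgrade ESXi-6.5.0-20170404001-standard
```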

Fix: /var/lib/docker/aufs/diff is consuming entire drive

Some of my docker containers were complaining that they didn’t have enough drive space. This looked odd – so I logged in to the host and checked around:

robert@d:/$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 2.0G 0 2.0G 0% /dev
tmpfs 394M 12M 382M 3% /run
/dev/sda1 26G 25G 0 100% /
tmpfs 2.0G 2.0M 2.0G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
//fs.cynexia.net/largeappdata 2.7T 285G 2.4T 11% /mnt/largeappdata
//fs.cynexia.net/video 5.4T 3.0T 2.4T 56% /mnt/video
//fs.cynexia.net/appdata 2.5T 52G 2.4T 3% /mnt/appdata

All space used up. Huh. Wonder why? I did a quick check to see what was using the most space:

robert@d:/$ sudo du -xh / | grep '[0-9\.]\+G'
8.0K /var/lib/docker/aufs/diff/1dba1b90260105df03d0147c535c104cca0dd24fcc9273f0bc27b725c7cc676f/usr/local/crashplan/jre/lib/locale/zh.GBK/LC_MESSAGES
12K /var/lib/docker/aufs/diff/1dba1b90260105df03d0147c535c104cca0dd24fcc9273f0bc27b725c7cc676f/usr/local/crashplan/jre/lib/locale/zh.GBK
19G /var/lib/docker/aufs/diff
19G /var/lib/docker/aufs
2.2G /var/lib/docker/containers/8c5725f63f681e012fcc479e78133f31ab1c760b7d8d4e0a7e150d213face41f
2.3G /var/lib/docker/containers
21G /var/lib/docker
21G /var/lib
22G /var
25G /

Clearly /var/lib/docker/aufs/diff is what’s causing it. Let’s clean that up by removing the dangling images:

robert@d:/$ docker rmi $(docker images -aq --filter=dangling=true)
Untagged: mnbf9rca/getiplayer@sha256:9846b7570b5ba6d686be21623446cec8abd9db04cf55a39ce45cabfaa0d63f9f
Deleted: sha256:011bf974552570c536f8f98c73e0ed7d09ef9e2bfcbc7b3f3e02e19682b7480e
Deleted: sha256:a0637dd0588be6aee9f4655260176e6da802fcd92347cdf789ae84f3503322c3
Deleted: sha256:6e21a0999ad14a1cc0ccc8e31611b137793e3614338e01f920e13bfeb4128fdc
Deleted: sha256:b98c7813439119c3d2f859060fe11bf10151f69587f850a48448cae0fa4d9305
Untagged: mnbf9rca/getiplayer@sha256:ad493202d196dfae418769428ba6dea4d576ce1adec7ebe90837d0b965fe9b42
Deleted: sha256:b8df5a1ffa1eedd7be03d4a2a37549bf81699cc6fa1586c1d3510d90d4e9e562
...
Deleted: sha256:07c09e3cb65b3cec786933f882a08d5b0a34cd94f6922ada0d6f0cf779482ee0

Let’s check now…

robert@d:/$ sudo du -xh / | grep '[0-9\.]\+G'
8.0K /var/lib/docker/aufs/diff/1dba1b90260105df03d0147c535c104cca0dd24fcc9273f0bc27b725c7cc676f/usr/local/crashplan/jre/lib/locale/zh.GBK/LC_MESSAGES
12K /var/lib/docker/aufs/diff/1dba1b90260105df03d0147c535c104cca0dd24fcc9273f0bc27b725c7cc676f/usr/local/crashplan/jre/lib/locale/zh.GBK
4.8G /var/lib/docker/aufs/diff
4.8G /var/lib/docker/aufs
2.2G /var/lib/docker/containers/8c5725f63f681e012fcc479e78133f31ab1c760b7d8d4e0a7e150d213face41f
2.3G /var/lib/docker/containers
7.4G /var/lib/docker
7.6G /var/lib
8.3G /var
11G /

Much better! It turns out there are a few great cleanup agents, e.g. docker-gc-cron, which will do the job for me.
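
Rather than waiting for the disk to fill up again, a small script can prune only when usage crosses a threshold. This is a sketch of my own, not something docker-gc-cron does — the 85% default is an arbitrary choice, and `docker image prune -f` removes the same dangling images as the `docker rmi` invocation above:

```shell
#!/bin/bash
# Prune dangling docker images only when a filesystem is getting full.

# Print the Use% figure for a filesystem, without the trailing %.
usage_pct() {
  df -P "$1" | awk 'NR==2 { sub(/%/, "", $5); print $5 }'
}

maybe_prune() {
  local threshold="${1:-85}"   # percent; 85 is an arbitrary default
  if [ "$(usage_pct /)" -ge "$threshold" ]; then
    docker image prune -f   # removes dangling images only
  fi
}
```

Dropped into cron, this keeps /var/lib/docker/aufs/diff from silently eating the drive again.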

Creating a VM to test Docker – migrating from Unraid to Napp-It

onwards…

  1. Downloaded the Ubuntu 16.04 Server LTS ISO
  2. Created a new VM
  3. Connected the ISO to the virtual DVD drive in the VMWare vSphere client console
  4. Booted from the ISO
  5. Pressed F4 and selected “minimal”
  6. Pressed F6 and selected “minimal VM”
  7. Walked through the installer, adding SSH at the right point
  8. SSHed to the server
  9. Installed docker using these instructions, including configuring docker to start on boot.
  10. Created /mnt/cups
  11. Ran a new docker container:
    robert@ubd:/mnt$ sudo docker run -d --name="cups-google-print" --net="host" --privileged="true" --restart=unless-stopped -e TZ="UTC" -e HOST_OS="ubuntu" -e "CUPS_USER_ADMIN"="admin" -e "CUPS_USER_PASSWORD"="pass" -e "TCP_PORT_631"="631" -v "/mnt/cups":"/config":rw -v /dev:/dev -v /etc/avahi/services:/avahi -v /var/run/dbus:/var/run/dbus mnbf9rca/cups-google-print
    Unable to find image 'mnbf9rca/cups-google-print:latest' locally
    latest: Pulling from mnbf9rca/cups-google-print
    a3ed95caeb02: Pull complete
    3b1d42cd9af9: Pull complete
    d2ff49536f4d: Pull complete
    f94adccdbb9c: Pull complete
    ae857e8dd13c: Pull complete
    327565847940: Pull complete
    83835dcb6373: Pull complete
    78b26d55dd43: Pull complete
    388ec0e358c7: Pull complete
    05dd908ba895: Pull complete
    87c9e1d25f3b: Pull complete
    75d49e6da022: Pull complete
    4b8ca4d5d690: Pull complete
    Digest: sha256:3c231589c941288c4541016944658ee5915e4d8761648e1d88254c90dea9beca
    Status: Downloaded newer image for mnbf9rca/cups-google-print:latest
    f33434ebccaf2b5644260e664014d7364d4d5ead45bf4374e931a0acedd06015
  12. Checked it’s running with the docker ps -a command:
    robert@ubd:/mnt$ sudo docker ps -a
    CONTAINER ID        IMAGE                        COMMAND             CREATED             STATUS              PORTS               NAMES
    f33434ebccaf        mnbf9rca/cups-google-print   "/sbin/my_init"     2 minutes ago       Up 2 minutes                            cups-google-print
  13. Browsed to https://<server IP>:631/ to see that it’s running

bingo!
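
To check the container from a script rather than by browsing to it, `docker inspect` can report its running state. A minimal sketch, using the container name from the run command above:

```shell
#!/bin/bash
# Succeed if a named container reports State.Running=true.
container_running() {
  [ "$(docker inspect -f '{{.State.Running}}' "$1" 2>/dev/null)" = "true" ]
}

# Example:
# container_running cups-google-print && echo "CUPS is up"
```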

Fix: Flashing an M1015 – Error code = 64 Failed to flash the image. Please retry recovery

I purchased an IBM M1015 to use as an HBA in my server. As part of that, I wanted to flash it with the IT firmware; however, I was getting errors, even when I used the original IBM firmware from their website:

C:\> megarec -cleanflash 0 M1000FW.ROM

MegaRAID HWR Controller Recovery tool. Version 01.01-004 February 05, 2010
Copyright (c) 2006-2008 LSI Corp.
Supports 1079 controller and its Successors


Erasing Flash Chip (8MB)....
 Completed: 100%
Flashing Image. Please wait...


Currently flashing component = BIOS
Programming Flash....
 Completed: 100%
Verifying the Flashed Data...


Currently flashing component = HIIM
Programming Flash....
 Completed: 100%
Verifying the Flashed Data...


Currently flashing component = APP
Error in downloading the image.
Error code = 64
Failed to flash the image. Please retry recovery

I’d never seen an M1015 before, at least not close up. Closer inspection of the card, however, revealed a code: FRU 46C8927. I know “FRU” means “Field Replaceable Unit”, i.e. something you can order as a replacement part. So I googled that code, and discovered that this was actually an IBM M5015, not an M1015. The M5015 cannot be flashed to IT mode, so I had to send it back.

Installing VMWare – migrating from Unraid to Napp-It

# To roll back to hpvsa build 88 (for MicroServer Gen8 compatibility)
esxcli system maintenanceMode set --enable true
esxcli network firewall ruleset set -e true -r httpClient
esxcli software vib remove -n scsi-hpvsa -f
esxcli software vib install -v http://vibsdepot.hp.com/hpq/nov2014/esxi-550-drv-vibs/hpvsa/scsi-hpvsa-5.5.0-88OEM.550.0.0.1331820.x86_64.vib
esxcli system shutdown reboot -r "Rolled back to scsi-hpvsa build 88"
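
After the reboot, it’s worth confirming which build is actually installed. A sketch — it assumes the version column of `esxcli software vib list` looks like the vib name above (5.5.0-88OEM.550.0.0.1331820):

```shell
#!/bin/bash
# Extract the scsi-hpvsa build number from the installed vib list,
# e.g. "5.5.0-88OEM.550.0.0.1331820" -> "88".
hpvsa_build() {
  esxcli software vib list | awk '$1 == "scsi-hpvsa" { split($2, v, "-"); sub(/OEM.*/, "", v[2]); print v[2] }'
}

# Expect "88" after the rollback above:
# hpvsa_build
```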