Hardware required – migrating from Unraid to Napp-It

I’ve been a user of Unraid since 2012, when I had to find a solution for my home storage after Windows Home Server was abandoned by Microsoft. Unraid has been very good for me, and the introduction of a Docker engine with Unraid 6 was very welcome. That said, I’ve recently encountered issues with bitrot, and the fact that Unraid can’t use ZFS as the disk format annoys me. LimeTech claim that their parity check process should detect bitrot – however, something doesn’t seem to be working, because using the Dynamix File Integrity plugin I can see it happening. In any case, knowing it’s happened isn’t the same as being able to correct it, which just isn’t possible on Unraid without using BTRFS – and many people simply don’t trust BTRFS. Besides, I fancy a change. So – over to VMware ESXi and ZFS on Napp-It.

This blog post describes the hardware I needed.

The HP Gen8 MicroServer is certified for use with VMware, and you can even install ESXi to an internal USB stick or MicroSD card. In my case, I want a RAIDZ array of 3 x 4TB drives, plus two or more SSDs for VM and Docker hosts.

For ZFS to work properly it needs access to the underlying host controller, not a virtualised version. VMware is capable of direct device passthrough, but only on processors which support VT-d. Additionally, if I’m running a VM (the Napp-IT guest) from a drive on the storage controller, I can’t pass that same storage controller through to the guest.

When I bought my MicroServer, I got the base model, and with cashback I think it cost me £120. But the base model has a Celeron G1610T, and although this has the VT-x extensions, it doesn’t support VT-d.

The B120i controller in the Gen8 has two 6Gb/s SATA channels and two 3Gb/s channels. I wanted to ensure that all 4 drive bays ran at 6Gb/s, plus have two 6Gb/s channels for my host SSDs.

My shopping list was therefore:

  1. A new processor supporting VT-d. I used the chart maintained by users of the homeservershow forum to find a suitable processor for sale on eBay. I ordered an E3-1260L from China. Estimated delivery – a few weeks.
  2. Some thermal paste.
  3. A second storage controller. I went for the IBM ServeRAID M1015, as it can be flashed to an LSI 9211-8i in IT mode (meaning ZFS has direct access to the disks, without the controller being “smart” or “doing RAID” in the middle). See this post for instructions.
  4. A Mini SAS (SFF-8087) to SATA cable, again from eBay.

Some things I already had:

  1. A Molex splitter
  2. A Molex to two-SATA HDD power splitter

Error processing package apt-show-versions on Ubuntu 14.04 or Ubuntu 16.04

When installing Webmin, I’ve sometimes come across an error installing a dependency package, apt-show-versions:

Setting up apt-show-versions (0.22.7) ...
** initializing cache. This may take a while **
FATAL -> Failed to fork.
dpkg: error processing package apt-show-versions (--configure):
subprocess installed post-installation script returned error exit status 100
dpkg: dependency problems prevent configuration of webmin:FATAL -> Failed to fork.

This is caused by the fact that apt-show-versions can’t read compressed index files. Thankfully, the solution is quite simple:

First, we need to tell APT not to compress the index. To do this we create an entry in a file called /etc/apt/apt.conf.d/02compress-indexes:

sudo nano /etc/apt/apt.conf.d/02compress-indexes

If the file is empty (mine was), simply put this line in it:

Acquire::GzipIndexes "false";

If the file has some text, check whether this parameter is in there as “true” and, if so, change it to “false”. If it’s missing, just add it.
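
As an aside: if the file is empty or doesn’t exist, you can create it with the right contents in one go:

echo 'Acquire::GzipIndexes "false";' | sudo tee /etc/apt/apt.conf.d/02compress-indexes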

Then, we need to clear out the broken package’s leftover state and refresh the package indexes (which will now be stored uncompressed):

sudo rm /var/lib/dpkg/info/apt-show*

followed by

sudo apt-get update

Finally, we just need to complete the installation:

sudo apt-get -f install webmin

And job done.

How to ensure you can revert changes to function apps

As I’ve been playing around with Azure Functions, I’ve slowly outgrown the web-based editor. It’s not that it’s not useful, it’s just that I miss IntelliSense (I’ll come back to this in a later post), and I accidentally deployed a change which broke one of my functions. I’d made dozens of tiny changes, but I simply could not figure out which one it was. Not having a version history, I was kinda screwed.

I had seen the “Configure Continuous Integration” option before, but never really looked at it. I keep my source code in private GitHub repos, so it was relatively trivial to set up a new repo tied to this function app. After reading the setup instructions, however, I was a little confused about what exactly to do to put my existing functions into the repo, but it was actually much simpler than I thought. It turns out one of the best features is the ability to roll back to a previous commit with a single click:

[screenshot: azure-roll-back-ci-deployment]

First, I created a new private GitHub repo and cloned it to my local machine. I chose not to use branching – but I guess you could map different function apps to different branches to support a separation between “dev”, “test”, “production” etc. In the root of my repo, I created a folder for each of the functions I wished to deploy, named exactly the same as the existing functions (I assume they’re not case sensitive but I kept to the same case).

Then, I needed to put the actual code in there. Under the visual editor for each of the functions is a “view files” link. Clicking this, I was able to see the function.json and run.csx files within each function. I simply cut and pasted the code from there to a file of the same name in the relevant folder.
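
For what it’s worth, the repo ended up looking roughly like this – obviously your function names will differ (host.json is covered below):

host.json
fetchDepartureBoardOrchestrator/
    function.json
    run.csx
postToTwitter/
    function.json
    run.csx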

Next, I needed to find the host.json file. That’s a bit trickier. In the end, I figured the easiest way was to use the dev console. Navigate to Function App Settings and select “Open dev console”. After a few seconds, the dev console appears:

[screenshot: azure-dev-console]

This behaves like a Unix shell, although the paths are Windows-style. You should start in the D:\home\site\wwwroot folder – that’s where host.json lives. Just type cat host.json to see the contents. It turns out mine was empty (just an open and close curly brace):

D:\home\site\wwwroot

> ls
D:\home\site\wwwroot
QueueTriggerCSharp1
fetchDepartureBoardOrchestrator
host.json
postToTwitter
> cat host.json
D:\home\site\wwwroot
{}
>

I created this in the root of my repo, then committed the changes and pushed them back to GitHub. Within a few seconds, I was able to see the change by clicking “Configure Continuous Integration” in Function App Settings. My changes deployed immediately. And when I next screw up, because I’m forced to push changes via Git, I know I’ll be able to roll back to a known-good configuration.
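
For reference, the push itself was just the usual Git workflow – this assumes a remote called origin and the default master branch:

git add .
git commit -m "add existing functions"
git push origin master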

When Function App updates won’t save…

I accidentally deployed a dodgy bit of code to one of my function apps. Then, even when I tried to revert to a “known good” version of my code via the GUI, it still didn’t work. No matter what I did, I kept getting the dreaded “Compilation Failed” error – but there was no more information. No useful compilation errors…

2016-10-23T20:35:46.853 Function started (Id=5ededb27-6144-40d1-8d14-8db77d70b771)
2016-10-23T20:35:47.901 Function compilation error
2016-10-23T20:35:47.901 Function completed (Failure, Id=5ededb27-6144-40d1-8d14-8db77d70b771)
2016-10-23T20:35:48.369 Exception while executing function: Functions.fetchDepartureBoardOrchestrator. Microsoft.Azure.WebJobs.Script: Script compilation failed.

It was so frustrating. Until I worked out that you can actually restart the function app host – just go to Function App Settings, then click “Go To App Service Settings” at the bottom. You’ll see the traditional “full” properties and settings blade, and at the top is a “Restart” button. Then I was able to save my updated code.

National Rail LDBWS to Twitter

I’ve been playing around with my Nextion and a Particle Photon for a while. The idea is to pull data from a variety of services and have it available on a display by the front door – things like the weather, the outside temperature (from my Netatmo), and the next 3 trains to Seven Sisters from our station. Living, as we do, at the end of the Enfield Town branch line, it can be a bit hit and miss as to whether or not you make the train, or if it’s even running.

Anyway, I got the first part of this working. I regularly poll the National Rail SOAP API, parsing the content into a simple minified JSON format using a Function App. Took me a while.

Then I discovered that someone has written a JSON proxy for the LDBWS. I’ll try that next.

Missing dependencies: Microsoft.CodeAnalysis.CSharp.Scripting

I was trying to install the C# scripting package Microsoft.CodeAnalysis.CSharp.Scripting from NuGet, but was getting odd messages about missing dependencies. I spent ages trying to figure it out, as all the dependency versions appeared to match, e.g.:

Unable to find a version of 'Microsoft.CodeAnalysis.Common' that is compatible with 'Microsoft.CodeAnalysis.CSharp 2.0.0 constraint: Microsoft.CodeAnalysis.Common (= 2.0.0)', 'Microsoft.CodeAnalysis.Scripting.Common 2.0.0 constraint: Microsoft.CodeAnalysis.Common (= 2.0.0)'

However, eventually I discovered that you need .NET 4.6. Doh!
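
In other words, the project needs to target .NET Framework 4.6 or later (and you need the 4.6 targeting pack installed). In an old-style .csproj that’s just the TargetFrameworkVersion element – a minimal sketch, everything else in the project file stays as it is:

<PropertyGroup>
  <TargetFrameworkVersion>v4.6</TargetFrameworkVersion>
</PropertyGroup>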

Logic Apps are so expensive!!

I started writing EnfieldTownBot using Azure Logic Apps. It’s pretty easy, but I soon hit a challenge – it’s so expensive! My app was pretty simple – a trigger, a “for…each”, a condition and an HTTP callout to my Twitter Poster Function App:

[screenshot: the EnfieldTownBot Logic App]

So – if there are no delays, that’s (recurrence + httprequest + foreach + 3 x (condition)) = 6 actions. There could be up to 9 if the postToTwitter action also triggers. I want to run this twice per poll (once for trains FROM Enfield Town, once for trains TO Enfield Town), so that’s 12-18 actions per poll. And I want to poll every 15 seconds or so to get the latest information published ASAP. So that’s 48-72 actions per minute. Over a day, that’s 69,120 to 103,680 actions. Over a (31 day) month, that’s 2,142,720 to 3,214,080 actions. Taking a mean of these (2,678,400), and looking at the current pricing, it would cost me £450 a month to run this app. Wow. I don’t care about late trains THAT much…

So – I rewrote the whole lot as a function app (actually 2 or 3 function apps, as some of the items, such as parsing the XML from National Rail to JSON, are reusable elsewhere). Each execution takes about 300ms, and it executes (24 hours x 60 minutes per hour x 4 times per minute) = 5,760 times per day. Over a month that’s 178,560 executions. That’s well within the “permanent free grant” provided for Functions. In fact, I’d have to run it something like 30,000,000 times per month, or about 600 times per minute, to even hit a cost of £1.

Quick and easy way to tweet from a function app

After my last post, I spent some time looking through this. Eventually, I found a really lightweight class which does what I need.

After spending some time adding some error handling to the api.request() method, I wrapped a web request around it and created a function app. You can find it here: https://github.com/mnbf9rca/TwitterFunctionApp

The app takes a simple JSON payload:

{
  "oauth_token": "<your oAuth token>",
  "oauth_token_secret": "<your oAuth token secret>",
  "oauth_consumer_key": "<your consumer key>",
  "oauth_consumer_secret": "<your consumer secret>",
  "tweet": "<the message to send>"
}

You can obtain the oAuth token and oAuth secret by following the instructions on the Twitter Developer site.

The Function App will return a JSON with the data returned by the Twitter API, or an error. Note that as long as it gets SOME response from the Twitter API it’ll return a 200 code. In future I’ll work on making this more granular (e.g. passing through 5xx errors).
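
Calling it is just a normal HTTP POST. Here’s a rough sketch in C# – the app name, function name and function key in the URL are placeholders for your own deployment (an HTTP-triggered function normally takes its key as the code query parameter):

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

static async Task<string> SendTweetAsync(string tweetJson)
{
    using (var client = new HttpClient())
    {
        // <your-app>, <function-name> and <function-key> are placeholders
        var url = "https://<your-app>.azurewebsites.net/api/<function-name>?code=<function-key>";
        var response = await client.PostAsync(url,
            new StringContent(tweetJson, Encoding.UTF8, "application/json"));

        // the body is whatever the function returned (the Twitter API response, or an error)
        return await response.Content.ReadAsStringAsync();
    }
}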

BMW API now requires location, no bypass available

Earlier this year, we got a new car, a BMW 3 Series. It came subscribed to the BMW ConnectedDrive service, which comes with an iPhone or Android app. So, of course, I immediately set about deconstructing the traffic to figure out what was going on, inspired by the work of Terence Eden. Unfortunately, BMW appears to have implemented certificate pinning in the time since Terence wrote his paper, so my favourite tool, Charles Proxy, was useless – the BMW app simply dropped the connection.

So – I decided to reverse engineer the iPhone app itself. I finally managed to get an OAuth appID and app secret from the code base – only to discover that the /webapi/v1/user/vehicles/:VIN/status method now requires a location:

{
  "error": {
    "code": 500,
    "description": "(SmartPhoneUtil-A-101) Mandatory parameter(s) missed or blank: dlat and dlon are required for BMW vehicles!"
  }
}

If I add the lat + long of my home as querystring parameters (/?dlat=x&dlon=y) it works, but I don’t get a lot of other data (e.g. door status), although I guess that’s to do with the options available on my car:

{
  "vehicleStatus": {
    "vin": "(my VIN)",
    "updateTime": "2016-08-28T17:52:44+0200",
    "position": {
      "lat": "5x.xxxxx",
      "lon": "-y.yyyyy",
      "status": "OK"
    }
  }
}

Here’s the problem – if the car is more than half a km from home, I get:

{
  "vehicleStatus": {
    "vin": "(my VIN)",
    "updateTime": "2016-08-28T17:52:44+0200",
    "position": {
      "status": "TOO_FAR_AWAY"
    }
  }
}

(I’m not sure what the updateTime value refers to, as that’s clearly a long time ago.)

Bummer.
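
For reference, the call I’m making boils down to something like this. It’s only a rough sketch: the host name is a placeholder, and I’m assuming the token from the OAuth dance goes in a standard Bearer authorisation header – check the app’s traffic for the exact details:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

static async Task<string> GetVehicleStatusAsync(string accessToken, string vin, double dlat, double dlon)
{
    using (var client = new HttpClient())
    {
        // assumption: the OAuth token is sent as a standard Bearer header
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        // <bmw-api-host> is a placeholder for whatever host the ConnectedDrive app talks to;
        // dlat and dlon are now mandatory for BMW vehicles
        var url = $"https://<bmw-api-host>/webapi/v1/user/vehicles/{vin}/status?dlat={dlat}&dlon={dlon}";
        var response = await client.GetAsync(url);
        return await response.Content.ReadAsStringAsync();
    }
}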

Twitter API without libraries – for posting as yourself (e.g. a bot)

I have to say that the Twitter API documentation is absolutely abysmal. It’s impossible to navigate – calls make reference to other calls – but the major problem is that there are almost no examples: the docs almost all recommend that you use a library. So how on earth are you supposed to learn how the API works? How do you write a bot which tweets as itself (such as my https://twitter.com/EnfieldTownBot)?

Anyway – I found a StackExchange article outlining how to use Twitter’s API from Postman, which handles the hashing for you.

Now that I’ve got a good grip on the API itself – which isn’t too bad – I have to figure out how to create an Azure Function App to create and send the tweets, as OAuth 1.0 requires all messages to be signed (hashed). Twitter has a pretty good set of instructions on how to create the hash: https://dev.twitter.com/oauth/overview/creating-signatures
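
The core of it boils down to building a signature base string and HMAC-SHA1 signing it with a key derived from the two secrets. Here’s a minimal sketch of just that step in C# – it assumes you’ve already collected every request parameter plus the oauth_* ones (consumer key, token, nonce, timestamp, signature method, version) into a dictionary, and depending on your framework version you may need a stricter percent-encoder than Uri.EscapeDataString:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

static string Encode(string s) => Uri.EscapeDataString(s);

static string BuildOauthSignature(string httpMethod, string baseUrl,
    IDictionary<string, string> allParams, string consumerSecret, string tokenSecret)
{
    // 1. percent-encode every key and value, sort them, and join as key=value pairs with '&'
    var paramString = string.Join("&",
        allParams.Select(kv => new { k = Encode(kv.Key), v = Encode(kv.Value) })
                 .OrderBy(p => p.k, StringComparer.Ordinal)
                 .ThenBy(p => p.v, StringComparer.Ordinal)
                 .Select(p => p.k + "=" + p.v));

    // 2. signature base string: METHOD&encoded-url&encoded-parameter-string
    var baseString = httpMethod.ToUpperInvariant() + "&" + Encode(baseUrl) + "&" + Encode(paramString);

    // 3. signing key: encoded consumer secret, an ampersand, then the encoded token secret
    var signingKey = Encode(consumerSecret) + "&" + Encode(tokenSecret);

    // 4. HMAC-SHA1 of the base string, base64 encoded - this is the oauth_signature value
    using (var hmac = new HMACSHA1(Encoding.UTF8.GetBytes(signingKey)))
    {
        return Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(baseString)));
    }
}

The signature then goes into the Authorization header along with the other oauth_* parameters, which is the part the libraries below handle for you.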

But – oh my gosh. This is a very complicated process. Here are some samples of libraries or code which I’m looking through to try to implement it…

http://developer.pearson.com/learningstudio/oauth-1-sample-code

https://code.msdn.microsoft.com/windowsapps/LinkedIn-OAuth-Example-c06d64f5

https://github.com/bittercoder/DevDefined.OAuth-Examples

https://github.com/bittercoder/DevDefined.OAuth

https://www.devexpress.com/Support/Center/CodeCentral/ViewExample.aspx?exampleId=E20020

Anyway, I didn’t want to lose these links. I’ll come back to this later.