Although builds were succeeding locally, CI builds were failing on Linux with the error Error: Cannot find module @rollup/rollup-linux-x64-gnu. npm has a bug related to optional dependencies, and the log suggested removing package-lock.json, but this obviously breaks deterministic builds.
The root cause is a missing optional dependency: Rollup ships platform-specific native binaries for performance, and a lockfile generated on a Mac won't include the Linux-specific optional dependencies. First noted in npm bug 4828, it was fixed in npm 11.3.0+, but adding an explicit optional dependency prevents edge cases.
The fix is:
Add @rollup/rollup-linux-x64-gnu: "*" to optionalDependencies in package.json
Update package-lock.json to include the platform-specific dependency
This allows builds to work reliably across platforms.
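For reference, here's a minimal sketch of the commands (assuming a reasonably recent npm, since npm pkg set is used to edit package.json):
# Add the Linux binary as an optional dependency ("*" accepts any version)
npm pkg set 'optionalDependencies.@rollup/rollup-linux-x64-gnu=*'
# Refresh package-lock.json so that it records the platform-specific dependency
npm install --package-lock-only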
This post is mainly a reminder to myself, because I've made the same mistake a few times in different projects.
How and why to deploy Azure Functions from a package
When using Azure Functions, you can simplify deployment and maintenance of your function by deploying from a package file rather than deploying the files directly into the function app. This also has the benefit of reducing cold start times, particularly where there are a large number of dependencies.
To deploy from a package (without using the Azure deployment tools), you:
Create a zip file with your code and its dependencies
Deploy that zip to Azure
Set the WEBSITE_RUN_FROM_PACKAGE property on the function app settings
The WEBSITE_RUN_FROM_PACKAGE setting can either be a 1 if you’ve deployed your code to the /home/data/SitePackages folder on the function app, or a URL. I prefer to deploy my code as a zip stored in a blob, as this seems cleaner, and is easier to upgrade.
Create a container (app-container)
Upload the file to a blob called something like app-blob-datetime. Appending a timestamp to the blob name means each deployment goes to a new blob (avoiding locking concerns); switching over is then nearly instant, with almost zero downtime. If desired, it is also simple to switch back to a previous version.
Generate a long-lived SAS with read privileges, scoped to the container (app-container)
Construct a fully qualified URL to the blob, and set this as the WEBSITE_RUN_FROM_PACKAGE setting.
Restart the function app (typically < 1 second).
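Here's roughly how those steps look with the az CLI. This is a sketch, not a full deployment script: mystorageacct, my-rg, my-func-app and app.zip are all placeholder names.
BLOB_NAME="app-blob-$(date +%Y%m%d%H%M%S)"
# 1. Create the container (a one-off step)
az storage container create --account-name mystorageacct --name app-container
# 2. Upload the zip to a new, timestamped blob
az storage blob upload --account-name mystorageacct --container-name app-container \
  --name "$BLOB_NAME" --file app.zip
# 3. Generate a long-lived, read-only SAS scoped to the container
SAS=$(az storage container generate-sas --account-name mystorageacct --name app-container \
  --permissions r --expiry 2030-01-01T00:00Z --output tsv)
# 4. Point WEBSITE_RUN_FROM_PACKAGE at the blob URL, with the SAS appended
az functionapp config appsettings set --resource-group my-rg --name my-func-app \
  --settings WEBSITE_RUN_FROM_PACKAGE="https://mystorageacct.blob.core.windows.net/app-container/$BLOB_NAME?$SAS"
# 5. Restart the function app
az functionapp restart --resource-group my-rg --name my-func-app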
Error: 0 functions loaded
When browsing to the Functions overview page, we are invited to "Create functions in your preferred environment", implying that no functions exist:
Navigating to Monitoring > Logs and selecting Traces shows something interesting (you may need to enable Application Insights and restart the function app first). We see a message 1 functions found (Custom), but immediately after it, 0 functions loaded:
Fix: Ensure all packages are available
In order for the custom package to load, you need to ensure that all of the Python packages it needs are available. To do this, you must install the packages into the folder and then zip it. You can do this with pip, setting a specific target folder.
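For example (the same pattern as in the build script below, just without the build/ prefix on the target path):
# Install dependencies into the folder layout the Functions host expects
pip install --target=".python_packages/lib/site-packages" -r requirements.txt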
Then, zip everything up so that the archive includes the .python_packages folder. Here's my build script, which also ensures that we don't include any files specified in .funcignore:
#!/bin/bash
# Remove and recreate the 'build' folder
rm -rf build
mkdir -p build
# export requirements.txt
poetry export --only main --without-hashes -f requirements.txt --output app/requirements.txt
# Copy the contents of the 'app' folder to 'build', including hidden files and folders
shopt -s dotglob
cp -r app/* build/
# Apply .funcignore to the contents of 'build'
if [ -f "app/.funcignore" ]; then
while IFS= read -r pattern; do
find build -name "$pattern" -exec rm -rf {} +
done < app/.funcignore
fi
# https://github.com/Azure/azure-functions-host/issues/9720#issuecomment-2129618480
pip install --disable-pip-version-check --target="build/.python_packages/lib/site-packages" -r app/requirements.txt
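The zip step itself then looks something like this (app.zip is just a placeholder name, and the command is run from the project root):
# Zip the contents of 'build', including the hidden .python_packages folder
(cd build && zip -r ../app.zip .)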
Note the guidance about Always On: "If you run in a dedicated mode, you need to turn on the Always On setting for your Function App to run properly … When running in a Consumption Plan or Premium Plan you should not enable Always On."
If you're managing the function app with Pulumi (Python), the Always On setting is actually a parameter of the pulumi_azure_native.web.SiteConfigArgs constructor:
(parameter) always_on: Input[bool] | None
always_on: true if Always On is enabled; otherwise, false.
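For a function app on a dedicated plan, the relevant part of the Pulumi (Python) program therefore looks roughly like this. It's only a sketch: the resource names, plan SKU and remaining WebApp arguments are placeholders, not my actual configuration.
import pulumi_azure_native as azure_native

rg = azure_native.resources.ResourceGroup("my-rg")

plan = azure_native.web.AppServicePlan(
    "my-plan",
    resource_group_name=rg.name,
    kind="Linux",
    reserved=True,  # required for Linux plans
    sku=azure_native.web.SkuDescriptionArgs(name="B1", tier="Basic"),  # a dedicated plan
)

app = azure_native.web.WebApp(
    "my-function-app",
    resource_group_name=rg.name,
    server_farm_id=plan.id,
    kind="functionapp,linux",
    site_config=azure_native.web.SiteConfigArgs(
        always_on=True,  # dedicated plans only; leave this off on Consumption/Premium
    ),
)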
When uploading images to WordPress, you may get this error. There are plenty of blogs online offering solutions, but they only apply to self-hosted instances – mine is hosted on just-what-i-find.onyx-sites.io/.
The error is a little pop-up at the bottom of the screen with the text The response is not a valid JSON response when you try to upload an image:
Failed to load resource: the server responded with a status of 403 () for URL /wp-json/wp/v2/media
or
POST https://atomic-temporary-181991729.wpcomstaging.com/wp-json/wp/v2/media?_locale=user 403 (Forbidden)
I have Cloudflare in front of my blog, with the OWASP ruleset enabled. By examining the Security Events log (in Cloudflare, under Security > Events) and adding a filter for the path /wp-json/wp/v2/media:
I was able to see that the WAF was triggering on a specific rule, 949110: Inbound Anomaly Score Exceeded. There are lots of posts on the Cloudflare forum about this. One answer from the Cloudflare team points out that the OWASP ruleset is not managed by Cloudflare; they simply integrate it into their WAF, so they have no way to tweak it. They do, however, point out that you can bypass it. So I created a custom rule to match (http.request.uri.path eq "/wp-json/wp/v2/media"):
I then selected "Skip specific rules from a Managed Ruleset", and disabled rule 949110: Inbound Anomaly Score Exceeded for this specific URI:
I applied this rule before the OWASP ruleset in the priority list:
And now, no more errors. Of course, this will reduce the security protection of your WordPress instance – at least for this URI. See the Cloudflare documentation for more details.
When running apt-get update I was seeing these errors:
W: http://security.ubuntu.com/ubuntu/dists/jammy-security/InRelease: The key(s) in the keyring /etc/apt/trusted.gpg.d/postgres.gpg are ignored as the file is not readable by user '_apt' executing apt-key.
A quick inspection shows that the new keys have different permissions from the existing ones:
rob@localhost:~$ ls -alh /etc/apt/trusted.gpg.d/
total 40K
drwxr-xr-x 2 root root 4.0K Oct 13 14:00 .
drwxr-xr-x 8 root root 4.0K Oct 13 13:54 ..
-rw-r--r-- 1 root root 1.2K Sep 6 05:46 akamai-ubuntu-launchpad-ubuntu-ppa.gpg
-rw-r----- 1 root root 3.5K Oct 13 14:00 postgres.gpg
-rw-r----- 1 root root 2.8K Oct 13 14:00 timescaledb.gpg
-rw-r--r-- 1 root root 2.8K Mar 26 2021 ubuntu-keyring-2012-cdimage.gpg
-rw-r--r-- 1 root root 1.7K Mar 26 2021 ubuntu-keyring-2018-archive.gpg
-rw-r--r-- 1 root root 2.3K Oct 13 08:59 ubuntu-pro-cis.gpg
-rw-r--r-- 1 root root 2.2K Oct 13 08:57 ubuntu-pro-esm-apps.gpg
-rw-r--r-- 1 root root 2.3K Oct 13 08:57 ubuntu-pro-esm-infra.gpg
The fix is pretty simple. Pick one of the pre-existing GPG keys, and copy its permissions to all the other keys in the folder. In my case, I chose the ubuntu-pro-cis.gpg key, but you can pick any that doesn't report the permissions error. Pass it as a --reference argument to the chmod command:
rob@localhost:~$ sudo chmod --reference=/etc/apt/trusted.gpg.d/ubuntu-pro-cis.gpg /etc/apt/trusted.gpg.d/*.gpg
rob@localhost:~$ ls -alh /etc/apt/trusted.gpg.d/
total 40K
drwxr-xr-x 2 root root 4.0K Oct 13 14:00 .
drwxr-xr-x 8 root root 4.0K Oct 13 13:54 ..
-rw-r--r-- 1 root root 1.2K Sep 6 05:46 akamai-ubuntu-launchpad-ubuntu-ppa.gpg
-rw-r--r-- 1 root root 3.5K Oct 13 14:00 postgres.gpg
-rw-r--r-- 1 root root 2.8K Oct 13 14:00 timescaledb.gpg
-rw-r--r-- 1 root root 2.8K Mar 26 2021 ubuntu-keyring-2012-cdimage.gpg
-rw-r--r-- 1 root root 1.7K Mar 26 2021 ubuntu-keyring-2018-archive.gpg
-rw-r--r-- 1 root root 2.3K Oct 13 08:59 ubuntu-pro-cis.gpg
-rw-r--r-- 1 root root 2.2K Oct 13 08:57 ubuntu-pro-esm-apps.gpg
-rw-r--r-- 1 root root 2.3K Oct 13 08:57 ubuntu-pro-esm-infra.gpg
W: https://packagecloud.io/timescale/timescaledb/ubuntu/dists/jammy/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.
Although the warning is annoying, it doesn’t stop things updating. I understand the reasons why the legacy keyring is being removed.
Migrate existing keys to the new keyring
First, list the keys:
sudo apt-key list
In my case, I've got two: one for PostgreSQL and one for TimescaleDB. You will probably see a bunch of extra keys here too.
Export each key using the last 8 characters of its identifier. Because I have two keys to export, I did this twice, giving each key a unique filename under /etc/apt/trusted.gpg.d/:
rob@localhost:~$ sudo apt-key export ACCC4CF8 | sudo gpg --dearmour -o /etc/apt/trusted.gpg.d/postgres.gpg
Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
rob@localhost:~$ sudo apt-key export 47F24417 | sudo gpg --dearmour -o /etc/apt/trusted.gpg.d/timescaledb.gpg
Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
Bingo – package updates now work! But if they don't, you might get this error:
W: http://security.ubuntu.com/ubuntu/dists/jammy-security/InRelease: The key(s) in the keyring /etc/apt/trusted.gpg.d/postgres.gpg are ignored as the file is not readable by user '_apt' executing apt-key.
The documentation for Key Vault references states: "Key Vault references are not presently able to resolve secrets stored in a key vault with network restrictions."
That seemed OK when I first read it; after all, there's an explicit setting to bypass the firewall. But when I disabled the network firewall (allowing access from all networks), everything suddenly worked, and the key reference status is Resolved with a nice green tick:
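If you prefer the CLI to clicking around the portal, the equivalent change (with a placeholder vault name) is something like:
# Allow access from all networks; note that this removes the network restriction entirely
az keyvault update --name my-vault --default-action Allow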
Some repos, such as the one for the Unifi Controller, change 'field' values (like Codename) from one release to the next, and apt requires these changes to be accepted manually. For someone like me who has a standalone, automated controller setup designed mainly to keep the firmware up to date without much intervention, this is a hassle. It looks something like this:
robert@unifi:~$ sudo apt-get update
[sudo] password for robert:
Hit:1 http://mirrors.linode.com/ubuntu bionic InRelease
Get:2 http://mirrors.linode.com/ubuntu bionic-updates InRelease [88.7 kB]
Get:3 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Get:4 http://mirrors.linode.com/ubuntu bionic-backports InRelease [74.6 kB]
Ign:5 http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.4 InRelease
Hit:6 http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.4 Release
Get:7 https://dl.ubnt.com/unifi/debian stable InRelease [3,024 B]
Reading package lists... Done
E: Repository 'https://dl.ubnt.com/unifi/debian stable InRelease' changed its 'Codename' value from 'unifi-5.12' to 'unifi-5.13'
N: This must be accepted explicitly before updates for this repository can be applied. See apt-secure(8) manpage for details.
It's an easy fix. Just tell apt to accept changes to the Codename field:
robert@unifi:~$ echo 'Acquire::AllowReleaseInfoChange::Codename "true";' | sudo tee /etc/apt/apt.conf.d/99releaseinfochange
Acquire::AllowReleaseInfoChange::Codename "true";
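Alternatively, if you'd rather accept the change as a one-off instead of persisting the setting, recent versions of apt support a flag for exactly this:
sudo apt-get update --allow-releaseinfo-change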
The ZFS pool on my server was showing a degraded state. After checking the SMART status of the constituent drives and finding no problems, I discovered that there's a bug in Solaris 10.5 where the system reports a growing number of errors and eventually fails the pool. dmesg shows an error, unable to kmem_alloc enough memory for scatter/gather list; however, there is actually nothing wrong with the pool. Running zpool status shows the degraded state:
root@fs:~# zpool status
pool: rpool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM CAP Product /Disks IOstat mess SN/LUN
rpool ONLINE 0 0 0
c1t0d0 ONLINE 0 0 0 32.2 GB VMware Virtual S S:5 H:25 T:0 000000000000000
errors: No known data errors
pool: tank
state: DEGRADED
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
see: http://illumos.org/msg/ZFS-8000-9P
scan: scrub repaired 0 in 12h15m with 0 errors on Fri Dec 21 00:08:43 2020
config:
NAME STATE READ WRITE CKSUM CAP Product /Disks IOstat mess SN/LUN
tank DEGRADED 0 0 0
raidz1-0 DEGRADED 0 0 0
c0t50014EE20BF0750Dd0 ONLINE 0 0 0 4 TB WDC WD40EFRX-68W S:0 H:0 T:0 WDWCC4E6NAXVAS
c0t50014EE263348A3Ed0 ONLINE 0 0 0 4 TB WDC WD40EFRX-68W S:0 H:0 T:0 WDWCC4E0FRRRRP
c0t50014EE2B69D2D68d0 DEGRADED 0 0 20 too many errors 4 TB WDC WD40EFRX-68W S:0 H:0 T:0 WDWCC4E3AN2Y99
errors: No known data errors
Running zpool clear on the affected pool recovers it:
root@fs:~# zpool clear tank
root@fs:~# zpool status
pool: rpool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
c1t0d0 ONLINE 0 0 0
errors: No known data errors
pool: tank
state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
see: http://illumos.org/msg/ZFS-8000-9P
scan: none requested
config:
NAME STATE READ WRITE CKSUM
tank ONLINE 0 0 0
raidz1-0 ONLINE 0 0 0
c0t50014EE20BF0750Dd0 ONLINE 0 0 2
c0t50014EE263348A3Ed0 ONLINE 0 0 0
c0t50014EE2B69D2D68d0 ONLINE 0 0 0
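For peace of mind, you can then kick off a scrub and check that it completes without finding any new errors:
zpool scrub tank
zpool status tank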