Using an ephemeral MongoDB single node replicaset in a devcontainer or codespace

I love using devcontainers to manage my development environment. They make it super easy to ensure a consistent development stack is in place. Recently I started developing against a MongoDB instance. For Node.js, I use mongodb-unit to spin up a standalone server on the local client, but there’s no equivalent package for Python.

Although there are lots of posts on Stack Overflow about configuring a single node replicaset using a healthcheck, and there’s even an example given by a MongoDB employee, none of them worked for me. When setting up the server to use authentication, it also needs a keyfile, which has to be generated and secured in a specific way or you get this error:

BadValue: security.keyFile is required when authorization is enabled with replica sets

Without being able to authenticate, I was unable to create databases and collections; the username and password in MONGODB_INITDB_ROOT_USERNAME and MONGODB_INITDB_ROOT_PASSWORD didn’t get created automatically in the admin database:

{"t":{"$date":"2023-09-28T20:59:21.342+00:00"},"s":"I",  "c":"ACCESS",   "id":5286307, "ctx":"conn30","msg":"Failed to authenticate","attr":{"client":"127.0.0.1:33216","isSpeculative":true,"isClusterMember":false,"mechanism":"SCRAM-SHA-256","user":"root","db":"admin","error":"UserNotFound: Could not find user \"root\" for db \"admin\"","result":11,"metrics":{"conversation_duration":{"micros":221,"summary":{"0":{"step":1,"step_total":2,"duration_micros":206}}}},"extraInfo":{}}}

MongoDB has very clear, step-by-step instructions for setting up a replicaset, but they involve a lot of manual steps. So I decided to automate it with a bash script and trigger that script as a healthcheck. Here are the steps I followed:

My requirements are for an ephemeral database – that means the data is destroyed when the container is removed. To persist the data, you need to map the container folder /data/db to a local folder using a volume in the docker-compose file.

  1. Planning it out
  2. Create a script to initialize the replicaset and create the root user
  3. Create a Dockerfile to generate the replica keyfile and inject my initialisation script
  4. Create a docker-compose.yaml file to build my app landscape
  5. Create devcontainer.json to bring it all together
  6. Accessing from the local machine

Planning it out

Here’s what we’re building: four files, and at the end MongoDB will be running and we can connect to it from both our local machine and from inside the devcontainer.

<project workspace>
  |
  |-- .devcontainer
  |        |-- devcontainer.json
  |        |-- docker-compose.yaml
  |        |-- Dockerfile
  |        |-- mongodb_healthcheck.sh
  |
  |-- <other code and folders>

 

Create a script to initialize the replicaset and create the root user

The script is intended to be idempotent, i.e. you can run it several times, and it will only return 0 (success) once the replicaset is up and running and the username/password works:

[Start]
  |
[check_replica_set]
  |
  |--[Yes]--->[check_authentication]--->[Yes]--->[Exit 0]
  |                   |
  |                   |--[No]
  |                   |
  |               [create_root_user]
  |                   |
  |                   |--[Success]--->[Exit 0]
  |                   |
  |                   |--[Failure]--->[Exit 1]
  |
  |--[No]--->[initialize_replica_set]
                      |
                      |--[Already Initialized or Success]--->[Exit 1]
                      |
                      |--[Failure]--->[Exit 1]

Here’s the full script:

#!/bin/bash

# Function to check if MongoDB replica set is ready
check_replica_set() {
    echo "init_script: Checking MongoDB replica set..."
    local is_master_result=$(mongosh --quiet --eval 'rs.isMaster().ismaster')
    echo "Result of rs.isMaster().ismaster: $is_master_result"  # Display the result

    if echo "$is_master_result" | grep -q true; then
        echo "init_script: MongoDB replica set is ready."
        return 0
    else
        echo "init_script: MongoDB replica set NOT ready."
        return 1
    fi
}

# Function to initialize MongoDB replica set if necessary
initialize_replica_set() {
    echo "init_script: Starting to initialise replica set."
    local result=$(mongosh --quiet --eval 'rs.initiate()')
    echo "Result of rs.initiate(): $result"  # Display the result

    if [[ "$result" == *"already initialized"* || "$result" == *'ok: 1'* || "$result" == *'"ok" : 1'* ]]; then
        echo "init_script: MongoDB replica set is already initialized or initialized successfully."
        # Replica set initiated, but the root user doesn't exist yet, so report
        # not-ready; the next healthcheck run will create and verify the user.
        exit 1
    else
        echo "init_script: Failed to initialize MongoDB replica set."
        exit 1
    fi
}

check_authentication() {
    local auth_result=$(mongosh -u "$MONGODB_INITDB_ROOT_USERNAME" -p "$MONGODB_INITDB_ROOT_PASSWORD" --quiet --eval "db.runCommand({ ping: 1 })")
    echo "Result of authentication: $auth_result"  # Display the result

    if echo "$auth_result" | grep 'ok' | grep -q '1'; then
        echo "init_script: Authentication successful."
        return 0
    else
        echo "init_script: Authentication failed."
        return 1
    fi
}

# Function to create MongoDB root user
create_root_user() {
    echo "init_script: Creating MongoDB root user..."
    output=$(mongosh <<EOF
    admin = db.getSiblingDB("admin")
    result = admin.createUser(
      {
        user: "$MONGODB_INITDB_ROOT_USERNAME",
        pwd: "$MONGODB_INITDB_ROOT_PASSWORD",
        roles: [ { role: "root", db: "admin" } ]
      }
    )
    printjson(result)
EOF
    )
    echo "Result of createUser: $output"  # Display the result

    if echo "$output" | grep 'ok' | grep -q '1'; then
        echo "init_script: MongoDB root user created successfully."
        exit 0
    else
        echo "init_script: Failed to create admin user."
        exit 1
    fi
}

# Check if MongoDB replica set is ready and initialize if needed
if check_replica_set; then
    if check_authentication; then
        exit 0
    else
        create_root_user
    fi
else
    initialize_replica_set
fi

Create a Dockerfile to generate the replica keyfile and inject my initialisation script

I wanted to use the existing mongo image and make the minimum number of changes, so I created a simple Dockerfile which generates a new keyfile and copies in the init script. I then trigger the init script as a ‘healthcheck’, meaning Docker runs it every 10 seconds (ignoring failures during the 30 second start period) and only gives up after 100 consecutive failures – roughly 1,000 seconds (!!!):

FROM mongo

# Generate a keyfile for replica set member authentication; it must be readable
# only by the mongodb user (uid/gid 999 in the official image)
RUN openssl rand -base64 756 > "/tmp/replica.key"
RUN chmod 600 /tmp/replica.key
RUN chown 999:999 /tmp/replica.key

# Copy the health check script to the container
COPY mongodb_healthcheck.sh /usr/local/bin/

# Set execute permissions for the script
RUN chmod +x /usr/local/bin/mongodb_healthcheck.sh

# Define the health check command
HEALTHCHECK --interval=10s --timeout=5s --start-period=30s --retries=100 CMD /usr/local/bin/mongodb_healthcheck.sh

CMD ["mongod", "--replSet", "rs0", "--bind_ip_all", "--keyFile", "/tmp/replica.key", "--auth"]

Create a docker-compose.yaml file to build my app landscape

This is pretty simple. I wanted to use the Microsoft Python 3.11 devcontainer image and have it talk to MongoDB running in its own container. I set the root username and password here too, and created a network to allow the containers to talk to each other:

version: '3'
services:
  app:
    image: mcr.microsoft.com/devcontainers/python:3.11
    command: ["sleep", "infinity"]
    volumes:
      - ..:/workspace:cached
    ports:
      - "5000:5000"
    environment:
      - "PYTHONUNBUFFERED=1"
    networks:
      - mynetwork

  mongodb:
    build:
      context: .
      dockerfile: ./Dockerfile
    ports:
      - "27017:27017"
    environment:
      - MONGODB_INITDB_ROOT_USERNAME=root
      - MONGODB_INITDB_ROOT_PASSWORD=example
    hostname: mongodb
    networks:
      - mynetwork

networks:
  mynetwork:

Create devcontainer.json to bring it all together

devcontainer.json is an open standard. The only bit that tripped me up was the need to be explicit about which service exposes which port, which you do by prefixing the port with the service name (from the docker-compose.yaml file):

{
    "name": "Python 3.11 + MongoDB",
    "dockerComposeFile": "docker-compose.yml",
    "workspaceFolder": "/workspace",
    "service": "app",
    "features": {
        "ghcr.io/devcontainers/features/node:1": {
            "version": "latest"
        },
        "ghcr.io/devcontainers-contrib/features/poetry:2": {}
    },
    "forwardPorts": [
        "app:5000",
        "mongodb:27017"
    ],
    "customizations": {
        "vscode": {
            "settings": {
                "python.defaultInterpreterPath": "/usr/local/bin/python",
                "python.linting.pylintEnabled": false,
                "python.linting.flake8Enabled": true,
                "python.linting.enabled": true,
                "editor.detectIndentation": false,
                "editor.tabSize": 4
            },
            "extensions": [
                "ms-python.python",
                "ms-python.flake8",
                "ms-python.vscode-pylance",
                "VisualStudioExptTeam.vscodeintellicode",
                "njpwerner.autodocstring",
                "GitHub.copilot",
                "GitHub.copilot-chat",
                "GitHub.copilot-labs"
            ]
        }
    }
}

Accessing from the local machine

mongosh will now connect from the local machine or the devcontainer. Without a username/password you get quite limited access, but using mongosh -u root -p example it’ll connect just fine and you can administer the database using the account we created earlier. If you try to connect using MongoDB Compass, however, you’ll get this error:

getaddrinfo ENOTFOUND mongodb

This can be solved by adding directConnection=true to the connection string e.g.:

mongodb://root:example@127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000
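
The same idea works from Python code inside the devcontainer. Here’s a minimal sketch using pymongo (my assumption – use whichever driver you prefer); the mongodb hostname and the root/example credentials come from the docker-compose.yaml above:

from pymongo import MongoClient

# Inside the devcontainer the compose service name "mongodb" resolves over mynetwork;
# from the local machine, swap the host for 127.0.0.1 as in the Compass example above.
client = MongoClient(
    "mongodb://root:example@mongodb:27017/?directConnection=true&serverSelectionTimeoutMS=2000"
)

print(client.admin.command("ping"))  # should print {'ok': 1.0}
client["testdb"]["items"].insert_one({"hello": "world"})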

From RICE to ICE: which framework for your project?

I’ve previously explained the RICE and ICE techniques for prioritisation. Both are frameworks used to evaluate and rank projects or tasks based on their potential impact, feasibility, and difficulty. However, I wanted to highlight the two key differences between them to help you choose the right tool for your project.

The ICE technique (Impact, Confidence, Ease) assigns scores to each project based on the potential impact of the project, the level of confidence in its success, and the ease of implementing it. The scores for each factor are multiplied to get a final score, which is used to rank the projects in order of priority.

The RICE technique (Reach, Impact, Confidence, Effort) takes a similar approach, but adds an additional factor: Reach. Reach refers to the number of people or customers who would benefit from the project. Reach, Impact and Confidence are each scored and multiplied together, then divided by Effort to get a final score.

The main difference between the two techniques, therefore, is the inclusion of Reach, which makes RICE particularly useful for marketing campaigns or projects aimed at customer acquisition, i.e. where the breadth of impact is important.

Another difference is that the RICE technique places more emphasis on effort, which refers to the level of resources or time required to implement the project. This can help teams to prioritise projects that are feasible to implement given the available resources.

Technique | Factors                           | Calculation                            | Purpose
RICE      | Reach, Impact, Confidence, Effort | (Reach x Impact x Confidence) / Effort | Projects with potential to reach a large audience or that require significant resources to implement
ICE       | Impact, Confidence, Ease          | Impact x Confidence x Ease             | Smaller projects or tasks that can be implemented more easily
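
If it helps to see the two calculations side by side, they are tiny functions – here’s a sketch in Python, using the scales described in these posts:

def ice_score(impact, confidence, ease):
    # Each factor is scored out of 10; multiply them together.
    return impact * confidence * ease

def rice_score(reach, impact, confidence, effort):
    # Reach is a count (e.g. users per month), Confidence a fraction (0-1),
    # Effort in person-hours or similar; Effort divides rather than multiplies.
    return (reach * impact * confidence) / effort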

I hope this helps!

Ice, Ice Baby: Chill Out and Prioritise with the ICE Technique

Yesterday, I talked about the RICE technique for prioritisation. Today, I want to introduce the ICE technique, another prioritisation framework used to evaluate and prioritise tasks or projects based on three factors: Impact, Confidence, and Ease. Tomorrow, I’ll compare them both.

  • Impact refers to the potential positive outcome or benefit of completing a particular task or project, considering the potential impact of the task or project on the overall goals or objectives of the organisation or project. For example, is this going to reduce costs? Increase customer loyalty or satisfaction? Reduce developer frustration?
  • Confidence refers to the level of certainty or confidence that the task or project will be successful – factors such as available resources, expertise, and potential roadblocks or obstacles. Are we likely to be able to deliver?
  • Ease refers to the level of difficulty or complexity of completing the task or project, taking account of things like the level of effort required, the time needed, the necessary skills needed, or difficulty obtaining or using resources. Perhaps the project isn’t that hard – but we simply don’t have a developer with the right skills to implement it, or perhaps we can’t support it/keep it running over time.

To use the ICE technique, each item is assigned a score out of 10 for each factor, and the scores are then multiplied together to calculate a final score for each task or project. The higher the final score, the higher we should prioritise completing that item.

This creates a simple yet effective framework which allows us to compare the total potential impact, feasibility, and difficulty. For example, you might use this to prioritise potential new product ideas for a tech startup:

Idea 1: A new mobile app that helps people track their daily water intake and reminds them to stay hydrated throughout the day

  • Impact: 8 – there is a growing awareness of the importance of staying hydrated
  • Confidence: 7 – the team has some experience building mobile apps but this one would require some new features
  • Ease: 8 – the basic features can be implemented quickly

Idea 2: A new software tool that automates social media marketing for small businesses, allowing them to create, schedule and publish posts on multiple platforms with ease

  • Impact: 9 – social media marketing is critical for small businesses but can be time-consuming
  • Confidence: 9 – the team has expertise in social media marketing and has built similar tools in the past
  • Ease: 6 – integrating with multiple social media platforms and providing advanced features will take time and resources

Idea 3: A new AI-powered chatbot that can assist customers with basic support queries, reducing the load on the support team

  • Impact: 7 – many companies are looking for ways to reduce support costs and improve customer satisfaction
  • Confidence: 8 – the team has some experience with chatbot development and has access to AI libraries
  • Ease: 7 – developing the chatbot and integrating it with the company’s support systems will require some time and effort

Using the ICE technique, we would multiply the scores for each idea to get a final score:

Idea 1: 8 x 7 x 8 = 448
Idea 2: 9 x 9 x 6 = 486
Idea 3: 7 x 8 x 7 = 392
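
If you’d rather script the ranking than do it by hand, here’s a small Python sketch using the scores above:

# ICE score = Impact x Confidence x Ease, ranked highest first
ideas = {
    "Idea 1 - water intake app": (8, 7, 8),  # (Impact, Confidence, Ease)
    "Idea 2 - social media marketing tool": (9, 9, 6),
    "Idea 3 - support chatbot": (7, 8, 7),
}

scores = {name: i * c * e for name, (i, c, e) in ideas.items()}
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(name, score)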

Based on these scores, we would prioritise the ideas in the following order:

  1. Idea 2 – social media marketing (486)
  2. Idea 1 – app to track daily water intake (448)
  3. Idea 3 – customer support chatbot (392)

So – our potential startup should probably focus on an app to help small businesses with their social media, then track water intake, and finally a chatbot. This doesn’t take account of the fact that there are already 10,000,000 apps for tracking water intake and I’m not sure how to make money on them, or that social media marketing is a field littered with failed apps.

You want RICE with that?

Imagine that you are a product manager at a software company, and you have three potential features to prioritise for the next development cycle. How do you pick between them? There are many ways, but one I recently learned about is the RICE model – a prioritisation framework used by product managers, teams, and organisations to prioritise projects, features, or tasks based on their potential impact, effort, and other factors. RICE stands for Reach, Impact, Confidence, and Effort, and it provides a quantitative approach to decision-making.

  1. Reach: Reach refers to the number of users, customers, or stakeholders who will be affected by the project or feature over a specific period (e.g., a month or a quarter). It is essential to estimate the reach to understand how many people will benefit from the implementation.
  2. Impact: Impact measures the potential benefit or positive effect that the project, feature, or task will have on users, customers, or stakeholders. Impact is usually measured on a scale, such as 1 (minimal impact) to 3 (significant impact), but the scale can be adjusted to suit the organisation’s needs.
  3. Confidence: Confidence is an estimate of how certain the team is about the reach, impact, and effort assessments. This factor is crucial because it accounts for the inherent uncertainty in making predictions. Confidence is expressed as a percentage, typically ranging from 50% to 100%.
  4. Effort: Effort is an estimate of the amount of time, resources, or work needed to complete the project, feature, or task. Effort can be measured in person-hours, person-days, or any other metric that reflects the resources required to complete the work.

To use the RICE model, you assign values to each of the four factors (Reach, Impact, Confidence, and Effort) for every project, feature, or task under consideration. Then, calculate the RICE score using the following formula:

RICE score = (Reach * Impact * Confidence) / Effort

Projects or features with the highest RICE scores should be prioritised over those with lower scores. This method helps ensure that the team is working on the most valuable and impactful initiatives, while also taking into account the resources and level of certainty associated with each project.

For example:

Feature A: Improve the onboarding process for new users

  • Reach: 1000 users per month
  • Impact: 3 (high impact, as it can significantly improve user retention)
  • Confidence: 90% (high confidence in estimates and potential outcome)
  • Effort: 200 person-hours

Feature B: Implement a dark mode theme

  • Reach: 300 users per month
  • Impact: 2 (moderate impact, as it enhances user experience)
  • Confidence: 80% (fairly confident in the estimates)
  • Effort: 100 person-hours

Feature C: Optimise backend performance

  • Reach: 500 users per month
  • Impact: 1 (low impact, as most users won’t notice the difference)
  • Confidence: 70% (uncertain about the exact impact and effort)
  • Effort: 150 person-hours

Now calculate the RICE scores for each feature:

Feature A RICE score = (1000 * 3 * 0.9) / 200 = 13.5
Feature B RICE score = (300 * 2 * 0.8) / 100 = 4.8
Feature C RICE score = (500 * 1 * 0.7) / 150 = 2.33
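
The same calculation in Python, if you want to sanity-check the arithmetic or rank a longer list of features – a sketch using the numbers above:

# RICE score = (Reach x Impact x Confidence) / Effort, ranked highest first
features = {
    "Feature A - onboarding": (1000, 3, 0.9, 200),  # (Reach, Impact, Confidence, Effort)
    "Feature B - dark mode": (300, 2, 0.8, 100),
    "Feature C - backend performance": (500, 1, 0.7, 150),
}

scores = {name: (r * i * c) / e for name, (r, i, c, e) in features.items()}
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(name, round(score, 2))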

Based on the RICE scores, the priority order for these features should be:

  1. Feature A: Improve the onboarding process for new users (13.5)
  2. Feature B: Implement a dark mode theme (4.8)
  3. Feature C: Optimise backend performance (2.33)

Using the RICE model, you can see that Feature A should be the top priority, as it has the highest potential impact on users with a reasonable amount of effort.

Tomorrow, I’ll explain the ICE technique.

Are you A senior developer, or THE lead developer?

In our world, we organise in Pods – an autonomous group of 6-9 people with all the skills needed to solve a problem. Multiple Pods form a Team. Within a Pod, there can be multiple Senior Developers, but only a single Lead Developer. They have different and overlapping responsibilities and accountabilities.

Every project must have exactly one Lead Developer, and one or more Senior Developers.

It is the accountability of the project or product manager to ensure that these roles exist in a team, and that the roles are filled with skilled team members able and willing to fulfil the role.

A Senior Developer

Every project must have at least one Senior Developer who has:

  • high competence in the core technologies used in the project
  • reasonable competence in all technologies used in the project
  • a willingness to learn, and a habit of taking action to generate opportunities for learning
  • an understanding of, and the ability to explain, high level architectural principles
  • the ability to generate implementable steps through the selection of appropriate patterns.

They are responsible for the following activities:

  • performing code and design reviews against industry and company best practice, defined implementation plans etc.
  • demonstrating “technical common sense” to ensure the team is producing clean, supportable, sustainable products
  • embodying best practice around software engineering and software delivery, including testing, automation, deployment, etc. For example, what technical standards are to be followed? How will we handle branching? Code reviews?
  • actively participating in the creation of detailed designs
  • helping the wider organisation through activities like lunch and learns, pattern generation, training etc.
  • actively mentoring less experienced developers, typically spending 10-30% of their time on this alone.

You are probably a “Senior Developer” if team members keep asking you how to do things. We have the same expectations of Staff and Contractor Senior Developers, including that they spend significant time coaching and developing others.

The Lead Developer

In our projects, we expect the most senior developer to take on the role of “Lead Developer”. This role entails more leadership activities – the Lead Developer is accountable for:

  • Being a “Senior Developer”, i.e. the Lead Developer also does all of the things that a Senior Developer does
  • Ensuring that the Developers and Senior Developers are fulfilling their responsibilities
  • Creating the detailed technical designs necessary for implementation
  • Working closely with architecture to ensure continuity and coherence between detailed technical designs and the high level solution and reference architecture
  • Assisting with planning and scoping of work, including helping design the delivery team
  • Assisting with interviewing
  • Meeting with senior management (IT or commercial) to ensure proper understanding of the project, delivery, etc. on both sides
  • Leading the development team and creating clarity of vision, design and expectations

The Lead Developer role leaves less time for actually writing code – in some weeks you might spend 30% or less of your time on it, depending on the stage of the work.

Names change, and so do email addresses

We’ve recently been rolling out a new internal application. At our organisation, users have an email address which is generally firstname.lastname@company.com, or something like that. When a user logs in to the application, the app will look them up using their email address and figure out what parts of the application the user should be able to use.

The problem

One day we got a ticket for a user who was adamant that they had access but when we looked in the application, we couldn’t even find them in the system! Probing a bit further, it turns out that they had recently changed their name, and as a result, their email had changed.

There are lots of common misconceptions around names, including that names never change. But people change their names for a variety of reasons:

  • In the UK and US, many people choose to change their last name when they get married, or, having changed it, may change it back if they subsequently divorce.
  • People who identify as trans or non-binary may choose to change their name to better reflect their gender identity.
  • People may choose to select or return to a name which they feel better reflects their cultural identity
  • They may not like their name and want a better one

When we designed the system, we didn’t think of this. My email address hasn’t changed for years. But in retrospect, it’s so obvious that we should have.

Inclusive design

Inclusive design means thinking about all of our users, even ones who don’t yet exist or that we have yet to identify. Inclusive design makes the experience better for everyone, and it takes almost no effort or cost. Even if you don’t think you’ll benefit, you never know who might.

The Two Email Rule: When to Escalate from Email to Real Conversation

When I think back to the deepest of the many deep holes I’ve dug myself into over the years, they almost all start with an email.

When working through my inbox, it’s all too easy to just bash out a reply and hit send. Usually that’s fine – a quick email is all it takes, and the issue is closed. But sometimes that email triggers a reply, and that reply another, and it’s hard to predict when, but eventually I’m having a complex conversation about a complex issue, it all goes wrong, and before I know it we’re at loggerheads and 37 people on the CC list think I’m a jerk.

Email is a low bandwidth communication channel. And this means that it’s hard to get across what we mean without misunderstanding. In an email, no one can tell if you’re smiling. You can’t acknowledge a change of mind, or moderate your language when you detect frustration on the other end.

And that’s why I have the “two email rule”. Whenever I find myself reading or writing a third email on a topic, I know I need to escalate to a higher-bandwidth channel – send an IM, call them, book a meeting. Obviously, in distributed teams, or during lockdown, this is hard. But it’s necessary.

So if I suddenly stop responding to your email, it’s not because I don’t care, it’s because I want to properly discuss this with you.

How to: LetsEncrypt in standalone mode for Unifi on Ubuntu 20.04 LTS

This is an update of my previous post, now that cert-manager is more mature and I’ve rebuilt my server on Ubuntu 20.04 (from 18.04).

  1. Install certbot
  2. Install script to update unifi certificate
  3. Test
  4. Issue full certificate
  5. Install cron jobs to automate renewal

Install certbot

Certbot installation instructions are available online, of course, but here’s a summary:

  1. Update package list:
    sudo apt update
  2. Install:
    sudo apt install -y certbot

Create a new certificate using LetsEncrypt

We’re going to use standalone mode, and first we’ll get a test certificate just to validate that everything’s working (so that we don’t trigger LetsEncrypt’s rate limits).

  1. Open port 80 in ufw:
sudo ufw allow http
  2. Test certificate issuance:
sudo certbot certonly --standalone -d <hostname> -n --test-cert --agree-tos -m <email>

You should see something like this:

Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator standalone, Installer None
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for <hostname>
Waiting for verification...
Cleaning up challenges

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at:
   /etc/letsencrypt/live/<hostname>/fullchain.pem
   Your key file has been saved at:
   /etc/letsencrypt/live/<hostname>/privkey.pem
   Your cert will expire on 2021-04-08. To obtain a new or tweaked
   version of this certificate in the future, simply run certbot
   again. To non-interactively renew *all* of your certificates, run
   "certbot renew"
 - Your account credentials have been saved in your Certbot
   configuration directory at /etc/letsencrypt. You should make a
   secure backup of this folder now. This configuration directory will
   also contain certificates and private keys obtained by Certbot so
   making regular backups of this folder is ideal.
  3. If that’s worked, close the firewall (sudo ufw deny http) and move on to the next step and install the certificate in unifi. Later, we’ll come back and get a ‘real’ (not staging) certificate.

Install certificate in unifi

I use an amazing certificate installation script from Steve Jenkins.

  1. Get the script:
wget https://raw.githubusercontent.com/stevejenkins/unifi-linux-utils/master/unifi_ssl_import.sh
  2. Edit the config settings in the script to add hostname, switch from Fedora/RedHat/CentOS to Debian/Ubuntu, enable LE_MODE, and disable key paths:
# CONFIGURATION OPTIONS
UNIFI_HOSTNAME=<hostname>   
UNIFI_SERVICE=unifi

# Uncomment following three lines for Fedora/RedHat/CentOS
# UNIFI_DIR=/opt/UniFi
# JAVA_DIR=${UNIFI_DIR}
# KEYSTORE=${UNIFI_DIR}/data/keystore

# Uncomment following three lines for Debian/Ubuntu
UNIFI_DIR=/var/lib/unifi
JAVA_DIR=/usr/lib/unifi
KEYSTORE=${UNIFI_DIR}/keystore

# Uncomment following three lines for CloudKey
#UNIFI_DIR=/var/lib/unifi
#JAVA_DIR=/usr/lib/unifi
#KEYSTORE=${JAVA_DIR}/data/keystore

# FOR LET'S ENCRYPT SSL CERTIFICATES ONLY
# Generate your Let's Encrtypt key & cert with certbot before running this script
LE_MODE=yes
LE_LIVE_DIR=/etc/letsencrypt/live

# THE FOLLOWING OPTIONS NOT REQUIRED IF LE_MODE IS ENABLED
# PRIV_KEY=/etc/ssl/private/hostname.example.com.key
# SIGNED_CRT=/etc/ssl/certs/hostname.example.com.crt
# CHAIN_FILE=/etc/ssl/certs/startssl-chain.crt
  3. Copy to /usr/local/bin and make executable:
sudo cp unifi_ssl_import.sh /usr/local/bin/
sudo chmod +x /usr/local/bin/unifi_ssl_import.sh
  4. Run the script to import the certificate. Look for any errors:
sudo /usr/local/bin/unifi_ssl_import.sh
  5. Navigate to your server (https://<hostname>:8443). If it worked, you’ll see a warning that the certificate isn’t trusted, but when you examine the cert, it’s issued by a ‘fake’ Lets Encrypt issuer, for example:
Certificate showing a chain back to a root called 'Fake LE Intermediate X1'

Get the real LetsEncrypt certificate

Simply run the same certbot command as before, but leave off the --test-cert flag, and add the --force-renew flag to force it to replace the (unexpired) test certificate:

sudo certbot certonly --standalone -d <hostname> -n --force-renew --agree-tos -m <email>

and rerun the installation script:

sudo /usr/local/bin/unifi_ssl_import.sh

Close the browser window and reopen it, then navigate to your server again. You should now see the valid certificate:

A trusted certificate chain for the host

Automate renewal and issuance

Set up a crontab to renew the cert. Pick a randomish time. It should run every day – if the certificate is still valid, it’ll just skip renewal.

  1. Load crontab – you may be asked to pick an editor – I suggest nano:
    sudo crontab -e
  2. Add the schedule – use crontab guru if you aren’t familiar with crontab schedule expressions, and set up tasks to:
    1. request a new certificate, and
    2. install the updated certificate. I chose a time just over an hour after certificate issue.

It should look like this:

# renew any certificates due to expire soon at 05:20 each day
20 5 * * * /usr/bin/certbot renew --standalone -n --agree-tos -m <email> --pre-hook 'ufw allow http' --post-hook 'ufw deny http'
# install any updated certificates at 06:29 each day
29 6 * * * /usr/local/bin/unifi_ssl_import.sh

The --pre-hook and --post-hook commands tell UFW to open up port 80 and then close it again afterwards.

Can you do that new job?

Generally when evaluating someone for a role, I look for 5 things:

  1. Behaviours – how do they operate in a team? Do they admit to mistakes and learn from them? Do they help others? Communicate and live to their personal values? Are those values ones I want people in the team to live to?
  2. Accountability – can this person handle the magnitude of the role? Are they able to manage stakeholders of the right level of seniority?
  3. Domain – how deep is their knowledge of this business, industry, sector etc.? And how deep does it need to be?
  4. Function – what is their level of skill in this type of role? For example, if hiring a business analyst, how good a business analyst are they?
  5. Organisation – perhaps summarised as “knowing how things are done around here” – processes, culture etc. – does this person have the knowledge to make things happen?

Obviously, number 1 is a given – no one wants a brilliant jerk on the team. But most people have some of each of the others. The question is whether it’s enough to set them up for success in the new role. Usually, I’d expect someone to have strength in one or two of the others, and one or at most two which give headroom to grow, because:

  • No headroom in role = boring job
  • Too many development areas = set up for failure

When a candidate is moving roles internally, they probably have number 5. So a step up to greater accountability, or moving to an entirely new business domain (if the company is big enough) might represent a solid plan. Doing both at once is probably too much for most people.

External candidates probably don’t possess organisational knowledge, so we should assume that’s a growth area. And that means they need to be fairly strong in two of the other areas. In my experience, people usually move company to step up. So I would expect external candidates to have strong domain knowledge and functional skill.

Fix: AccessToKeyVaultDenied when using KeyVault with Function App application setting

Following the instructions on the MS website, I established a KeyVault reference, placed it in my App Settings, set up a Managed Service Identity, and granted that identity access to my KeyVault key. Next, wishing to follow Microsoft’s advice, I secured the KeyVault behind a firewall, making sure I checked the Allow trusted Microsoft services to bypass this firewall? setting. However, I was still receiving an AccessToKeyVaultDenied error:

Screen shot showing that System assigned managed identity is receiving the AccessToKeyVaultDenied error with the explanation 'Key Vault reference was not able to be resolved because site was denied access to Key Vault reference's vault.'

I even checked, and yes, App Service is supposed to be able to bypass the firewall – so what was going on? Well, the KeyVault resolver reference page has this text:

Key Vault references are not presently able to resolve secrets stored in a key vault with network restrictions.

That didn’t seem to apply when I first read it – after all, there’s an explicit setting to bypass the firewall. But when I disabled the network firewall (allowing access from all networks), everything suddenly worked, and the key status showed as Resolved with a nice green tick:

Screen shot showing that the KeyVault key status is "Resolved"