The Multiplier Effect: How The Aggregation of Marginal Delays Derails Projects

The Aggregation of Marginal Gains is an improvement model attributed to Dave Brailsford. When he took over as the head of British Cycling in 2002, the team was near the bottom of the rankings. Brailsford’s approach was to look at their processes in tiny detail, and improve each part by 1%, the logic being that these would all add up to a significant improvement. He talks about shipping mattresses between hotels so that athletes get a better night’s sleep, or fastidiously cleaning the team truck to prevent dust and debris undermining the fine tuning of the bikes. And it was a success – in the 2008 and 2012 Olympics, the team won 70% of the gold medals for track cycling.

I want to propose a contrasting notion – the Aggregation of Marginal Delays – the slow accumulation of tiny lags and delays in a project that add up to a significant slip in delivery performance. These delays are often so small and (at the time) inconsequential that team members just brush them off. Perhaps you need to get approval from three people with busy calendars – it might take you a few days to get in their diaries. Frustrating – but we’re all busy, right? Maybe you need to request something from another team which takes half a day to be released to you. Annoying – but the other team’s process is clearly laid out on their website – didn’t you read it? That person you needed to get advice from has taken the afternoon off to watch their kid’s nativity play. Who’d begrudge them that?

But these delays, each one small and explainable, add up – both quantitatively and culturally. None of them is worth escalating – by the time you get it in front of someone who could change things, the delay is in the past. But the cumulative impact of a few hours here, half a day there, across dozens of events, across months of work, is significant. And it sets the tone for how things are done – we, as an organisation, start to feel that it’s an acceptable state of affairs – as I said, all of the causes of the delays are reasonable; none are malicious or the result of incompetence. And because of that, when there are delays which could be avoided, they often aren’t.

I’m afraid I don’t have a silver bullet here. We’ve tried lots of things to make the impact of these delays visible, but none have worked. In most cases, the cure is worse than the disease, creating massive overhead. Here’s what we’ve tried:

  • Immediate escalation – but this made people nervous, and often there was nothing to be done – are we really going to summon that team member back from their child’s school play to answer our question?
  • Flagging potential bottlenecks up front – although it did help somewhat to remind team members to consider when they might have to book things in advance, too much forward planning is somewhat wasteful, and hinders the agility of the team.
  • Pushing accountability down to teams – we already do this as much as we can. But the culture in our part of the organisation, which wants to move fast, is at odds with central IT services providing IT to the wider company, which needs to deliver a secure and reliable service which meets everyone’s needs. So we can’t make all the decisions.
  • Capturing data on areas with consistent delays – we tried to use this to systematically improve the processes in those areas, but often the teams which own them weren’t interested in change. They felt our proposed ‘improvements’ would introduce too much risk. And it was quite burdensome to track every request too.

So what can you do? In short, treat it like any other Continuous Improvement exercise – gather data, plan, execute, review.

  • Introduce a system to track delays – partial data is better than none.
  • Hold other teams to their SLAs, and use your data to demonstrate this.
  • Run a “Delay Spotlight” in your team meetings, where team members can raise frustrating delays, and teams can brainstorm improvements. Focus pilots on areas you can control, rather than trying to change central teams.

The Aggregation of Marginal Delays is an inherent challenge in large, complex organisations, but one that we can chip away at through collective analysis, data, communication and a mindset of continuous improvement. Remember that Marginal Gains accumulate too.

FIX: The key(s) in the keyring /etc/apt/trusted.gpg.d/???.gpg are ignored as the file is not readable by user ‘_apt’ executing apt-key

When running apt-get update I was seeing these errors:

W: http://security.ubuntu.com/ubuntu/dists/jammy-security/InRelease: The key(s) in the keyring /etc/apt/trusted.gpg.d/postgres.gpg are ignored as the file is not readable by user '_apt' executing apt-key.

I was getting this error after migrating keys from the old, legacy store to the shiny new one.

A quick inspection shows that the new keys have different permissions from the existing ones:

rob@localhost:~$ ls -alh /etc/apt/trusted.gpg.d/
total 40K
drwxr-xr-x 2 root root 4.0K Oct 13 14:00 .
drwxr-xr-x 8 root root 4.0K Oct 13 13:54 ..
-rw-r--r-- 1 root root 1.2K Sep  6 05:46 akamai-ubuntu-launchpad-ubuntu-ppa.gpg
-rw-r----- 1 root root 3.5K Oct 13 14:00 postgres.gpg
-rw-r----- 1 root root 2.8K Oct 13 14:00 timescaledb.gpg
-rw-r--r-- 1 root root 2.8K Mar 26  2021 ubuntu-keyring-2012-cdimage.gpg
-rw-r--r-- 1 root root 1.7K Mar 26  2021 ubuntu-keyring-2018-archive.gpg
-rw-r--r-- 1 root root 2.3K Oct 13 08:59 ubuntu-pro-cis.gpg
-rw-r--r-- 1 root root 2.2K Oct 13 08:57 ubuntu-pro-esm-apps.gpg
-rw-r--r-- 1 root root 2.3K Oct 13 08:57 ubuntu-pro-esm-infra.gpg

The fix is pretty simple. Pick one of the pre-existing GPG keys, and copy its permissions to all the other keys in the folder. In my case, I chose the ubuntu-pro-cis.gpg key, but you can pick any that doesn’t report the permissions error. Pass it as a --reference argument to the chmod command:

rob@localhost:~$ sudo chmod --reference=/etc/apt/trusted.gpg.d/ubuntu-pro-cis.gpg /etc/apt/trusted.gpg.d/*.gpg
rob@localhost:~$ ls -alh /etc/apt/trusted.gpg.d/
total 40K
drwxr-xr-x 2 root root 4.0K Oct 13 14:00 .
drwxr-xr-x 8 root root 4.0K Oct 13 13:54 ..
-rw-r--r-- 1 root root 1.2K Sep  6 05:46 akamai-ubuntu-launchpad-ubuntu-ppa.gpg
-rw-r--r-- 1 root root 3.5K Oct 13 14:00 postgres.gpg
-rw-r--r-- 1 root root 2.8K Oct 13 14:00 timescaledb.gpg
-rw-r--r-- 1 root root 2.8K Mar 26  2021 ubuntu-keyring-2012-cdimage.gpg
-rw-r--r-- 1 root root 1.7K Mar 26  2021 ubuntu-keyring-2018-archive.gpg
-rw-r--r-- 1 root root 2.3K Oct 13 08:59 ubuntu-pro-cis.gpg
-rw-r--r-- 1 root root 2.2K Oct 13 08:57 ubuntu-pro-esm-apps.gpg
-rw-r--r-- 1 root root 2.3K Oct 13 08:57 ubuntu-pro-esm-infra.gpg

Problem solved!

FIX: Key is stored in legacy trusted.gpg keyring

While running apt-get update I was seeing errors:

W: https://packagecloud.io/timescale/timescaledb/ubuntu/dists/jammy/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.

Although the warning is annoying, it doesn’t stop things updating. I understand the reasons why the legacy keyring is being removed.

Migrate existing keys to the new keyring

First, list the keys:

sudo apt-key list

In my case, I’ve got two – one for PostgreSQL and one for timescaledb. You will probably see a bunch of extra keys here too.

rob@localhost:~$ sudo apt-key list
Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
/etc/apt/trusted.gpg
--------------------
pub   rsa4096 2011-10-13 [SC]
      B97B 0AFC AA1A 47F0 44F2  44A0 7FCC 7D46 ACCC 4CF8
uid           [ unknown] PostgreSQL Debian Repository

pub   rsa4096 2018-10-19 [SCEA]
      1005 FB68 604C E9B8 F687  9CF7 59F1 8EDF 47F2 4417
uid           [ unknown] https://packagecloud.io/timescale/timescaledb (https://packagecloud.io/docs#gpg_signing) <support@packagecloud.io>
sub   rsa4096 2018-10-19 [SEA]

Export each key, identifying it by the last 8 characters of its fingerprint. Because I have two keys to export, I did this twice, giving each key a unique filename under /etc/apt/trusted.gpg.d/:

rob@localhost:~$ sudo apt-key export ACCC4CF8 | sudo gpg --dearmour -o /etc/apt/trusted.gpg.d/postgres.gpg
Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
rob@localhost:~$ sudo apt-key export 47F24417 | sudo gpg --dearmour -o /etc/apt/trusted.gpg.d/timescaledb.gpg
Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).

Bingo – package updates now work! But if they don’t, you might see this error:

W: http://security.ubuntu.com/ubuntu/dists/jammy-security/InRelease: The key(s) in the keyring /etc/apt/trusted.gpg.d/postgres.gpg are ignored as the file is not readable by user '_apt' executing apt-key.

In which case, check out this other post explaining how to solve it.

Using an ephemeral MongoDB single node replicaset in a devcontainer or codespace

I love using devcontainers to manage my development environment. They make it super easy to ensure a consistent development stack is in place. Recently I started developing against a MongoDB instance. For Node.js, I use mongodb-unit to spin up a standalone server on the local client. But there’s no equivalent package for Python.

Although there are lots of posts on Stack Overflow about configuring a single-node replica set using a healthcheck, and there’s even an example given by a MongoDB employee, they didn’t work for me. When setting up the server to use authentication, it also needs a keyfile, which has to be generated and secured in a specific way or you get this error:

BadValue: security.keyFile is required when authorization is enabled with replica sets

With authentication enabled, I was unable to create databases and collections, because the username and password in MONGODB_INITDB_ROOT_USERNAME and MONGODB_INITDB_ROOT_PASSWORD didn’t get created automatically in the admin database:

{"t":{"$date":"2023-09-28T20:59:21.342+00:00"},"s":"I",  "c":"ACCESS",   "id":5286307, "ctx":"conn30","msg":"Failed to authenticate","attr":{"client":"127.0.0.1:33216","isSpeculative":true,"isClusterMember":false,"mechanism":"SCRAM-SHA-256","user":"root","db":"admin","error":"UserNotFound: Could not find user \"root\" for db \"admin\"","result":11,"metrics":{"conversation_duration":{"micros":221,"summary":{"0":{"step":1,"step_total":2,"duration_micros":206}}}},"extraInfo":{}}}

MongoDB has clear, step-by-step instructions for setting up a replica set, but they require a lot of manual steps. So I decided to automate them with a bash script, triggered as a healthcheck. Here are the steps I followed:

My requirements are for an ephemeral database – that means the data is destroyed when the container is removed. To persist the data, you need to map the container folder /data/db to a local folder using a volume in the docker-compose file.
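For reference, persisting the data only needs a small addition to the mongodb service in the docker-compose.yaml file – something along these lines, where the ./mongo-data host folder name is just an illustrative choice:

```yaml
services:
  mongodb:
    volumes:
      - ./mongo-data:/data/db   # keep MongoDB's data files on the host between container rebuilds
```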

  1. Planning it out
  2. Create a script to initialize the replicaset and create the root user
  3. Create a Dockerfile to generate the replica keyfile and inject my initialisation script
  4. Create a docker-compose.yaml file to build my app landscape
  5. Create devcontainer.json to bring it all together
  6. Accessing from the local machine

Planning it out

Here’s what we’re building: four files, and at the end, MongoDB will be running and we can connect to it from both our local machine and from inside the devcontainer.

<project workspace>
  |
  |-- .devcontainer
  |        |-- devcontainer.json
  |        |-- docker-compose.yaml
  |        |-- Dockerfile
  |        |-- mongodb_healthcheck.sh
  |
  |-- <other code and folders>


Create a script to initialize the replicaset and create the root user

The script is intended to be idempotent, i.e. you can run it several times, and it will only return 0 (success) once the replica set is up and running and the username/password works:

[Start]
  |
[check_replica_set]
  |
  |--[Yes]--->[check_authentication]--->[Yes]--->[Exit 0]
  |                   |
  |                   |--[No]
  |                   |
  |               [create_root_user]
  |                   |
  |                   |--[Success]--->[Exit 1]
  |                   |
  |                   |--[Failure]--->[Exit 1]
  |
  |--[No]--->[initialize_replica_set]
                      |
                      |--[Already Initialized or Success]--->[Exit 1]
                      |
                      |--[Failure]--->[Exit 1]

Here’s the full script:

#!/bin/bash

# Function to check if MongoDB replica set is ready
check_replica_set() {
    echo "init_script: Checking MongoDB replica set..."
    local is_master_result=$(mongosh --quiet --eval 'rs.isMaster().ismaster')
    echo "Result of rs.isMaster().ismaster: $is_master_result"  # Display the result

    if echo "$is_master_result" | grep -q true; then
        echo "init_script: MongoDB replica set is ready."
        return 0
    else
        echo "init_script: MongoDB replica set NOT ready."
        return 1
    fi
}

# Function to initialize MongoDB replica set if necessary
initialize_replica_set() {
    echo "init_script: Starting to initialise replica set."
    local result=$(mongosh --quiet --eval 'rs.initiate()')
    echo "Result of rs.initiate(): $result"  # Display the result

    if [[ "$result" == *"already initialized"* || "$result" == *'ok: 1'* || "$result" == *'"ok" : 1'* ]]; then
        echo "init_script: MongoDB replica set initiated (or already initialized) – waiting for it to become ready."
        exit 1  # not healthy yet; the next health check confirms the replica set is up
    else
        echo "init_script: Failed to initialize MongoDB replica set."
        exit 1
    fi
}

check_authentication() {
    local auth_result=$(mongosh -u "$MONGODB_INITDB_ROOT_USERNAME" -p "$MONGODB_INITDB_ROOT_PASSWORD" --quiet --eval "db.runCommand({ ping: 1 })")
    echo "Result of authentication: $auth_result"  # Display the result

    if echo "$auth_result" | grep 'ok' | grep -q '1'; then
        echo "init_script: Authentication successful."
        return 0
    else
        echo "init_script: Authentication failed."
        return 1
    fi
}

# Function to create MongoDB root user
create_root_user() {
    echo "init_script: Creating MongoDB root user..."
    output=$(mongosh <<EOF
    admin = db.getSiblingDB("admin")
    result = admin.createUser(
      {
        user: "$MONGODB_INITDB_ROOT_USERNAME",
        pwd: "$MONGODB_INITDB_ROOT_PASSWORD",
        roles: [ { role: "root", db: "admin" } ]
      }
    )
    printjson(result)
EOF
    )
    echo "Result of createUser: $output"  # Display the result

    if echo "$output" | grep 'ok' | grep -q '1'; then
        echo "init_script: MongoDB root user created – the next health check will verify authentication."
        exit 1  # not healthy yet; authentication is confirmed on the next run, matching the flowchart above
    else
        echo "init_script: Failed to create admin user."
        exit 1
    fi
}

# Check if MongoDB replica set is ready and initialize if needed
if check_replica_set; then
    if check_authentication; then
        exit 0
    else
        create_root_user
    fi
else
    initialize_replica_set
fi

Create a Dockerfile to generate the replica keyfile and inject my initialisation script

I wanted to use the existing mongo image for this, and perform the minimum number of changes. So I created a simple Dockerfile which creates a new keyfile and puts the init script in place. I then trigger the init script as a ‘healthcheck’, meaning it’ll get automatically run by Docker after 30 seconds, and then at 10-second intervals, for up to 100 retries – around 1,000 seconds (!!!):

FROM mongo

# Initiate replica set
RUN openssl rand -base64 756 > "/tmp/replica.key"
RUN chmod 600 /tmp/replica.key
RUN chown 999:999 /tmp/replica.key

# Copy the health check script to the container
COPY mongodb_healthcheck.sh /usr/local/bin/

# Set execute permissions for the script
RUN chmod +x /usr/local/bin/mongodb_healthcheck.sh

# Define the health check command
HEALTHCHECK --interval=10s --timeout=5s --start-period=30s --retries=100 CMD /usr/local/bin/mongodb_healthcheck.sh

CMD ["mongod", "--replSet", "rs0", "--bind_ip_all", "--keyFile", "/tmp/replica.key", "--auth"]

Create a docker-compose.yaml file to build my app landscape

This is pretty simple. I wanted to use the Microsoft Python 3.11 devcontainer image, and have it access Mongo in my MongoDB container. I set the root username and password here too. I created a network to allow the containers to talk to each other:

version: '3'
services:
  app:
    image: mcr.microsoft.com/devcontainers/python:3.11
    command: ["sleep", "infinity"]
    volumes:
      - ..:/workspace:cached
    ports:
      - "5000:5000"
    environment:
      - "PYTHONUNBUFFERED=1"
    networks:
      - mynetwork

  mongodb:
    build:
      dockerfile: ./Dockerfile
    ports:
      - "27017:27017"
    environment:
      - MONGODB_INITDB_ROOT_USERNAME=root
      - MONGODB_INITDB_ROOT_PASSWORD=example
    hostname: mongodb
    networks:
      - mynetwork

networks:
  mynetwork:

Create devcontainer.json to bring it all together

devcontainer.json is an open standard. The only bit that tripped me up here was the need to be explicit about which service was exposing which port, which you do by adding the service name (from the docker-compose.yaml file) in front of it:

{
    "name": "Python 3.11 + MongoDB",
    "dockerComposeFile": "docker-compose.yaml",
    "workspaceFolder": "/workspace",
    "service": "app",
    "features": {
        "ghcr.io/devcontainers/features/node:1": {
            "version": "latest"
        },
        "ghcr.io/devcontainers-contrib/features/poetry:2": {}
    },
    "forwardPorts": [
        "app:5000",
        "mongodb:27017"
    ],
    "customizations": {
        "vscode": {
            "settings": {
                "python.defaultInterpreterPath": "/usr/local/bin/python",
                "python.linting.pylintEnabled": false,
                "python.linting.flake8Enabled": true,
                "python.linting.enabled": true,
                "editor.detectIndentation": false,
                "editor.tabSize": 4
            },
            "extensions": [
                "ms-python.python",
                "ms-python.flake8",
                "ms-python.vscode-pylance",
                "VisualStudioExptTeam.vscodeintellicode",
                "njpwerner.autodocstring",
                "GitHub.copilot",
                "GitHub.copilot-chat",
                "GitHub.copilot-labs"
            ]
        }
    }
}

Accessing from the local machine

mongosh will now connect from the local machine or the devcontainer. Without a username/password you get quite limited access, but using mongosh -u root -p example it’ll connect just fine and you can administer the database using the account we created earlier. If you try to connect using MongoDB Compass, however, you’ll get this error:

getaddrinfo ENOTFOUND mongodb

This can be solved by adding directConnection=true to the connection string e.g.:

mongodb://root:example@127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000

From RICE to ICE: which framework for your project?

I’ve previously explained the RICE and ICE techniques for prioritisation. Both are frameworks used to evaluate and rank projects or tasks based on their potential impact, feasibility, and difficulty. However, I wanted to highlight the two key differences between them to help you choose the right tool for your project.

The ICE technique (Impact, Confidence, Ease) assigns scores to each project based on the potential impact of the project, the level of confidence in its success, and the ease of implementing it. The scores for each factor are multiplied to get a final score, which is used to rank the projects in order of priority.

The RICE technique (Reach, Impact, Confidence, Effort) takes a similar approach, but adds an additional factor: Reach. Reach refers to the number of people or customers who would benefit from the project. Each project is assigned a score for each factor, and the final score is calculated as (Reach × Impact × Confidence) / Effort.

The main difference between the two techniques, therefore, is the inclusion of Reach, which makes RICE particularly useful for marketing campaigns or projects aimed at customer acquisition, i.e. where the breadth of impact is important.

Another difference is that the RICE technique places more emphasis on Effort – the level of resources or time required to implement the project – by dividing by it rather than multiplying. This helps teams prioritise projects that are feasible to implement given the available resources.

Technique | Factors | Calculation | Purpose
RICE | Reach, Impact, Confidence, Effort | (Reach × Impact × Confidence) / Effort | Projects with the potential to reach a large audience, or that require significant resources to implement
ICE | Impact, Confidence, Ease | Impact × Confidence × Ease | Smaller projects or tasks that can be implemented more easily

I hope this helps!

Ice, Ice Baby: Chill Out and Prioritise with the ICE Technique

Yesterday, I talked about the RICE technique for prioritisation. Today, I want to introduce the ICE technique, another prioritisation framework used to evaluate and prioritise tasks or projects based on three factors: Impact, Confidence, and Ease. Tomorrow, I’ll compare them both.

  • Impact refers to the potential positive outcome or benefit of completing a particular task or project, considering the potential impact of the task or project on the overall goals or objectives of the organisation or project. For example, is this going to reduce costs? Increase customer loyalty or satisfaction? Reduce developer frustration?
  • Confidence refers to the level of certainty or confidence that the task or project will be successful – factors such as available resources, expertise, and potential roadblocks or obstacles. Are we likely to be able to deliver?
  • Ease refers to the level of difficulty or complexity of completing the task or project, taking account of things like the level of effort required, the time needed, the necessary skills needed, or difficulty obtaining or using resources. Perhaps the project isn’t that hard – but we simply don’t have a developer with the right skills to implement it, or perhaps we can’t support it/keep it running over time.

To use the ICE technique, each item is assigned a score out of 10 for each factor, and the scores are then multiplied together to calculate a final score for each task or project. The higher the final score, the higher that item’s priority.

This creates a simple yet effective framework which allows us to compare the total potential impact, feasibility, and difficulty. For example, you might use this to prioritise potential new product ideas for a tech startup:

Idea 1 – a new mobile app that helps people track their daily water intake and reminds them to stay hydrated throughout the day
  • Impact: 8 – there is a growing awareness of the importance of staying hydrated
  • Confidence: 7 – the team has some experience building mobile apps but this one would require some new features
  • Ease: 8 – the basic features can be implemented quickly

Idea 2 – a new software tool that automates social media marketing for small businesses, allowing them to create, schedule and publish posts on multiple platforms with ease
  • Impact: 9 – social media marketing is critical for small businesses but can be time-consuming
  • Confidence: 9 – the team has expertise in social media marketing and has built similar tools in the past
  • Ease: 6 – integrating with multiple social media platforms and providing advanced features will take time and resources

Idea 3 – a new AI-powered chatbot that can assist customers with basic support queries, reducing the load on the support team
  • Impact: 7 – many companies are looking for ways to reduce support costs and improve customer satisfaction
  • Confidence: 8 – the team has some experience with chatbot development and has access to AI libraries
  • Ease: 7 – developing the chatbot and integrating it with the company’s support systems will require some time and effort

Using the ICE technique, we would multiply the scores for each idea to get a final score:

Idea 1: 8 × 7 × 8 = 448
Idea 2: 9 × 9 × 6 = 486
Idea 3: 7 × 8 × 7 = 392

Based on these scores, we would prioritise the ideas in the following order:

  1. Idea 2 – social media marketing (486)
  2. Idea 1 – app to track daily water intake (448)
  3. Idea 3 – customer support chatbot (392)
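The scoring and ranking above can be sketched in a few lines of Python (the idea names are just my shorthand for the table above):

```python
# ICE: multiply Impact, Confidence and Ease, then rank highest first.
ideas = {
    "social media tool": (9, 9, 6),
    "water intake app": (8, 7, 8),
    "support chatbot": (7, 8, 7),
}

def ice_score(impact, confidence, ease):
    return impact * confidence * ease

ranked = sorted(ideas.items(), key=lambda kv: ice_score(*kv[1]), reverse=True)
for name, scores in ranked:
    print(name, ice_score(*scores))
# social media tool 486
# water intake app 448
# support chatbot 392
```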

So – our potential startup should probably focus on an app to help small businesses with their social media, then the water intake tracker, and finally the chatbot. This doesn’t take account of the fact that there are already 10,000,000 apps for tracking water intake and I’m not sure how to make money on them, or that social media marketing is a field littered with failed apps.

You want RICE with that?

Imagine that you are a product manager at a software company, and you have three potential features to prioritise for the next development cycle. How do you pick between them? There are many ways, but one I recently learned about is the RICE model – a prioritisation framework used by product managers, teams, and organisations to prioritise projects, features, or tasks based on their potential impact, effort, and other factors. RICE stands for Reach, Impact, Confidence, and Effort, and it provides a quantitative approach to decision-making.

  1. Reach: Reach refers to the number of users, customers, or stakeholders who will be affected by the project or feature over a specific period (e.g., a month or a quarter). It is essential to estimate the reach to understand how many people will benefit from the implementation.
  2. Impact: Impact measures the potential benefit or positive effect that the project, feature, or task will have on users, customers, or stakeholders. Impact is usually measured on a scale, such as 1 (minimal impact) to 3 (significant impact), but the scale can be adjusted to suit the organisation’s needs.
  3. Confidence: Confidence is an estimate of how certain the team is about the reach, impact, and effort assessments. This factor is crucial because it accounts for the inherent uncertainty in making predictions. Confidence is expressed as a percentage, typically ranging from 50% to 100%.
  4. Effort: Effort is an estimate of the amount of time, resources, or work needed to complete the project, feature, or task. Effort can be measured in person-hours, person-days, or any other metric that reflects the resources required to complete the work.

To use the RICE model, you assign values to each of the four factors (Reach, Impact, Confidence, and Effort) for every project, feature, or task under consideration. Then, calculate the RICE score using the following formula:

RICE score = (Reach * Impact * Confidence) / Effort

Projects or features with the highest RICE scores should be prioritised over those with lower scores. This method helps ensure that the team is working on the most valuable and impactful initiatives, while also taking into account the resources and level of certainty associated with each project.

For example:

Feature A: Improve the onboarding process for new users

  • Reach: 1000 users per month
  • Impact: 3 (high impact, as it can significantly improve user retention)
  • Confidence: 90% (high confidence in estimates and potential outcome)
  • Effort: 200 person-hours

Feature B: Implement a dark mode theme

  • Reach: 300 users per month
  • Impact: 2 (moderate impact, as it enhances user experience)
  • Confidence: 80% (fairly confident in the estimates)
  • Effort: 100 person-hours

Feature C: Optimise backend performance

  • Reach: 500 users per month
  • Impact: 1 (low impact, as most users won’t notice the difference)
  • Confidence: 70% (uncertain about the exact impact and effort)
  • Effort: 150 person-hours

Now calculate the RICE scores for each feature:

Feature A RICE score = (1000 × 3 × 0.9) / 200 = 13.5
Feature B RICE score = (300 × 2 × 0.8) / 100 = 4.8
Feature C RICE score = (500 × 1 × 0.7) / 150 ≈ 2.33

Based on the RICE scores, the priority order for these features should be:

  1. Feature A: Improve the onboarding process for new users (13.5)
  2. Feature B: Implement a dark mode theme (4.8)
  3. Feature C: Optimise backend performance (2.33)

Using the RICE model, you can see that Feature A should be the top priority, as it has the highest potential impact on users with a reasonable amount of effort.
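If you have more than a handful of features, it’s easier to let a short script do the arithmetic. Here’s a sketch of the same calculation in Python (feature names shortened for readability):

```python
# RICE: (Reach * Impact * Confidence) / Effort, ranked highest first.
# Confidence is expressed as a fraction (90% -> 0.9), Effort in person-hours.
features = {
    "onboarding": (1000, 3, 0.9, 200),
    "dark mode": (300, 2, 0.8, 100),
    "backend perf": (500, 1, 0.7, 150),
}

def rice_score(reach, impact, confidence, effort):
    return (reach * impact * confidence) / effort

for name, factors in sorted(features.items(), key=lambda kv: rice_score(*kv[1]), reverse=True):
    print(f"{name}: {rice_score(*factors):.2f}")
# onboarding: 13.50
# dark mode: 4.80
# backend perf: 2.33
```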

Tomorrow, I’ll explain the ICE technique.

Are you A senior developer, or THE lead developer

In our world, we organise in Pods – an autonomous group of 6-9 people with all the skills needed to solve a problem. Multiple Pods form a Team. Within a Pod, there can be multiple Senior Developers, but only a single Lead Developer. They have different and overlapping responsibilities and accountabilities.

Every project must have exactly one Lead Developer, and one or more Senior Developers.

It is the accountability of the project or product manager to ensure that these roles exist in a team, and that the roles are filled with skilled team members able and willing to fulfil the role.

A Senior Developer

Every project must have at least one Senior Developer who has:

  • high competence in the core technologies used in the project
  • reasonable competence in all technologies used in the project
  • a willingness to learn, and takes action to generate opportunities for learning
  • an understanding and ability to explain high level architectural principles
  • the ability to generate implementable steps through the selection of appropriate patterns.

They are responsible for the following activities:

  • performing code and design reviews against industry and company best practice, defined implementation plans, etc.
  • demonstrating “technical common sense” to ensure the team is producing clean, supportable, sustainable products
  • embodying best practice around software engineering and software delivery, including testing, automation, deployment, etc. For example: what technical standards are to be followed? How will we handle branching? Code reviews?
  • actively participating in the creation of detailed designs
  • helping the wider organisation through activities like lunch and learns, pattern generation, training, etc.
  • actively mentoring less experienced developers, typically spending 10–30% of their time on this alone.

You are probably a “Senior Developer” if team members keep asking you how to do things. We have the same expectations of Staff and Contractor Senior Developers, including that they spend significant time coaching and developing others.

The Lead Developer

In our projects, we expect the most senior developer to take on the role of “Lead Developer”. This role entails more leadership activities – the Lead Developer is accountable for:

  • Being a “Senior Developer”, i.e. the Lead Developer also does all of the things that a Senior Developer does.
  • Ensuring that the Developers and Senior Developers are fulfilling their responsibilities.
  • Creating the detailed technical designs necessary for implementation.
  • Working closely with architecture to ensure continuity and coherence between detailed technical designs and the high-level solution and reference architecture.
  • Assisting with planning and scoping of work, including helping design the delivery team.
  • Assisting with interviewing.
  • Meeting with senior management (IT or commercial) to ensure proper understanding of the project, delivery, etc. on both sides.
  • Leading the development team and creating clarity of vision, design and expectations.

The Lead Developer role leaves less time for actually writing code – in some weeks, you might spend 30% or less of your time coding, depending on the stage of the work.

Names change, and so do email addresses

We’ve recently been rolling out a new internal application. At our organisation, users have an email address which is generally firstname.lastname@company.com, or something like that. When a user logs in to the application, the app will look them up using their email address and figure out what parts of the application the user should be able to use.

The problem

One day we got a ticket for a user who was adamant that they had access but when we looked in the application, we couldn’t even find them in the system! Probing a bit further, it turns out that they had recently changed their name, and as a result, their email had changed.

There are lots of common misconceptions around names, including that names never change. But people change their names for a variety of reasons:

  • In the UK and US, many people choose to change their last name when they get married, or, having changed it, may change it back after a divorce.
  • People who identify as trans or non-binary may choose to change their name to better reflect their gender identity.
  • People may choose to select or return to a name which they feel better reflects their cultural identity
  • They may not like their name and want a better one

When we designed the system, we didn’t think of this. My email address hasn’t changed for years. But in retrospect, it’s so obvious that we should have.
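In hindsight, one way to design for this – a hypothetical sketch rather than how our system actually works – is to key users on an immutable internal ID and treat the email address as a mutable attribute that can be re-pointed:

```python
# Hypothetical sketch: key users on an immutable ID, not their email address.
# When a name (and therefore email) changes, only the lookup entry moves;
# the user's ID, permissions and history are untouched.
users = {}           # immutable user_id -> user record
email_index = {}     # current email -> user_id

def add_user(user_id, email, roles):
    users[user_id] = {"email": email, "roles": roles}
    email_index[email] = user_id

def change_email(old_email, new_email):
    user_id = email_index.pop(old_email)  # retire the old address
    users[user_id]["email"] = new_email
    email_index[new_email] = user_id

def lookup(email):
    return users.get(email_index.get(email))

add_user("u123", "alex.smith@company.com", ["reports.read"])
change_email("alex.smith@company.com", "alex.jones@company.com")
assert lookup("alex.jones@company.com")["roles"] == ["reports.read"]
assert lookup("alex.smith@company.com") is None
```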

Inclusive design

Inclusive design means thinking about all of our users, even ones who don’t yet exist or that we have yet to identify. Inclusive design makes the experience better for everyone, and it takes almost no effort or cost. Even if you don’t think you’ll benefit, you never know who might.

The Two Email Rule: When to Escalate from Email to Real Conversation

When I think back to the deepest of the many deep holes I’ve dug myself into over the years, they almost all start with an email.

When working through my inbox, it’s all too easy to just bash out a reply and hit send. Usually, that’s fine – a quick email is all it takes, and the issue is closed. But sometimes, that email triggers a reply, and that reply another, and it’s hard to predict when, but eventually I’m having a complex conversation about a complex issue, it all goes wrong, and before I know it we’re at loggerheads and 37 people on the CC list think I’m a jerk.

Email is a low bandwidth communication channel. And this means that it’s hard to get across what we mean without misunderstanding. In an email, no one can tell if you’re smiling. You can’t acknowledge a change of mind, or moderate your language when you detect frustration on the other end.

And that’s why I have the “two email rule”. Whenever I find myself reading or writing a third email on a topic, I know I need to escalate to a higher-bandwidth channel – send an IM, call them, book a meeting. Obviously, in distributed teams, or during lockdown, this is hard. But it’s necessary.

So if I suddenly stop responding to your email, it’s not because I don’t care, it’s because I want to properly discuss this with you.