Navigating the risks of artificial intelligence and machine learning in low-income countries

How do you ensure your technically interesting project is truly a force for good, rather than one that further entrenches existing biases, stereotypes, and social problems? This great set of rules, drawn from experience working on AI solutions in low-income countries, can help regardless of where you're working:
1. Ask who's not at the table – are you truly inclusive?
2. Let others check your work – fairness is subjective
3. Doubt your data – does your data suffer from collection bias?
4. Respect context – a model developed in one context may fail in others
5. Automate with care – take baby steps, don't take people out of the loop too soon
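To make rule 3 concrete, here is a minimal sketch of one way to probe for collection bias: compare each group's share in your collected sample against its share in the target population and flag large deviations. The function name `representation_gaps`, the groups, and all figures are hypothetical, purely for illustration.

```python
from collections import Counter

def representation_gaps(sample_groups, population_shares, tolerance=0.05):
    """Flag groups whose share of the collected data deviates from their
    share of the target population by more than `tolerance`.
    Illustrative check only; real audits need far more care."""
    counts = Counter(sample_groups)
    total = sum(counts.values())
    gaps = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = round(observed - expected, 3)
    return gaps

# Hypothetical example: survey data collected via smartphone app,
# under-sampling rural respondents relative to the population.
sample = ["urban"] * 80 + ["rural"] * 20
population = {"urban": 0.55, "rural": 0.45}
print(representation_gaps(sample, population))
# → {'urban': 0.25, 'rural': -0.25}
```

A check like this won't prove a dataset is fair, but a large gap is a clear signal to go back and ask how, and from whom, the data was collected.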
On a recent work trip, I found myself in a swanky-but-still-hip office of a private tech firm. I was drinking a freshly frothed cappuccino, eyeing a mini-fridge stocked with local beer and standing…