Notes from the field

Navigating the risks of artificial intelligence and machine learning in low-income countries

How do you ensure your technically interesting project is truly a force for good, rather than one that further entrenches existing biases, stereotypes, and social problems? This great set of rules, based on experience working on AI solutions in low-income countries, can help regardless of where you're working:
1. Ask who's not at the table – are you truly inclusive?
2. Let others check your work – fairness is subjective
3. Doubt your data – does it suffer from collection bias? (see the sketch below)
4. Respect context – a model developed in one context may fail in others
5. Automate with care – take baby steps, don't take people out of the loop too soon
https://rob.al/2syMCYa
On a recent work trip, I found myself in a swanky-but-still-hip office of a private tech firm. I was drinking a freshly frothed cappuccino, eyeing a mini-fridge stocked with local beer and standing…
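
Rule 3 is the most directly testable of the five. As a purely illustrative aside (not from the linked article), here is a minimal sketch of one such check: comparing each group's share of a training sample against a known population baseline. All group names and baseline figures below are hypothetical.

```python
# A minimal sketch of one check for collection bias (rule 3): compare each
# group's share of the training sample against a known population baseline.
# All group names and baseline figures here are hypothetical.
from collections import Counter

def representation_gaps(sample_groups, population_shares):
    """Return sample share minus population share for each group."""
    counts = Counter(sample_groups)
    total = len(sample_groups)
    return {
        group: counts.get(group, 0) / total - baseline
        for group, baseline in population_shares.items()
    }

# Hypothetical survey data skewed toward urban respondents.
sample = ["urban"] * 800 + ["rural"] * 200
baselines = {"urban": 0.35, "rural": 0.65}

for group, gap in representation_gaps(sample, baselines).items():
    print(f"{group}: {gap:+.2f}")  # urban: +0.45, rural: -0.45
```

Gaps that large are exactly the kind of signal rules 2 and 3 ask you to surface before anyone trusts a model trained on the data.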

2018-05-31

linkedin cross-post



The standard disclaimer…

The views, thoughts, and opinions expressed in the text belong solely to me, and not necessarily to my employer, organization, committee, or any other group I belong to or am associated with.

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
© 2023 Rob Aleck, licensed under CC BY-NC 4.0