All four of the big players' personal assistants (Apple's Siri, Google Assistant, Amazon's Alexa and Microsoft's Cortana) started off female, although they now offer male voices. LivePerson CEO Robert LoCascio "believes the male-dominated AI industry brings its own unconscious bias to the decision of what gender to make a virtual assistant". Are the tech giants reflecting biases already present in society?
Siri, Alexa and Cortana all started out as female. Now a group of marketing executives, tech experts and academics are trying to make virtual assistants more egalitarian.
There's a massive ethical problem here: people expecting medical notes, receipts with personal data, or their emails to be "read" only by a machine might not have given that consent had it been clear a human would read that they ordered takeaway for two to their hotel room while on that business trip without their partner. But when I read the Guardian's article on "fake AI", I have to say I wasn't surprised. It reminded me of Andrew Mason's interview on how he started Groupon (https://rob.al/2mhzT9h) – the big question was how the business should work, and building technology which might not be usable later was a waste.
Using what one expert calls a ‘Wizard of Oz technique’, some companies keep their reliance on humans a secret from investors
First, deepfakes swapped our faces (https://rob.al/2LjX7GM); now a US company is developing the technology to recreate voices. The therapeutic uses are clear – there are dozens of situations which can lead to a person losing their voice (https://rob.al/2motUjj) – and having a computer sound like me as well as speak my words would clearly help maintain a sense of identity. But the potential for malicious use is equally clear – the BBC had to find a reporter's twin to fool HSBC's voice ID system (https://rob.al/2LmLYFn). With this technology, all you need is a few clips scraped from Facebook or recorded in secret.
It probably sings better than you, too.
TNW has a brief summary of the ways that machine learning is being used to improve authentication and authorization, with a rundown of a number of approaches used by different companies.
To some, the future of authentication might look a little creepy. But the explosion of data and connectivity will provide plenty of ways for AI algorithms to distinguish between imposters and real…
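One family of approaches that comes up in these rundowns is behavioural biometrics, e.g. keystroke dynamics: model a user's typing rhythm, then flag attempts that deviate from it. A minimal sketch of the idea – the features, data and threshold here are invented for illustration, not taken from any company's system:

```python
from statistics import mean, stdev

def enroll(samples):
    """Build a per-feature profile (mean, stdev) from a user's
    keystroke-timing samples, e.g. inter-key intervals in ms."""
    features = list(zip(*samples))
    return [(mean(f), stdev(f)) for f in features]

def z_distance(profile, attempt):
    """Average absolute z-score of the attempt against the profile."""
    return mean(abs(x - m) / s for (m, s), x in zip(profile, attempt))

def authenticate(profile, attempt, threshold=2.0):
    """Accept the attempt if it is statistically close to the profile.
    The threshold trades off false accepts against false rejects."""
    return z_distance(profile, attempt) < threshold

# Illustrative data: four enrollment samples of three timing features.
user_samples = [[120, 95, 210], [118, 99, 205], [125, 92, 215], [122, 96, 208]]
profile = enroll(user_samples)
print(authenticate(profile, [121, 94, 209]))  # genuine-looking attempt
print(authenticate(profile, [300, 40, 90]))   # imposter-looking attempt
```

Real systems combine many such signals (typing, mouse movement, device posture) and learn the decision boundary rather than hand-setting a threshold, but the accept/reject structure is the same.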
The search for "generalisation" in AI is somewhat hindered by an inability to test for it, so a recent paper by Google's DeepMind team provides an interesting insight into the thought process of teams pursuing this goal. The team generated a number of tests containing patterns with abstract relationships between elements within a pattern, and between sets of patterns. Within the sets, specific elements are missing, and the researchers found that pattern-completion performance was strongly correlated with core model performance. Whether this provides a way to test models remains to be seen and is the subject of further work.
In a new paper, researchers at Google subsidiary DeepMind tested the ability of machine learning models to reason abstractly, like humans.
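The tests resemble Raven's Progressive Matrices: a grid of cells governed by a hidden relation, with one cell removed. A toy sketch of the generate-and-complete setup, assuming a single numeric "progression" relation (the paper's generator covers many more relation types over visual attributes):

```python
import random

def make_progression_panel(start, step):
    """A 3x3 panel whose cell values increase by `step` in reading order."""
    return [[start + (3 * r + c) * step for c in range(3)] for r in range(3)]

def make_puzzle(seed=None):
    """Generate a panel, hide the last cell, and offer answer choices."""
    rng = random.Random(seed)
    start, step = rng.randint(0, 5), rng.randint(1, 4)
    panel = make_progression_panel(start, step)
    answer = panel[2][2]
    panel[2][2] = None  # the element the model must complete
    # Distractors: plausible but wrong completions.
    choices = sorted({answer, answer + step, answer - step, answer + 1})
    return panel, choices, answer

def solve_by_progression(panel):
    """A 'model' that has abstracted the progression relation: infer the
    step from the first row and extrapolate to the missing cell."""
    step = panel[0][1] - panel[0][0]
    return panel[2][1] + step

panel, choices, answer = make_puzzle(seed=42)
print(solve_by_progression(panel) == answer)  # True
```

Generalisation is then measured by training on some relations or attributes and testing on held-out ones – completion accuracy on the unseen combinations is the signal the paper correlates with core model performance.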
While an interesting use case for sure, I'm not sure I would pay $1,000 for a dustbin unless it automatically managed the inventory in my kitchen cupboards for me…
Autonomous, an ergonomic office furniture company, announced a Kickstarter campaign for Oscar, a smart home appliance. The AI-based device sorts recyclables and garbage. Environmentally-conscious…
In a very interesting and wide-ranging talk on the history of AI (including a very early mechanical perceptron), Carlos Guestrin outlined four trends he sees in the future:
1. shift from parallelism (e.g. Spark) to HPC (e.g. deep learning on massive GPU clusters) workloads, driving insights from huge (rather than "massive") data
2. a return to specialized hardware (e.g. custom chips, FPGAs etc. for HPC) and the engineering work needed to support it (e.g. AutoTVM)
3. commodification of deep learning tools to reduce barriers of adoption (e.g. pretrained models, higher level libraries)
4. ensuring that models are inclusive by making models interpretable
The rise of machine learning has been one of the most exciting developments in modern technology, but according to Carlos Guestrin, one of the oldest principles of computing still applies: garbage in…
In a couple of short but interesting videos, researchers at Honda show how they're designing robots to tolerate physical disturbances, such as knocks that might topple the robot and damage it or its surroundings, by hopping or "running" (several fast steps), depending on the direction of the push. The researchers indicate that future developments might include the ability to switch from vertical to horizontal movement – i.e. if you push Asimo too hard, it'll drop to the floor and crawl away.
Honda is teaching its robots to take longer and faster steps to recover from shoves by transitioning to a running gait, which is exactly what humans do if we need to
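The control idea – picking a recovery behaviour that scales with the disturbance – can be caricatured as a simple decision rule. A toy sketch; the thresholds and the state variable are invented for illustration, and Honda's real controller is of course a full dynamic balance system, not a lookup:

```python
def recovery_strategy(push_speed, can_run=True):
    """Pick a recovery behaviour from the disturbance magnitude.

    push_speed: estimated centre-of-mass velocity after the push, in m/s.
    Thresholds are illustrative, not Honda's.
    """
    if push_speed < 0.2:
        return "ankle"   # small push: absorb with ankle/hip torque
    if push_speed < 0.6:
        return "step"    # medium push: one or two quick recovery steps
    if can_run:
        return "run"     # large push: transition to a running gait
    return "fall-and-crawl"  # the "future work" case from the article

print(recovery_strategy(0.1))  # ankle
print(recovery_strategy(0.9))  # run
```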
I still believe that we have to change the way we're consuming the Earth's resources if we want to leave our children anything like the planet we inherited from our parents. But products like this need to work on the marketing. Who wants to eat algae? 🙂
Algae could be the environmentally-friendly superfood we’ve all been waiting for. But will anyone actually eat it?
It's interesting to see how Apple and Facebook are approaching the problem of "fake news" differently. Facebook is sticking to its "algorithmic" approach, which I guess would be far more scalable if (and it's a big if) it can be demonstrated to work. Apple is employing more people to review the news presented through the hardware/software giant's own apps, teaching us that:
1. the scale of the problem facing Facebook is simply orders of magnitude larger (Apple only curates content from a chosen set of sources, not any Joe Schmoe writing a blog), and
2. Apple's convinced it can avoid human bias, while Facebook believes it can code it out.
Apple News launched a human-curated elections section, sparking a debate about tech companies and automation.