I'm not sure what to make of this report. I remember reading about the history of "online dating" and how much of it is basically smoke and mirrors: if people believe "the computer" paired them for a reason, their dates are more likely to be successful (https://rob.al/2L03Fyg). So does AI really help?
It’s betting that machine learning can find a mutual match.
A recent experiment by Facebook pitted humans against AI to see which was better at helping another bot navigate a (virtual) walk around an area of Hell's Kitchen in New York. The bot had to describe its location using natural language ("I can see the bank on the corner"). The AI guide scored only 50% in this mode, but when the bots used "symbols" to communicate instead, it beat humans 87.08% to 76.74% – the two AIs were able to communicate far more efficiently than humans could. As a test of their new "MASC (Masked Attention for Spatial Convolution)" model, it was a success.
Virtual guides help a ‘lost’ AI find its way.
When prescribing multiple drugs, doctors often have very little information with which to judge side effects or drug interactions, and it can take years for a drug combination's side effects to be identified, since their discovery is usually down to chance. Stanford University researchers trained a system on over 19,000 proteins and their drug interactions; it was able to successfully predict the side effects of drug combinations from the prescribed pairings alone.
When a doctor prescribes a patient more than one drug they have no way to predict whether that combination will have an adverse side effect. A new system from Stanford University presents a novel…
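As a toy illustration of the underlying idea – inferring something about a drug pair from the proteins each drug targets – one could score pairs by how much their target sets overlap. This is a much simpler stand-in for the Stanford model, with invented drug and protein names:

```python
# Toy stand-in: score drug pairs by the overlap of their protein-target
# sets (Jaccard similarity). A high overlap is a crude flag that the
# combination deserves closer scrutiny. All names and data are made up.
drug_targets = {
    "drug_a": {"P1", "P2", "P3"},
    "drug_b": {"P2", "P3", "P4"},
    "drug_c": {"P7", "P8"},
}

def overlap_score(d1, d2):
    """Jaccard similarity of the two drugs' protein-target sets."""
    t1, t2 = drug_targets[d1], drug_targets[d2]
    return len(t1 & t2) / len(t1 | t2)

print(overlap_score("drug_a", "drug_b"))  # -> 0.5
print(overlap_score("drug_a", "drug_c"))  # -> 0.0
```

The real system learns far richer relationships than set overlap, of course, but the input is the same shape: drugs linked to the proteins they act on.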
There's a nice breakdown of the usability and scaling challenges the Google Photos team worked through in the redesign of their app: creating a "scrubbable" infinite-scrolling page that maximises screen real estate while preserving photo aspect ratios, with instant loading and rendering for libraries of 250,000 photos or more. The compromises and engineering challenges they encountered are laid out with clear explanations. An interesting read.
A peek under the hood
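The core layout problem – packing photos into rows that exactly fill the container width while preserving each photo's aspect ratio – can be sketched roughly like this. This is an illustrative greedy algorithm, not the Google Photos team's actual code:

```python
def justify_rows(aspect_ratios, container_width, target_height):
    """Greedily pack photos into rows, then scale each full row so it
    exactly fills the container width, preserving aspect ratios."""
    rows, current = [], []
    for ar in aspect_ratios:
        current.append(ar)
        # Width of the row if every photo were at the target height.
        natural_width = sum(a * target_height for a in current)
        if natural_width >= container_width:
            # Shrink the whole row uniformly so it fits exactly.
            row_height = target_height * container_width / natural_width
            rows.append([(a * row_height, row_height) for a in current])
            current = []
    if current:  # last, possibly short, row keeps the target height
        rows.append([(a * target_height, target_height) for a in current])
    return rows

# Example: four photos of mixed aspect ratios in a 1000px container.
for row in justify_rows([1.5, 0.75, 1.33, 1.78], 1000, 400):
    print([round(w) for w, h in row])
```

A production version also has to cope with virtualised rendering and placeholder sizing before images load, which is where most of the article's complexity lives.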
Although it was first to market, Siri is so awful that I hardly ever use it. But with the reorganisation bringing Core ML and Siri under the same part of the company within Apple, and with a new "chief of machine learning and AI strategy", perhaps we'll see some improvement.
John Giannandrea is tasked with educating Apple’s assistant
All four of the "big players'" personal assistants – Apple's Siri, Google Assistant, Amazon's Alexa and Microsoft's Cortana – started off female (although they now have male voices). LivePerson CEO Robert LoCascio "believes the male-dominated AI industry brings its own unconscious bias to the decision of what gender to make a virtual assistant". Are the tech giants reflecting biases already present in society?
Siri, Alexa and Cortana all started out as female. Now a group of marketing executives, tech experts and academics are trying to make virtual assistants more egalitarian.
There's a massive ethical problem here: people expecting their medical notes, receipts with personal data, or emails to be "read" only by a machine might not have given consent had it been clear a human would learn they ordered takeaway for two to their hotel room while on that business trip without their partner. But when I read the Guardian's article on "fake AI", I have to say I wasn't surprised. It reminded me of Andrew Mason's interview on how he started Groupon (https://rob.al/2mhzT9h) – the big question was how the business should work, and building technology that might not be usable later would have been a waste.
Using what one expert calls a ‘Wizard of Oz technique’, some companies keep their reliance on humans a secret from investors
First, deepfakes swapped our faces (https://rob.al/2LjX7GM); now a US company is developing the technology to recreate voices. The therapeutic uses are clear – there are dozens of situations that can lead to a person losing their voice (https://rob.al/2motUjj) – and having a computer that sounds like me as well as speaks my words would clearly help maintain a sense of identity. But the potential for malicious use is equally clear: the BBC had to find a reporter's twin to fool HSBC's voice ID system (https://rob.al/2LmLYFn), whereas with this, all you need is a few clips scraped from Facebook or recorded in secret.
It probably sings better than you, too.
TNW has a brief summary of the ways that machine learning is being used to improve authentication and authorization, with a rundown of a number of approaches used by different companies.
To some, the future of authentication might look a little creepy. But the explosion of data and connectivity will provide plenty of ways for AI algorithms to distinguish between imposters and real…
The search for "generalisation" in AI is somewhat hindered by our inability to test for it, so a recent paper by Google's DeepMind team provides an interesting insight into the thinking of teams pursuing this goal. The team generated a number of tests containing patterns with abstract relationships between elements within a pattern, and between sets of patterns. Specific elements within the sets are missing, and the researchers found that pattern-completion performance was strongly correlated with core model performance. Whether this provides a general way to test models remains to be seen and is the subject of further work.
In a new paper, researchers at Google subsidiary DeepMind tested the ability of machine learning models to reason abstractly, like humans.
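A toy version of such a test – a sequence with one element masked out, where solving it requires inferring the abstract relation rather than memorising values – might look like the following. This is illustrative only; DeepMind's actual tasks use visual matrices in the style of Raven's Progressive Matrices:

```python
import random

def make_progression_test(rng, length=5):
    """Generate an arithmetic progression with one element masked out.
    The 'abstract relation' to be inferred is the common difference."""
    start, step = rng.randint(0, 9), rng.randint(1, 5)
    seq = [start + i * step for i in range(length)]
    missing = rng.randrange(length)
    answer = seq[missing]
    seq[missing] = None
    return seq, answer

def solve(seq):
    """Recover the common difference from two known elements,
    then fill in the masked position."""
    vals = [(i, v) for i, v in enumerate(seq) if v is not None]
    (i1, v1), (i2, v2) = vals[0], vals[1]
    step = (v2 - v1) // (i2 - i1)
    i_missing = seq.index(None)
    return v1 + (i_missing - i1) * step

rng = random.Random(0)
seq, answer = make_progression_test(rng)
print(seq, "->", solve(seq), "expected", answer)
```

The point of the DeepMind work is that a model which has truly learned the relation (here, "constant difference") solves held-out instances, while one that has merely memorised training patterns does not.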