In choosing to frame previous industrial revolutions in terms of warfare and the balance of power between state actors (rather than, say, improvements to the lives of individuals, redistribution of wealth to reduce global inequality, or the generation of new forms of capital and "worthwhile" work), the authors are able to articulate a clear set of actions for the US (or any) government that wants to build a technological advantage and cement it into a global projection of power.
Developing strong, pragmatic and principled national security and defense policies.
I'm not sure I'd call Facebook (media), Amazon (retail, logistics), Uber (cabs) and the like technology companies now either. Yes, they are heavily technology-leveraged, but ultimately the machine under the covers matters less than what it achieves. Walmart, like many large companies, has identified that significant new business opportunities can be unlocked by better leveraging technology. Does that make it a "tech company"?
Its latest partnerships and new services show the retailer’s continued evolution toward becoming a tech-focused business.
I'm not sure what to make of this report. I remember reading about the history of "online dating" and how it is basically all smoke and mirrors (if people believe "the computer" paired them for a reason, their dates are more likely to be successful: https://rob.al/2L03Fyg), so does AI really help?
It’s betting that machine learning can find a mutual match.
A recent experiment by Facebook pitted humans against AI to see which was better at helping a robot navigate a (virtual) walk around an area of Hell's Kitchen in New York. The bot had to describe its location using natural language ("I can see the bank on the corner"). The AI guide scored only 50% in this mode, but when the two bots communicated using "symbols" instead, the AI beat humans 87.08% to 76.74% – the two AIs were able to communicate far more efficiently than humans could. And as a test of Facebook's new "MASC (Masked Attention for Spatial Convolution)" model, it was a success.
Virtual guides help a ‘lost’ AI find its way.
Often when prescribing multiple drugs, doctors have very little information with which to judge side effects or drug interactions, and it can take years for side effects to be identified – their discovery is usually purely by chance. Stanford University researchers trained a system on over 19,000 proteins and their drug interactions; it was then able to successfully predict interactions based solely on the combination of drugs prescribed.
When a doctor prescribes a patient more than one drug they have no way to predict whether that combination will have an adverse side effect. A new system from Stanford University presents a novel…
There's a nice breakdown of the usability and scaling challenges the Google Photos team worked through in the redesign of their app: creating a "scrubbable" infinite-scrolling page that maximises screen real estate while maintaining photo aspect ratios, with instant loading and rendering, for libraries of 250,000 photos or more. The compromises and engineering challenges they encountered are laid out with clear explanations. An interesting read.
A peek under the hood
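The aspect-ratio-preserving layout mentioned above can be sketched roughly as follows – a hypothetical Python illustration of a "justified" photo grid (greedily pack photos into rows, then scale each row to exactly fill the container width), not the Photos team's actual code:

```python
def justify_rows(aspect_ratios, container_width, target_height):
    """Pack photos (given as width/height aspect ratios) into rows.

    Each full row is scaled so it exactly fills container_width while
    every photo keeps its aspect ratio; the final partial row keeps
    the target height.  Returns a list of rows of (width, height).
    """
    rows, current = [], []
    for ar in aspect_ratios:
        current.append(ar)
        # row width if every photo in it were target_height tall
        natural_width = sum(a * target_height for a in current)
        if natural_width >= container_width:
            # shrink the row so its total width is exactly container_width
            height = container_width / sum(current)
            rows.append([(a * height, height) for a in current])
            current = []
    if current:  # leftover photos form a final, unjustified row
        rows.append([(a * target_height, target_height) for a in current])
    return rows
```

The same idea extends to scrubbing: because row heights are computable from aspect ratios alone, the page can estimate total scroll height and render only the visible rows.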
Although it was first to market, Siri is so awful that I hardly ever use it. But with Apple's reorganisation bringing Core ML and Siri into the same part of the company, and with a new "chief of machine learning and AI strategy", perhaps we'll see some improvement.
John Giannandrea is tasked with educating Apple’s assistant
All four of the "big players'" personal assistants – Apple's Siri, Google Assistant, Amazon's Alexa and Microsoft's Cortana – started off female (although they now have male voices). LivePerson's CEO, Robert LoCascio, "believes the male-dominated AI industry brings its own unconscious bias to the decision of what gender to make a virtual assistant". Are the tech giants reflecting biases already present in society?
Siri, Alexa and Cortana all started out as female. Now a group of marketing executives, tech experts and academics are trying to make virtual assistants more egalitarian.
There's a massive ethical problem here – people expecting their medical notes, receipts with personal data, or emails to be "read" only by a machine might not have given consent had it been clear a human would read that they ordered takeaway for two to their hotel room while on a business trip without their partner. But when I read the Guardian's article on "fake AI", I have to say I wasn't surprised. It reminded me of Andrew Mason's interview on how he started Groupon (https://rob.al/2mhzT9h) – the big question was how the business should work, and building technology that might not be usable later was a waste.
Using what one expert calls a ‘Wizard of Oz technique’, some companies keep their reliance on humans a secret from investors
First, deepfakes swapped our faces (https://rob.al/2LjX7GM); now a US company is developing the technology to recreate voices. The therapeutic uses are clear – there are dozens of conditions that can lead to a person losing their voice (https://rob.al/2motUjj) – and having a computer sound like me as well as speak my words would clearly help maintain a sense of identity. But the potential for malicious use is just as clear: the BBC had to find a reporter's twin to fool HSBC's voice ID system (https://rob.al/2LmLYFn). With this, all you need is a few clips from Facebook or audio recorded in secret.
It probably sings better than you, too.