It’s becoming increasingly apparent that Level 2 “self-driving” cars are quite simply dangerous. The most recent incident (involving a Tesla Model S which crashed into a parked police car) has highlighted that partially automating the complex task of “driving” is potentially worse than not automating it at all (https://rob.al/2kGyxUS). But the “paradox of automation” is not a new phenomenon – economist Tim Harford has previously written about this problem (https://rob.al/2kFZB6O), and Alphabet has chosen to skip Level 2 automation entirely, heading straight for vehicles which need no human intervention at all (“Level 4” and beyond) (https://rob.al/2sujOjo).
The impact of self-driving cars will be felt far and wide. Aside from the obvious (insurance industry, petrol stations, professional drivers, crash repair centres), CB Insights points out that seemingly disconnected industries – like fast food, real estate, media and healthcare – are also set to be jolted from their comfort zones. Not all of these are negative – if you could watch movies while being shuttled around, that's a boon for those who sell internet access and streaming services. Others are more subtle – how the price of real estate will be affected is as yet unclear (and how will that impact public transport?).
Fast food, real estate, military operations, even home improvement — many large industries will have to shift their strategies in the wake of driverless cars.
"Improvements in compute have been a key component of AI progress" – with compute capacity used by AI doubling every 3.5 months for the last six years
Since 2012, the amount of compute used in the largest AI training runs has been increasing exponentially, with a 3.5-month doubling time (by comparison, Moore’s Law had an 18-month doubling period).
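The arithmetic behind that comparison is easy to check: a doubling time of d months means growth by a factor of 2^(t/d) after t months. A quick sketch (the figures are from the excerpt above; the function is my own illustration):

```python
def growth_factor(months, doubling_time_months):
    """Factor by which a quantity grows over `months`, given its doubling time."""
    return 2 ** (months / doubling_time_months)

# Over a single Moore's-Law doubling period (18 months), transistor
# counts double once, while AI training compute on a 3.5-month
# doubling schedule grows by roughly 35x in the same window.
moore = growth_factor(18, 18)   # 2.0
ai = growth_factor(18, 3.5)     # ~35.3
```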
A future of truly intelligent machines requires causal reasoning, not simply "nontrivial curve fitting" (the probabilistic association of cause and effect), argues Judea Pearl. Development of true reasoning – why a given action has a certain outcome, not just that the two are correlated – would allow machines to "ask counterfactual questions" – in effect, to predict how a change creates a likely outcome that has never been seen before – and potentially even develop agency and free will. He puts the lack of progress in this area down to a missing "calculus for asymmetrical relations" (knowing that the sun causes the grass to grow, and not vice versa).
Judea Pearl helped artificial intelligence gain a strong grasp on probability, but laments that it still can’t compute cause and effect.
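Pearl's sun-and-grass asymmetry can be illustrated with a tiny simulation (my own sketch, not Pearl's formalism): observationally, sun and grass are correlated in both directions, but *intervening* on the grass does nothing to the sun, while intervening on the sun changes the grass.

```python
import random

# A toy structural causal model: sun shines with probability 0.5,
# and grass grows (with probability 0.9) only if the sun shines.
def sample(do_sun=None, do_grass=None):
    sun = do_sun if do_sun is not None else random.random() < 0.5
    grass = do_grass if do_grass is not None else (sun and random.random() < 0.9)
    return sun, grass

random.seed(0)
n = 100_000

# Forcing the grass to grow does NOT make the sun shine...
do_g = [sample(do_grass=True) for _ in range(n)]
p_sun_do_grass = sum(s for s, g in do_g) / n    # stays near 0.5

# ...but forcing the sun to shine DOES make the grass grow.
do_s = [sample(do_sun=True) for _ in range(n)]
p_grass_do_sun = sum(g for s, g in do_s) / n    # jumps to near 0.9
```

Plain conditioning can't capture this difference – P(sun | grass) and P(grass | sun) are both elevated – which is exactly the asymmetry Pearl says current machine learning lacks a calculus for.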
Although Apple seems to be moving their focus for marketing the Watch towards health-conscious consumers, there's a significant number of people who find wearing the device at work absolutely necessary to stay in touch. Many service industry workers are prohibited from checking phones during the workday or while on shift, but checking a watch is acceptable, allowing them to break the monotony of a quiet afternoon without breaking company policies or seeming inattentive to customers – a scenario I hadn't previously considered (we didn't even have smartphones when I was working on a shop floor, and I'm not that old!).
If you work on your feet, you know.
I have to admit – Instagram's switch from chronological to algorithmic content sequencing has left me feeling like I'm missing something if I stop browsing for a minute – but the main culprit for that is bad user interface design (the app instantly jumps back to the top of the feed when it relaunches if there's any new content, meaning you never get to see the older posts), so I'm not convinced that telling people they're "all caught up" is really going to fix the "compulsive, passive, zombie browsing" they're worried about.
Without a chronological feed, it can be tough to tell if you’ve seen all the posts Instagram will show you. That can lead to more of the compulsive, passive, zombie browsing that research sug…
I remember years ago hearing someone describe Google's biggest rival not as another search engine, but Amazon – people "view Google as a tool to research products, while Amazon is the place they go to buy". While it seems likely that Google's Shopping Actions programme will drive business, I don't see how this will break Amazon's stranglehold on online ordering, and especially not when Google's apparent long-term target here is to have people order through a voice UI, potentially having never seen the product. I trust Amazon to fix it when things go wrong. If I use this service, who do I go to? Google? The merchant? No-one?
If you look beyond other major online competitors, such as Walmart or Target, who can compete against Amazon? The answer may be … Google.
While I admire Elon Musk's ability to launch big idea after big idea, I have to agree with Schmidt and Zuckerberg – his concerns about AI stink of moral panic. Yes, we need to have the difficult debates around misuse and fairness, but these debates will only be triggered by continuing to explore the possibilities, not by shutting off the tap.
Eric Schmidt is the latest person to criticise Elon Musk’s Terminator AI vision. Schmidt said that AI will prove to benefit humanity as ageing populations result in fewer workers.
How do you ensure your technically interesting project is truly a force for good, not merely further entrenching existing biases, stereotypes, and social problems? This great set of rules, based on experiences working on AI solutions in low-income countries, can help, regardless of where you're working:
1. Ask who's not at the table – are you truly inclusive?
2. Let others check your work – fairness is subjective
3. Doubt your data – does your data suffer from collection bias?
4. Respect context – a model developed in one context may fail in others
5. Automate with care – take baby steps, don't take people out of the loop too soon
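Rule 3 is the easiest of these to start checking mechanically. A minimal sketch of one way to "doubt your data" – comparing group shares in a training sample against reference population shares and flagging under-represented groups (the group names, shares, and the 0.5 threshold are all illustrative assumptions, not from the article):

```python
from collections import Counter

def underrepresented(sample_groups, population_shares, ratio=0.5):
    """Flag groups whose share of the sample falls below `ratio`
    times their share of the reference population."""
    counts = Counter(sample_groups)
    total = len(sample_groups)
    flagged = []
    for group, pop_share in population_shares.items():
        sample_share = counts.get(group, 0) / total
        if sample_share < ratio * pop_share:
            flagged.append(group)
    return flagged

# e.g. a dataset collected mostly from urban users, in a population
# that is 60% urban / 40% rural:
sample = ["urban"] * 90 + ["rural"] * 10
print(underrepresented(sample, {"urban": 0.6, "rural": 0.4}))  # ['rural']
```

A check like this is only a starting point – it can tell you a group is missing from the table, but not why, which is where rules 1 and 2 come in.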
On a recent work trip, I found myself in a swanky-but-still-hip office of a private tech firm. I was drinking a freshly frothed cappuccino, eyeing a mini-fridge stocked with local beer and standing…
I find it fascinating that there are companies out there large enough, and with specific enough use cases, to justify creating custom hardware to solve their problems. Facebook has recently confirmed that they're working on chips dedicated to analyzing live video, to allow them to respond more quickly to unacceptable or inappropriate content (such as suicide or murder being streamed live). Such analysis requires "a huge amount of compute power", but is certainly an interesting technical, and ethical, challenge. Who helps Facebook determine what content to flag? What happens if it gets it wrong?
Facebook Inc. is working on designing computer chips that are more energy-efficient at analyzing and filtering live video content, its chief artificial intelligence scientist Yann LeCun said.