"Improvements in compute have been a key component of AI progress" – with compute capacity used by AI doubling every 3.5 months for the last six years
Since 2012, the amount of compute used in the largest AI training runs has been increasing exponentially with a 3.5 month doubling time (by comparison, Moore’s Law had an 18 month doubling period).
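To get a feel for how fast a 3.5-month doubling time compounds, here is a back-of-the-envelope sketch (my own arithmetic, not from the article) comparing it with the 18-month Moore's Law figure over the period since 2012:

```python
# Illustrative doubling-time arithmetic: with a doubling time of d months,
# compute grows by a factor of 2**(months / d).

def growth_factor(months, doubling_time=3.5):
    """Total growth over `months` given an exponential doubling time."""
    return 2 ** (months / doubling_time)

# Roughly six years (72 months) at a 3.5-month doubling time:
ai_growth = growth_factor(72)            # ~1.6 million-fold
# The same period under an 18-month Moore's Law doubling:
moore_growth = growth_factor(72, 18.0)   # 16-fold

print(f"AI compute: ~{ai_growth:,.0f}x  vs  Moore's Law: {moore_growth:.0f}x")
```

The gap between a 16-fold and a roughly million-fold increase over the same six years is the whole story of the chart.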
A future of truly intelligent machines requires causal reasoning, not simply "nontrivial curve fitting" (the probabilistic association of cause and effect), argues Judea Pearl. True reasoning – understanding why a given action produces a certain outcome, not merely that the two are correlated – would allow machines to "ask counterfactual questions": in effect, to predict how a change would create an outcome that has never been seen before, and potentially even to develop agency and free will. He puts the lack of progress in this area down to a missing "calculus for asymmetrical relations" (knowing that the sun causes the grass to grow, and not vice versa).
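The asymmetry Pearl describes can be made concrete in a toy structural model (my own illustration, not Pearl's formal do-calculus): the direction of causation lives in the structure of the equations, which observational correlation alone cannot reveal.

```python
# Toy structural model for the sun/grass example: sun -> grass.
# Intervening on the effect overrides its equation but, crucially,
# leaves the cause untouched - the asymmetry correlation can't see.

def model(sun, intervene_grass=None):
    """Return (sun, grass). An intervention on grass replaces its
    structural equation; nothing feeds back into `sun`."""
    grass = sun if intervene_grass is None else intervene_grass
    return sun, grass

# Observationally, sun and grass are perfectly correlated...
assert model(sun=True) == (True, True)

# ...but interventions expose the direction of causation:
# forcing the grass to grow does NOT make the sun shine.
sun, grass = model(sun=False, intervene_grass=True)
assert (sun, grass) == (False, True)
```

A model of the reverse causal story (grass → sun) would fit the same observational data equally well, which is exactly why Pearl argues a calculus of interventions is needed on top of probability.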
Judea Pearl helped artificial intelligence gain a strong grasp on probability, but laments that it still can’t compute cause and effect.
Although Apple seems to be shifting its marketing focus for the Watch towards health-conscious consumers, a significant number of people find wearing the device at work absolutely necessary to stay in touch. Many service industry workers are prohibited from checking phones during the workday or while on shift, but checking a watch is acceptable, allowing them to break the monotony of a quiet afternoon without breaking company policies or seeming inattentive to customers – a scenario I hadn't previously considered (we didn't even have smartphones when I was working on a shop floor, and I'm not that old!).
If you work on your feet, you know.
I have to admit – Instagram's switch from chronological to algorithmic content sequencing has left me feeling like I'm missing something if I stop browsing for a minute. But the main culprit for that is bad user interface design (the app instantly jumps back to the top of the feed when it relaunches if there's any new content, so you never get to see the older posts), so I'm not convinced that telling people they're "all caught up" is really going to fix the "compulsive, passive, zombie browsing" they're worried about.
Without a chronological feed, it can be tough to tell if you’ve seen all the posts Instagram will show you. That can lead to more of the compulsive, passive, zombie browsing that research sug…
I remember years ago hearing someone describe Google's biggest rival not as another search engine, but Amazon – people "view Google as a tool to research products, while Amazon is the place they go to buy". While it seems likely that Google's Shopping Actions programme will drive business, I do not see how this will break Amazon's stranglehold on online ordering, and especially not when Google's apparent long-term target here is to have people order through a voice UI, potentially having never seen the product. I trust Amazon to fix it when things go wrong. If I use this service, who do I go to? Google? The merchant? No-one?
If you look beyond other major online competitors, such as Walmart or Target, who can compete against Amazon? The answer may be … Google.
While I admire Elon Musk's ability to launch big idea after big idea, I have to agree with Schmidt and Zuckerberg – his concerns about AI stink of moral panic. Yes, we need to have the difficult debates around misuse and fairness, but these debates will only be triggered by continuing to explore the possibilities, not by shutting off the tap.
Eric Schmidt is the latest person to criticise Elon Musk’s Terminator AI vision. Schmidt said that AI will prove to benefit humanity as ageing populations result in fewer workers.
How do you ensure your technically interesting project is truly a force for good, not merely further entrenching existing biases, stereotypes, and social problems? This great set of rules, based on experiences working on AI solutions in low-income countries, can help, regardless of where you're working:
1. Ask who's not at the table – are you truly inclusive?
2. Let others check your work – fairness is subjective
3. Doubt your data – does your data suffer from collection bias?
4. Respect context – a model developed in one context may fail in others
5. Automate with care – take baby steps, don't take people out of the loop too soon
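Rule 3 in particular lends itself to a quick sanity check. A minimal sketch (the group labels and baseline shares below are invented for illustration) of comparing a dataset's make-up against a reference population to surface obvious collection bias:

```python
# Sketch of rule 3 ("doubt your data"): compare the group make-up of a
# training set against expected population shares to flag collection bias.
from collections import Counter

def representation_gaps(samples, baseline):
    """Share of each group in `samples` minus its expected share."""
    counts = Counter(samples)
    total = sum(counts.values())
    return {group: counts[group] / total - expected
            for group, expected in baseline.items()}

# Hypothetical dataset collected mostly in one region:
data = ["urban"] * 80 + ["rural"] * 20
gaps = representation_gaps(data, {"urban": 0.55, "rural": 0.45})
print(gaps)   # rural is under-represented by 25 percentage points
```

A check like this won't catch subtler biases (label quality, measurement error), but it makes the first question – "does this data look like the people it will be used on?" – cheap to ask.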
On a recent work trip, I found myself in a swanky-but-still-hip office of a private tech firm. I was drinking a freshly frothed cappuccino, eyeing a mini-fridge stocked with local beer and standing…
I find it fascinating that there are companies out there large enough, and with specific enough use cases, to justify creating custom hardware to solve their problems. Facebook has recently confirmed that they're working on chips dedicated to analyzing live video, to allow them to respond more quickly to unacceptable or inappropriate content (such as a suicide or murder being streamed live). Such analysis requires "a huge amount of compute power", and poses an interesting technical, and ethical, challenge. Who helps Facebook determine what content to flag? What happens if it gets it wrong?
Facebook Inc. is working on designing computer-chips that are more energy-efficient at analyzing and filtering live video content, its chief artificial intelligence scientist Yann LeCun said.
Further demonstrating that just having the technology isn't enough – you have to keep innovating to stay relevant: Stitch Fix first started using AI and machine learning back in 2011, which gave them a significant "first mover advantage". But the commoditisation of these capabilities means that the tools they once held as their own are now readily available to anyone who can code and collect the data. Trunk Club, Amazon Wardrobe and The Chapar are all hot on their tail.
Online styling subscription service Stitch Fix uses AI in many aspects of its operation. In collaboration with human stylists, Stitch Fix’s algorithms aim to get relevant fashion into the hands of…
While "replacing 300 CPU-only servers on deep learning training" is hardly a benchmark, 15,500 images per second on ResNet-50 is – just a couple of years ago, training throughput would have been 1–2 orders of magnitude slower. Also of interest is the approach that Nvidia is taking here – a single compute "node" will be capable of delivering both AI and HPC workloads with extreme performance (the reference implementation claims two petaflops).
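To put the throughput figure in perspective, here is some rough arithmetic (the epoch count and dataset size are typical ResNet-50/ImageNet assumptions of mine, not numbers from the article):

```python
# What 15,500 images/s means for end-to-end ResNet-50 training time.
IMAGENET_IMAGES = 1_281_167   # ImageNet-1k training set size
EPOCHS = 90                   # a common ResNet-50 training schedule

def training_hours(images_per_sec):
    """Hours to run the full schedule at a given steady throughput."""
    return IMAGENET_IMAGES * EPOCHS / images_per_sec / 3600

print(f"{training_hours(15_500):.1f} h")   # ~2 hours at 15,500 img/s
print(f"{training_hours(155):.0f} h")      # ~200+ hours at 100x slower
```

In other words, the jump from roughly 150 to 15,500 images per second is the difference between a week-long run and something you can iterate on twice in a working day.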
The platform’s unique high-precision computing capabilities are designed for the growing number of applications that combine high-performance computing with AI.