Although Apple seems to be shifting its Watch marketing towards health-conscious consumers, there's a significant number of people who find wearing the device at work essential for staying in touch. Many service-industry workers are prohibited from checking their phones while on shift, but glancing at a watch is acceptable, letting them break the monotony of a quiet afternoon without violating company policy or seeming inattentive to customers – a scenario I hadn't previously considered (we didn't even have smartphones when I was working on a shop floor, and I'm not that old!).
If you work on your feet, you know.
I have to admit, Instagram's switch from chronological to algorithmic content sequencing has left me feeling like I'm missing something if I stop browsing for a minute. But the main culprit there is bad user-interface design: the app instantly snaps back to the top of the feed on relaunch whenever there's new content, so you never get to see the older posts. So I'm not convinced that telling people they're "all caught up" is really going to fix the "compulsive, passive, zombie browsing" they're worried about.
Without a chronological feed, it can be tough to tell if you’ve seen all the posts Instagram will show you. That can lead to more of the compulsive, passive, zombie browsing that research sug…
I remember years ago hearing someone describe Google's biggest rival not as another search engine, but Amazon – people "view Google as a tool to research products, while Amazon is the place they go to buy". While it seems likely that Google's Shopping Actions programme will drive business, I don't see how it will break Amazon's stranglehold on online ordering – especially when Google's apparent long-term goal is for people to order through a voice UI, potentially without ever having seen the product. I trust Amazon to fix things when they go wrong. If I use this service, who do I go to? Google? The merchant? No one?
If you look beyond other major online competitors, such as Walmart or Target, who can compete against Amazon? The answer may be … Google.
While I admire Elon Musk's ability to launch big idea after big idea, I have to agree with Schmidt and Zuckerberg – his concerns about AI stink of moral panic. Yes, we need to have the difficult debates around misuse and fairness, but these debates will only be triggered by continuing to explore the possibilities, not by shutting off the tap.
Eric Schmidt is the latest person to criticise Elon Musk’s Terminator AI vision. Schmidt said that AI will prove to benefit humanity as ageing populations result in fewer workers.
How do you ensure your technically interesting project is truly a force for good, not merely further entrenching existing biases, stereotypes, and social problems? This great set of rules, based on experiences working on AI solutions in low-income countries, can help, regardless of where you're working:
1. Ask who's not at the table – are you truly inclusive?
2. Let others check your work – fairness is subjective
3. Doubt your data – does your data suffer from collection bias?
4. Respect context – a model developed in one context may fail in others
5. Automate with care – take baby steps, don't take people out of the loop too soon
On a recent work trip, I found myself in a swanky-but-still-hip office of a private tech firm. I was drinking a freshly frothed cappuccino, eyeing a mini-fridge stocked with local beer and standing…
I find it fascinating that there are companies out there large enough, and with specific enough use cases, to justify creating custom hardware to solve their problems. Facebook has recently confirmed that it's working on chips dedicated to analyzing live video, to allow it to respond more quickly to unacceptable or inappropriate content (such as suicide or murder being streamed live). Such analysis requires "a huge amount of compute power", and poses an interesting technical – and ethical – challenge. Who helps Facebook determine what content to flag? What happens if it gets it wrong?
Facebook Inc. is working on designing computer chips that are more energy-efficient at analyzing and filtering live video content, its chief artificial intelligence scientist Yann LeCun said.
Further demonstrating that just having the technology isn't enough – you have to keep innovating to stay relevant: Stitch Fix first started using AI and machine learning back in 2011, which gave it a significant "first mover advantage", but the commoditisation of those capabilities means the things it once held as its own are now readily available to anyone who can code and collect the data. Trunk Club, Amazon Wardrobe and The Chapar are all hot on its tail.
Online styling subscription service Stitch Fix uses AI in many aspects of its operation. In collaboration with human stylists, Stitch Fix’s algorithms aim to get relevant fashion into the hands of…
While "replacing 300 CPU-only servers" for deep learning training is hardly a benchmark, 15,500 images per second on ResNet-50 is – just a couple of years ago, training throughput would have been one to two orders of magnitude slower. Also of interest is the approach Nvidia is taking here: a single compute "node" capable of delivering both AI and HPC workloads with extreme performance (the reference implementation claims two petaflops).
The platform’s unique high-precision computing capabilities are designed for the growing number of applications that combine high-performance computing with AI.
While not the first virtual world built to provide simulations for accelerating reinforcement learning, this one certainly seems to be among the most complex. It'll be interesting to see how effective these simulations prove in the real world; so far, though, no robot is complex enough to fully enact what it has "learnt".
The goal of VirtualHome is to help robots learn tasks by first experiencing them in a virtual system. In the current system, an avatar can perform 1,000 separate actions, broken down as subtasks, in…
The ethical questions raised by Google's (small) contract with the US military to develop AI are difficult and controversial, but they have to be asked, and I'd much rather they're asked in a forum open to public debate than behind closed doors.
Where will Google draw the line on weaponized AI?