AI models are complex and take time and compute power to train – meaning they're expensive and valuable. IBM has been developing a way to "watermark" deep learning models by embedding specific information into the model during training such that it's impossible (or very hard) to remove later, allowing definitive identification of model theft.
How can you tell if someone stole your AI models? IBM proposes a watermarking technique to protect AI developers and their intellectual property.
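One widely discussed approach – a sketch of the general idea only, not necessarily IBM's exact method – is to embed a secret "trigger set" of inputs with deliberately unusual labels during training. A stolen copy of the model reproduces those labels; an independently trained model won't:

```python
# Sketch of trigger-set watermarking (illustrative, not IBM's actual method).
# The owner trains deliberately "wrong" labels for a secret set of trigger
# inputs; a stolen copy reproduces them, an independent model will not.

import random

def train(data, trigger_set):
    """Toy 'model': a lookup table fit on normal data plus the triggers."""
    model = {}
    model.update(data)          # ordinary task knowledge
    model.update(trigger_set)   # embedded watermark
    return model

def verify_ownership(model, trigger_set, threshold=0.9):
    """Claim theft if the suspect model agrees with enough triggers."""
    hits = sum(model.get(x) == y for x, y in trigger_set.items())
    return hits / len(trigger_set) >= threshold

data = {i: i % 2 for i in range(100)}                           # a trivial parity task
triggers = {1000 + i: random.randint(2, 9) for i in range(20)}  # secret trigger labels

stolen = train(data, triggers)   # copies the watermark along with the model
clean = train(data, {})          # independently trained, no watermark

print(verify_ownership(stolen, triggers))  # → True
print(verify_ownership(clean, triggers))   # → False
```

A real implementation would use out-of-distribution images and a neural network rather than a lookup table, but the ownership test works the same way: query the suspect model on the triggers and count agreements.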
HBR's list of steps to making your AI projects more likely to be successful:
– ensure your purpose is clear. AI only adds value in the context of your business model and processes.
– choose carefully what you automate – the value is in expanding human effectiveness, not replacing humans entirely
– pick the right data – more data isn't necessarily better; what matters is having the right data
– finally, move people to higher-value tasks – AI doesn't really reduce labour costs or headcount, it allows you to make better use of those people.
Start by having a clear sense of its goals.
Indian researchers have created a solution capable of detecting motorcyclists riding without a helmet. When linked to existing surveillance systems and the police, the technology could identify offending riders and alert police teams further up the road to intercept them. Hopefully, it'll be used to improve road safety.
IIT researchers have developed a solution for the automatic detection of those riding without helmets.
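The IIT system's actual architecture isn't detailed here, but the detect-and-alert pipeline described above could be sketched roughly like this (the classifier stand-in, thresholds, and field names are all illustrative, not from the research):

```python
# Hypothetical sketch of a helmet-violation alerting pipeline.
# In a real system, classify_frame would be a trained vision model running
# on surveillance footage; here it just reads pre-labelled mock frames.

def classify_frame(frame):
    """Stand-in for a trained classifier: returns (has_helmet, confidence)."""
    return frame["helmet"], frame["confidence"]

def process_stream(frames, threshold=0.8):
    """Forward only confident no-helmet detections to a downstream checkpoint."""
    alerts = []
    for frame in frames:
        has_helmet, conf = classify_frame(frame)
        if not has_helmet and conf >= threshold:
            alerts.append({"camera": frame["camera"], "plate": frame["plate"]})
    return alerts

frames = [
    {"camera": "cam-1", "plate": "KA01AB1234", "helmet": False, "confidence": 0.93},
    {"camera": "cam-1", "plate": "KA05CD5678", "helmet": True,  "confidence": 0.97},
    {"camera": "cam-2", "plate": "KA02EF9012", "helmet": False, "confidence": 0.55},
]

# Only the first frame is both a violation and above the confidence threshold.
print(process_stream(frames))  # → [{'camera': 'cam-1', 'plate': 'KA01AB1234'}]
```

The confidence threshold is the interesting design choice: set it too low and police chase false positives, too high and violators slip through.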
In a scary development, China expects to deploy AI powered submarines capable of executing weapon strikes on targets within the next 2-3 years.
China is planning to upgrade its naval power with unmanned AI submarines that aim to provide an edge over the fleets of their global counterparts.
Although some are pitching it as "replacing" human call centre workers, a more likely scenario is that mundane or routine interactions will be completely automated, allowing human workers to spend more time on more meaningful, complex or sensitive activities. Yes, this will probably lead to individual call centres hiring fewer people, but it could well pan out the same way as ATMs (https://rob.al/2LOqmFT) – they made the cost of banking cheaper, meaning banks could extend their reach further and open more branches.
This is the second time in just a few weeks that we're writing about how Google is working on artificial intelligence software to replace human call center workers.
Planning how to deploy people and resources (like water and fertiliser) to make the best use of yield is critical to controlling costs in highly volatile industries like winemaking. Using drones to survey vineyards (which takes just 15 minutes) and analysing the results with AI produces faster and more accurate results. The system can even identify grapes which have been too close to bushfires (which would taint the wine with a smoky taste), and can predict with high accuracy when grapes are at their optimal time for harvesting, ensuring that the right ratio of sugars and of mature and just-ripe grapes is ready for picking.
Jul 27, 2018 – by Teresa Umali – Winemakers can see in real time which grapes are ready to pick using handheld device that uses near-infrared wavelengths. – opengovasia.com
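As a rough illustration of the harvest-timing idea (the Brix thresholds and per-block readings below are invented for the example, not taken from the article), a vineyard block could be flagged as ready when enough of its sampled grapes fall inside a target ripeness window:

```python
# Illustrative harvest-readiness check. A real system would derive sugar
# estimates from near-infrared survey data; here we mock them directly as
# per-block sugar readings in degrees Brix.

def ready_to_harvest(blocks, lo=22.0, hi=25.0, min_fraction=0.8):
    """A block is ready when enough sampled grapes fall in [lo, hi] Brix."""
    ready = []
    for name, brix_readings in blocks.items():
        in_window = sum(lo <= b <= hi for b in brix_readings)
        if in_window / len(brix_readings) >= min_fraction:
            ready.append(name)
    return ready

blocks = {
    "north": [22.5, 23.1, 24.0, 23.8, 22.9],   # ripe: all readings in window
    "south": [18.2, 19.0, 20.1, 21.5, 22.3],   # still maturing
}

print(ready_to_harvest(blocks))  # → ['north']
```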
Although (in theory) any randomly scrambled Rubik's Cube can be solved in 20–26 moves, current solvers are largely based on brute-force search. Given the massive number of possible states (4.3 × 10^19), one major challenge in training a system is the sparse reward signal – the paper (from UC Irvine researchers) outlines a new algorithm, "Autodidactic Iteration", trained using 2,000,000 iterations over approximately 8 billion cube states in just 44 hours.
Solving (and attempting to solve) Rubik's Cube has delighted millions of puzzle lovers since 1974, when the cube was invented by Hungarian sculptor and architecture professor Ernő Rubik.
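The core idea of Autodidactic Iteration – generate training states by scrambling backwards from the solved state, then regress each state's value towards its best one-step lookahead – can be sketched on a toy puzzle. The paper trains a deep network on the full cube; here a tabular value function on a 3-element "cube" (two swap moves) keeps the loop runnable:

```python
# Toy sketch of Autodidactic Iteration on a tiny permutation puzzle.
# The sparse-reward problem is handled by scrambling backwards from the
# solved state, so every sampled state is reachably close to the reward.

import random

GOAL = (0, 1, 2)  # the "solved" state

def moves(state):
    """Two legal moves: swap the first pair or the last pair."""
    a, b, c = state
    return [(b, a, c), (a, c, b)]

def reward(state):
    return 1.0 if state == GOAL else -1.0

def autodidactic_iteration(n_iters=2000):
    V = {GOAL: 0.0}
    for _ in range(n_iters):
        # Sample a state by scrambling backwards from the goal.
        s = GOAL
        for _ in range(random.randint(1, 5)):
            s = random.choice(moves(s))
        if s == GOAL:
            continue  # keep the solved state's value fixed
        # Target: best one-step lookahead value over the children.
        V[s] = max(reward(c) + V.get(c, -1.0) for c in moves(s))
    return V

def solve(state, V, limit=10):
    """Greedy descent on the learned values (the paper uses MCTS instead)."""
    path = []
    while state != GOAL and len(path) < limit:
        state = max(moves(state), key=lambda c: reward(c) + V.get(c, -1.0))
        path.append(state)
    return path

V = autodidactic_iteration()
print(solve((1, 0, 2), V))  # → [(0, 1, 2)]
```

The real algorithm replaces the table with a neural network so the learned value generalises across the 4.3 × 10^19 states, and uses Monte Carlo tree search rather than pure greedy descent at solve time.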
Driving increased "data gravity" means customers are less likely to divert new data workloads to alternative platforms, and Salesforce's two recent acquisitions – MuleSoft (a data integration and transformation company) for $6.5 billion in March (https://rob.al/2MgoXEa) and Datorama (cloud AI for marketing) for $800 million (https://rob.al/2Me5qo8) – further highlight its desire to make it as easy as possible for customers to get data on to its platform and process it there.
The relationship management software company has announced its fourth acquisition this year.
Recent advances in AI mean that it would now be possible to develop autonomous weapons – systems which are capable of making automated decisions to take human life. Elon Musk, DeepMind and others have signed a pledge never to create AI weapons:
"We the undersigned agree that the decision to take a human life should never be delegated to a machine." It goes on to warn that "lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual."
The signatories have promised to “neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons.”
A big step towards making devices like Alexa or Siri (or more likely Google Assistant, given that this research comes from DeepMind) better understand humans is the development of a "theory of mind". By around 4 years old, human children understand that their beliefs may diverge from those of others, and that understanding that divergence helps predict others' likely future behaviour – applying this to human interaction might make the experience more natural. As a first step, DeepMind's algorithm analyses the behaviour of AI systems which are otherwise too complex for people to understand and tries to predict their behaviour.
Algorithms achieve a machine theory of mind
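The classic false-belief test that four-year-olds pass makes the distinction concrete. DeepMind's system learns this behaviour from observing agents; the hand-coded toy below (nothing here is from their paper) just shows what a theory-of-mind prediction looks like versus a naive one:

```python
# Toy false-belief ("Sally-Anne") scenario: an object is moved while the
# agent is away, so the agent's belief diverges from the true state.

def tom_prediction(agent_belief, true_location):
    """A theory-of-mind observer predicts from the agent's *belief*."""
    return agent_belief

def naive_prediction(agent_belief, true_location):
    """A naive observer assumes the agent knows the true state."""
    return true_location

# The object was moved from the basket to the box while the agent was away.
agent_belief, true_location = "basket", "box"

print(tom_prediction(agent_belief, true_location))    # → basket (correct)
print(naive_prediction(agent_belief, true_location))  # → box (wrong)
```

The research version learns to make the first kind of prediction purely from watching other agents act, with the belief state never given explicitly.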