With deep learning breaking new ground in areas like speech and image recognition, those at the front lines of AI research are keen to point out that there’s still a lot of work to be done. Just because we have digital assistants that sound like the talking computers in movies doesn’t mean we’re much closer to creating true generalized AI. Problems include the need for vast amounts of data to power deep learning systems and our current inability to create AI that is good at more than one task. AI needs data to learn, and it requires hundreds of thousands of times more examples than humans do to understand concepts or recognize features. The application domains where deep learning succeeds today are those where a lot of data can be acquired, such as speech and image recognition.

Big tech giants like Google and Facebook have access to mountains of data (for example, your voice searches on Android), making it much easier for them to create useful tools. Facebook and Google initially used the data they collected from users to target advertising better. But in recent years they have discovered that data can be turned into any number of ML or “cognitive” services, some of which will generate new sources of revenue. These services include translation, visual recognition, and assessing someone’s personality by sifting through their writings—all of which can be sold to other firms to use in their own products. These firms are always looking for new streams of information. Facebook gets its users to train some of its algorithms, for instance, when they upload and tag pictures of friends. This explains why its computers can now recognize hundreds of millions of people with 98% accuracy. Google’s digital butler, called “Assistant”, likewise gets better at performing tasks and answering questions the more its users interact with it.