The holy grail of artificial intelligence, creating software that comes close to mimicking human intelligence, remains far off. But 2014 saw major strides in machine learning software that can gain abilities from experience. Companies in sectors from biotech to computing turned to these new techniques to solve tough problems or develop new products.

The most striking research results in AI came from the field of deep learning, which involves using crude simulated neurons to process data. Work in deep learning often focuses on images, which are easy for humans to understand but very difficult for software to decipher. Researchers at Facebook used that approach to make a system that can tell almost as well as a human whether two different photos depict the same person. Google showed off a system that can describe scenes using short sentences.

Results like these have led leading computing companies to compete fiercely for AI researchers. Google paid more than $600 million for a machine learning startup called DeepMind at the start of the year. When MIT Technology Review caught up with the company's founder, Demis Hassabis, later in the year, he explained how DeepMind's work was shaped by groundbreaking research into the human brain.

The search company Baidu, nicknamed "China's Google," also spent big on artificial intelligence. It set up a lab in Silicon Valley to expand its existing research into deep learning, and to compete with Google and others for talent. Stanford AI researcher and onetime Google collaborator Andrew Ng was hired to lead that effort. In our feature-length profile, he explained how artificial intelligence could turn people who have never been on the Web into users of Baidu's Web search and other services.

Machine learning was also a source of new products this year from computing giants, small startups, and companies outside the computer industry. Some of the most interesting applications of artificial intelligence came in health care. IBM is now close to seeing a version of its Jeopardy!-winning Watson software help cancer doctors use genomic data to choose personalized treatment plans for patients. Applying machine learning to a genetic database enabled one biotech company to invent a noninvasive test that prevents unnecessary surgery. Using artificial intelligence techniques on genetic data is likely to get a lot more common now that Google, Amazon, and other large computing companies are getting into the business of storing digitized genomes.

However, the most advanced machine learning software must be trained with large data sets, something that is very energy intensive even for companies with sophisticated infrastructure. That's motivating work on a new type of "neuromorphic" chip modeled loosely on ideas from neuroscience; such chips can run machine learning algorithms more efficiently. This year, IBM began producing a prototype brain-inspired chip it says could be used in large numbers to build a kind of supercomputer specialized for learning. A more compact neuromorphic chip, developed by General Motors and the Boeing-owned research lab HRL, took flight in a tiny drone aircraft.

All this rapid progress in artificial intelligence led some people to ponder the possible downsides and long-term implications of the technology. One software engineer who has since joined Google cautioned that our instincts about privacy must change now that machines can decipher images. Looking further ahead, biotech and satellite entrepreneur Martine Rothblatt predicted that our personal data could be used to create intelligent digital doppelgangers with a kind of life of their own. And neuroscientist Christof Koch, chief scientific officer of the Allen Institute for Brain Science in Seattle, warned that although intelligent software could never be conscious, it could still harm us if not designed correctly. Meanwhile, a more benign view of the far future came from science fiction author Greg Egan.
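For readers curious what a "crude simulated neuron" looks like in practice, here is a minimal sketch of a single artificial unit of the kind deep learning stacks into many layers. This is a hypothetical toy example with made-up inputs and weights, not any company's actual system; real networks learn their weights from large amounts of data.

```python
import math

def neuron(inputs, weights, bias):
    """One simulated neuron: a weighted sum of inputs squashed by a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # The sigmoid maps any real number into (0, 1), loosely analogous
    # to how strongly a biological neuron "fires."
    return 1.0 / (1.0 + math.exp(-total))

# A single unit responding to a three-number input (arbitrary values).
output = neuron([0.5, -1.0, 0.25], weights=[0.8, 0.2, -0.5], bias=0.1)
print(round(output, 3))
```

Deep learning systems like those described above wire thousands of such units together in layers, adjusting the weights during training so the network as a whole learns to, say, recognize faces or describe scenes.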