Google’s AI built its own AI that outperforms any developed by humans
In May 2017, Google Brain researchers introduced AutoML, an artificial intelligence capable of generating its own AI. It has recently emerged that AutoML has built a system that surpasses its human-developed "competitors," as reported by the portal Futurism.
Google's researchers automated the development of machine learning models using reinforcement learning. AutoML acts as a controller neural network that designs a child neural network for a specialized task. For this child network (which the researchers called NASNet), the task was recognizing objects - people, cars, traffic lights, baggage, etc. - in real-time video.
AutoML then evaluates NASNet's performance and uses this feedback to improve the child network; this process is repeated thousands of times. When engineers tested NASNet on the ImageNet and COCO image sets, it surpassed all existing computer vision systems.
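The controller/child loop described above can be sketched in miniature. This is a toy illustration under stated assumptions, not Google's actual AutoML: the real system trains a recurrent controller with reinforcement learning over a vast space of network architectures, whereas here a "controller" merely samples candidate architectures from a tiny hypothetical search space, a stand-in scoring function replaces the expensive training of each child network, and the feedback step simply keeps the best candidate found so far.

```python
import random

# Toy sketch of a controller/child architecture-search loop.
# NOT Google's AutoML: the search space, the scoring function, and the
# "keep the best" feedback rule are all simplified assumptions.

# Hypothetical search space of child-network design choices.
SEARCH_SPACE = {
    "layers": [2, 4, 6, 8],
    "filters": [16, 32, 64],
    "kernel": [3, 5, 7],
}

def evaluate_child(arch):
    """Stand-in for training and validating a child network.

    Returns a made-up score that peaks (at 0) for an architecture with
    6 layers, 64 filters, and kernel size 5. In a real system this step
    would train the child network and measure validation accuracy.
    """
    return (-abs(arch["layers"] - 6)
            - abs(arch["filters"] - 64) / 16
            - abs(arch["kernel"] - 5))

def controller_search(iterations=200, seed=0):
    """Repeat the propose-evaluate-improve cycle `iterations` times."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(iterations):
        # Controller step: propose a candidate child architecture.
        arch = {name: rng.choice(options)
                for name, options in SEARCH_SPACE.items()}
        # Child step: measure how well that architecture performs.
        score = evaluate_child(arch)
        # Feedback step: remember the best design seen so far
        # (real AutoML instead updates the controller's policy via RL).
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

if __name__ == "__main__":
    best, score = controller_search()
    print("best architecture found:", best)
```

The key idea the sketch preserves is the division of labor: one process proposes designs, another evaluates them, and the evaluation result steers subsequent proposals.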
According to the Google researchers, NASNet classified images in the ImageNet validation set with 82.7% accuracy, 1.2 percentage points better than the previous record. The system was also 4% more efficient, reaching 43.1% mean average precision (mAP). In addition, a lower-cost version of NASNet surpassed the best comparable models for mobile platforms by 3.1%.
There are many possible uses for AutoML and NASNet. Accurate, efficient computer vision algorithms could be used to build sophisticated AI robots or, for example, to help visually impaired people. Such algorithms could also improve autonomous driving: the faster a self-driving vehicle detects objects in its path, the sooner it can react to them.
This, of course, raises ethical concerns: what if AutoML creates systems at such a speed that society simply can't keep up with them? Many large companies, however, are trying to address AI safety problems. Amazon, Facebook, Apple, and several other corporations are members of the Partnership on AI to Benefit People and Society. The Institute of Electrical and Electronics Engineers (IEEE) has proposed ethical standards for AI, while DeepMind, for its part, announced the creation of a group dedicated to moral and ethical issues surrounding the use of artificial intelligence.