Mar. 2021
(Strategic Proposals)
Brain-Inspired AI Accelerator - Realizing Flexible Advanced Information Processing with Ultra-low Power Consumption - / CRDS-FY2020-SP-04
Executive Summary

"Brain-Inspired AI Accelerator - Realizing flexible advanced information processing with ultra-low power consumption -" is a research and development strategy to realize a dedicated processor specialized for artificial intelligence (AI) processing (brain-inspired AI accelerator) that can perform flexible and advanced information processing, such as efficient learning, recognition and understanding of the external world and environment, and decision making based on predictions, which are required for next-generation artificial intelligence technology, with ultra-lower power consumption, which is almost as low as the energy efficiency of the human brain. Here, research and development of new information processing models for the structure and function of neurons and neural circuits, algorithms based on these models, new circuits and architectures capable of high-speed and ultra-low-power operation, optimal devices and materials for such an operation, and software to efficiently operate such hardware will be promoted in combination through strong collaboration with brain science, mathematics and mathematical science, device and material technology, and information science. This will enable a brain-based AI accelerator, as well as the creation of new industries, and the strengthening of artificial intelligence research, brain science research, and materials, devices, and circuits research.

In recent years, AI technologies such as machine learning and deep learning have been applied in fields such as image recognition, speech recognition and translation, automated driving, disease diagnosis, new material exploration, and device design. These applications require advanced information processing close to that of human beings, such as recognition, prediction, and judgment, which is qualitatively different from the simple calculation, programmed operation, image processing, and control that computers have performed to date, and AI processing is expected to advance further. In addition, while AI processing is currently carried out mainly in the cloud, where abundant computing power is available, AI processing at the edge, such as in automobiles, smartphones, and IoT devices, will become increasingly important. Cloud servers that perform AI processing already consume a huge amount of power (several MW). To perform more advanced AI processing in the cloud, and to perform AI processing at the edge where only limited power is available, the power consumption of computer hardware must be drastically reduced. However, it is becoming more and more difficult to achieve both advanced AI processing and low power consumption. Semiconductor technology and CMOS integrated circuits, which have driven the increase in computing power and the reduction in power consumption of computers, are approaching the limits of miniaturization, and it is becoming difficult to further improve the performance and power efficiency of conventional (von Neumann) general-purpose computers. Since AI technology to date has been realized as software on these conventional general-purpose computers, dramatic improvements in performance or significant reductions in power consumption can hardly be expected. On the other hand, machine learning and deep learning have progressed by taking inspiration from simplified models of brain function. Brain science itself is also advancing, for example in research to elucidate the causes of brain diseases, and the functions of the brain are being clarified at levels ranging from molecules to cells (neurons), neural circuits, and the whole brain. It will therefore become important to draw on the knowledge of brain science for hints toward new forms of information processing, derived from the structure and functions of the human brain, and to mimic them in hardware (accelerators, chips) specialized for AI processing, aiming at both flexible, advanced information processing and low power consumption.

In the U.S., DARPA's SyNAPSE project was launched in 2008 with the goal of building, from electronic circuits, a system with the same volume, functionality, and energy consumption as the human brain. In Europe, neuromorphic computers and next-generation neuromorphic chips have been under development since 2013 in a sub-program of the Human Brain Project, whose aim is to build models of the brain. While these efforts are noteworthy attempts to apply the findings of brain science to advanced information processing, their performance is not yet sufficient to demonstrate clear superiority or attractive applications. In Japan, the Grant-in-Aid for Scientific Research on Innovative Areas projects "Correspondence and Fusion of Artificial Intelligence and Brain Science" and "Brain information dynamics underlying multi-area interconnectivity and parallel processing", as well as the WPI International Research Center for Neurointelligence (IRCN), were launched in 2016. In addition, the AI Strategy 2019, formulated in 2019, identified device architecture and AI chip technology as necessary research and development targets. As these new trends and demands for hardware-based AI technology emerge, we should strategically promote research and development of brain-inspired AI accelerator technologies that meet the demands for low-power, high-efficiency AI processing and learning in the short term and for intuitive recognition and real-time decision making in the long term, and that demonstrate clear advantages over conventional technologies.

The research and development issues to be addressed are listed below. One flow starts from new mathematical models based on knowledge of brain structure and information processing functions, creates algorithms for brain-inspired AI, and leads to the development of specific circuits, architectures, devices, and materials. The other flow starts from devices and materials whose characteristics resemble, or are inspired by, the functions of lower-level structures of the brain, such as the molecular and cellular levels, and builds new circuits, architectures, and algorithms on top of them. It is important to integrate these two flows into the development of a brain-inspired AI accelerator.

  • (1) Research and development of new mathematical models and algorithms for information processing using the knowledge of brain science

    We will develop new mathematical models and algorithms for information processing based on knowledge of brain structure, activity, and function at the cellular level, the neural circuit level, the brain tissue level (e.g., the hippocampus), and the level of the whole-brain network. In the short term the focus will be on the lower levels (neurons, synapses, and neural circuits), and in the long term on the higher levels (the brain's architecture and the brain as a whole). The aim is to understand structural connections, dynamic activity, and signal transmission mechanisms, and to draw on mathematical sciences such as dynamical systems theory, network theory, and topology to construct information processing models and algorithms inspired by them (a minimal neuron-level example is sketched after this list). The models created here do not have to reproduce the brain's actual information processing exactly; what matters is that new ideas for information processing and a variety of models emerge.

  • (2) Development of circuits/architectures and device/material technologies capable of storage and operation with ultra-low power consumption

    Devices and materials that mimic the cellular level of the brain must be able to change the state of synaptic connections and retain that state for a certain period of time. Device and material technologies that enable three-dimensional connections, and devices whose circuit connections can be freely reconfigured, will also be important. Circuits and architectures will need to consume significantly less power than current digital circuits. Prototypes of neural networks using non-volatile memories and memristors have already been demonstrated. In the short term, it is necessary to improve the characteristics of these devices and to develop analog memories that exploit device characteristics, low-power analog circuits, and spike timing control for spiking neuron circuits. In the long term, to realize the device and material characteristics required by new brain-inspired information processing models, we will actively exploit new physical phenomena, noise, and fluctuations, and develop the low-power control circuits, three-dimensional wiring technology, and in-memory computing technology needed to integrate memory and processing (the idea of in-memory computing on a crossbar is sketched after this list).

  • (3) Development of brain-inspired AI accelerator

    By integrating the new brain-inspired mathematical models and information processing technologies described above with ultra-low-power circuit, device, and material technologies, we will, in the short term, advance hardware for neuromorphic computing and reservoir computing (a minimal reservoir-computing example is sketched after this list) and demonstrate its superiority over deep learning chips. In the long term, we will conduct research and development on the design of accelerators for new mathematical models and algorithms that mimic the higher functions of the brain, and on software that uses these accelerators efficiently. In particular, for edge applications in real environments such as service robots, which are regarded as an important application field, it will be necessary to develop functions not available in AI processing to date, such as selecting important information from a large amount of sensing data, recognizing and judging it, and efficiently communicating the results of those judgments to multiple actuators. Furthermore, it is important to develop flagship chips for specific applications in order to demonstrate to society the performance and potential of the brain-inspired AI accelerators developed in the short and long term.
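
As one concrete illustration of the neuron-level mathematical models referred to in item (1), the following Python sketch implements a leaky integrate-and-fire neuron, one of the simplest dynamical-systems models of a spiking neuron. It is a minimal illustrative example rather than a model taken from this proposal, and all parameter values are assumptions chosen for readability.

    import numpy as np

    def lif_neuron(input_current, dt=1e-3, tau=20e-3, v_rest=0.0,
                   v_thresh=1.0, v_reset=0.0):
        """Leaky integrate-and-fire neuron (illustrative parameters)."""
        v = v_rest
        v_trace = np.zeros(len(input_current))
        spikes = np.zeros(len(input_current), dtype=bool)
        for t, i_in in enumerate(input_current):
            # Membrane potential leaks toward rest while integrating the input.
            v += dt / tau * (-(v - v_rest) + i_in)
            if v >= v_thresh:        # crossing the threshold emits a spike
                spikes[t] = True
                v = v_reset          # and the potential is reset
            v_trace[t] = v
        return v_trace, spikes

    # A constant drive above threshold produces a regular spike train.
    v, s = lif_neuron(np.full(1000, 1.5))
    print("spikes in 1 s:", int(s.sum()))

Even this simple model exhibits the threshold, reset, and spike-timing behavior that items (1) and (2) aim to capture in models and exploit in hardware.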
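
Item (2) refers to in-memory computing with non-volatile memories and memristors. The numerical sketch below (not a device model) shows the basic idea: weights stored as conductances in a crossbar perform a vector-matrix multiplication in place, with Ohm's law providing the multiplications and Kirchhoff's current law the summation, so that data need not be moved to a separate processor. Device variability is represented only by an assumed Gaussian perturbation of the conductances.

    import numpy as np

    def crossbar_vmm(voltages, conductances, g_noise_std=0.0, seed=0):
        """Idealized analog vector-matrix multiply on a memristor crossbar.

        voltages:     inputs applied to the rows, shape (n_rows,)
        conductances: weights programmed into the devices, shape (n_rows, n_cols)
        Each column current is sum_i V_i * G_ij, i.e. the multiply-accumulate
        happens where the weights are stored.
        """
        g = np.asarray(conductances, dtype=float)
        if g_noise_std > 0.0:
            # Crude stand-in for device variability / read noise (assumption).
            g = g + np.random.default_rng(seed).normal(0.0, g_noise_std, g.shape)
        return np.asarray(voltages) @ g

    weights = np.array([[0.2, 0.8],
                        [0.5, 0.1],
                        [0.3, 0.9]])
    x = np.array([1.0, 0.5, -0.2])
    print("ideal column currents:", crossbar_vmm(x, weights))
    print("noisy column currents:", crossbar_vmm(x, weights, g_noise_std=0.05))

Mapping signed weights, sufficient precision, and peripheral circuits onto real devices is precisely where the short-term device-improvement work described in item (2) is needed.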
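
Item (3) names reservoir computing as a near-term hardware target. The echo state network sketch below illustrates why the approach maps well onto low-power physical hardware: the recurrent reservoir is fixed and random (and could in principle be replaced by a physical dynamical system), and only a linear readout is trained, here with ridge regression on a toy next-step prediction task. Network sizes and constants are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(42)

    # Fixed random reservoir: only the linear readout will be trained.
    n_in, n_res = 1, 100
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.normal(0.0, 1.0, (n_res, n_res))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

    def run_reservoir(u, leak=0.3):
        """Drive the reservoir with input sequence u and collect its states."""
        x = np.zeros(n_res)
        states = []
        for u_t in u:
            pre = W_in @ np.atleast_1d(u_t) + W @ x
            x = (1 - leak) * x + leak * np.tanh(pre)
            states.append(x.copy())
        return np.array(states)

    # Toy task: predict the next sample of a sine wave.
    t = np.linspace(0, 20 * np.pi, 2000)
    u, y = np.sin(t[:-1]), np.sin(t[1:])
    X = run_reservoir(u)

    # Train only the readout with ridge regression (the reservoir stays fixed).
    ridge = 1e-6
    W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
    pred = X @ W_out
    print("train MSE:", float(np.mean((pred - y) ** 2)))

Because only the readout is trained, on-chip learning remains simple, which is one reason reservoir computing is attractive as a stepping stone toward the accelerators described above.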

To promote this research and development, it is necessary to identify an attractive application field (e.g., robotics), draw up a scenario in which Japan takes the lead from a long-term perspective, establish an environment in which researchers from the different fields of brain science, mathematical science, information science, and nanotechnology/materials technology can hold discussions on a daily basis, and build a common research and development base. It is also necessary to establish an ecosystem for chip fabrication and the exploration of application fields through collaboration between industry and academia as well as with overseas partners, to accumulate the knowledge and technologies obtained, and to formulate an open/closed strategy covering intellectual property and the international standardization of core technologies. Moreover, to broaden the base of researchers in Japan, it is important to promote cooperation among academic societies in different fields, form new communities, and develop human resources with an interest in fields beyond their own. Funding is important to accelerate these efforts, and it is desirable to set short-term and long-term goals and to promote measures in cooperation with the relevant government ministries.
