AI hardware needs to become more brain-like to meet the growing energy demands of real-world applications, researchers say.
In a study published in Frontiers in Science, scientists from Purdue University and the Georgia Institute of Technology present practical approaches to overcoming the limitations of modern computer hardware.
Conventional computers are based on the von Neumann architecture, with separate processor and memory units. Whenever data is needed, the information must flow between the two components. This constant transfer, a bottleneck known as the memory wall, is responsible for most of the delays and power consumption in AI processing.
The researchers say that integrating processing capability inside or alongside the memory unit would help overcome this bottleneck. Achieving this could enable the emergence of new types of algorithms that make AI applications feasible without resorting to data- and energy-intensive cloud computing.
“Language processing models have expanded 5,000-fold over the past four years. This alarming rate of expansion makes it crucial that AI is as efficient as possible. This means fundamentally rethinking how computers are designed,” said Kaushik Roy, professor of electrical and computer engineering at Purdue University and lead author of the study.
Inspired by the brain
One way to avoid the memory wall problem and make AI more efficient is to take inspiration from our brains.
When a neuron receives firing signals from other neurons, it builds up an electrical charge, known as a membrane potential. If this potential reaches a certain threshold, the neuron sends its own signal. In doing so, a neuron stores and processes information in one place and only communicates when something changes.
This has inspired new AI algorithms known as spiking neural networks (SNNs), which can respond efficiently to irregular and occasional events. This contrasts with traditional AI networks, which excel at data-intensive tasks such as facial recognition, image classification, image analysis, and 3D reconstruction.
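The neuron behavior described above can be sketched as a leaky integrate-and-fire (LIF) model, the basic unit of many spiking neural networks. The parameter values below are illustrative assumptions, not figures from the study:

```python
def lif_neuron(input_spikes, threshold=1.0, leak=0.9, weight=0.4):
    """Integrate weighted input spikes; fire when the membrane
    potential crosses the threshold, then reset."""
    potential = 0.0
    output_spikes = []
    for spike in input_spikes:
        # Leaky integration: charge builds up but also decays over time
        potential = leak * potential + weight * spike
        if potential >= threshold:
            output_spikes.append(1)   # neuron fires its own signal
            potential = 0.0           # membrane potential resets
        else:
            output_spikes.append(0)   # neuron stays silent
    return output_spikes

# The neuron fires only after several closely spaced input spikes
# push its potential past the threshold:
print(lif_neuron([1, 1, 1, 0, 0, 1]))
```

The key property is that the neuron stores its state (the membrane potential) and processes inputs in the same place, communicating only when something changes.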
“The capabilities of the human brain have long been a source of inspiration for AI systems. Machine learning algorithms arise from the brain’s ability to learn and generalize from input data. Now we want to take it to the next level and recreate the brain’s efficient processing mechanisms,” said Adarsh Kosta, co-author and researcher at Purdue University.
AI on the fly
This neuro-inspired approach could enable AI applications to expand beyond large-scale data centers.
For example, an autonomous drone in a search and rescue scenario must sense its environment, identify and track objects, make decisions and plan its actions in real time. Relying on cloud-based computing can cause too much delay, and so these processes need to be run onboard as efficiently as possible.
In such scenarios, it is essential that computing systems are lightweight and power-efficient. One efficiency gain comes from drones’ use of event-based cameras. Unlike video cameras that record a steady stream of images, these sensors only send data when there is a sufficient change in the pixels.
Event-based cameras use less data and power; however, their intermittent and time-dependent outputs are not well suited to traditional processing units. SNN algorithms, much like the brain, are very efficient at responding to sequences of events, making the most of these sparse signals.
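To illustrate the sparsity described above, here is a hypothetical sketch assuming events arrive as (timestamp, x, y, polarity) tuples, a common convention for event-based cameras. Work is done only when an event occurs, rather than for every pixel of every frame:

```python
def count_active_pixels(events, window=(0.0, 1.0)):
    """Accumulate events within a time window; pixels that never
    change cost no data transfer or processing at all."""
    start, end = window
    active = set()
    for t, x, y, polarity in events:
        if start <= t < end:
            active.add((x, y))
    return len(active)

# A sparse event stream: only two pixels ever changed brightness
events = [
    (0.10, 4, 7, +1),   # brightness increased at pixel (4, 7)
    (0.35, 4, 7, -1),   # brightness decreased at the same pixel
    (0.60, 9, 2, +1),
]
print(count_active_pixels(events))
```

A frame-based camera would instead transmit every pixel at every timestep; the event representation is what lets an SNN stay idle between changes.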
This approach could allow the drone to be more efficient or have a longer range. Greater efficiency would also benefit AI applications in a wide range of areas, such as transportation or medical devices.
“AI is one of the most transformative technologies of the 21st century. However, to move it out of data centers and into the real world, we need to dramatically reduce its power consumption. With less data transfer and more efficient processing, AI can fit into small, affordable devices with batteries that last longer,” said Tanvi Sharma, co-author and researcher at Purdue University.
Hardware solutions
Successful application of SNNs will require specialized hardware capable of overcoming the memory wall.
Computing-in-memory (CIM) systems perform calculations where the data is stored, reducing costly data movement. This is ideal for SNN algorithms, which must repeatedly refer to memory to update and verify membrane potentials over time.
There are two main ways to achieve this. Analog methods use electrical currents flowing through memory cells to perform calculations. Digital methods use standard digital logic (0s and 1s) inside or adjacent to the memory array. Digital is more precise but consumes more energy than analog.
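The analog approach can be sketched in miniature. In a memory crossbar, weights are stored as conductances, inputs are applied as voltages, and Ohm’s and Kirchhoff’s laws yield the multiply-accumulate result as column currents, so the computation happens where the data is stored. The numbers below are illustrative assumptions:

```python
def crossbar_mac(conductances, voltages):
    """Column currents I_j = sum_i V_i * G[i][j]: each column of the
    crossbar performs one multiply-accumulate in place."""
    n_rows = len(voltages)
    n_cols = len(conductances[0])
    currents = [0.0] * n_cols
    for i in range(n_rows):
        for j in range(n_cols):
            currents[j] += voltages[i] * conductances[i][j]
    return currents

G = [[0.5, 0.1],
     [0.2, 0.3]]       # stored weights, encoded as conductances
V = [1.0, 2.0]         # input activations, applied as voltages
print(crossbar_mac(G, V))
```

In real analog hardware this sum emerges physically from currents adding on a shared wire, which is why it is fast and low-power but less precise than the digital equivalent.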
There are several potential technologies that could deliver CIM systems, but none present a clear winner in all cases. Instead, the authors emphasize the value of combining approaches and designing algorithms, circuits, and memory together, so that each application uses the most appropriate building blocks.
“Co-designing hardware and algorithms is the only way to break the memory wall and deliver fast, lightweight, low-power AI,” Roy said. “This collaborative design approach could also create much more versatile platforms by switching between traditional AI networks and neuro-inspired networks depending on the application.”
—
The article is part of the Frontiers in Science multimedia article hub ‘Towards next-generation artificial intelligence hardware’. The hub features an explainer, an editorial, a policy outlook, and a version of the article for children, with contributions from other eminent experts: Prof R. Stanley Williams (Texas A&M University, USA) and Dr Vilas Dhar (Patrick J. McGovern Foundation, USA).
About
Frontiers in Science is Frontiers’ open-access, multidisciplinary journal focused on transformational science aimed at accelerating solutions for healthy lives on a healthy planet.
The journal publishes a number of exceptional, peer-reviewed primary articles from internationally renowned researchers whose work addresses key global challenges in human and planetary health. Each main article is enriched with diverse content that extends its reach and impact across society – from researchers and policymakers to lay audiences and children.
For more information, visit www.frontiersin.org/science and follow @FrontScience on X, Frontiers in Science on LinkedIn, and Frontiers on Bluesky.
REPUBLISHING GUIDELINES: Open access and sharing of research are part of Frontiers’ mission. Unless otherwise noted, you may republish articles posted on the Frontiers news site, as long as you include a link to the original research. Selling the articles is not permitted.