September 7, 2024
Tech

Neuromorphic Computing: Mimicking the Human Brain in Silicon

Neuromorphic computing allows pattern recognition and classification of data that traditional computers cannot achieve, and it lets the ‘processor’ do what the neurons in a real brain do: process sensory inputs. For that reason, no one should assume that whatever intelligence such a machine arrives at will look remotely human. Expect machines that learn to find each other, work together in groups akin to swarms of insects or flocks of birds, and tackle tasks we cannot, because where we ‘think’ about a problem, these machines learn from experience. These creatively structured computers have come to be known as neuromorphic computers.

For example, IBM’s TrueNorth chip, with just 1 million programmable neurons and 256 million synapses, has been reported to deliver performance 160 times greater, and 120,000 times more energy-efficient, than a GPU.
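
A quick back-of-the-envelope check of the average connectivity per neuron, using only the figures quoted above (a hypothetical Python snippet, not a description of TrueNorth’s actual core layout):

neurons = 1_000_000
synapses = 256_000_000

# Average fan-in implied by the quoted figures: each neuron is wired,
# on average, to roughly 256 synapses.
print(synapses / neurons)   # -> 256.0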

Event-driven computation

Because neuromorphic chips are event-driven, their neurons and synapses do work only when an event (a spike) occurs, which gives them very low latency and much faster computation than von Neumann computers on these workloads. Traditional CPUs, by contrast, operate in a predictable, clocked fashion: data is fetched over a single path between memory and processor and worked on until the task is complete. That is fine for workloads with long, steady time scales, but less so for brain-inspired tasks that have to adapt rapidly and then keep adapting over time. Their ability to process and recognise patterns better than modern processors also makes neuromorphic chips especially appropriate for IoT devices, which rely largely on real-time data from sensors. In addition, their low latency allows neuromorphic chips to adapt quickly to changing stimuli, as humans can, providing the kind of almost instant feedback and learning that current processors cannot sustain. Neuromorphic chips can quickly identify and respond to unexpected situations, such as a self-driving car reacting to the brakes being applied on a nearby vehicle; they could also be used to respond rapidly to attacks or other security breaches (such as the WannaCry attack in May 2017, or the Hidden Cobra attacks by North Korea on US entertainment media in 2014).
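
To make ‘event-driven’ concrete, here is a minimal sketch, in plain Python with entirely illustrative constants, of a leaky integrate-and-fire neuron that is updated only when an input spike arrives, rather than on every clock tick the way a conventional processor would be:

import math

TAU = 20.0          # membrane time constant (ms); illustrative value
THRESHOLD = 1.0     # firing threshold; illustrative value
WEIGHT = 0.4        # synaptic weight of the single input; illustrative value

def simulate(spike_times_ms):
    """Process input spikes as events; no work is done between spikes."""
    v, last_t = 0.0, 0.0
    output_spikes = []
    for t in spike_times_ms:
        v *= math.exp(-(t - last_t) / TAU)   # membrane decay since the last event
        v += WEIGHT                          # integrate the incoming spike
        if v >= THRESHOLD:                   # fire and reset
            output_spikes.append(t)
            v = 0.0
        last_t = t
    return output_spikes

print(simulate([1.0, 3.0, 4.0, 60.0, 61.0, 62.0]))   # -> [4.0, 62.0]

Between the burst of inputs at 1, 3 and 4 ms and the later burst at 60 to 62 ms the neuron simply does nothing, which is where the latency and energy advantages come from.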

High in adaptability and plasticity

Neuromorphic computing systems are highly adaptable and plastic, so they can be applied to machine-learning tasks such as identifying patterns in natural language or speech; processing medical images; interpreting signals from functional magnetic resonance imaging (fMRI) brain scans or electroencephalogram (EEG) tests; and processing data from remote sensors, drones and other devices on the ‘Internet of Things’. Their fault tolerance and energy efficiency also make them ideal for edge AI applications, which must run on low power to preserve battery life on mobile devices while keeping production costs and processing times down. But because neuromorphic hardware performance is limited by how well spiking neural network (SNN) algorithms map onto it, SNN-based applications can be harder to build when the objective is to optimise the energy efficiency and battery use of mobile devices, or to run workloads that are not machine learning at all. To address this, co-design methodologies should start at the hardware substrate and work upwards through the algorithms and applications before the final design is fixed.
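
One common way such plasticity is realised in spiking systems is spike-timing-dependent plasticity (STDP), where a synapse strengthens if its input spike precedes the output spike and weakens otherwise. The sketch below is a generic pair-based STDP rule in Python with illustrative constants, not the learning rule of any particular chip:

import math

A_PLUS, A_MINUS = 0.01, 0.012       # learning rates (illustrative)
TAU_PLUS, TAU_MINUS = 20.0, 20.0    # plasticity time constants in ms (illustrative)

def stdp_update(weight, t_pre, t_post):
    """Pair-based STDP: potentiate if pre fires before post, otherwise depress."""
    dt = t_post - t_pre
    if dt > 0:       # causal pairing -> strengthen the synapse
        weight += A_PLUS * math.exp(-dt / TAU_PLUS)
    else:            # anti-causal pairing -> weaken the synapse
        weight -= A_MINUS * math.exp(dt / TAU_MINUS)
    return min(max(weight, 0.0), 1.0)   # keep the weight in [0, 1]

w = 0.5
w = stdp_update(w, t_pre=10.0, t_post=14.0)   # pre before post: weight increases
w = stdp_update(w, t_pre=30.0, t_post=26.0)   # post before pre: weight decreases
print(round(w, 4))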

High in energy efficiency

Neuromorphic chips consume minuscule amounts of power because they only compute when spikes arrive at neurons or synapses (i.e., they are effectively off when not in use, unlike conventional computers that draw power continuously), and their extremely low latency makes them well suited to applications that depend on real-time sensor processing, such as Internet of Things (IoT) devices and autonomous driving. Neuromorphic hardware is built from brain-inspired circuits that encode data in discrete spikes, much as the brain’s spiking neurons encode information in the presence or absence of a pulse, rather than the binary 1s and 0s of typical von Neumann computers. Unfortunately, what is still missing are the programming languages and software development platforms to take advantage of this; until they arrive, the neuromorphic computing revolution will remain at a standstill. For example, Intel’s neuromorphic hardware, the Loihi chip, has been shown to achieve 16-fold energy efficiency over existing hardware for ML inference and online, real-time processing of sensor data. More recently, photonic integrated circuits and mixed-signal neuromorphic System-on-Chip (SoC) designs, hybrids that combine ultra-low-power analogue processing for inference with real-time learning in an SNN architecture on the same chip, are being investigated for a new generation of hardware.
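
To make the contrast with binary 1s and 0s concrete, one simple spike-encoding scheme is rate coding, in which the size of an analog input sets how often a neuron fires. The Python sketch below is a toy illustration of that idea with made-up parameters, not the encoding used by any specific chip:

import random

def rate_encode(value, n_steps=20, seed=0):
    """Encode a value in [0, 1] as a spike train: 1 = spike, 0 = silence."""
    rng = random.Random(seed)
    return [1 if rng.random() < value else 0 for _ in range(n_steps)]

# A strong sensor reading produces many spikes, a weak one almost none.
print(rate_encode(0.9))
print(rate_encode(0.1))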

High in memory capacity

Neuromorphic computing hardware can be instantiated using a wide range of devices, materials and topologies. One way forward exploits memristors with resistive memory to co-locate processing and storage, but other memristive devices, such as phase-change or ferroelectric memristors, or even quantum materials such as topological insulators, could also play a role. The optimal architecture for neuromorphic computing may involve many different connectivity patterns in the fabric, as well as myriad synaptic hyperparameters such as neuron thresholds, weight values and axonal delays. Just as the pioneers of neuromorphic computing showed that it can be more energy-efficient than classic von Neumann machines, and that spiking neural networks can outperform conventional neural networks on machine-learning tasks like image classification, benchmarks and challenge problems now need to be developed that are specific to these architectures, rather than comparing technologies only by application or dataset. Software and application programming interfaces should also be developed to enable these new architectures to be used across wider communities.
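
The appeal of a memristive crossbar is that the synaptic weights are stored as conductances at the same junctions where the multiply-accumulate happens: applying input voltages to the rows yields column currents that are already the weighted sums. The numpy sketch below models that idealised behaviour with made-up values; real devices add noise, drift and limited precision:

import numpy as np

# Idealised memristor crossbar: conductances G (in siemens) store the weights.
G = np.array([[1.0e-6, 5.0e-6, 2.0e-6],
              [3.0e-6, 1.0e-6, 4.0e-6]])   # 2 input rows x 3 output columns (illustrative)

v_in = np.array([0.2, 0.5])                # input voltages (volts)

# Ohm's law plus Kirchhoff's current law: each column current is a dot product,
# so the vector-matrix multiply happens "in memory" rather than in a separate ALU.
i_out = v_in @ G
print(i_out)                               # output currents in amperes, one per column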
