Super-intelligent, fast and energy-efficient: Neuromorphic computing is ushering in a new era of artificial intelligence

Chicago in the year 2035 – humanoid, intelligent robots are an integral part of everyday life: They help around the house – doing laundry, cleaning, and cooking. The 2004 science-fiction thriller “I, Robot”, starring Oscar winner Will Smith, offered a fictional glimpse into a high-tech future where androids and artificial intelligence (AI) are everywhere. What Australian director Alex Proyas portrayed more than 20 years ago as a futuristic Hollywood vision – robots coming to life as service-providing everyday heroes – is no longer a distant utopia for researchers today. Scientists are already looking ahead to a transformed world of tomorrow, in which cognitive algorithms and neuromorphic computing architectures enable smart AI functions that go far beyond the limits of today’s personal assistant tools. Inspired by nature, these systems will advance into entirely new dimensions of computing speed, intelligence, and energy efficiency – lightening our daily workloads while making our lives more efficient and convenient. With its research in neuromorphic computing, Fraunhofer ENAS is making key contributions to the emerging field of AI and its transformative impact on industry, business, and society.

“Today, it is second nature for us to use tools such as ChatGPT, translation services, or voice assistants. Powered by AI, they provide answers to all sorts of questions spanning a wide range of topics in a matter of seconds: They research, formulate and calculate, provide inspiration, or make learning effortless,” says Dr. Sven Zimmermann, head of the “Nanodevices/PVD” group at Fraunhofer ENAS, who conducts research on neuromorphic computing at the institute in Chemnitz.

 

A look back at 100 years of history: Milestones and highlights in AI

AI has long been part of our everyday lives – and not just for the past three decades, notes the researcher. Almost 100 years ago, British mathematician and computer scientist Alan Turing laid the first foundations for intelligent systems capable of processing cognitive algorithms. In the 1950s, he developed the so-called “Imitation Game”, now known as the “Turing Test”, setting important milestones for AI.

“His method, a question-and-answer game between human and machine, was intended to determine whether computers could think like humans. If the questioner could not clearly tell whether the counterpart was a real person or a computer-based ‘conversation partner’, the computer was deemed to have passed the test – and, by the standards of the time, to be intelligent,” explains Sven Zimmermann.

Almost half a century later, in 1997, IBM’s chess computer “Deep Blue” defeated reigning world chess champion Garry Kasparov in a sensational match. “This match marked a turning point and brought AI into the public eye for the very first time. A computer that could act strategically and evaluate millions of chess positions per second thanks to sophisticated search and evaluation algorithms was almost unimaginable at the time – and it impressively demonstrated the enormous potential of AI,” explains the Fraunhofer researcher.

Some 15 years after this historic victory, tech company Apple introduced a smart new feature with its voice assistant “Siri” (Speech Interpretation and Recognition Interface), launched in 2011 as part of the then-latest iPhone. Siri was able to answer all kinds of everyday questions through voice commands – making AI accessible to consumers in their daily lives.

Just a few years later, IBM returned with its AI-based “Project Debater”, competing against two human opponents – this time in a publicly staged debate against professional debaters. The AI supercomputer was able to scan millions of sources and databases on topics such as space research and telemedicine, process the information, independently gather arguments, combine them into a coherent speech, and respond to counterarguments from its opponents.

Earlier this year, it was announced that modern large language models – OpenAI’s GPT and Meta’s LLaMA – had passed the “Turing Test” named after Alan Turing: In the study, participants could not reliably distinguish the models from a human conversation partner.

“The foundations Alan Turing laid in the 1950s have been continuously developed over the decades and have long since become part of our daily lives: Modern robot vacuum cleaners, for example, have long been helping us in everyday life – navigating through our living rooms, recognizing obstacles with the help of AI, or being activated by voice command. Autonomous driving also relies on AI, which helps vehicles to navigate traffic automatically, identify road signs, and brake safely in dangerous situations. The intelligent linking and real-time processing of sensor-generated data into an interpretable result lead to an AI-based decision – such as initiating a braking maneuver. Robots – or more precisely, AI-powered assistants like in the movie ‘I, Robot’ – have not been science fiction for years now, but have become firmly established in our reality,” says Sven Zimmermann.

 

Neuromorphic computing: Memristors as the backbone of future AI systems

For AI systems and computing architectures to become even more powerful in the future – capable of processing and interpreting ever larger volumes of data – suitable technological frameworks will be essential. “Computers as we know them today, based on the ‘von Neumann architecture’, execute commands sequentially and deliver a pre-programmed output in response to a user request. However, they are poorly suited to keeping pace with modern, highly complex AI models, which enable intuitive processing and significantly faster output of information based on vast numbers of learned patterns. Such parallel data processing with established computer architectures would require enormous amounts of energy and vast server farms – and could completely deplete today’s available energy reserves within just a few years,” explains the scientist.

Researchers at Fraunhofer ENAS are working on the next evolutionary stage of computer technology, with operating principles adapted to the requirements of efficient and energy-saving AI. Inspired by nature, they are placing neuromorphic computing at the center of future developments.

One of these promising approaches emulates the structure and functions of the human brain as a building block for the next generation of AI-based computers. This engineering model is based on artificial neural networks that, like biological nerve cells – the neurons of the human brain – receive, process, and transmit information as impulses.

“The neurons in our brain are interconnected by biological synapses. These synapses act as a form of biological memory, changing their state over time with neurological activity. This is how humans learn. The entirety of all neural connections and synaptic states in the brain is referred to as the ‘connectome’. This is unique to each individual and represents the sum of innate and learned behaviors, experiences and knowledge. It forms the foundation of a person’s individuality and character. The synaptic contact point between two neurons enables the transmission of information and communication between neurons. We are striving to replicate this mechanism technologically in neuromorphic computing,” says Sven Zimmermann.

To this end, researchers at Fraunhofer ENAS are using nanoionic components – memristors – that behave similarly to biological synapses. The term memristor is a blend of the words “memory” and “resistor”. Because they change their resistance in a non-volatile manner depending on the intensity and duration of the current flowing through them, memristors can both store and process data.

“A key property of memristors – indispensable for storing and processing information and for learning in neuromorphic computing – is their synaptic plasticity (spike-timing-dependent plasticity, STDP), a property also found in the human brain. Through time-dependent synaptic stimulation – triggered by electrical impulses known as spikes – the activity and communication between neurons in the human brain increase, forming specific synaptic connections. These are responsible for the brain’s memory and learning capacity, enabling us to intuitively fill in missing information or navigate unfamiliar situations using learned problem-solving strategies,” explains the researcher. In an artificial neural network based on memristors, large training datasets are used to train the network on specific problems. In the subsequent application phase, the network can respond extremely quickly and with sufficient precision to input patterns using what it has learned, make meaningful decisions even under uncertain data conditions, and perform calculations far beyond the training dataset.  
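The timing rule described above can be sketched in a few lines of Python. This is a minimal, illustrative pair-based STDP model – the exponential form and all parameter values are generic textbook assumptions, not Fraunhofer ENAS device data – in which a bounded scalar weight stands in for a memristor’s conductance:

```python
import math

def stdp_update(weight, dt, a_plus=0.1, a_minus=0.12, tau=20.0,
                w_min=0.0, w_max=1.0):
    """Pair-based STDP rule: dt = t_post - t_pre in milliseconds."""
    if dt > 0:
        dw = a_plus * math.exp(-dt / tau)    # pre before post: potentiate
    else:
        dw = -a_minus * math.exp(dt / tau)   # post before pre: depress
    # Clip to the device's conductance window, like a real memristor
    return min(w_max, max(w_min, weight + dw))

w = 0.5
w_pot = stdp_update(w, dt=5.0)    # pre fired 5 ms before post: weight grows
w_dep = stdp_update(w, dt=-5.0)   # post fired 5 ms before pre: weight shrinks
```

Repeated pre-before-post pairings drive the weight toward its upper bound, mirroring how repeated stimulation strengthens a biological synapse.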

At Fraunhofer ENAS, researchers are developing memristive synapses, for example using BiFeO₃ technology. A distinctive feature of these memristors, based on bismuth ferrite (BiFeO₃, a bismuth iron oxide), is their pronounced biorealism and spectral sensitivity to light, which also makes it possible to replicate the human retina. This makes neuromorphic computers based on memristive synapses ideally suited for image and pattern recognition. “Neuromorphic computers equipped with these memristor-based synapses could, for example, support police work. So-called ‘super recognizers’, who work in state criminal investigation departments with their exceptional facial recognition ability, are able to identify suspicious individuals across hundreds of surveillance camera images. AI-supported image recognition could meaningfully complement the skills of facial recognition experts, making the identification of criminals significantly faster and even more accurate,” says Sven Zimmermann confidently.

 

Complex arrangements for complex tasks: Memristors as crossbar arrays

According to the researcher, a single memristor is not capable of handling complex tasks such as facial recognition. For such applications, arranging memristors in what is known as a crossbar array architecture is advantageous. In this special architecture, memristive layers are sandwiched between two levels of parallel, strip-shaped electrode structures oriented at a 90° angle to each other. Each resulting intersection point corresponds to one memristive cell. The crossbar array is the densest way to arrange large numbers of memristors on a chip with limited space.
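The computational appeal of this geometry can be illustrated with a short sketch (the conductance and voltage values below are arbitrary, assumed numbers): applying voltages to the row electrodes and reading the summed currents on the column electrodes performs an entire vector-matrix multiplication in one analog step.

```python
import numpy as np

# Conductance matrix G: one memristive cell per crossbar intersection
# (4 input rows x 3 output columns; illustrative values in siemens).
G = np.array([[1.0, 0.2, 0.5],
              [0.3, 0.9, 0.1],
              [0.7, 0.4, 0.6],
              [0.2, 0.8, 0.3]]) * 1e-6

# Input vector encoded as row voltages (volts)
V = np.array([0.1, 0.0, 0.2, 0.1])

# Ohm's law at each cell (I = G * V) plus Kirchhoff's current law on
# each column wire sums the cell currents: the column readout is the
# vector-matrix product, obtained in a single analog step.
I = V @ G
print(I)  # one summed output current per column
```

In a digital computer, the same product costs one multiply-accumulate per cell; in the crossbar, physics does all of them in parallel, which is what makes the architecture attractive for neural-network inference.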

“Embedded in a neuromorphic computer, around 1,024 individual memristive cells would be needed to implement even simple speech recognition. The more demanding the requirements placed on the neural computer, the more complex the structure of the electrode layers and memristive material must be. To solve highly complex tasks, tens of thousands of such cells would need to be combined in a sophisticated architecture.”

To handle such tasks, which require computationally intensive mathematical operations, the researchers rely on a combination of crossbar array structures and programmable integrated circuits known as field-programmable gate arrays (FPGAs). This combination significantly accelerates computing speed, especially for difficult and lengthy operations that would otherwise require vast amounts of time and energy using conventional digital computing. With the help of these architectures, also known as “memristive hardware accelerators,” not only is the highly parallel processing of vast amounts of information – typical of neuromorphic computing – achieved at ultra-fast speeds, but the unique coupling of crossbar arrays with FPGAs also helps to significantly reduce energy consumption compared with previous generations of computers.

 

Nanotechnology in action: Recreating brain structures with artificial neurons

An alternative approach to memristor-based artificial neural networks is the use of so-called leaky integrate-and-fire neurons (LIF neurons). With these artificial nanotechnological neurons, the structures of the human brain can be modeled even more faithfully than with today’s approaches. What makes this approach special: Not only the synapses but also the neurons are realized as nanotechnological components.

A key feature is that not all impulses arriving at a neuron are passed on directly to the next one. When LIF neurons are stimulated by incoming spikes, they transmit an electrical signal only once a certain threshold within the neuron has been exceeded. Artificial neural networks (ANNs) based on this kind of impulse transmission are referred to as spiking neural networks (SNNs). Because only those neurons whose firing threshold has been exceeded are active, this approach can save a vast amount of energy in complex computing tasks compared with conventional artificial neural networks – which makes SNNs particularly attractive for mobile applications such as drones. These systems are also capable of self-organization and of generating their own input stimuli through corresponding feedback mechanisms. The ability of SNN technology to perform “unsupervised learning” – detecting patterns in data without significant human intervention – is regarded as essential for the eventual development of consciousness and the associated capacity to perceive reality within a technical system.
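A minimal software model of a single LIF neuron illustrates the threshold behavior described above. All parameters (time constant, threshold, reset value) are generic illustrative choices, not values from any Fraunhofer ENAS hardware:

```python
def lif_simulate(input_current, dt=1.0, tau=10.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron, forward-Euler integration.

    The membrane potential leaks back toward v_rest, integrates the
    input current, and emits a spike only when it crosses v_thresh.
    """
    v = v_rest
    spikes = []
    for i_in in input_current:
        # Leaky integration: decay toward rest plus driven input
        v += dt / tau * (-(v - v_rest) + i_in)
        if v >= v_thresh:
            spikes.append(1)   # fire ...
            v = v_reset        # ... and reset
        else:
            spikes.append(0)
    return spikes

# A constant drive above threshold yields a regular spike train:
train = lif_simulate([1.5] * 50)
print(sum(train), "spikes in 50 time steps")
```

Sub-threshold input leaves the neuron silent – and in a hardware SNN, silent neurons consume essentially no dynamic energy, which is where the savings come from.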

 

Vision or reality? A look into the future of AI

“It will still be a long road before strong AI emerges – AI that can think independently, analyze, interpret, decide, create, learn, and develop problem-solving strategies like humans. After all, the human brain has 86 billion neurons, each with tens of thousands of synapses. That amounts to hundreds of trillions of synapses in total. Replicating and modeling this complex human system technologically will remain a challenge for future scientific research. This vision may only come within reach toward the end of this century,” says Sven Zimmermann.

 

Our offering: Your partner for AI research and development

With its long-standing and comprehensive expertise, Fraunhofer ENAS supports technology development in the field of AI-based systems and services. As an innovation partner, the institute conducts research on groundbreaking technological solutions that enable neuromorphic computers to operate with maximum intelligence, efficiency and speed.

Fraunhofer ENAS is developing innovative thin-film technologies and architectures that replicate functions of the human brain with nanotechnological components. The institute’s expertise ranges from the fabrication of memristive components and crossbar arrays to their integration into existing technologies, and extends to the characterization and simulation of novel material concepts for neuromorphic applications.


Our offering in detail:

  • Fabrication of memristive components based on BiFeO₃ and TiO₂, as individual components and crossbar arrays
  • Integration of memristive components into existing technological concepts, including adaptation of material properties, layout and fabrication technologies
  • Test strategies for memristive devices for the characterization of memristors at the wafer level as well as in crossbar structures
  • Development of zero-energy sensors with memristive memory
  • Development of components for neuromorphic circuits, including circuit design
  • Investigation of new material systems with memristive behavior

If you would like to usher in a new technological era with us – with innovative components, architectural technologies or material concepts – then contact us today.

This might also interest you

Memristive Devices

Chemnitz Seminar

The Chemnitz Seminar »Advanced Functional Materials and Methods for Neuromorphic Computing and Memory« will take place on November 25 and 26, 2025. Registration is open.