

IISc scientists enhance AI with revolutionary neuromorphic computing

Researchers have moved beyond traditional silicon-based technology by creating a metal-organic film memristor. The device processes information through networks of artificial neurons and synapses, mimicking how the biological brain functions.

EPN Desk 12 September 2024 12:31

Team behind the revolutionary invention

Researchers at the Indian Institute of Science (IISc), Bengaluru, have reported a significant advance in neuromorphic, or brain-inspired, computing technology. The development could give India a competitive advantage in the global AI race and drastically change the AI computing landscape by moving away from the current "cloud computing" model, which relies on large, energy-intensive data centers, toward "edge computing," in which AI runs directly on a laptop, mobile phone, or other personal device.

The breakthrough, published in the journal Nature, was made by a team of researchers and students at IISc's Centre for Nano Science and Engineering (CeNSE), under the direction of Prof. Sreetosh Goswami.

Instead of relying on traditional silicon-based technology, they created a metal-organic film version of the memristor, a kind of semiconductor device.

This material allows the memristor to process information using networks of artificial neurons and synapses, much as the biological brain does.

Prof. Sreebrata Goswami, a visiting scientist at CeNSE and the father of Sreetosh, was the "molecular mastermind" behind the discovery.

When combined with a traditional digital computer, the memristor array improves speed and energy efficiency hundreds of times over, turning the system into a highly energy-efficient "AI accelerator."

Prof. Navkanta Bhat designed the silicon circuitry for the project, which pairs a 64x64 memristor array with a traditional 65 nm-node silicon processor.

Thanks to this technology, large-scale and complex AI tasks, such as training large language models (LLMs), may eventually be possible on a laptop or smartphone instead of a data center.

Today's digital computing, built on silicon transistor technology and the conventional von Neumann architecture (the input-storage/memory-output model of computing), is approaching its limits of processing speed and energy efficiency.

One part of the challenge stems from the binary nature of digital computing: all information must be encoded and processed as 0s and 1s, and massive computational tasks (the proverbial elephant) must be broken down into a vast number of tiny steps.

The other part of the problem is that the processor and memory are physically separate units, so data must be shuttled back and forth between them to be processed, which caps computing speed and imposes a significant energy penalty.

So far, the first difficulty has been addressed by brute force: building processors capable of performing the vast number of required operations at high speed. The second issue, however, the separation of processor and memory, remains a bottleneck across the entire computing landscape.
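
To see why this separation matters, here is a toy accounting of the memory traffic behind a simple dot product. This is a minimal illustrative sketch, assuming a naive model in which every operand crosses the memory-processor boundary once per use; the counts are not measurements of any real CPU.

```python
# Toy model of the von Neumann bottleneck: every operand must be fetched
# across the memory-processor boundary before it can be used.
# Purely illustrative; not a measurement of any real CPU.

def dot_product_traffic(n: int) -> tuple[int, int]:
    """Count arithmetic ops and memory transfers for an n-element dot product."""
    arithmetic_ops = 2 * n        # n multiplies + n additions
    memory_transfers = 2 * n + 1  # fetch a[i] and b[i] each step, store the result
    return arithmetic_ops, memory_transfers

ops, transfers = dot_product_traffic(64)
print(f"arithmetic ops:   {ops}")        # 128
print(f"memory transfers: {transfers}")  # 129 -- and each off-chip transfer
                                         # costs far more energy than the
                                         # arithmetic it feeds
```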

This bottleneck is what drives the demand for enormous amounts of processing power, and hence the heat generated by computing infrastructure.

As a result, AI systems, which involve feeding tremendous amounts of data into computers and processing it, can currently be built only in the 'cloud', a pleasant but misleading term for large data centers that consume massive amounts of energy, much of it simply to keep the computing infrastructure cool.

It is anticipated that if computing continues on this trend, the electricity required to run AI data centers will exceed the world's power generation capacity by 2050.

To address these issues, computer chip manufacturers have adopted a strategy known as in-memory processing, which integrates memory silicon with computation silicon.

However, this approach has not made a substantial impact because it still relies on silicon transistors and digital processing.

IBM and China's Tsinghua University have attempted to mimic the brain using silicon technology with their TrueNorth and Tianjic processors, respectively, but without achieving significant results.

That is why the IISc innovation could have worldwide implications. Prof. Sreetosh's team applied the metal-organic film approach to memristors, which also compute in memory, but do so the way the brain's neuron-synapse circuits do.

“Through the 2010s, large companies tried to mimic the brain while sticking to silicon transistors. They realized that they were not making significant gains. In the 2020s, the research investments are moving back to academia because there is a realization that we need much more fundamental discoveries to actually achieve brain-inspired computing,” Prof Sreetosh Goswami said.

“If you just take a brute-force approach to use transistors and enforce certain algorithms, that’s not going to work,” he added.

Our analog brain works differently from a digital computer. In the brain, memory and processing are not separate functions. Nor does it handle information as 0s and 1s, or rely on breaking the proverbial elephant into minute parts.

Instead, it takes in and processes large amounts of data at once, drastically reducing the number of steps required to arrive at an answer. That is what makes the brain so remarkably energy-efficient.

Neuromorphic computing is fast and efficient because it combines analog computing with brain-mimicking memristor technology.

Prof. Sreetosh Goswami and his team developed a device that can store and process data in an astonishing 16,520 states at once, reducing the number of steps required to multiply two 64x64 matrices, the fundamental math behind AI algorithms, to just 64, where a digital computer would perform 262,144 operations for the same task.
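
The arithmetic behind those numbers is easy to verify. Multiplying two 64x64 matrices digitally takes 64 x 64 x 64 = 262,144 multiply-accumulate operations, whereas a memristor crossbar produces an entire matrix-vector product in a single analog step (input voltages drive currents that sum along the wires), so the full product needs just 64 steps, one per input column. The NumPy sketch below only mimics this counting in software; it is not the team's hardware.

```python
import numpy as np

N = 64
A = np.random.rand(N, N)  # weight matrix, stored as crossbar conductances
B = np.random.rand(N, N)  # input matrix, applied column by column as voltages

# Digital baseline: every scalar multiply-accumulate is a separate operation.
digital_macs = N * N * N  # 64 * 64 * 64 = 262,144 operations

# Analog crossbar (conceptual): driving one input column as voltages yields
# all 64 output currents at once, so one column = one physical step.
analog_steps = 0
C = np.empty((N, N))
for j in range(N):
    C[:, j] = A @ B[:, j]  # in hardware: I = G @ V, a single analog read
    analog_steps += 1

print(f"digital MACs: {digital_macs}")  # 262144
print(f"analog steps: {analog_steps}")  # 64
assert np.allclose(C, A @ B)
```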

The team plugged the device into a conventional desktop computer and used it for advanced space image processing, including recreating NASA's classic "Pillars of Creation" image from data captured by the James Webb Space Telescope.

It did so in a fraction of the time and energy that standard computers would require. The team reports an energy efficiency of 4.1 tera-operations per second per watt (TOPS/W), which is "220 times more efficient than an NVIDIA K80 GPU, with considerable room for further improvements."
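
A quick back-of-envelope check, using only the figures quoted in the article (the K80 baseline is implied, not stated):

```python
# Derive the GPU baseline implied by the article's own numbers.
# Assumed inputs: 4.1 TOPS/W for the memristor system, a 220x advantage.
device_efficiency = 4.1  # tera-operations per second per watt (TOPS/W)
claimed_advantage = 220  # "220 times more efficient than an NVIDIA K80 GPU"

implied_k80_efficiency = device_efficiency / claimed_advantage
print(f"implied K80 efficiency: {implied_k80_efficiency:.4f} TOPS/W")  # ~0.0186
```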

The cutting-edge research, funded by the Ministry of Electronics and Information Technology (MEITY), also involved research students Deepak Sharma, who performed circuit and system design and electrical characterizations; Santi Prasad Rath, who handled synthesis and fabrication; Bidyabhusan Kundu, who tackled mathematical modeling; and Harivignesh S, who crafted bio-inspired neuronal response behavior.

Collaborations with Prof Stanley Williams at Texas A&M University and Prof Damien Thompson at the University of Limerick enhanced their work.

"Neuromorphic computing has had its fair share of unsolved challenges for over a decade,” Prof Sreetosh Goswami said.

“When I first wrote to the editors of Nature to consider our submission, I listed six such challenges and said that it would be worth publishing if we solved even one of those challenges. But with our decade-long research and discovery, we have solved all six of them and almost nailed the perfect system," he added.

The team has already shown proof of concept, integrating the molecular-film memristor technology with traditional digital systems to function as an 'AI accelerator' using a 64x64 array.

They plan to develop larger arrays, up to 256x256. MEITY's funding is also intended to advance the research to the point of developing and demonstrating a system-on-chip solution, as well as incubating a start-up to take it to market.

Prof. Goswami plans to do so within the next three years. If the IISc team succeeds, India would have a strong contender in the global AI competition.
