New-Tech Europe Magazine | May 2018

Computation in Memory fits into a new concept of scaling that may bring huge, largely untapped energy savings

Francky Catthoor, imec

Many emerging applications will need the computing power that was typical of supercomputers a few years ago. But they require that power to be crammed into small, unobtrusive devices that use little energy and have a guaranteed, superfast response time. Using traditional scaling methods, scaling at the lowest levels of the hardware hierarchy as the industry has been doing for decades, we can still gain some ground, but not enough. So we will have to look at higher levels, developing technology that optimizes the performance and energy use of entire functions or applications. Called system-technology co-optimization (STCO), this approach is a huge and largely untapped territory. It will allow us to win back the energy efficiency that has been lost over the past decades, mainly because the industry has concentrated on the low-hanging fruit: dimensional scaling in conventional system architectures. It is estimated that STCO might bring energy savings of several orders of magnitude for a given level of performance. But because of the complexity and diversity of the solutions, getting there might take us a long time.

Too much data traffic

For many of today's applications, the biggest energy drain is the shuttling of data back and forth between the processor and the various levels of storage. That is especially so for applications that perform operations on huge datasets: think of the results of DNA sequencing, the links in a social media network, or the output of high-definition specialty cameras. For such data-dominated applications and systems, all this data shuffling imposes an energy cost that is orders of magnitude larger than the actual processing. Moreover, it may cause serious throughput problems, slowing computation down in sometimes unpredictable ways.
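To make that gap concrete, here is a minimal back-of-the-envelope sketch in Python. The energy figures are commonly quoted ballpark estimates and do not come from this article: roughly 0.1 picojoule for a 32-bit integer add, versus on the order of a nanojoule to fetch a 32-bit word from off-chip DRAM.

import_needed = None  # standard Python only, no imports required

# Assumed ballpark figures (not from the article):
E_ADD_PJ = 0.1        # energy per 32-bit add, in picojoules
E_DRAM_PJ = 1000.0    # energy per 32-bit DRAM access, in picojoules

ops = 1_000_000       # operations, each fetching one operand from DRAM
compute_energy = ops * E_ADD_PJ
traffic_energy = ops * E_DRAM_PJ
print(f"compute: {compute_energy / 1e6:.1f} uJ, "
      f"data movement: {traffic_energy / 1e6:.1f} uJ, "
      f"ratio: {traffic_energy / compute_energy:.0f}x")
# -> compute: 0.1 uJ, data movement: 1000.0 uJ, ratio: 10000x

Even if the assumed figures shift by a factor of a few from one technology node to another, the conclusion survives: for data-dominated workloads it is the memory traffic, not the arithmetic, that sets the energy bill.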

One way to overcome this would be to design hybrid architectures in which a number of operations are performed in the same physical location where the data are stored, without having to move the data back and forth. Examples are performing logic operations on large sparse matrices, error correction on wireless sensor data, or preprocessing raw data captured by an image sensor. Called 'Computation in Memory' or CIM, this idea has been around for some time; yet now it is ready to be taken seriously. An especially enticing proposition is to make use of the physical characteristics of the memory technology itself to do the computations, for example with resistive memory. Resistive memory works on the premise that a current will change the resistance of the memory element. Using two clearly distinct current levels, resistive memory can store a binary 0 or 1.
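As an illustration of the idea, the following Python sketch models an idealized resistive crossbar; it is a sketch under stated assumptions, not a description of imec's actual designs. A binary matrix is stored as low and high resistance states, and a matrix-vector product falls out of Ohm's and Kirchhoff's laws when the rows are driven with voltages and the cell currents are summed on the columns. The conductance values G_ON and G_OFF are hypothetical.

import numpy as np

# Hypothetical conductances for the two resistance states (siemens):
G_ON, G_OFF = 1e-4, 1e-7

def program_crossbar(bits):
    """Map a binary matrix onto per-cell conductances."""
    return np.where(bits, G_ON, G_OFF)

def crossbar_matvec(G, v_in):
    """Analog matrix-vector product 'in memory'.

    Voltages v_in drive the rows; by Ohm's law each cell passes a
    current G[i, j] * v_in[i], and by Kirchhoff's law each column
    wire sums those currents, so the column current vector is
    I = G.T @ v_in. The multiply-accumulate happens where the data
    is stored, with no operand traffic to a processor.
    """
    return G.T @ v_in

# Example: multiply a large sparse binary matrix by an input vector.
rng = np.random.default_rng(0)
W = rng.random((8, 4)) < 0.2     # sparse binary matrix stored in the array
G = program_crossbar(W)
v = rng.random(8) * 0.1          # row voltages (volts)
I = crossbar_matvec(G, v)        # column currents (amperes)
print(np.round(I / G_ON, 3))     # ~ W.T @ v, read back from the currents

Only the input voltages and the resulting column currents cross the memory boundary, which is exactly the reduction in data traffic that CIM is after.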

