A Low-Cost ANN-based Approach to Implement Logic Circuits on Memristor Crossbar Array

Document Type : Research Article

Authors

Department of Electrical Engineering, Faculty of Engineering, University of Kurdistan, Sanandaj, Iran

Abstract

The memristor crossbar array structure provides a low-cost and highly efficient platform for artificial neural network (ANN) implementation. On the other hand, the implementation of combinational logic circuits using memristor-based platforms has attracted great attention recently. However, the basic operations of a memristor are multiplication (Ohm’s law) and addition (Kirchhoff's circuit laws), which make the implementation of logical operations very complex. To overcome this problem, we propose an ANN-based synthesizer that first translates the combinational logic circuit's behavior into a neural network, which is then implemented using a memristor crossbar array. The proposed synthesizer includes a feature extractor and a multilayer perceptron (MLP) that classify the input vectors into the 0 or 1 group. The results show that the delay of an ANN-crossbar circuit is considerably lower than that of a circuit implemented by memristor-based logic gates. Although the accuracy of an ANN-crossbar circuit is below 100% owing to the inherent behavior of ANN-based applications, such circuits are useful in error-resilient systems such as image processing applications. Furthermore, these circuits are appropriate for advanced neuromorphic computers that rely on non-deterministic operations.



1. Introduction[1]

A combinational logic circuit serves as a hardware ‎realization of a Boolean function, wherein the ‎fundamental gates (such as NAND, NOR, and NOT) are ‎interconnected in a multilevel arrangement to produce ‎the correct logic values at the circuit's outputs. The ‎outputs of a combinational circuit, at any given moment, ‎solely depend on its current inputs. In Fig. 1, a ‎Boolean function is depicted alongside its corresponding ‎combinational logic circuit. Combinational logic ‎synthesis involves the conversion of circuit behavior (expressed through Hardware Description Language or ‎Boolean function) into a combinational logic circuit. ‎Various synthesizers have been developed and integrated ‎into computer-aided design (CAD) tools for integrated ‎circuits [1, 2, 3].‎

Chua introduced the memristor (a nonlinear resistor with memory) in 1971 as the fourth primary circuit element, alongside the resistor, capacitor, and inductor [4]. The symbol of a memristor, along with a hypothetical characteristic curve, is shown in Fig. 2. The resistance (conductance) of a memristor depends on the complete history of its current (voltage), as expressed in Equations (1) and (2):

v(t) = M(q(t)) · i(t)     (1)

M(q) = dφ(q)/dq     (2)

where q is the electric charge, φ is the flux linkage, and M(q) is the memristance.

Memristor crossbar arrays find application in the realization ‎of neural networks [5, 6, 7, 8, 9, 10]. ANNs comprise neurons linked ‎by weighted interconnections. The memristors within ‎the crossbar rapidly execute multiplication and ‎summation tasks for each neuron. Furthermore, the ‎efficient storage of weights in the ANN is easy to realize using memristors.‎

In the classical Von Neumann architecture, processing ‎and memory units are distinct and realized through ‎separate integrated circuits, which results in significant ‎latency and increased energy consumption when data ‎exchange occurs between the two units. Opting for the ‎memristor as the fundamental element in circuit design ‎allows the integration of both computation and storage ‎into a singular unit.

Fig. 1. (a) LUT and (b) schematic of a combinational logic circuit with 3 inputs and 1 output

Fig. 2. (a) The symbol of a memristor and (b) its hypothetical characteristic curve

Consequently, the memristor-based ‎design enhances energy efficiency and accelerates the ‎speed of integrated circuits by eliminating the need for ‎data exchange between processing and memory units. ‎Moreover, the in-memory computing capability of ‎memristors introduces new possibilities for the efficient ‎implementation of artificial intelligent systems [11-15]. Recently, memristor-based circuits have been ‎employed to realize various applications within neural ‎networks [16, 17, 18].‎

The utilization of memristors for implementing ‎elementary logic gates faces a fundamental ‎drawback. The inherent operations in memristors ‎involve multiplication (governed by Ohm's law) and ‎addition (guided by Kirchhoff's circuit laws), thereby ‎introducing complexity into logical operations. The ‎realization of logic circuits, specifically logic gates, ‎through memristors (referred to as Conv-Mem circuits), ‎leads to substantial overhead in circuitry and ‎consequently diminishes the speed of the primitive ‎gates [19]. ‎

Stateful memristor-based circuits offer a diverse range of logic operations, most prominently demonstrated through the implementation of material implication (IMPLY) logic gates. This approach, pioneered by Kuekes et al. [32], involves two memristors and one resistor, forming the foundation for synthesizing Boolean logic operations. The IMPLY operation, expressed as q ← p IMPLY q, enables the conditional switching of memristors, providing a platform for constructing various logic gates. However, because one input is replaced by the output, logic cascading proved challenging, prompting solutions such as memristor-aided logic (MAGIC) by Kvatinsky et al. [20]. Separating the output cell from the input cells removes the need to copy inputs, thereby improving logic cascading. Furthermore, researchers such as Adam et al. and Huang et al. explored three-dimensional memristor crossbar arrays, demonstrating half-adder and full-adder operations. While these stateful logic operations mitigate data-overwriting issues, they often require additional logic steps, highlighting the complexity inherent in achieving efficient logic cascading [21, 22, 23, 24].

In parallel, stateful two-memristor logic operations, generalized and combined from various approaches, offer four symmetry-related possibilities. These operations involve the unconditional initialization of one input bit (TRUE or FALSE), followed by a conditional SET or RESET of the second input bit based on the state of the first. Extending this to three-memristor gates increases the possibilities to eight, incorporating conditional switching operations for different initial states of the input bits. This expanded repertoire of stateful logic gates operates sequentially in the time domain, a departure from the fixed spatial geometry of conventional complementary metal-oxide-semiconductor (CMOS) logic. The potential benefits of stateful logic include more efficient encoding and execution of general computations. Moreover, the inherent reconfigurability and defect tolerance of stateful logic make it a promising avenue for the future of memristor-based circuit design [25, 26, 7, 24].

This paper introduces a novel synthesizer (the ‎ANN-crossbar circuit) centered ‎around a memristor crossbar. The core idea of this innovative ‎approach involves transforming the look-up table (LUT) ‎of a combinational circuit into a classification problem. ‎As depicted in Fig. 1, the circuit's output is limited to ‎values of 0 or 1. Therefore, the combinational circuit can ‎be substituted with a binary classifier, which evaluates ‎the input vector and determines its affiliation with ‎particular classes. It has been verified that various types ‎of ANNs can efficiently ‎address the classification problem [27, 28].‎

Meanwhile, a memristor crossbar serves as a cost-efficient and high-performance hardware realization of ANNs. In light of this, our proposed synthesizer accepts the combinational circuit's LUT as input and transforms it into an ANN-based classifier. The resulting classifier is then implemented using the memristor crossbar.

The implemented circuit incurs some errors due to non-ideal classification by the ANN, which limits our synthesizer's application to error-resilient systems such as image-processing-based circuits. Furthermore, advanced neuromorphic computers rely on non-deterministic operations with simple processing units (neurons), meaning that some inaccuracy is inherent in neuromorphic computing systems.

The main contributions of this paper are ‎highlighted as follows:‎

‎1) An ANN-based synthesizer is implemented using ‎a memristor crossbar to synthesize small-scale and ‎large-scale combinational logic circuits.‎

‎2) The results show that the delay of an ANN-‎crossbar circuit is considerably lower than that of the ‎circuit implemented by memristor-based logic gates ‎‎(Conv-Mem circuit).‎

‎3) All ANN-crossbar circuits have the same delay, ‎regardless of the number of inputs and outputs and ‎their complexity.‎


The rest of this paper is organized as follows: ‎In section 2, the structure of different types of ANN ‎is explained, and their implementation using the ‎memristor crossbar is discussed. The implementation ‎of combinational logic circuits using memristor-based ‎logic gates is presented in section 3. The architecture ‎of the proposed ANN-based synthesizer and its ‎implementation by the memristor crossbar are ‎presented in section 4 and section 5, respectively. ‎The simulation results are reported in section 6, and ‎the paper is concluded in section 7.‎

 

2. Preliminaries

2.1. ANN

Research on ANNs draws inspiration from the human brain, a highly intricate, nonlinear, and parallel computing system. The human brain comprises numerous neurons interconnected through synapses. Each neuron consists of a cell body responsible for information processing, conveying information toward it (inputs) and away from it (outputs). An ANN contains hundreds or thousands of artificial neurons, termed processing units, interconnected through weighted links, with multiple input and output neurons. The input neurons receive diverse forms and structures of information, guided by an internal weighting system, and the neural network endeavors to learn from this information to generate an output [29].

A neural network used for classification problems is ‎generally composed of two types of neural layers: ‎The first type works as a feature extractor, and the ‎second one is used for classification purposes.

For the first part, the convolutional neural network (CNN) is a powerful tool, wherein several kernels (e.g., 5×5 matrices) are defined for various features. Each kernel is swept across the input matrix and convolved (dot product) with a specific region at every position. Thus, the input matrix is converted into several feature-map matrices. Then, using the pooling and fully connected layers that follow the convolutional layer, the classification is concluded [30]. Figs. 3(a) and 3(b) show convolutional and local connections, respectively. The convolutional layer reuses the same two weights across the entire input, as indicated by the repeated letters labeling the edges. The locally connected layer has the same connectivity as the convolutional layer; however, each edge of the locally connected layer has its own weight parameter [30].

 

Fig. 3. (a) Convolutional and

(b) local connections

 Fig. 4. The general schematic of MLP

The structure of the MLP, one of the most widely used fully connected neural networks, is shown in Fig. 4. An MLP is a feed-forward network consisting of one input layer, one output layer, and at least one hidden layer, trained by supervised learning. As a fully connected network, every neuron in one layer is connected to all neurons in the next layer. The output value of a neuron is calculated as follows:

y_j^l = φ(Σ_i w_ji^l · y_i^(l-1))     (3)

where y_j^l is the value of neuron j in layer l, w_ji^l is the weight between neuron j in layer l and neuron i in layer l-1, and φ is an activation function such as ReLU or Tanh.
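For illustration, a minimal NumPy sketch of Eq. (3)'s forward pass for one fully connected layer (the function name and the choice of tanh are ours):

```python
import numpy as np

def layer_forward(y_prev, W, phi=np.tanh):
    # Computes y_j^l = phi(sum_i w_ji^l * y_i^(l-1)) for all neurons j at once.
    # y_prev: outputs of layer l-1; W[j, i]: weight from neuron i to neuron j.
    return phi(W @ y_prev)

# Example: a layer of 3 neurons fed by 4 inputs.
y = layer_forward(np.array([1.0, 0.0, 1.0, 1.0]), np.random.randn(3, 4))
```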

2.2. Implementation of ANN Using Memristor Crossbar

The first physical version of the memristor was ‎demonstrated by HP Labs in 2008, where the ‎memristive effect was achieved by manipulating the ‎doping within a TiO2 thin film [25]. Fig. 5 illustrates ‎the physical model of a memristor, which is akin to two ‎resistors connected in series, denoted as RL and RH for ‎the low-resistance state (LRS) and the high-resistance ‎state (HRS), respectively. The overall memristance can ‎be expressed as follows:‎

M(p) = p·RL + (1 − p)·RH     (4)

where M(p) is the memristance and 0 ≤ p ≤ 1 is the relative doping front position.
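As a concrete illustration of Eqs. (1), (2), and (4), the following sketch simulates the linear ion-drift model under a sinusoidal excitation; all parameter values are illustrative assumptions rather than values from the paper:

```python
import numpy as np

RL, RH = 100.0, 16e3        # assumed LRS and HRS resistances (ohms)
mu, D = 1e-14, 1e-8         # assumed ion mobility (m^2/(V*s)) and film thickness (m)
dt, p = 1e-6, 0.1           # time step (s) and initial doping front position

for t in np.arange(0.0, 0.02, dt):
    v = np.sin(2 * np.pi * 50 * t)                      # applied voltage
    M = p * RL + (1 - p) * RH                           # Eq. (4): memristance
    i = v / M                                           # Eq. (1): Ohm's law
    p = np.clip(p + dt * mu * RL / D**2 * i, 0.0, 1.0)  # doping front drift
```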

A memristor crossbar array constitutes a memory ‎structure based on memristors, utilizing a memristor ‎device at each intersection of horizontal and vertical ‎metal wires, as illustrated in Fig. 6. Due to its integration ‎of large-scale memory cells and a compact ‎interconnection network, the memristor crossbar array is ‎extensively utilized for the implementation of ANNs [5, 6, 7, 8, 9, 10]. Prior ‎approaches have addressed both the forward and ‎backward propagation aspects of neural networks. ‎However, in this paper, the emphasis is solely on the ‎implementation of forward propagation, as the output of ‎our proposed synthesizer pertains to a trained ANN. ‎

Fig. 5. The physical model of the memristor

Fig. 6. The schematic of memristor crossbar array

Fig. 7 illustrates the implementation of conventional neural networks by memristor crossbar arrays. Fig. 7(a) shows a neuron connected to the outputs of five neurons in the previous layer. The output value of the neuron is given by

y = φ(Σ_{n=1..5} wn · xn)     (5)

The memristor-based implementation of the neuron is shown in Fig. 7(b). Each input is connected to a virtually grounded op-amp (operational amplifier), and each synapse (weight) is realized by a pair of memristors, Gn+ and Gn−. When Gn+ > Gn−, the pair of memristors represents a positive synaptic weight, and vice versa. The achievable range of synaptic weights with a memristor pair is twice that of a single-memristor-per-synapse design. Therefore, the output value of the neuron in Fig. 7(b) is given by

y = φ(R · Σ_{n=1..5} (Gn+ − Gn−) · xn)     (6)

where R is the feedback resistance of the op-amp.
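A small sketch of how a signed synaptic weight could be mapped onto a conductance pair (Gn+, Gn−); the bounds and the linear scaling are our assumptions for illustration:

```python
G_MIN, G_MAX = 1e-6, 1e-3          # assumed conductance bounds (siemens)

def weight_to_pair(w, w_max):
    # Scale w in [-w_max, w_max] onto a differential conductance pair.
    mid = (G_MAX + G_MIN) / 2
    delta = (w / w_max) * (G_MAX - G_MIN) / 2
    return mid + delta, mid - delta   # G+ > G- encodes a positive weight

# A zero weight (e.g., a dropped connection) maps to G+ == G-.
g_pos, g_neg = weight_to_pair(0.5, 1.0)
```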

 

3. Memristor-Based Combinational Logic Circuits

The utilization of memristors for logical operations is ‎categorized into different methods. In some approaches, ‎the memristor is integrated with the CMOS structure to ‎represent binary logic through distinct voltage levels [31]. ‎In alternative logic families, the high and low resistances ‎of memristors are interpreted as 0 and 1 logics, ‎respectively. In these methods, memristors serve as the ‎primary building blocks of logic gates. This approach ‎opens avenues for exploring neuromorphic computing, ‎especially with the use of memristor crossbar array ‎architecture for executing logic gates. ‎

The IMPLY gate is recognized as a fundamental logic gate for memristor-based circuits. The IMPLY logic function, together with FALSE (an unconditional memristor RESET), forms a computationally complete logic structure. This concept originated from the 'copy with inversion' operation introduced by Kuekes et al. [32].

Fig. 7. (a) The output neuron is connected to all of the input neurons. (b) The memristor-based implementation of the output neuron.

In this section, the structure of the memristor-based ‎IMPLY gate is elucidated, and the implementation of ‎combinational logic circuits based on memristor-based ‎logic gates is explored following the approach proposed ‎by Kim et al. [26].‎

The logic function p → q, or p IMPLY q (also known as "p IMPLIES q", "material implication", and "if p then q"), is described in [21], and its truth table is given in Table (1). Fig. 8(a) shows the schematic of the memristor-based IMPLY gate, featuring two memristors, P and Q, alongside a resistor RG (where RON < RG < ROFF), all serving as digital switches. The initial states of P and Q are regarded as inputs, and the output of the IMPLY gate is determined by the final state of Q after the IMPLY operation executes. The idealized electrical switching characteristic of a memristor is shown in Fig. 8(b): the state of the memristor transitions from 0 to 1 when the voltage across it decreases past Vset.

The basic concept of the memristor-based IMPLY gate is to apply two voltages, Vcond and Vset, to P and Q, respectively. Vcond is chosen so that the state of P does not change, whereas Vset can change the state of Q. In the case P = 0 and Q = 0, the voltage on the common terminal is 0, and the state of Q changes because the full Vset (Vset − 0) appears across Q. In the case P = 1 and Q = 0, the voltage on the common terminal is Vcond, and the remaining voltage across Q (Vset − Vcond) is insufficient to change the Q state. In the case Q = 1, the voltage on the common terminal is Vset and, evidently, Q keeps its current state.
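The three cases can be checked with a behavioral sketch of one IMPLY step; the voltage levels and threshold below are assumed values, not those of a specific device:

```python
def imply(p, q, v_set=-1.0, v_cond=-0.6, v_th=-0.9):
    # Voltage on the common terminal for the three cases described above.
    common = v_set if q == 1 else (v_cond if p == 1 else 0.0)
    v_q = v_set - common            # voltage actually seen by memristor Q
    return 1 if (q == 1 or v_q <= v_th) else q

for p in (0, 1):
    for q in (0, 1):
        print(p, q, "->", imply(p, q))   # reproduces Table (1): q' = (NOT p) OR q
```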

Fig. 8. (a) Memristor based IMPLY gate.

(b) Idealized electrical switching characteristics of the memristor.

 

Fig. 9. Sequences for executing the 16 Boolean logic gates by stateful two- and three-memristor logic gates. A dash (−) means that the number of steps using three memristors is not reduced compared to the two-memristor gate [26].

 

As outlined in [26], Fig. 9 lists the sequences of steps necessary for implementing various logic operations using two and three memristors. Refer to [26] for detailed descriptions of the employed operations (COPY, NIMP, etc.). This table's data is used to calculate the total number of steps required to complete a circuit's operations. It is important to note that in a crossbar memristor-based logic circuit (as opposed to a CMOS-based circuit), the outputs of the logic gates are evaluated sequentially. For instance, Fig. 10(a) depicts the schematic of a full adder comprising two Exclusive-OR (XOR) gates, two AND gates, and one OR gate. The inputs and the sum output are denoted a, b, and s2, and the input and output carries are denoted cin and c2, respectively. Fig. 10(b) lists the logic steps needed to execute the full-adder operations. The total number of steps for performing an addition is obtained by summing all the required steps (13 steps).

 

4. The Proposed ANN-Based Synthesizer

This section introduces an ANN-based synthesizer for ‎generating combinational logic circuits.

Fig. 10. (a) The schematic of the full adder. (b) The required logic steps to execute the full adder. 

 Table (1): IMPLY gate truth table

Table (2): The input vectors of the given LUT are divided into two groups (0 and more than 0) with 100% accuracy

As outlined in ‎Section 1, combinational logic synthesis involves ‎transforming the desired circuit behavior (Boolean ‎function) into a combinational logic circuit. To simplify, ‎we initially focus on 1-output combinational circuits.‎

In Table (2), the LUT of a circuit ‎comprising four inputs (I0, I1, I2, and I3) and one output ‎‎(Y) is presented. To classify the inputs into two groups ‎based on their outputs (0 or 1), four-dimensional input ‎vectors need to be transformed into a k-dimensional ‎space where the new inputs can be linearly classified into ‎‎0-output and 1-output groups.

Definition 1: The ‎m’th input vector of an n-input LUT is defined according to Eq. 7.‎

inp_vec(m) = (In-1 = bn-1, In-2 = bn-2, …, I2 = b2, I1 = b1, I0 = b0)     (7)

In this equation, Ix is the x’th input and bx is 0 or 1.

Definition 2: For an n-input LUT, a sub-space of all input vectors is defined as the set of input vectors wherein h inputs ({Iind(h-1), Iind(h-2), …, Iind(1), Iind(0)}) take fixed binary values.

For example, for a 4-input LUT, if we select I3=1 and I1=0 as the fixed inputs, then the related sub-space contains the following four input vectors among 16 possible LUT inputs:

Sub_space(I3 = 1, I1 = 0) = {inp_vec(8) = 1000, inp_vec(9) = 1001, inp_vec(12) = 1100, inp_vec(13) = 1101}     (8)

Definition 3: The probability of a sub-space is defined as the ratio of the number of input vectors in the sub-space that generate logic 1 at the LUT output to the total number of input vectors in the sub-space.

For example, for the sub-space of (8), the LUT of Table (2) generates logic 1 at the output for inp_vec(9) and inp_vec(12). Therefore, the probability of the sub-space is 0.5 (P(sub_space(I3 = 1, I1 = 0)) = 2/4).
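Definitions 1-3 can be made concrete with a few lines of Python; the LUT below is hypothetical except for the two outputs quoted in the example above:

```python
# Y[m]: LUT output for input vector m (bits of m are I3 I2 I1 I0).
Y = [0] * 16
Y[9] = Y[12] = 1                    # assumed, matching the example above

def sub_space(fixed):
    # Definition 2: all vectors whose fixed bits take the given values.
    return [m for m in range(16)
            if all((m >> i) & 1 == v for i, v in fixed.items())]

def probability(fixed):
    # Definition 3: fraction of the sub-space's vectors with LUT output 1.
    vecs = sub_space(fixed)
    return sum(Y[m] for m in vecs) / len(vecs)

print(sub_space({3: 1, 1: 0}))      # [8, 9, 12, 13]
print(probability({3: 1, 1: 0}))    # 0.5
```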

To achieve an effective feature extractor, we focus on the sub-spaces that fix n−1 inputs of the n-input LUT. The total number of such sub-spaces is n, and we use the index of the omitted input to identify each one. For example, for the 4-input LUT of Table (2), the related sub-spaces are defined according to (9), wherein Sub_spacej denotes the sub-space in which the j’th input (Ij) is excluded.

Sub_space0 = sub_space(I3 = b3, I2 = b2, I1 = b1)

Sub_space1 = sub_space(I3 = b3, I2 = b2, I0 = b0)

Sub_space2 = sub_space(I3 = b3, I1 = b1, I0 = b0)

Sub_space3 = sub_space(I2 = b2, I1 = b1, I0 = b0)     (9)

 

For every input vector, the probabilities of these n sub-spaces are calculated. Because every sub-space excludes exactly one input, each sub-space contains just 2 input vectors: the input vector under consideration (m) and the input vector i that differs from m in one bit position. The sub-space probability is therefore calculated according to (10):

P(sub_spacej(m)) = (Ym + Yi) / 2     (10)

wherein Ym and Yi are the LUT outputs for the m’th and i’th input vectors. Due to the existence of n sub-spaces, the sum of all probabilities related to the current input vector (m) is:

Σ_{j=0..n-1} P(sub_spacej(m)) = (n·Ym + Σ Yi) / 2     (11)

Our goal is to discriminate the case Ym = 0 from Ym = 1. According to (11), for the same neighboring outputs Yi, the difference between the probability sums of these two cases is:

ΔP = n/2     (12)

Consequently, utilization of the defined sub-spaces (every sub-space excludes just one input) leads to a meaningful discrimination between the classes Y = 0 and Y = 1 for every LUT input.

Fig. 12. Feature extraction using the locally connected layer for a 4-input LUT

 Fig. 13. Feature extraction by locally connected layer for an n-input LUT

 Fig. 14. General schematic of the proposed ANN-based synthesizer for (a) n-input LUT and (b) 4-input LUT.

To extract features based on the abovementioned approach, a locally connected layer is deployed in which every neuron represents a specific sub-space. For example, for the 4-input LUT of Table (2), feature extraction is performed by the locally connected layer of Fig. 12.

In Fig. 12, each 3-tuple sub-space of the LUT’s inputs is realized by connecting the corresponding three inputs to one of the four neurons. For instance, the first neuron of the locally connected layer must produce P(sub_space(I3 = b3, I2 = b2, I1 = b1)), which takes different values for different input vectors.

For an n-input LUT (see Fig. 13), each neuron of the locally connected layer must produce the probability of a certain (n−1)-combination of the input-vector bits. For example, the first neuron must produce P(sub_space(In-1 = bn-1, In-2 = bn-2, …, I1 = b1)). There are 2^(n-1) states for the input of this neuron, ranging from (0, 0, …, 0) to (1, 1, …, 1). Therefore, the output value of the first neuron for each of these states is obtained as

y1 = φ(Σ_{j=1..n-1} wj1 · Ij)

In this equation, wj1 is the weight between the j’th input and the first neuron.

As mentioned before, every sub-space contains 2 input vectors. The possible LUT outputs for these 2 vectors are 00, 01, 10, and 11, and the corresponding sub-space probabilities are 0, ½, ½, and 1, respectively. In other words, if the output of an input vector is 1, the probability of its sub-space for the selected (n−1) inputs is 1 or ½; if the output is 0, that probability is 0 or ½. To obtain a binary value at the output of all neurons, we replace ½ with 0. It is very unlikely that all of the sub-space probabilities associated with a specific index of the selected (n−1) inputs are 0 or 1; therefore, this conversion (½ → 0) has a very small effect on the accuracy of feature extraction.
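In code, the target feature after the ½ → 0 substitution reduces to an AND of the two LUT outputs in the pair; this is our compact formulation of the rule above:

```python
def target_features(m, Y, n):
    # Feature j of vector m corresponds to Sub_space_j (input I_j omitted):
    # probability 1 -> 1; probabilities 1/2 and 0 -> 0, i.e. Y[m] AND Y[pair].
    return [Y[m] & Y[m ^ (1 << j)] for j in range(n)]
```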

For instance, for the LUT of Table (2), the output value of the first neuron is obtained in the same manner for each of its states.

Table (2) shows that the feature extraction method divides the input vectors into two groups with 100% ‎accuracy.

 

Fig. 15. An example of local connections in an ANN

Fig. 16. The activation function implemented on memristor crossbar according to [7]

The new versions of the input vectors, produced by the locally connected layers (Figs. 12 and 13), are then fed to an MLP for the classification task (e.g., Fig. 14).
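A minimal Keras sketch of the overall classifier is given below. Layer sizes, optimizer, and epoch count are placeholders (Table (4) lists the per-circuit values), and the locally connected first layer is approximated by a dense layer whose dropped edges would be pinned to zero weights, as described in section 5:

```python
import numpy as np
import tensorflow as tf

n = 4                                # number of LUT inputs
X = np.array([[(m >> j) & 1 for j in range(n)] for m in range(2**n)],
             dtype="float32")
y = np.zeros(2**n, dtype="float32")  # hypothetical LUT outputs
y[[9, 12]] = 1.0

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n,)),
    tf.keras.layers.Dense(n, activation="sigmoid"),  # stands in for the locally
                                                     # connected feature extractor
    tf.keras.layers.Dense(8, activation="sigmoid"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=500, verbose=0)
```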

5. Implementation of the Proposed Synthesizer on Memristor Crossbar

The presented ANN-based synthesizer comprises both fully ‎connected and locally connected layers. Section 2 ‎addresses the implementation of the fully connected ‎layer using a memristor crossbar. However, in locally ‎connected layers, dropped connections are achieved by ‎setting the corresponding weights to zero. As illustrated in ‎Fig. 15, three out of the five inputs are connected to the ‎neuron. Consequently, the output of the neuron is ‎computed as follows:‎

 

y = φ(w1·x1 + w2·x2 + w4·x4)     (13)

 

To implement this structure on the crossbar of Fig. 7(b), we set G3+ = G3− and G5+ = G5−, so the dropped synapses contribute zero weight. Therefore, this paper uses the memristor-based implementation of a fully connected layer to realize a locally connected layer. Hasan et al. [7] proposed an approximated sigmoidal activation function, illustrated in Fig. 16, where using VDD = 1 and VSS = 0 for the power-supply rails of the op-amps yields the desired activation function. The values of VDD and VSS are chosen such that no memristor experiences a voltage higher than Vth across it during evaluation.

Kun Li et al. [33] enhanced the activation function depicted ‎in Fig. 16 by substituting the feedback resistor (R) with a ‎nonlinear memristor. This modification aimed to create a ‎reconfigurable and more precise implementation. ‎

The implementation of the proposed synthesizer for the 4-input LUT of Table (2) is illustrated in Fig. 17. The inputs (I3, I2, I1, and I0) are connected to the next layer by a locally connected layer. Input I0 and the first neuron of the second layer are not connected; therefore, the positive and negative conductances between them are equal (G), as shown in red in Fig. 17. The second layer is fully connected to the third layer, and the third layer is connected to the output neuron.

6. Results and Comparisons

6.1. ANN-based synthesizer

In this sub-section, the effectiveness (accuracy) of the proposed ANN-based synthesizer is assessed using the ISCAS 85 benchmark circuits. The ANNs are implemented in Anaconda3 (using the Keras library) on a PC equipped with a 2.4 GHz Core i7 CPU and 8 GB of memory.

The ISCAS 85 benchmark circuits are presented in the first column of Table (3). The subsequent columns list the number of logic gates, inputs, and outputs of each circuit. As previously discussed, any circuit with n inputs and m outputs can be realized by employing m parallel ANNs, each with n inputs and one output. The average accuracy over the ANNs with n inputs and one output is calculated to determine the accuracy of the ANN-based synthesizer.

Fig. 17. Implementation of a 4-input LUT by memristor crossbar

For instance, in the case of ‎c6288, the accuracy is computed for 32 circuits, each ‎containing 32 inputs and one output. The average of ‎these accuracies is then considered as the accuracy of the ‎proposed synthesizer for c6288.‎

The number of input neurons of an ANN-based circuit equals the number of circuit inputs. As discussed in section 4, for a circuit with n inputs, there are n sub-spaces, each defined by omitting one input. Each sub-space (covering n−1 inputs) is connected to one neuron in the second layer of the ANN; therefore, the second layer requires n neurons. The number of neurons in the second (locally connected, Lc) layer is reported in the second column of Table (4).

In Table (4), the third column presents the count of ‎neurons in the necessary fully connected layers (Lf). ‎Subsequent columns detail the number of epochs, the ‎chosen learning optimizer, and the accuracy of each ‎ANN-based circuit.

The simulation framework has been developed in the Python programming language. The simulation steps are shown in Fig. 18. The first step extracts the necessary information from the BLIF file of the circuit: the number of primary inputs and outputs, the number and types of logic gates, and the circuit graph. For more details, refer to [35].

In the second step, the required dataset is constructed using the circuit graph. To do so, every input vector is applied to the circuit graph, and the circuit output value is calculated by traversing the graph. If the output value is 1, the applied input vector is labeled as class 1 in the dataset; otherwise, it is labeled as class 0.
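A minimal sketch of this dataset-construction step; a plain Boolean function stands in here for the traversal of the parsed BLIF circuit graph:

```python
from itertools import product

def circuit(i3, i2, i1, i0):
    # Hypothetical 4-input, 1-output circuit in place of the real BLIF graph.
    return (i3 & i2) ^ (i1 | i0)

# Label every input vector with its output class (0 or 1).
dataset = [(bits, circuit(*bits)) for bits in product((0, 1), repeat=4)]
```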

After generating the proper dataset for every ISCAS 85 benchmark circuit, we construct the proposed ANN structure, containing the first locally connected layer and two fully connected layers that converge in the final output layer. The trained network is derived using supervised learning (the backpropagation algorithm) on the dataset generated in the previous step. The ANNs' hyperparameters and learning conditions are listed in Table (4).

In the fourth step, the constructed ANN is mapped to the memristor crossbar array. For every circuit, three crossbar arrays are arranged to realize the first locally connected and the other two fully connected ANN layers. The realization of each layer’s weights and the required activation function has been described in section 5.      

The average accuracy of the ‎proposed synthesizer is recorded as 98%, a level that is ‎deemed acceptable for a broad range of error-resilient ‎systems, including applications reliant on image ‎processing. ‎

Fig. 18. The essential steps of the proposed approach simulation

Table (3): ISCAS 85 benchmarks, the number of inputs, outputs, and logic gates

Table (4): Proposed ANN-based synthesizer’s specifications and accuracy 

6.2. Delay Comparison between ANN-Crossbar and Conv-Mem Approaches

In this section, the ISCAS 85 benchmark circuits are realized through our proposed method (ANN-crossbar) and the conventional memristive logic-gate approach (Conv-Mem), followed by a comparison of their performance, specifically the circuits' delay. The Conv-Mem circuits are implemented using the state-of-the-art methodology introduced in [26], which employs memristive implementations of primitive logic gates. In [26], every gate in a circuit is converted to a two-memristor logic gate, and every level of the related ISCAS 85 circuit is implemented on a layer of the memristor crossbar array. For more details, refer to [26].

Table 5 displays the total number of gates required for each ISCAS 85 circuit. It is assumed that these logic circuits consist solely of two-input primitive gates; any gate with more than two inputs is converted into two-input gates. Table 6 shows the number of steps needed to generate the output values of the ISCAS 85 benchmark circuits, taking into account the three-memristor implementations of the logic gates [26] (see Fig. 9).

 

Table (5): ISCAS 85 benchmarks, the types, and the number of their logic gates

Table (6): The number of required steps to execute the ISCAS 85 benchmarks

Due to the sequential execution of the circuit's gate operations, the total number of steps is calculated by multiplying the count of each gate type by its corresponding number of steps and summing the products. For example, the number of steps to execute c432 (which contains 31 AND, 108 NAND, 19 NOR, 18 XOR, and 40 NOT gates) is obtained as

Steps(c432) = 31·sAND + 108·sNAND + 19·sNOR + 18·sXOR + 40·sNOT

where sG denotes the per-gate step count of gate type G taken from Fig. 9.
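The step-count calculation is a weighted sum, as sketched below; the per-gate step counts are placeholders to be read from Fig. 9 of [26], not the actual values:

```python
# steps_per_gate: per-gate step counts from Fig. 9 of [26] (values assumed here).
steps_per_gate = {"AND": 2, "NAND": 2, "NOR": 2, "XOR": 3, "NOT": 1}
gate_counts_c432 = {"AND": 31, "NAND": 108, "NOR": 19, "XOR": 18, "NOT": 40}

total_steps = sum(cnt * steps_per_gate[g] for g, cnt in gate_counts_c432.items())
delay_ns = total_steps * 2            # 2 ns per step, following [34]
```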

The ANN-crossbar circuits, featuring n inputs and 1 output, all share the same structure (an ANN comprising four layers, as depicted in Fig. 14(a)). A logic circuit with n inputs and m outputs is constituted of m such circuits, each possessing n inputs and 1 output. The delay of an ANN-crossbar circuit remains constant regardless of its number of inputs, outputs, and logic gates, because all of these circuits share the same four-layer architecture, and all neurons within the same layer execute their calculations in parallel. Consequently, the number of inputs does not impact the delay of the logic circuit.

Table (7): The required time to execute ISCAS 85 benchmarks for ANN-based circuits implemented by memristor crossbar and the conventional memristor-based circuits of [26]

The delays of the ISCAS 85 benchmarks for both ANN-crossbar circuits and Conv-Mem circuits [26] are reported in Table (7). Each step's delay (Table 6) is taken as 2 ns, following [34]. The delay of an ANN-crossbar circuit equals the delay of the three memristor-based neuron stages illustrated in Fig. 7 plus the delay of its activation function (the activation-function delay is taken equal to one step delay, 2 ns, for a fair comparison). Consequently, the delay of each ANN-crossbar circuit is 8 ns.

 

7. Conclusion

In this paper, we introduced an innovative ANN-based synthesizer designed for the implementation of combinational logic circuits using a memristor crossbar array. The ANN-crossbar circuit takes the input vectors of a look-up table (corresponding to the input vectors of the combinational logic circuit) in its input layer and produces the corresponding outputs at the output neuron. The proposed synthesizer comprises a feature extractor followed by an MLP that categorizes the input vectors into the 0 or 1 group. Various types of ANNs are efficiently implemented by the memristor crossbar, whereas the implementation of primitive logic gates by memristors leads to high circuit overhead and reduced speed. The results demonstrate that the delay of the proposed circuit implemented on a memristor crossbar is significantly lower than that of a conventional circuit implemented by memristor-based logic gates.

 

[1] Submission date: 14.05.2022; acceptance date: 07.07.2024.
Corresponding author: Hadi Jahanirad, Department of Electrical Engineering, Faculty of Engineering, University of Kurdistan, Sanandaj, Iran.

References

[1] E. Testa, M. Soeken, H. Riener, L. Amaru, G. De Micheli, "A logic synthesis toolbox for reducing the multiplicative complexity in logic networks", 2020 Design, Automation & Test in Europe Conference & Exhibition (DATE), IEEE, pp. 568-573, March 2020.
[2] S. Marakkalage, E. Testa, H. Riener, A. Mishchenko, M. Soeken, G. De Micheli, "Three-Input Gates for Logic Synthesis", IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, Vol. 40, No. 10, pp. 2184-2188, 2020.
[3] E. Testa, L. Amarú, M. Soeken, A. Mishchenko, P. Vuillod, P. E. Gaillardon, G. De Micheli, "Extending Boolean Methods for Scalable Logic Synthesis", IEEE Access, Vol. 8, pp. 226828-226844, 2020.
[4] L. Chua, "Memristor—the missing circuit element", IEEE Transactions on Circuit Theory, Vol. 18, No. 5, pp. 507-519, September 1971.
[5] C. Li, D. Belkin, Y. Li, P. Yan, M. Hu, N. Ge, … W. Song, "Efficient and self-adaptive in-situ learning in multilayer memristor neural networks", Nature Communications, Vol. 9, No. 1, pp. 1-8, 2018.
[6] P. Yao, H. Wu, B. Gao, J. Tang, Q. Zhang, W. Zhang, … H. Qian, "Fully hardware-implemented memristor convolutional neural network", Nature, Vol. 577, pp. 641-646, January 2020.
[7] R. Hasan, T. M. Taha, C. Yakopcic, "On-chip training of memristor crossbar based multilayer neural networks", Microelectronics Journal, Vol. 66, No. 5, pp. 31-40, August 2017.
[8] S. Chen, M. R. Mahmoodi, Y. Shi, C. Mahata, B. Yuan, X. Liang, … M. Lanza, "Wafer-scale integration of two-dimensional materials in high-density memristive crossbar arrays for artificial neural networks", Nature Electronics, Vol. 3, No. 10, pp. 638-645, 2020.
[9] Shi, Z. Zeng, "Design of In-Situ Learning Bidirectional Associative Memory Neural Network Circuit With Memristor Synapse", IEEE Transactions on Emerging Topics in Computational Intelligence, 2020.
[10] W. Q. Pan, J. Chen, R. Kuang, Y. Li, Y. H. He, G. R. Feng, … X. S. Miao, "Strategies to improve the accuracy of memristor-based convolutional neural networks", IEEE Transactions on Electron Devices, Vol. 67, No. 3, pp. 895-901, 2020.
[11] A. Sebastian, M. Le Gallo, R. Khaddam-Aljameh, E. Eleftheriou, "Memory devices and applications for in-memory computing", Nature Nanotechnology, Vol. 15, No. 7, pp. 529-544, 2020.
[12] Wang, M. A. Zidan, W. D. Lu, "A crossbar-based in-memory computing architecture", IEEE Transactions on Circuits and Systems I: Regular Papers, Vol. 67, No. 12, pp. 4224-4232, 2020.
[13] A. S. Sokolov, H. Abbas, Y. Abbas, C. Choi, "Towards engineering in memristors for emerging memory and neuromorphic computing: A review", Journal of Semiconductors, Vol. 42, No. 1, 2021.
[14] C. D. Schuman, T. E. Potok, R. M. Patton, J. D. Birdwell, M. E. Dean, G. S. Rose, J. S. Plank, "A survey of neuromorphic computing and neural networks in hardware", arXiv preprint arXiv:1705.06963, 2017.
[15] I. Boybat, M. Le Gallo, S. R. Nandakumar, T. Moraitis, T. Parnell, T. Tuma, … E. Eleftheriou, "Neuromorphic computing with multi-memristive synapses", Nature Communications, Vol. 9, No. 1, pp. 1-12, 2018.
[16] J. Sun, J. Han, P. Liu, Y. Wang, "Memristor-based neural network circuit of Pavlov associative memory with dual-mode switching", AEU - International Journal of Electronics and Communications, Vol. 129, 2021.
[17] J. Sun, X. Xiao, Q. Yang, P. Liu, Y. Wang, "Memristor-based Hopfield network circuit for recognition and sequencing application", AEU - International Journal of Electronics and Communications, Vol. 134, 2021.
[18] Cong, C. Wang, Y. Sun, Q. Hong, Q. Deng, H. Chen, "Memristor-based neural network circuit with weighted sum simultaneous perturbation training and its applications", Neurocomputing, Vol. 462, pp. 581-590, 2021.
[19] X. Hu, M. J. Schultis, M. Kramer, A. Bagla, A. Shetty, J. S. Friedman, "Overhead requirements for stateful memristor logic", IEEE Transactions on Circuits and Systems I: Regular Papers, Vol. 66, No. 1, pp. 263-273, 2018.
[20] B. Hoffer, V. Rana, S. Menzel, R. Waser, S. Kvatinsky, "Experimental demonstration of memristor-aided logic (MAGIC) using valence change memory (VCM)", IEEE Transactions on Electron Devices, Vol. 67, No. 8, pp. 3115-3122, 2020.
[21] J. Borghetti, G. S. Snider, P. J. Kuekes, J. J. Yang, D. R. Stewart, R. S. Williams, "'Memristive' switches enable 'stateful' logic operations via material implication", Nature, Vol. 464, No. 7290, pp. 873-876, 2010.
[22] P. L. Thangkhiew, R. Gharpinde, K. Datta, "Efficient mapping of Boolean functions to memristor crossbar using MAGIC NOR gates", IEEE Transactions on Circuits and Systems I: Regular Papers, Vol. 65, No. 8, pp. 2466-2476, 2018.
[23] F. Tozlu, F. Kaçar, Y. Babacan, "Electronically controllable neuristor based logic gates and their applications", AEU - International Journal of Electronics and Communications, Vol. 138, pp. 153834.1-153834.11, 2021.
[24] H. Fouad, A. G. Radwan, "Memristor-based quinary half adder", AEU - International Journal of Electronics and Communications, Vol. 98, pp. 123-130, 2019.
[25] D. B. Strukov, G. S. Snider, D. R. Stewart, R. S. Williams, "The missing memristor found", Nature, Vol. 453, No. 7191, pp. 80-83, 2008.
[26] K. M. Kim, R. S. Williams, "A family of stateful memristor gates for complete cascading logic", IEEE Transactions on Circuits and Systems I: Regular Papers, Vol. 66, No. 11, pp. 4348-4355, 2019.
[27] Xiao, Y. Jia, X. Jiang, S. Wang, "Circular Complex-Valued GMDH-Type Neural Network for Real-Valued Classification Problems", IEEE Transactions on Neural Networks and Learning Systems, Vol. 31, No. 12, pp. 5285-5299, 2020.
[28] S. A. Agnes, J. Anitha, S. I. A. Pandian, J. D. Peter, "Classification of mammogram images using multiscale all convolutional neural network (MA-CNN)", Journal of Medical Systems, Vol. 44, No. 1, pp. 1-9, 2020.
[29] Ghanem, A. Jantan, "A new approach for intrusion detection system based on training multilayer perceptron by using enhanced Bat algorithm", Neural Computing and Applications, Vol. 32, No. 15, pp. 11665-11698, 2020.
[30] I. Goodfellow, Y. Bengio, A. Courville, "Deep Learning", Cambridge, MA: MIT Press, 2016.
[31] D. B. Strukov, K. K. Likharev, "CMOL FPGA: a reconfigurable architecture for hybrid digital circuits with two-terminal nanodevices", Nanotechnology, Vol. 16, No. 6, 2005.
[32] P. J. Kuekes, D. R. Stewart, R. S. Williams, "The crossbar latch: Logic value storage, restoration, and inversion in crossbar circuits", Journal of Applied Physics, Vol. 97, No. 3, 2005.
[33] K. Li, Y. Sun, W. Wang, X. Zhu, B. Song, R. Cao, … Q. Li, "Configurable activation function realized by non-linear memristor for neural network", AIP Advances, Vol. 10, No. 8, 2020.
[34] L. Xie, H. A. Du Nguyen, M. Taouil, S. Hamdioui, K. A. Bertels, "A mapping methodology of Boolean logic circuits on memristor crossbar", IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, Vol. 37, No. 2, pp. 311-323, 2017.
[35] H. Jahanirad, "CC-SPRA: Correlation coefficients approach for signal probability-based reliability analysis", IEEE Transactions on Very Large Scale Integration (VLSI) Systems, Vol. 27, No. 4, pp. 927-939, 2019.