International Journal of Soft Computing

Year: 2008
Volume: 3
Issue: 3
Page No. 177 - 181

Neural Cryptography with Multiple Transfers Functions and Multiple Learning Rule

Authors: N. Prabakaran, P. Loganathan and P. Vivekanandan

Abstract: The goal of any cryptographic system is the exchange of information among the intended users without leakage to others. A common secret key can be generated by combining neural networks and cryptography. Neural cryptography is based on a competition between attractive and repulsive forces; adding a feedback mechanism to the network increases the repulsive forces. The partners A and B use a key exchange protocol to generate a common secret key over a public channel. This is achieved with two Tree Parity Machines (TPMs) which, trained on each other's output, synchronize to an identical time-dependent weight vector. In the proposed TPMs, the output vectors are compared and then the hidden units are updated using the Hebbian learning rule, while the dynamic unit is updated using the random walk rule with a feedback mechanism. Using different learning rules for different units enhances the security of the system. A network with feedback generates a pseudorandom bit sequence which can be used to encrypt and decrypt a secret message. The advanced attacker presented here, named the Majority Flipping Attacker, is the first whose success probability does not decay with the parameters of the model. The probability of a successful attack is calculated for different model parameters using numerical simulations.
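The mutual-learning protocol summarized in the abstract can be illustrated with a minimal sketch of two TPMs synchronizing over a public channel. This is a simplified illustration, not the authors' exact scheme: it applies only the Hebbian rule to all hidden units (the paper additionally uses a random walk rule with feedback for the dynamic unit), and the parameters K, N and L are arbitrary illustrative choices.

```python
import numpy as np

K, N, L = 3, 10, 3  # hidden units, inputs per unit, weight bound (illustrative values)
rng = np.random.default_rng(0)

def tpm_output(w, x):
    # sigma_k = sign of the local field of hidden unit k; tau = product of sigmas
    sigma = np.sign(np.sum(w * x, axis=1))
    sigma[sigma == 0] = -1  # conventionally map a zero local field to -1
    return sigma, int(np.prod(sigma))

def hebbian_update(w, x, sigma, tau):
    # Hebbian rule: move only the weights of hidden units that agree
    # with the common output, and clip them back into [-L, L]
    for k in range(K):
        if sigma[k] == tau:
            w[k] = np.clip(w[k] + x[k] * tau, -L, L)

# partners A and B start from independent secret random weights
wA = rng.integers(-L, L + 1, size=(K, N))
wB = rng.integers(-L, L + 1, size=(K, N))

steps = 0
while not np.array_equal(wA, wB):
    x = rng.choice([-1, 1], size=(K, N))  # common public input vector
    sA, tauA = tpm_output(wA, x)
    sB, tauB = tpm_output(wB, x)
    if tauA == tauB:  # weights are updated only when the public outputs agree
        hebbian_update(wA, x, sA, tauA)
        hebbian_update(wB, x, sB, tauB)
    steps += 1

print("synchronized after", steps, "rounds")
```

Once the loop terminates the two weight matrices are identical and can serve as the shared secret key; only the inputs and the single output bits tau were ever exchanged publicly.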

How to cite this article:

N. Prabakaran, P. Loganathan and P. Vivekanandan, 2008. Neural Cryptography with Multiple Transfers Functions and Multiple Learning Rule. International Journal of Soft Computing, 3: 177-181.
