Synchronization is a widely studied phenomenon across many fields. In artificial neural networks, two feed-forward networks can synchronize by exchanging outputs and applying an appropriate learning rule. This process has been explored in the permutation parity machine, a binary variant of the tree parity machine in which weights are replaced rather than adjusted during learning: after each output exchange, the weights are pseudo-randomly drawn from binary data, and synchronization emerges from competing stochastic forces captured by a sequence of overlaps. This sequence is a random process with the Markov property, and synchronization corresponds to the stationarity of a first-order Markov chain.

Cryptography is central to modern information security, particularly in scenarios with varying requirements on privacy and reliability. Cryptographic algorithms based on neural synchronization achieve mutual learning and synchronization more rapidly than traditional learning methods. This work studies a key-exchange protocol based on permutation parity machines and demonstrates that synchronization occurs despite the weak correlation of the weights during learning. This lack of correlation strengthens the protocol against several attacks, including simple, geometric, majority, genetic, and probabilistic attacks. While permutation parity machines employ a complex learning rule, their simplicity
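The mutual-learning mechanism summarized above can be illustrated with the parent model mentioned in the text, the tree parity machine. The following is a minimal toy sketch, not the permutation parity machine protocol itself: two networks exchange parity outputs on common random inputs and apply a Hebbian update only when their outputs agree, until their weights coincide. All parameter values (`K`, `N`, `L`) and the class name are illustrative choices, not taken from this work.

```python
import numpy as np

K, N, L = 3, 10, 3  # hidden units, inputs per unit, weight bound (toy sizes)
rng = np.random.default_rng(0)

class TreeParityMachine:
    def __init__(self):
        # integer weights in {-L, ..., L}
        self.w = rng.integers(-L, L + 1, size=(K, N))

    def output(self, x):
        # each hidden unit outputs the sign of its local field
        self.sigma = np.sign(np.sum(self.w * x, axis=1))
        self.sigma[self.sigma == 0] = -1  # break ties
        return int(np.prod(self.sigma))   # overall parity output

    def update(self, x, tau):
        # Hebbian rule: only hidden units that agreed with the
        # exchanged output tau move their weights (clipped to [-L, L])
        for k in range(K):
            if self.sigma[k] == tau:
                self.w[k] = np.clip(self.w[k] + tau * x[k], -L, L)

a, b = TreeParityMachine(), TreeParityMachine()
steps = 0
while not np.array_equal(a.w, b.w):
    x = rng.choice([-1, 1], size=(K, N))  # common public input
    ta, tb = a.output(x), b.output(x)
    if ta == tb:                          # learn only on agreement
        a.update(x, ta)
        b.update(x, tb)
    steps += 1

print("synchronized after", steps, "exchanged outputs")
```

In the permutation parity machine studied here, the update step differs: weights are not adjusted incrementally as above but replaced, drawn pseudo-randomly from binary data after each output exchange.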
Oscar Mauricio Reyes Torres

2012