In a paper published in Physical Review Letters, we study artificial neural networks with nonlinear waves as a computing reservoir. We discuss universality and the conditions to learn a dataset in terms of output channels and nonlinearity. A feed-forward three-layered model, with an encoding input layer, a wave layer, and a decoding readout, behaves as a conventional neural network in approximating mathematical functions, real-world datasets, and universal Boolean gates.
The rank of the transmission matrix plays a fundamental role in assessing the learning abilities of the wave.
For a given set of training points, a threshold nonlinearity for universal interpolation exists. When considering the nonlinear Schrödinger equation, the use of highly nonlinear regimes implies that solitons, rogue waves, and shock waves play a leading role in training and computing. Our results may enable the realization of novel machine learning devices in diverse physical systems, such as nonlinear optics, hydrodynamics, polaritonics, and Bose-Einstein condensates. The application of these concepts to photonics opens the way to a large class of accelerators and new computational paradigms. In complex wave systems, such as multimodal fibers, integrated optical circuits, random and topological devices, and metasurfaces, nonlinear waves can be employed to perform computation and solve complex combinatorial optimization problems.
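The three-layer scheme lends itself to a compact numerical sketch: encode the input on a wave, propagate it through the nonlinear Schrödinger equation with a split-step Fourier method, and train only a linear readout on the output intensities, as in reservoir computing. Everything below (grid sizes, the phase-tilt encoding, the nonlinearity strength, the target function) is an illustrative assumption, not the paper's exact setup.

```python
import numpy as np

def nlse_propagate(psi, dx, nz=50, dz=0.01, g=10.0):
    """Split-step propagation of i dpsi/dz = -d2psi/dx2 - g |psi|^2 psi."""
    k = 2 * np.pi * np.fft.fftfreq(psi.size, d=dx)
    lin = np.exp(-1j * k**2 * dz)                 # dispersive step in Fourier space
    for _ in range(nz):
        psi = np.fft.ifft(lin * np.fft.fft(psi))
        psi = psi * np.exp(1j * g * np.abs(psi)**2 * dz)   # Kerr-like nonlinearity
    return psi

def reservoir_features(x, n=256):
    """Encode a scalar input as a phase tilt on a Gaussian beam; read out |field|^2."""
    grid = np.linspace(-5, 5, n)
    psi0 = np.exp(-grid**2) * np.exp(1j * x * grid)        # input-dependent phase tilt
    return np.abs(nlse_propagate(psi0, dx=grid[1] - grid[0]))**2

# Only the linear readout is trained, by least squares on the output channels.
xs = np.linspace(-1, 1, 40)
target = np.sin(3 * xs)                                    # toy function to interpolate
F = np.array([reservoir_features(x) for x in xs])          # 40 samples x 256 channels
w, *_ = np.linalg.lstsq(F, target, rcond=None)
pred = F @ w
```

With more output channels than training points, the least-squares readout can interpolate the training set exactly, which is the regime where the rank of the transmission matrix becomes the limiting quantity.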
The paper was selected as an Editors' Suggestion and Featured in Physics.
Since the 1980s we have known how to build optical neural networks that simulate the Hopfield model, spin glasses, and related systems. New developments in optical technology and in light control in random media clearly demonstrate the "optical advantage," even when restricted to good old classical physics.
Many developments in science and engineering depend on tackling complex optimizations on large scales. This challenge motivates an intense search for specialized computing hardware that exploits quantum features, stochastic elements, nonlinear dissipative dynamics, in-memory operations, or photonics. A paradigmatic optimization problem is finding low-energy states in classical spin systems with fully random interactions. To date, no alternative computing platform can address such spin-glass problems on a large scale. Here we propose and realize a scalable optical spin-glass simulator based on spatial light modulation and multiple light scattering. By tailoring optical transmission through a disordered medium, we optically accelerate the computation of the ground state of large spin networks with all-to-all random couplings. The scaling of the operation time with the problem size demonstrates an optical advantage over conventional computing. Our results provide a general route towards large-scale computing that exploits the speed, parallelism, and coherence of light.
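The operation that the optics accelerates is the spin vector-matrix multiplication inside a ground-state search. A minimal software analogue, assuming Gaussian all-to-all couplings and plain single-spin-flip simulated annealing (the experiment obtains the relevant products from measured optical intensities instead):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100
J = rng.normal(size=(N, N))
J = (J + J.T) / 2                       # symmetric all-to-all random couplings
np.fill_diagonal(J, 0)

def energy(s):
    return -0.5 * s @ J @ s             # Ising energy E = -1/2 sum_ij J_ij s_i s_j

s = rng.choice([-1, 1], size=N)
E = energy(s)
n_steps = 20000
for step in range(n_steps):
    T = 2.0 * (1 - step / n_steps) + 1e-3       # linear cooling schedule
    i = rng.integers(N)
    dE = 2 * s[i] * (J[i] @ s)                  # cost of flipping spin i (one matrix row)
    if dE < 0 or rng.random() < np.exp(-dE / T):
        s[i] = -s[i]
        E += dE
```

The inner product `J[i] @ s` is exactly the kind of vector-matrix multiplication that light scattering performs in parallel for all spins at once, which is where the claimed scaling advantage comes from.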
Combinatorial optimization problems are crucial for widespread applications but remain difficult to solve on a large scale with conventional hardware. Novel optical platforms, known as coherent or photonic Ising machines, are attracting considerable attention as accelerators for optimization tasks that can be formulated as Ising models. Annealing is a well-known technique based on adiabatic evolution for finding optimal solutions in classical and quantum systems made of atoms, electrons, or photons. Although various Ising machines employ annealing in some form, adiabatic computing in optical settings has been only partially investigated. Here, we realize the adiabatic evolution of frustrated Ising models with 100 spins programmed by spatial light modulation. We use holographic and optical control to change the spin couplings adiabatically, and exploit experimental noise to explore the energy landscape. Annealing enhances convergence to the Ising ground state and allows one to find the problem solution with probability close to unity. Our results demonstrate a photonic scheme for combinatorial optimization in analogy with adiabatic quantum algorithms, enabled by optical vector-matrix multiplications and scalable photonic technology.
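The adiabatic protocol can be mimicked in a few lines: start from a coupling matrix whose ground state is known, slowly interpolate toward the frustrated target, and let noise play the role of the experimental fluctuations that explore the landscape. The interpolation schedule and noise amplitude below are illustrative assumptions, not the experimental values.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100
J_target = rng.normal(size=(N, N))
J_target = (J_target + J_target.T) / 2          # frustrated target couplings
np.fill_diagonal(J_target, 0)
J_easy = np.ones((N, N)) - np.eye(N)            # ferromagnet: all-up is a ground state

s = np.ones(N)                                  # start in the easy ground state
steps = 200
for k in range(steps + 1):
    t = k / steps
    J = (1 - t) * J_easy + t * J_target         # adiabatic coupling ramp
    for _ in range(200):                        # noisy relaxation at fixed couplings
        i = rng.integers(N)
        dE = 2 * s[i] * (J[i] @ s)
        if dE < rng.normal(scale=0.5):          # noise stands in for experimental fluctuations
            s[i] = -s[i]
final_E = -0.5 * s @ J_target @ s
```

Ramping the couplings slowly keeps the state near the instantaneous ground state, which is why annealing raises the success probability compared to relaxing directly on the frustrated problem.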
See also Super Duper Ising Machine
Novel machine learning computational tools open new perspectives for quantum information systems. Here we adopt the open-source programming library TensorFlow to design multi-level quantum gates, including a computing reservoir represented by a random unitary matrix. In optics, the reservoir is a disordered medium or a multi-modal fiber. We show that trainable operators at the input and the readout enable one to realize multi-level gates. We study various qudit gates, including the scaling properties of the algorithms with the size of the reservoir. Despite an initial low-slope learning stage, TensorFlow turns out to be an extremely versatile resource for designing gates with complex media, including different models that use spatial light modulators with quantized modulation levels.
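The architecture is easy to sketch: a fixed random unitary (the reservoir) sandwiched between trainable operators, optimized to approximate a target qudit gate. The toy below replaces TensorFlow's gradient descent with a dependency-free greedy search over quantized phases, loosely mimicking a spatial light modulator with discrete modulation levels; the dimension, target gate, and diagonal-phase parametrization are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                                            # qudit dimension
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
U_res, _ = np.linalg.qr(M)                       # fixed random unitary reservoir

X = np.roll(np.eye(d), 1, axis=0)                # target gate: cyclic shift on the qudit

def fidelity(theta_in, theta_out):
    """Gate fidelity |Tr(X^dag U)| / d for U = D_out U_res D_in."""
    U = np.diag(np.exp(1j * theta_out)) @ U_res @ np.diag(np.exp(1j * theta_in))
    return np.abs(np.trace(X.conj().T @ U)) / d

th_in, th_out = np.zeros(d), np.zeros(d)         # trainable input/readout phases
f0 = fidelity(th_in, th_out)
grid = np.linspace(0, 2 * np.pi, 32, endpoint=False)   # quantized modulation levels
for _ in range(20):                              # greedy coordinate updates
    for th in (th_in, th_out):
        for i in range(d):
            scores = []
            for c in grid:
                th[i] = c
                scores.append(fidelity(th_in, th_out))
            th[i] = grid[int(np.argmax(scores))]
f1 = fidelity(th_in, th_out)                     # never below f0 by construction
```

Because each coordinate is set to the best value on a grid that contains its current value, the fidelity is monotonically non-decreasing; with autodiff as in the paper, the phases would instead follow the gradient of the same fidelity objective.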
See also Quantum Gates by Tensorflow