Photonic brain-inspired platforms are emerging as novel analog computing devices, enabling fast and energy-efficient operations for machine learning. These artificial neural networks generally require tailored optical elements, such as integrated photonic circuits, engineered diffractive layers, nanophotonic materials, or time-delay schemes, which are challenging to train or stabilize. Here we present a neuromorphic photonic scheme – photonic extreme learning machines – that can be implemented simply by using an optical encoder and coherent wave propagation in free space. We realize the concept through spatial light modulation of a laser beam, with the far field acting as the feature mapping space. We experimentally demonstrate learning from data on various classification and regression tasks, achieving accuracies comparable to digital extreme learning machines. Our findings point to an optical machine learning device that is easy to train, energy-efficient, scalable, and free of fabrication constraints. The scheme can be generalized to a plethora of photonic systems, opening the route to real-time neuromorphic processing of optical data.
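To illustrate the extreme-learning-machine principle behind this scheme, the sketch below is a minimal digital analogue, not the optical implementation: a fixed random projection plays the role of the optical encoder, a nonlinearity stands in for intensity detection of the far field, and only a linear readout is trained. The dataset, feature count, and cosine nonlinearity are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class dataset on an XOR-like pattern (hypothetical data).
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)

# Fixed random projection = the untrained optical encoder.
n_features = 100
W = rng.normal(size=(2, n_features))
b = rng.normal(size=n_features)

def features(X):
    # Nonlinear feature map, loosely mimicking intensity measurement.
    return np.cos(X @ W + b)

# Only the linear readout is trained, via ridge regression.
H = features(X)
beta = np.linalg.solve(H.T @ H + 1e-3 * np.eye(n_features), H.T @ y)

pred = (features(X) @ beta > 0.5).astype(float)
accuracy = (pred == y).mean()
print(accuracy)
```

The key design point, shared with the optical device, is that the random feature map is never trained; learning reduces to a single linear solve on the readout weights.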
Since the 1980s we have known how to build optical neural networks that simulate the Hopfield model, spin glasses, and related systems. New developments in optical technology and in light control through random media clearly demonstrate the “optical advantage,” even when restricted to good old classical physics.
Many developments in science and engineering depend on tackling complex optimizations on large scales. The challenge motivates an intense search for specific computing hardware that takes advantage of quantum features, stochastic elements, nonlinear dissipative dynamics, in-memory operations, or photonics. A paradigmatic optimization problem is finding low-energy states in classical spin systems with fully random interactions. To date, no alternative computing platform can address such spin-glass problems on a large scale. Here we propose and realize a scalable optical spin-glass simulator based on spatial light modulation and multiple light scattering. By tailoring optical transmission through a disordered medium, we optically accelerate the computation of the ground state of large spin networks with all-to-all random couplings. The scaling of the operation time with the problem size demonstrates an optical advantage over conventional computing. Our results provide a general route towards large-scale computing that exploits the speed, parallelism, and coherence of light.
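A rough digital analogue of the minimization task, not the optical accelerator itself, is a search for low-energy states of a spin glass with all-to-all Gaussian couplings. The instance size, coupling statistics, and Metropolis schedule below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Small spin-glass instance: symmetric, fully random all-to-all couplings.
N = 32
J = rng.normal(size=(N, N))
J = (J + J.T) / 2
np.fill_diagonal(J, 0)

def energy(s):
    # Ising energy E = -1/2 * sum_ij J_ij s_i s_j
    return -0.5 * s @ J @ s

# Metropolis annealing as a digital stand-in for the optical minimization.
s = rng.choice([-1, 1], size=N)
T = 1.0
for step in range(20000):
    i = rng.integers(N)
    dE = 2 * s[i] * (J[i] @ s)   # energy change from flipping spin i
    if dE < 0 or rng.random() < np.exp(-dE / T):
        s[i] *= -1
    T = max(0.01, T * 0.9995)    # slow cooling schedule

print(energy(s))
```

On conventional hardware each sweep costs O(N) per flip; the optical scheme instead evaluates the coupling sums in parallel through scattering, which is the source of the claimed scaling advantage.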
Combinatorial optimization problems are crucial for widespread applications but remain difficult to solve on a large scale with conventional hardware. Novel optical platforms, known as coherent or photonic Ising machines, are attracting considerable attention as accelerators for optimization tasks that can be formulated as Ising models. Annealing is a well-known technique based on adiabatic evolution for finding optimal solutions in classical and quantum systems made of atoms, electrons, or photons. Although various Ising machines employ annealing in some form, adiabatic computing in optical settings has been only partially investigated. Here, we realize the adiabatic evolution of frustrated Ising models with 100 spins programmed by spatial light modulation. We use holographic and optical control to change the spin couplings adiabatically, and exploit experimental noise to explore the energy landscape. Annealing enhances convergence to the Ising ground state and allows the problem solution to be found with probability close to unity. Our results demonstrate a photonic scheme for combinatorial optimization that operates in analogy with adiabatic quantum algorithms and is enabled by optical vector-matrix multiplications and scalable photonic technology.
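To make the adiabatic idea concrete, the toy sketch below (purely illustrative, not the experimental protocol; the easy initial couplings, schedule, and temperature are assumptions) slowly interpolates the coupling matrix from an easy ferromagnet to a frustrated target while noisy dynamics relaxes the spins at each step.

```python
import numpy as np

rng = np.random.default_rng(2)

N = 16
# Target frustrated problem: random +/-1 couplings.
J_target = rng.choice([-1.0, 1.0], size=(N, N))
J_target = np.triu(J_target, 1)
J_target = J_target + J_target.T

# Easy starting problem: a ferromagnet, whose ground state is all-up.
J_easy = np.ones((N, N)) - np.eye(N)

def energy(J, s):
    return -0.5 * s @ J @ s

# Adiabatic schedule: J(t) = (1 - t) J_easy + t J_target, with noisy
# (Metropolis) relaxation playing the role of experimental noise.
s = np.ones(N)
steps = 2000
for k in range(steps):
    t = k / (steps - 1)
    J = (1 - t) * J_easy + t * J_target
    for _ in range(10):
        i = rng.integers(N)
        dE = 2 * s[i] * (J[i] @ s)
        if dE < 0 or rng.random() < np.exp(-dE / 0.2):
            s[i] *= -1

print(energy(J_target, s))
```

The design choice mirrors adiabatic quantum algorithms: start from a Hamiltonian whose ground state is known, then deform it slowly enough that the system tracks low-energy configurations of the target problem.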
See also Super Duper Ising Machine
Ising machines are novel computing devices for the energy minimization of Ising models. These combinatorial optimization problems are of paramount importance for science and technology, but remain difficult to tackle on a large scale with conventional electronics. Recently, various photonics-based Ising machines have demonstrated ultra-fast computing of the Ising ground state by processing data through multiple temporal or spatial optical channels. Experimental noise acts as a detrimental effect in many of these devices. On the contrary, here we demonstrate that an optimal noise level enhances the performance of spatial-photonic Ising machines on frustrated spin problems. By controlling the error rate at detection, we introduce a noisy-feedback mechanism in an Ising machine based on spatial light modulation. We investigate the device performance on systems with hundreds of individually addressable spins with all-to-all couplings and find an increased success probability at a specific noise level. The optimal noise amplitude depends on graph properties and size, thus indicating an additional tunable parameter that helps to explore complex energy landscapes and to avoid trapping in local minima. The result points to noise as a resource for optical computing. This concept, which also holds in different nanophotonic neural networks, may be crucial in developing novel hardware with optics-enabled parallel architectures for large-scale optimizations.
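The noisy-feedback idea can be caricatured digitally (a toy model, not the experimental device; the Gaussian noise model and all parameters are assumptions): detection noise is injected into the measured energy change that drives the spin flips, so a purely greedy loop occasionally accepts uphill moves and can escape local minima.

```python
import numpy as np

rng = np.random.default_rng(3)

N = 24
# Frustrated instance: random +/-1 all-to-all couplings.
J = rng.choice([-1.0, 1.0], size=(N, N))
J = np.triu(J, 1)
J = J + J.T

def energy(s):
    return -0.5 * s @ J @ s

def run(noise, iters=3000):
    # Feedback loop: flip a spin when the *measured* energy change is
    # negative, where the measurement is corrupted by detection noise.
    s = rng.choice([-1, 1], size=N)
    best = energy(s)
    for _ in range(iters):
        i = rng.integers(N)
        dE = 2 * s[i] * (J[i] @ s) + noise * rng.normal()
        if dE < 0:
            s[i] *= -1
        best = min(best, energy(s))
    return best

# Zero noise can get stuck in local minima; moderate noise perturbs the
# feedback and lets the search explore the landscape, as in the paper's
# noise-enhanced regime (noise amplitudes here are arbitrary).
for sigma in (0.0, 1.0, 4.0):
    print(sigma, run(sigma))
```

Because the outcome is stochastic, a single run does not prove an optimum: the paper's observation is statistical, with the best noise amplitude depending on graph properties and size.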
Published in Nanophotonics