Phase space machine learning for multi-particle event optimization in Gaussian boson sampling

We use neural networks to represent the characteristic function of many-body Gaussian states in the quantum phase space. By a pullback mechanism, we model transformations due to unitary operators as linear layers that can be cascaded to simulate complex multi-particle processes. We use the layered neural networks for non-classical light propagation in random interferometers, and compute boson pattern probabilities by automatic differentiation. We also demonstrate that multi-particle events in Gaussian boson sampling can be optimized by a proper design and training of the neural network weights. The results are potentially useful for the creation of new sources and complex circuits for quantum technologies.

https://arxiv.org/abs/2102.12142

Official code

Topological nanophotonics and artificial neural networks

We propose the use of artificial neural networks to design and characterize photonic topological insulators. As a hallmark, the band structures of these systems show the key feature of the emergence of edge states, with energies lying within the energy gap of the bulk materials and localized at the boundary between regions of distinct topological invariants. We consider different structures such as one-dimensional photonic crystals, PT-symmetric chains and cylindrical systems and show how, through a machine learning application, one can identify the parameters of a complex topological insulator to obtain protected edge states at target frequencies. We show how artificial neural networks can be used to solve the long-standing quest of inverse-problem solution and apply it to the cutting-edge topic of topological nanophotonics.

Pilozzi et al., Nanotechnology (2020), https://doi.org/10.1088/1361-6528/abd508

Emacs Lisp (nano) cheat sheet

Invoke function (C-x C-e to evaluate)

(f x0 x1)
(f x0 x1 x2)

Function definition

(defun my-fun (x0 x1)
  "Function description."
  (+ x0 x1))

Lambda function

(setq my-f (lambda (x y) (+ x y)))
(funcall my-f 1 2)

Setting variables. A quoted symbol is the name of the variable, not its value:
(setq x y) is equivalent to (set (quote x) y)

(setq name "nautilus")
(setq name value)
'x ;; the symbol x itself, not its value (like a pointer in C)
'(a b c)  ;; is a list
(setq x '(0 1 2 3)) ;; x is a list

To Python or not to Python, to C++ or not to C++ (with MATLAB, MPI, CUDA, and FORTRAN)

Programming in C is the best for scientific computing.

You certainly disagree; tools like MATLAB increase productivity.

I remember when I started working as a researcher, doing a lot of computing. I had to deal with professors used to dinosaurs like FORTRAN or C. MATLAB was not widely used; it was more than 20 years ago!

But MATLAB is a very professional tool; it works like a charm, and many scientific papers are produced with MATLAB.

Nowadays, however, most students love and want to use Python, or more fashionable things like Julia or R. Python is a beauty, and it is a must for machine learning. Python is free (though many universities give students access to MATLAB). But I do not find it very professional. You continuously need to tweak the code, install a missing package, or, worst of all, fight with filesystem permissions, because it is an interpreted language. MATLAB is also interpreted, but its ecosystem is stable and well integrated into operating systems like Windows, macOS, or Linux. Of the more than 200 papers I have written, only one so far used Python code.

At the end of the day, for professional figures I use MATLAB (with some help from Illustrator, PowerPoint, Gimp, etc.). Many codes in my papers are written in MATLAB, as in the recent work on neuromorphic computing with waves. Also, the MATLAB deep learning toolbox is valuable.

I wrote some papers on parallel computing, mainly using the MPI protocol. In the beginning I used FORTRAN for MPI, but later (nearly 15 years ago) I switched to C++. I am still writing MPI codes in C++, and I am practicing CUDA. You can use CUDA from Python, but you only really understand CUDA through C++.

But as I get into the details of a code (and as I age), to improve or optimize it, I realize that I am progressively switching back to C. Just pure C (sic!). The reason is that at a lower programming level I have better control of what the code does, and I can better understand the side effects of a routine. In C, dealing with complex variables or arrays is clearer to me, despite being much more verbose (and using pointers makes you feel a lot smarter!).

As a side effect, the code is simpler to read and understand, but much less cool and modern. Even so, I have to admit that maintaining a code in C++ is still more efficient for me than FORTRAN or C. Notably enough, my last FORTRAN paper is dated 2017!

I am not a boomer, so you cannot say "ok boomer", but I think that this Python-mania is not the best for scientific computing, and it is not the best for students. It is not only an issue of speed (obviously C is the fastest, with FORTRAN as a good competitor). It is also a matter of how to learn to write programs for scientific computing from scratch. For me, the best way to learn and practice scientific computing is still a wise combination of C and C++, with MATLAB for visualization! Python (and TensorFlow and all of that) gives its best in machine learning.