Emacs Lisp (nano) cheat sheet

Invoke function (C-x C-e to evaluate)

(f x0 x1)
(f x0 x1 x2)

Function definition

(defun my-fun (x0 x1)
  "function description"
  (+ x0 x1))

Lambda function

(setq my-f (lambda (x y) (+ x y)))
(funcall my-f 1 2)

Setting variables; quote gives the name (symbol) of a variable:
(setq x y) is equivalent to (set (quote x) y)

(setq name "nautilus")
(setq name value)
'x ;; the name (symbol) of x, not its value (like a pointer in C)
'(a b c)  ;; is a list
(setq x '(0 1 2 3)) ;; x is a list

To Python or not to Python, to C++ or not to C++ (with MATLAB, MPI, CUDA, and FORTRAN)

Programming in C is the best for scientific computing.

You certainly disagree; tools like MATLAB increase productivity.

I remember when I started working as a researcher, with a lot of computing: my professors were used to dinosaurs like FORTRAN or C. MATLAB was not yet widely used; it was more than 20 years ago!

But MATLAB is a very professional tool, it works like a charm, and many scientific papers are produced with MATLAB.

Nowadays, however, most students love and want to use Python, or more fashionable things like Julia or R. Python is a beauty, and it is a must for machine learning. Python is free (though many universities give their students access to MATLAB). But I do not find it very professional: you continuously need to tweak the code, or install a missing package, or (worst of all) fight with filesystem permissions and access, because it is an interpreted language. MATLAB is also interpreted, but its ecosystem is stable and well integrated in operating systems like Windows, macOS, or Linux. Of the more than 200 papers I have written, only one so far used Python code.

At the end of the day, for professional figures I use MATLAB (with some help from Illustrator, PowerPoint, Gimp, etc.). Many of the codes in my papers are written in MATLAB, as in the recent work on neuromorphic computing with waves. Also, the MATLAB deep learning toolbox is valuable.

I wrote some papers on parallel computing, mainly using the MPI protocol. In the beginning, for MPI, I used FORTRAN, but then (nearly 15 years ago) I switched to C++. I am still writing MPI codes in C++, and I am practicing CUDA. You can use CUDA from Python, but you only really understand CUDA through C++.

But the more I get into the details of a code (and the more I age), improving or optimizing it, the more I realize that I am progressively switching back to C. Just pure C (sic!). The reason is that at a lower programming level I have better control of what the code does, and I can better understand the side effects of a routine. In C, dealing with complex variables or arrays is clearer to me, despite being much more complicated (but using pointers makes you feel a lot smarter!).

As a side effect, the code is simpler to read and understand, but much less cool and modern. Even if, I have to admit, maintaining a code in C++ is still more efficient for me than in FORTRAN or C. Notably enough, my last FORTRAN paper is dated 2017!

I am not a boomer, so you cannot say “ok boomer”, but I think that this Python-mania is not the best for scientific computing, and it is not the best for students. It is not only an issue of speed (and obviously C is the fastest, with FORTRAN as a good competitor). It is also a matter of learning how to write programs for scientific computing from scratch. For me, the best way to learn and practice scientific computing is still a wise combination of C and C++, with MATLAB for visualization! Python (and TensorFlow and all of that) gives its best in machine learning.

Emacs, latex, and all that

GNU Emacs is great, superfun

You can play a lot with things like LaTeX, org-mode, and coding in Emacs.
I use Emacs on Windows and Linux systems.

Windows 10 WSL

I was used to the Windows version of Emacs 25 on Windows 10, but WSL is a supernice tool, so I recently switched to WSL Ubuntu 20.04 and installed Emacs 27 via a PPA repository. You may install the Ubuntu 20.04 app from the Microsoft Store. See this post

In Windows 10 you need a good X server for opening graphical windows; I found that X410 works nicely (it is not freeware).

You need to modify your .bashrc and add

DISPLAY=localhost:0
export DISPLAY

The windows filesystem in the ubuntu WSL terminal is automatically mounted in /mnt/c/Users/

To install emacs27 in ubuntu 20.04 WSL

sudo add-apt-repository ppa:kelleyk/emacs
sudo apt update
sudo apt install emacs27

Ubuntu 20.04

Same as above, works fine

My emacs configuration

As with many Linux things, the nice feature of Emacs and all that is that you can easily configure anything. The problem is that you may become addicted to continuously changing the settings, because you do not like the window size, the font, etc. The good news is that this is also true in Emacs!

To configure Emacs you can either change the file .emacs, which you can put (or find) in your home directory, or you may change the file .emacs.d/init.el, where .emacs.d is a config directory also located in your home. The two methods are exclusive: either you use .emacs, or you use .emacs.d/init.el. I prefer .emacs.d/init.el, as it is the more recent convention.

For example, if you want to change the font size, you have to put the following line in .emacs.d/init.el

(set-face-attribute 'default (selected-frame) :height 100)

This may seem weird, as it is not something like fontsize=12, but it is a glimpse into the supersmart world of Lisp. Indeed, Emacs is written in Lisp (a proper Lisp, Emacs Lisp), and that is the reason why Emacs is so cool.

to be continued

The Game of Light

In memoriam: John Horton Conway

In 1970 an article by Martin Gardner appeared in Scientific American, disclosing for the first time a “game” invented by John H. Conway: a matrix of ones and zeros changes in time according to simple rules inspired by biology. Cells (ones) survive or die because of overpopulation or starvation. These simple rules surprisingly generate a variety of binary animals, named gliders, blocks, and spaceships, among others. With pen and paper, Conway demonstrated that complex dynamics spontaneously emerge in the game. Ultimately, Conway’s Game of Life turned out to be a universal Turing machine, and it is the most famous example of a cellular automaton.

I was deeply inspired by the possibility of generating complexity with simple rules, like many others before me. In more than 50 years, Conway’s Game of Life has inspired generations of scientists. “Life” is at the inner core of ideas that nowadays pervade machine learning, evolutionary biology, quantum computing, and many other fields. It also connects to the work of Wolfram and the development of Mathematica.

I was intrigued by the interaction between light and complexity, and I wanted to combine the Game of Life with electromagnetic fields. I report below my original post on the topic (dating back to 2008). The article was rejected by many journals and finally published in a book dedicated to the anniversary of the GOL (Game of Life Cellular Automata, Springer, 2010).

The Enlightened Game of Life (EGOL)

The link between light and the development of complex behavior is as subtle as it is evident. Examples include the moonlight-triggered mass spawning of hard corals in the Great Barrier Reef, or the light-switch hypothesis in evolutionary biology, which ascribes the Cambrian explosion of biodiversity to the development of vision. Electromagnetic (EM) radiation drastically alters complex systems, from physics (e.g., climate change) to biology (e.g., structural colors or bioluminescence). So far the emphasis has been on bio-physical, or digital, models of the evolution of the eye, with the aim of understanding the environmental influence on highly specialized organs. In this manuscript, we consider the way the appearance of photosensitivity affects the dynamics, the emergent properties, and the self-organization of a community of interacting agents, specifically of cellular automata (CA).

Quick and dirty implementation of the EGOL in a Python Notebook

https://github.com/nonlinearxwaves/gameoflife.git

Docker, mpi, fftw, fftw-mpi

Docker lets you create containers for your program with all the libraries installed.

This avoids reinstalling all the libraries (say, mpich, fftw…) for every user and on every new system.

The user just needs to pull the container from a repository. For example nonlinearxwaves/base

I write C++ scientific computing programs with mpich, fftw-mpi, and random-number libraries (such as sprng5), which I need to run on both Windows and Linux systems. Docker simplifies a lot the deployment, but also the development, of the code.

nonlinearxwaves/base is a container with all of that

After installing Docker you run

docker login

Then you pull the docker image

docker pull nonlinearxwaves/base:0.1

You list the available images with

docker images -a

You identify the image id (in this example it is ec56f7250d5a)

REPOSITORY             TAG     IMAGE ID       CREATED        SIZE
nonlinearxwaves/base   0.1     ec56f7250d5a   42 hours ago   1.13GB

You run the image with (you must replace the image id with your image id)

docker run -i -t ec56f7250d5a 

And you are in a shell with all the libraries installed, and you may compile and run your MPI application in the usual way. In this image you will be the user “user”:

user@2ff281ad4621:~$

The number 2ff281ad4621 is the container id that is now running (similar to a virtual machine)

This works on Windows and Linux (and also on Mac, but I did not test it)

You may also create your own images with a Dockerfile
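For instance, here is a hypothetical minimal Dockerfile (a sketch under my assumptions, not the actual recipe behind nonlinearxwaves/base; the package names are the Ubuntu 20.04 ones):

```dockerfile
# Base image with a C/C++ toolchain, MPI, and FFTW (including the MPI flavour)
FROM ubuntu:20.04
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y \
        build-essential mpich libfftw3-dev libfftw3-mpi-dev && \
    rm -rf /var/lib/apt/lists/*
# Work as an unprivileged user, as in the image above
RUN useradd -m user
USER user
WORKDIR /home/user
```

You build and tag it from the directory containing the Dockerfile with docker build -t yourname/base:0.1 . and then push it to your repository with docker push.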

Is Docker fast? Or is it better not to use a container? We will test…