Mathematical physicist. Associate Professor. My research focuses on machine learning and AI, but I work across multiple disciplines, e.g., signal processing, mechatronics, robotics, computer science, and biophysics. I'm especially interested in neurobiophysics and what the brain can teach us about how knowledge can be efficiently represented and processed in computers.
In February 2021, I published an article in Physical Review E together with the neurophysiologist Henrik Jörntell at Lund University, describing how the signalling between biological neurons works. The article shows a remarkably close agreement between a new theoretical model and experimental measurements, even though the model is mechanistic and has only the minimal number of free parameters (three). "Mechanistic" means that the model is built on underlying biophysical machinery rather than on curve fitting. This indicates that we can view the model as an explanation of how the signalling actually works.
The breakthrough that enabled us to find the model was the solution of the first-passage problem I published in Journal of Physics A: Mathematical and Theoretical in October 2020. That article describes a method for computing the probability that a stochastic process crosses a time-varying boundary, and it applies directly to the neuron, where the stochastic process represents the neuron's internal voltage.
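To give a feel for the first-passage problem, here is a minimal Monte Carlo sketch (not the moving-eigenvalue method from the article, which is analytical): it estimates the probability that an Ornstein-Uhlenbeck process, a common model of a neuron's membrane voltage, crosses a time-varying boundary before a given time. All parameter values and the boundary shape are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

tau = 1.0        # membrane time constant (arbitrary)
sigma = 0.5      # noise amplitude (arbitrary)
T, dt = 5.0, 1e-3
n_steps = int(T / dt)
n_paths = 20_000

def boundary(t):
    # hypothetical time-varying firing threshold b(t)
    return 1.0 - 0.1 * t

t = 0.0
x = np.zeros(n_paths)            # all paths start at the resting level 0
alive = np.ones(n_paths, bool)   # paths that have not yet hit the boundary

for _ in range(n_steps):
    t += dt
    # Euler-Maruyama step for dX = -(X / tau) dt + sigma dW
    x[alive] += -(x[alive] / tau) * dt \
                + sigma * np.sqrt(dt) * rng.standard_normal(alive.sum())
    alive &= ~(x >= boundary(t))  # absorb paths that crossed b(t)

p_hit = 1.0 - alive.mean()       # fraction of paths that crossed before T
print(f"P(first passage before T={T}) ~ {p_hit:.3f}")
```

Simulation like this converges slowly and only gives the hitting probability for one parameter set, which is exactly why a closed-form or spectral method for moving boundaries is valuable.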
Concretely, the results in Physical Review E say that the behaviour of neurons is considerably simpler than previously thought. This, in turn, simplifies finding mathematical abstractions of neurons, and it opens the road to abstractions of entire assemblies of neurons, which could potentially accelerate machine learning and other AI techniques. This matters because today's machine learning methods require huge amounts of data, whereas the human brain achieves impressive results with considerably less data by instead exploiting its structure. How this works is currently unknown, but the article is an important step towards finding out.
For more detailed information and illustrations, please see my private web page.
Please also watch the following video from a RISE seminar, where I present some of my research in an accessible way:
Beyond Deep Learning - What can biology teach us?
- All-Printed Electrochromic Stickers
- On the Transition of Charlier Polynomials to the Hermite Function
- Channel current fluctuations conclusively explain neuronal encoding of internal…
- What Neurons do – and don’t do
- Opålitliga nervceller? (Unreliable neurons?)
- The moving-eigenvalue method: hitting time for Itô processes and moving bounda…
- Safety positioning for first responders to fires in underground constructions: …
- Hitting time in Erlang loss systems with moving boundaries
- Demo Abstract: Smart Antennas Made Practical: The SPIDA Way
- Evaluation of BART for measuring available bandwidth in an industrial applicati…