Curiosity
How does intelligence emerge from a system of simple components? How do neural networks combine memory and computation? How does the constraint of locality shape the structure of these systems?
In my research, I combine several information-theoretic measures to analyze both the developing structure and the flow of information in these systems. These tools shed light on the fundamental interplay between neural structure and computation. Ultimately, insights gained in this way might allow us to understand the fundamental working principles of neural information processing.
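As a minimal illustration of one such measure, the sketch below estimates the mutual information between two activity traces from binned histograms. This is a generic textbook estimator, not the specific toolchain used in my research; the bin count and signals are arbitrary choices for the demonstration.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram-based estimate of I(X;Y) in bits."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()          # joint distribution p(x, y)
    px = pxy.sum(axis=1, keepdims=True)  # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)  # marginal p(y)
    nz = pxy > 0                        # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
y = x + 0.5 * rng.normal(size=10_000)  # y carries information about x
z = rng.normal(size=10_000)            # z is independent of x

print(mutual_information(x, y))  # clearly above zero
print(mutual_information(x, z))  # close to zero (finite-sample bias aside)
```

Applied to recorded neural activities instead of synthetic signals, the same quantity indicates which units share information, which is one ingredient for mapping information flow through a network.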
Recently, I have been working on novel information-theoretic learning rules that allow neurons to learn locally.
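To make "learning locally" concrete, here is a classic baseline: Oja's rule, in which each weight update depends only on the pre-synaptic input, the post-synaptic output, and the weight itself, with no global error signal. This is not the information-theoretic rule referred to above, just a well-known example of the locality constraint.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 2-D inputs whose dominant variance lies along (1, 1)/sqrt(2).
data = rng.normal(size=(5000, 2)) * np.array([2.0, 0.2])
rot = np.array([[1, 1], [-1, 1]]) / np.sqrt(2)
data = data @ rot

w = rng.normal(size=2)  # weights of a single linear neuron
eta = 0.01              # learning rate
for x in data:
    y = w @ x                   # post-synaptic activity
    w += eta * y * (x - y * w)  # Oja: Hebbian term y*x plus local decay y^2*w

w_unit = w / np.linalg.norm(w)
print(w_unit)  # aligns with the leading principal component of the input
```

Despite using only locally available quantities, the neuron extracts the principal component of its input, which is one reason local rules are an interesting object of study.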
Risks
While this research is interesting in itself, it becomes vital for the safe application of neural networks in the real world. Only once we understand neural networks to a degree where hidden biases, adversarial attacks, and unintuitive results can be explained and overcome can we safely exploit the huge potential of artificial intelligence.