I aim to develop methods for turning the large volumes of data collected in science into useful knowledge. One major challenge is scalability, but others are equally important: algorithms need to be robust, since corruption is a ubiquitous problem in data collection. Moreover, since data cannot always tell the entire story, uncertainty must be properly accounted for to avoid overconfident, wrong inferences.
Additionally, we want to have control over what we are doing. Today, our ability to explain how and why algorithms work is outpaced by the rate at which they are created. This puts us in the awkward situation of increasingly relying on tools that are not only hard to interpret, but not fully understood even by their developers. Conversely, despite the tremendous importance of theory, theoretical explanations by themselves are typically insufficient, since many factors are at play in practice, and since the usual theoretical frameworks are sometimes not suited to the regimes of today. I aim to strengthen the dialogue between theory and practice, with the goal of making algorithms more understandable while suggesting notions that may be ripe for exploration by theorists.
I did my PhD in Statistics at Columbia University, supervised by Liam Paninski. There I worked on machine learning methods for the analysis of neural data. In particular, I developed an algorithm for inferring neural activity observed in response to electrical stimulation, where the data are heavily contaminated with artifacts. We hope this tool may aid the development of future retinal prostheses, in which electrical impulses are used to replace the function of damaged cells. I also spent a summer at Google Brain developing an artificial neural network for solving permutation problems. Prior to that, I received a diploma in Mathematical Engineering from Universidad de Chile.