Assistant Professor, UC Berkeley
Chemical Engineering and EECS
Contact info: aditik1 at berkeley dot edu
I am interested in developing machine learning methods driven by the distinct challenges and opportunities of the natural sciences, with particular interest in physics-inspired machine learning. Some areas of exploration include approaches for incorporating physical inductive biases (such as symmetries and conservation laws) into ML models to improve generalization on scientific problems, the advantages that ML can bring to classical physics-based numerical solvers (for example, through end-to-end differentiable frameworks and implicit layers), and better learning strategies for distribution shifts in the physical sciences. These methods are informed by and grounded in atomistic and continuum applications, including fluid mechanics, molecular dynamics, materials design, and related areas. This work also involves interfacing with other fields, including numerical analysis, dynamical systems theory, quantum mechanical simulations, computational geometry, optimization, and category theory.
Some examples of recent work include:
Developing an SE(3)-equivariant hierarchical variational autoencoder for generating 3D molecular conformers. We use equivariant coarse-graining to pool information from fine-grained atomic coordinates into a coarse-grained, subgraph-level latent representation, and introduce an aggregated attention mechanism to restore the fine-grained coordinates through a flexible, variable-length coarse-to-fine backmapping scheme. We also introduce new evaluation metrics that provide a more comprehensive assessment of the quality of the generated molecules (BAIR Blog Post, arXiv:2306.14852; 2023),
Developing a differentiable PDE-constrained layer to exactly enforce the relevant physics for a given problem, which can be added to any neural network and trained end-to-end via implicit differentiation. This approach provides significantly greater control and accuracy in enforcing PDE constraints, compared to approaches that enforce the relevant constraints solely through the loss function (International Conference on Learning Representations (ICLR); 2023),
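To make the hard-constraint idea concrete, here is a minimal, illustrative sketch (not the published method): a linear constraint A u = b stands in for a discretized PDE, and a "constraint layer" maps an unconstrained output to the nearest point satisfying the constraint. Because the layer is defined implicitly by KKT conditions, implicit differentiation gives its Jacobian (here, an orthogonal projector), so such a layer can be trained end-to-end inside a network. All names and shapes below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 8))   # constraint operator (illustrative stand-in for a discretized PDE)
b = rng.normal(size=3)        # right-hand side
u_tilde = rng.normal(size=8)  # stand-in for a network's raw output

def constraint_layer(u):
    # argmin_v ||v - u||^2  s.t.  A v = b, via the KKT system:
    # v = u - A^T lam,  A v = b  =>  (A A^T) lam = A u - b
    lam = np.linalg.solve(A @ A.T, A @ u - b)
    return u - A.T @ lam

u_star = constraint_layer(u_tilde)
print(np.allclose(A @ u_star, b))  # the constraint holds exactly, not just in the loss

# Jacobian from implicit differentiation of the KKT conditions: the
# orthogonal projector onto the null space of A (constant here because
# the constraint is linear).
P = np.eye(8) - A.T @ np.linalg.solve(A @ A.T, A)
```

For a nonlinear PDE constraint the projector would depend on u and the KKT system would be solved numerically, but the backward pass still comes from differentiating the optimality conditions rather than unrolling the solver.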
Integrating numerical methods validation approaches for neural networks to model continuous dynamical systems (Comm. Physics; 2022),
Characterizing the challenges associated with incorporating fundamental physical laws into the machine learning process (i.e., "physics-informed neural networks"), and devising strategies to overcome their failure modes by changing the learning paradigm (Neural Information Processing Systems (NeurIPS); 2021),
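A toy sketch of the soft-constraint paradigm that this line of work examines (illustrative only, not the NeurIPS paper's setup): fit the ODE u'(t) = -u(t) with u(0) = 1 by penalizing the physics residual and the initial condition in the loss. The model is a degree-4 polynomial, linear in its coefficients, so the penalized loss is a least-squares problem; the weight `lam` is an assumed, hand-tuned value.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 50)  # collocation points on [0, 1]
deg = 4
lam = 1.0  # weight on the initial-condition penalty (hand-tuned, illustrative)

# Residual rows for u(t) = sum_k c_k t^k:
# d/dt[t^k] + t^k = k t^(k-1) + t^k at each collocation point.
powers = np.arange(deg + 1)
res_rows = powers * t[:, None] ** np.clip(powers - 1, 0, None) + t[:, None] ** powers

# Soft initial-condition row: sqrt(lam) * (u(0) - 1) -> row [1, 0, ..., 0], target 1.
bc_row = np.zeros(deg + 1)
bc_row[0] = 1.0

A = np.vstack([res_rows, np.sqrt(lam) * bc_row])
y = np.concatenate([np.zeros_like(t), [np.sqrt(lam)]])

c, *_ = np.linalg.lstsq(A, y, rcond=None)
u = lambda s: np.polyval(c[::-1], s)
print(u(0.0), u(1.0))  # u(0) is close to, but not exactly, 1
```

The initial condition is only approximately satisfied, and the balance between the residual and boundary terms depends on `lam`; this tension between competing loss terms is one of the failure modes that motivates changing the learning paradigm rather than only reweighting the loss.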
Representation learning by mapping data into topological descriptors (invariant under homeomorphic transformations of the domain), for better accuracy and interpretability in renewable energy applications (structure-property relationships in nanoporous materials and proteins).
A full list of publications is available on Google Scholar.
I am very fortunate to advise, and to have advised, the following students and researchers:
Danny Reidenbach (UC Berkeley MS CS) → Research Scientist at NVIDIA
Dami Fasina (visiting Applied Math PhD student, Yale University)
Nick Swenson (UC Berkeley EECS MEng) → Software engineer at Google
Joining the group:
- Incoming/current UC Berkeley PhD students and prospective postdoctoral researchers: please email me directly with your research interests and CV.
- Prospective PhD students: please apply directly to a UC Berkeley PhD program. If you apply to Chemical Engineering or EECS, you can mention my name as a faculty member of interest in your application. For EECS applicants, please choose CS: AI-SCIENCE as your primary area.