Research

Neural-Informed Neural Networks (NINNs) for PDEs

My research focuses on scientific machine learning for PDEs: developing NINN and PINN architectures and GPU-accelerated simulation. The work addresses the algorithmic foundations of optimization and neural network design, and relies on HPC clusters for large-scale computation.

Methods:

  • Develop PyTorch-based deep learning training pipelines for time-dependent physical systems (a minimal sketch follows this list)
  • Analyze error propagation, stability, and convergence under varied data and sampling regimes
  • Run GPU-aware experiments (NumPy/PyTorch) with ablation studies and reproducible artifacts
  • Compare NINNs against PINNs and finite-difference solvers, focusing on convergence rates and stability
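
A minimal sketch of such a pipeline is below, using the 1D heat equation u_t = κ u_xx as a stand-in problem. The architecture, collocation sampling, and hyperparameters are illustrative assumptions, not the project's actual code.

```python
# Sketch of a PINN-style training loop for the 1D heat equation
# u_t = kappa * u_xx; all choices below are illustrative.
import torch

torch.manual_seed(0)
kappa = 0.1  # assumed diffusivity

model = torch.nn.Sequential(            # small MLP: (x, t) -> u(x, t)
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    # Random collocation points; requires_grad enables autograd PDE terms.
    x = torch.rand(256, 1, requires_grad=True)
    t = torch.rand(256, 1, requires_grad=True)
    u = model(torch.cat([x, t], dim=1))
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    # Interior PDE residual; initial/boundary losses would be added similarly.
    loss = ((u_t - kappa * u_xx) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```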

Recent work introduces backward Euler–based NINN architectures that combine discrete Laplacian operators with compact U-Nets for two-dimensional parabolic PDEs, demonstrating convergence rates and stability competitive with classical finite-difference schemes.
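
The backward-Euler idea can be sketched as follows, assuming the heat equation u_t = Δu on a uniform 2D grid with homogeneous boundaries. The single convolution standing in for the compact U-Net, the grid size, and the step sizes are all illustrative assumptions.

```python
# Hedged sketch of a backward-Euler residual loss on a uniform 2D grid.
import torch
import torch.nn.functional as F

h, dt = 1.0 / 64, 1e-3  # assumed grid spacing and time step

# 5-point discrete Laplacian as a fixed convolution stencil.
lap_kernel = torch.tensor([[0., 1., 0.],
                           [1., -4., 1.],
                           [0., 1., 0.]]).view(1, 1, 3, 3) / h**2

def laplacian(u):
    # u: (batch, 1, ny, nx); zero padding stands in for Dirichlet boundaries.
    return F.conv2d(u, lap_kernel, padding=1)

def be_residual(u_next, u_prev):
    # Backward Euler for u_t = laplacian(u): u_next - dt * lap(u_next) - u_prev ~ 0.
    return u_next - dt * laplacian(u_next) - u_prev

# Placeholder for the learned time-stepper (a compact U-Net in the actual work).
step_net = torch.nn.Conv2d(1, 1, kernel_size=3, padding=1)
opt = torch.optim.Adam(step_net.parameters(), lr=1e-3)

u_prev = torch.rand(8, 1, 64, 64)  # synthetic initial states
for _ in range(500):
    u_next = step_net(u_prev)
    loss = be_residual(u_next, u_prev).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Penalizing the implicit-step residual, rather than a forward-Euler update, is what lends the trained stepper the stability behavior of the underlying backward-Euler scheme.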

High-Performance Optimization (LLNL Summer 2025)

During my summer internship at Lawrence Livermore National Laboratory, I worked on GPU-enabled optimization in the HiOp (High-performance Optimization) framework.

Contributions:

  • Implemented a RAJA-based nonlinear dense constraint driver and solver with MPI support, enabling portable performance across CPU and NVIDIA GPU backends
  • Ported limited-memory quasi-Newton (QN) methods to GPU architectures by threading a memory-space option throughout solver components and replacing CPU-only LAPACK calls with GPU-ready MAGMA and cuSOLVER placeholders
  • Refactored HiOp's linear algebra layer to introduce device-agnostic kernels, RAJA parallel loops, and unified memory (UM) support for efficient host–device data movement
  • Designed and documented GPU build/test workflows on LLNL's Lassen supercomputer (IBM Power9 + NVIDIA V100), including automated ctest parallel testing and jsrun-based job launches
  • Debugged and resolved GPU-related issues using TotalView and cuda-memcheck, and by adjusting RAJA execution policies
  • Followed LLNL development practices including Git feature branching, pull requests, code reviews, and Umpire-aware memory management

Earlier Explorations (2023–2024)

Prior to focusing on neural PDE solvers, I explored several numerical methods and applications:

  • Studied discontinuous Galerkin formulations for coupled flow and deformation in porous media.
  • Explored phase-field approaches for fracture modeling as part of early numerical method studies.

These explorations continue to inform my perspective on multiscale and multiphysics simulation challenges.

Dynamical Systems and Cosmology (Master's Research)

During my master’s studies at San José State University, my research focused on applying dynamical systems theory to cosmological models in general relativity. This work provided a mathematical framework to study the evolution of the universe and analyze the stability of its critical points.

Key Contributions:

  • Lambda Cold Dark Matter (ΛCDM) Model: Analyzed the stability of critical points in the ΛCDM model, examining transitions between radiation-dominated, matter-dominated, and dark energy-dominated phases of the universe.
  • Geometric Insights: Used dynamical systems techniques to explore the relationship between spacetime geometry and energy content in the cosmological evolution equations.
  • Numerical Simulations: Conducted simulations to verify theoretical findings and visualize trajectories of the universe’s evolution (a minimal sketch follows this list).
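
For concreteness, the sketch below integrates the standard flat ΛCDM system written in the density parameters (Ω_r, Ω_m, Ω_Λ) with e-fold time N = ln a; the initial values are illustrative, chosen to start deep in the radiation era, rather than fitted to observations.

```python
# Sketch of the flat LambdaCDM dynamical system in the density parameters
# (Omega_r, Omega_m, Omega_L) with e-fold time N = ln a.
from scipy.integrate import solve_ivp

def lcdm(N, y):
    Or, Om, OL = y
    w_eff = Or / 3.0 - OL                    # effective equation of state
    return [3.0 * Or * (w_eff - 1.0 / 3.0),  # radiation, w = 1/3
            3.0 * Om * w_eff,                # matter,    w = 0
            3.0 * OL * (w_eff + 1.0)]        # Lambda,    w = -1

# Critical points (1,0,0), (0,1,0), (0,0,1) are the radiation-, matter-,
# and dark-energy-dominated phases, respectively.
y0 = [1.0 - 1e-4, 1e-4, 1e-25]
# Tight atol is needed to track the initially tiny Lambda component.
sol = solve_ivp(lcdm, (0.0, 25.0), y0, rtol=1e-10, atol=1e-28)

print(sol.y[:, -1])  # trajectory settles at the de Sitter point (0, 0, 1)
```

Linearizing the right-hand side at each critical point recovers the expected stability picture: the radiation point is a past attractor, the matter point a saddle, and the de Sitter point the late-time attractor.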

Results:

  • Improved understanding of the long-term behavior of cosmological systems.
  • Provided tools for analyzing nonlinear dynamical systems in general relativity.

This research examined the interplay between mathematics and physics, and continues to inform my work on complex systems in applied mathematics.