Merged
4 changes: 4 additions & 0 deletions CHANGELOG.md
@@ -51,6 +51,10 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
(`autodiff`, `finite_difference`, `spectral`, `meshless_finite_difference`,
`least_squares`); spatial derivatives are computed automatically using the
`nn.functional.derivatives` functionals.
- Ported all physics-informed examples (LDC PINNs, Darcy, Stokes MGN, DoMINO,
datacenter, xaeronet, MHD/SWE PINO) to the new `physicsnemo.sym` interface,
replacing the separate `physicsnemo-sym` package dependency. Geometry is now
handled via `physicsnemo.mesh` and PyVista.
- Added geometry functionals in `physicsnemo.nn.functional` for
`mesh_poisson_disk_sample`, `mesh_to_voxel_fraction`, and
`signed_distance_field`.
48 changes: 23 additions & 25 deletions FAQ.md
@@ -4,7 +4,7 @@

- [What is the recommended hardware for training using PhysicsNeMo framework?](#what-is-the-recommended-hardware-for-training-using-physicsnemo-framework)
- [What model architectures are in PhysicsNeMo?](#what-model-architectures-are-in-physicsnemo)
- [What is the difference between PhysicsNeMo Core and Symbolic?](#what-is-the-difference-between-physicsnemo-core-and-symbolic)
- [How do I use physics-informed training with PhysicsNeMo?](#how-do-i-use-physics-informed-training-with-physicsnemo)
- [What can I do if I don't see a PDE in PhysicsNeMo?](#what-can-i-do-if-i-dont-see-a-pde-in-physicsnemo)
- [What is the difference between the pip install and the container?](#what-is-the-difference-between-the-pip-install-and-the-container)

@@ -24,33 +24,31 @@ model architecture can be applied to a specific problem.
These are reference starting points for users to get started.

You can find the list of built-in model architectures
[here](https://github.com/NVIDIA/physicsnemo/tree/main/physicsnemo/models) and
[here](https://github.com/NVIDIA/physicsnemo-sym/tree/main/physicsnemo/sym/models)

## What is the difference between PhysicsNeMo Core and Symbolic?

PhysicsNeMo core is the foundational module that provides the core algorithms, network
architectures and utilities that cover a broad spectrum of Physics-ML approaches.
PhysicsNeMo Symbolic provides pythonic APIs, algorithms and utilities to be used with
PhysicsNeMo core, to explicitly physics inform the model training. This includes symbolic
APIs for PDEs, domain sampling and PDE-based residuals. It also provides higher level
abstraction to compose a training loop from specification of the geometry, PDEs and
constraints like boundary conditions using simple symbolic APIs.
So if you are familiar with PyTorch and want to train model from a dataset, you start
with PhysicsNeMo core and you import PhysicsNeMo symbolic to bring in explicit domain knowledge.
Please refer to the [DeepONet example](https://github.com/physicsnemo/tree/main/examples/cfd/darcy_deeponet_physics)
that illustrates the concept.
If you are an engineer or domain expert accustomed to using numerical solvers, you can
use PhysicsNeMo Symbolic to define your problem at a higher level of abstraction. Please
refer to the [Lid Driven cavity](https://docs.nvidia.com/deeplearning/physicsnemo/physicsnemo-sym/user_guide/basics/lid_driven_cavity_flow.html)
that illustrates the concept.
[here](https://github.com/NVIDIA/physicsnemo/tree/main/physicsnemo/models).

## How do I use physics-informed training with PhysicsNeMo?

PhysicsNeMo includes a `physicsnemo.sym` module (install with
`pip install "nvidia-physicsnemo[sym]"`) that provides symbolic PDE definition,
automatic spatial derivative computation, and physics-informed residual evaluation.
Define your equations using SymPy, then use `PhysicsInformer` to compute PDE
residuals automatically.

See the [LDC PINNs example](examples/cfd/ldc_pinns/) and the
[Darcy physics-informed example](examples/cfd/darcy_physics_informed/) for
complete training scripts.

> **Note:** The separate [PhysicsNeMo-Sym](https://github.com/NVIDIA/physicsnemo-sym)
> repository is being archived. Its core functionality has been upstreamed into
> PhysicsNeMo. See the [migration guide](v2.0-MIGRATION-GUIDE.md#physicsnemo-sym--physicsnemosym)
> for details.
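
A minimal sketch of the SymPy definition step described above (the field names and the Darcy-style equation below are illustrative; `PhysicsInformer` then evaluates such a residual against network outputs):

```python
# Sketch: a steady 2-D Darcy-style diffusion residual defined with SymPy.
# In PhysicsNeMo this expression would live inside a physicsnemo.sym PDE
# definition and be evaluated by PhysicsInformer; here we show only the
# symbolic part so the snippet stands alone.
from sympy import Function, Symbol, pprint

x, y = Symbol("x"), Symbol("y")
u = Function("u")(x, y)  # solution field (e.g. pressure)
k = Function("k")(x, y)  # spatially varying coefficient (e.g. permeability)
f = Symbol("f")          # source term

# Residual of -div(k * grad(u)) - f = 0
residual = -(k * u.diff(x)).diff(x) - (k * u.diff(y)).diff(y) - f
pprint(residual)  # printing the equation is a quick correctness check
```

Printing the expression makes it easy to verify signs and terms before wiring the residual into a training loss.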

## What can I do if I don't see a PDE in PhysicsNeMo?

PhysicsNeMo Symbolic provides a well documented
[example](https://docs.nvidia.com/deeplearning/physicsnemo/physicsnemo-sym/user_guide/foundational/1d_wave_equation.html#writing-custom-pdes-and-boundary-initial-conditions)
that walks you through how to define a custom PDE. Please see the source [here](https://github.com/NVIDIA/physicsnemo-sym/tree/main/physicsnemo/sym/eq/pdes)
to see the built-in PDE implementation as an additional reference for your own implementation.
Define your PDE using SymPy and the `physicsnemo.sym.eq.pde.PDE` base class.
See the [LDC PINNs example](examples/cfd/ldc_pinns/train.py) for an inline
Navier-Stokes definition, or the
[MHD PINO example](examples/cfd/mhd_pino/losses/mhd_pde.py) for a custom MHD PDE.
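
For instance, a custom equation can be written in the same style (the `PDE` base class is stubbed out below so the sketch is self-contained; with PhysicsNeMo installed you would subclass `physicsnemo.sym.eq.pde.PDE` instead, and the Burgers equation here is just an illustration):

```python
# Sketch: a custom 1-D viscous Burgers residual in the style of a
# physicsnemo.sym.eq.pde.PDE subclass. The base class is a stand-in so the
# example runs without PhysicsNeMo.
from sympy import Function, Number, Symbol


class PDE:  # stand-in for physicsnemo.sym.eq.pde.PDE
    equations: dict = {}


class Burgers(PDE):
    """Viscous Burgers equation: du/dt + u * du/dx - nu * d2u/dx2 = 0."""

    def __init__(self, u="u", nu=0.01):
        x, t = Symbol("x"), Symbol("t")
        u_var = Function(u)(x, t)
        nu_var = Number(nu) if isinstance(nu, (int, float)) else nu
        self.equations = {
            f"burgers_{u}": (
                u_var.diff(t) + u_var * u_var.diff(x) - nu_var * u_var.diff(x, 2)
            ),
        }


pde = Burgers()
print(pde.equations)
```

The `equations` dict of named SymPy residuals mirrors the pattern used by the built-in PDEs, so the same class can be handed to the residual machinery.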

## What is the difference between the pip install and the container?

9 changes: 4 additions & 5 deletions README.md
@@ -69,8 +69,7 @@ Component | Description |
[**physicsnemo.datapipes**](https://docs.nvidia.com/deeplearning/physicsnemo/physicsnemo-core/api/physicsnemo.datapipes.html) | Optimized and scalable built-in data pipelines fine-tuned to handle engineering and scientific data structures like point clouds, meshes, etc.|
[**physicsnemo.distributed**](https://docs.nvidia.com/deeplearning/physicsnemo/physicsnemo-core/api/physicsnemo.distributed.html) | A distributed computing sub-module built on top of `torch.distributed` to enable parallel training with just a few steps|
[**physicsnemo.curator**](https://github.com/NVIDIA/physicsnemo-curator) | A sub-module to streamline and accelerate the process of data curation for engineering datasets|
[**physicsnemo.sym.geometry**](https://docs.nvidia.com/deeplearning/physicsnemo/physicsnemo-sym/user_guide/features/csg_and_tessellated_module.html) | A sub-module to handle geometry for DL training using Constructive Solid Geometry modeling and CAD files in STL format|
[**physicsnemo.sym.eq**](https://docs.nvidia.com/deeplearning/physicsnemo/physicsnemo-sym/user_guide/features/nodes.html) | A sub-module to use PDEs in your DL training with several implementations of commonly observed equations and easy ways for customization|
[**physicsnemo.sym**](docs/api/physicsnemo.sym.rst) | Symbolic PDE residual computation — define equations via SymPy and compute physics-informed losses with automatic spatial derivatives (install with `pip install "nvidia-physicsnemo[sym]"`)|
<!-- markdownlint-enable -->

For a complete list, refer to the PhysicsNeMo API documentation for
@@ -110,7 +109,7 @@ physics-informed machine learning (ML) models can be trained quickly and effectively
The framework includes support for advanced
[optimization utilities](https://docs.nvidia.com/deeplearning/physicsnemo/physicsnemo-core/api/physicsnemo.utils.html#module-physicsnemo.utils.capture),
[tailor-made datapipes](https://docs.nvidia.com/deeplearning/physicsnemo/physicsnemo-core/api/physicsnemo.datapipes.html),
and [validation utilities](https://github.com/NVIDIA/physicsnemo-sym/tree/main/physicsnemo/sym/eq)
and [symbolic PDE utilities](physicsnemo/sym/)
to enhance end-to-end training speed.

### A Suite of Physics-Informed ML Models
@@ -124,7 +123,7 @@ includes optimized implementations of families of model architectures such as
Neural Operators:

- [Fourier Neural Operators (FNOs)](physicsnemo/models/fno)
- [DeepONet](https://docs.nvidia.com/deeplearning/physicsnemo/physicsnemo-sym/user_guide/neural_operators/deeponet.html)
- [DeepONet](examples/cfd/darcy_physics_informed/)
- [DoMINO](https://docs.nvidia.com/deeplearning/physicsnemo/physicsnemo-core/examples/cfd/external_aerodynamics/domino/readme.html)
- [Graph Neural Networks (GNNs)](physicsnemo/nn/module/gnn_layers)
- [MeshGraphNet](https://github.com/NVIDIA/physicsnemo/tree/main/examples/cfd/vortex_shedding_mgn)
@@ -137,7 +136,7 @@ Neural Operators:
- [Transsolver](https://github.com/NVIDIA/physicsnemo/tree/main/examples/cfd/darcy_transolver)
- [RNNs](https://github.com/NVIDIA/physicsnemo/tree/main/physicsnemo/models)
- [SwinVRNN](https://github.com/NVIDIA/physicsnemo/tree/main/physicsnemo/models/swinvrnn)
- [Physics-Informed Neural Networks (PINNs)](https://docs.nvidia.com/deeplearning/physicsnemo/physicsnemo-sym/user_guide/foundational/1d_wave_equation.html)
- [Physics-Informed Neural Networks (PINNs)](examples/cfd/ldc_pinns/)

And many others.

4 changes: 2 additions & 2 deletions examples/README.md
@@ -113,8 +113,8 @@ The several examples inside PhysicsNeMo can be classified based on their domains

## Additional examples

In addition to the examples in this repo, more Physics-ML usecases and examples
can be referenced from the [PhysicsNeMo-Sym examples](https://github.com/NVIDIA/physicsnemo-sym/blob/main/examples/README.md).
Physics-informed training examples (PINNs, PINO, physics-informed fine-tuning)
use the `physicsnemo.sym` module. Install with `pip install "nvidia-physicsnemo[sym]"`.

## NVIDIA support

10 changes: 5 additions & 5 deletions examples/cfd/darcy_physics_informed/README.md
@@ -8,15 +8,15 @@ Numerical derivatives (PINO).

This is an extension of the 2D Darcy flow data-driven problem. In addition to the
data loss, we will demonstrate the use of physics constraints, specifically
the equation residual loss. [PhysicsNeMo Sym](https://github.com/NVIDIA/physicsnemo-sym)
the equation residual loss. The `physicsnemo.sym` module (install with `pip install "nvidia-physicsnemo[sym]"`)
has utilities tailored for physics-informed machine learning. It also presents
abstracted APIs that allow users to think about and model the problem through the lens of
equations, constraints, etc. In this example, we will only leverage the physics-informed
utilities to see how we can add physics to an existing data-driven model with ease while
still maintaining the flexibility to define our own training loop and other details.
For a more abstracted definition of these types of problems, where the training loop
definition and other details are taken care of implicitly, you may refer to
[PhysicsNeMo Sym](https://github.com/NVIDIA/physicsnemo-sym)
the `physicsnemo.sym` module (install with `pip install "nvidia-physicsnemo[sym]"`).

## Dataset

@@ -50,12 +50,12 @@ the loss function and the use of one over the other can change from case-to-case
With this example, we intend to demonstrate both such cases so that the users can compare
and contrast the two approaches.

In this example we will use the `PDE` class from PhysicsNeMo-Sym to symbolically define
In this example we will use the `PDE` class from `physicsnemo.sym` to symbolically define
the PDEs and use the `PhysicsInformer` utility to introduce the PDE
constraints. Defining the PDEs symbolically is a very convenient and natural way to
define them, and it allows us to print the equations to check for correctness.
This also abstracts out the
complexity of converting the equation into a pytorch representation. PhysicsNeMo Sym also
complexity of converting the equation into a PyTorch representation. `physicsnemo.sym` also
provides several complex, well-tested PDEs like 3D Navier-Stokes, linear elasticity, and
electromagnetics, pre-defined so they can be used directly in physics-informed
applications.
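
To make the "numerical derivatives" flavor concrete, here is a hedged finite-difference sketch of the Darcy residual `-div(k * grad(u)) - f` on a uniform grid. The function name and grid handling are illustrative only — in the example scripts this role is played by `PhysicsInformer`:

```python
# Sketch: interior residual of -div(k * grad(u)) - f with second-order
# central differences, as one might use for a PINO-style physics loss.
import numpy as np


def darcy_residual(u: np.ndarray, k: np.ndarray, f: np.ndarray, h: float):
    """Return the residual on interior grid points (shape (n-2, n-2))."""
    uc = u[1:-1, 1:-1]
    # coefficient averaged to cell faces (arithmetic mean for simplicity)
    kxp = 0.5 * (k[2:, 1:-1] + k[1:-1, 1:-1])   # k at i+1/2
    kxm = 0.5 * (k[:-2, 1:-1] + k[1:-1, 1:-1])  # k at i-1/2
    kyp = 0.5 * (k[1:-1, 2:] + k[1:-1, 1:-1])   # k at j+1/2
    kym = 0.5 * (k[1:-1, :-2] + k[1:-1, 1:-1])  # k at j-1/2
    div_k_grad_u = (
        kxp * (u[2:, 1:-1] - uc)
        - kxm * (uc - u[:-2, 1:-1])
        + kyp * (u[1:-1, 2:] - uc)
        - kym * (uc - u[1:-1, :-2])
    ) / h**2
    return -div_k_grad_u - f[1:-1, 1:-1]


# Sanity check: u = x^2 + y^2 and k = 1 give -lap(u) = -4, so f = -4 zeroes it.
n = 33
xs = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(xs, xs, indexing="ij")
res = darcy_residual(X**2 + Y**2, np.ones((n, n)), -4.0 * np.ones((n, n)), xs[1] - xs[0])
```

Squaring and averaging `res` (alongside the data term) would give the PINO-style physics loss; the exact-derivative variant instead differentiates the network outputs with autodiff.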
@@ -79,7 +79,7 @@ darcy_physics_informed_fno.py
### Note

If you are running this example outside of the PhysicsNeMo container, install
PhysicsNeMo Sym using the instructions from [here](https://github.com/NVIDIA/physicsnemo-sym?tab=readme-ov-file#pypi)
PhysicsNeMo with the `sym` extra: `pip install "nvidia-physicsnemo[sym]"`.

## References

@@ -27,14 +27,11 @@
from physicsnemo.utils.checkpoint import save_checkpoint
from physicsnemo.models.fno import FNO
from physicsnemo.models.mlp import FullyConnected
from physicsnemo.sym.eq.pdes.diffusion import Diffusion
from physicsnemo.sym.eq.phy_informer import PhysicsInformer
from physicsnemo.sym.key import Key
from physicsnemo.sym.models.arch import Arch
from omegaconf import DictConfig
from torch.utils.data import DataLoader

from utils import HDF5MapStyleDataset
from utils import Diffusion, HDF5MapStyleDataset


def validation_step(graph, dataloader, epoch):
@@ -78,78 +75,42 @@ def validation_step(graph, dataloader, epoch):
return loss_epoch / len(dataloader)


class MdlsSymWrapper(Arch):
"""
Wrapper model to convert PhysicsNeMo model to PhysicsNeMo-Sym model.

PhysicsNeMo Sym relies on the inputs/outputs of the model being dictionary of tensors.
This wrapper converts the input dictionary of tensors to a tensor inputs that can
be processed by the PhysicsNeMo model that operate on tensors. Appropriate
transformations are performed in the forward pass of the model to translate between
these two input/output definitions.

These transformations can differ based on the models. For e.g. typically for a fully
connected network, the input tensors are combined by concatenating them along
appropriate dimension before passing them as an input to the PhysicsNeMo model.
During the output, the process is reversed, the output tensor from pytorch model is
split across appropriate dimensions and then converted to a dictionary with
appropriate keys to produce the final output.

Having the model wrapped in a wrapper like this allows gradient computation using
the PhysicsNeMo Sym's optimized gradient computing backend.

For more details on PhysicsNeMo Sym models, refer:
https://docs.nvidia.com/deeplearning/physicsnemo/physicsnemo-core/tutorials/simple_training_example.html#using-custom-models-in-physicsnemo
For more details on Key class, refer:
https://docs.nvidia.com/deeplearning/physicsnemo/physicsnemo-sym/api/physicsnemo.sym.html#module-physicsnemo.sym.key
"""
class DeepONet(torch.nn.Module):
"""Dict-in/dict-out DeepONet (branch + trunk) model.

def __init__(
self,
input_keys=[Key("k"), Key("x"), Key("y")],
output_keys=[Key("k_prime"), Key("u")],
trunk_net=None,
branch_net=None,
):
super().__init__(
input_keys=input_keys,
output_keys=output_keys,
)
Translates between the dict-of-tensors interface that PhysicsInformer
expects and the raw tensor interface of the underlying FNO + MLP.
"""

def __init__(self, output_keys, trunk_net=None, branch_net=None):
super().__init__()
self.output_keys = output_keys
self.branch_net = branch_net
self.trunk_net = trunk_net

def forward(self, dict_tensor: Dict[str, torch.Tensor]):
# Concatenate x, y inputs to feed into the trunk network, which is an MLP
xy_input_shape = dict_tensor["x"].shape
xy = self.concat_input(
{
k: dict_tensor[k].view(xy_input_shape[0], -1, 1) for k in ["x", "y"]
}, # flatten the coordinate dimensions
["x", "y"],
detach_dict=self.detach_key_dict,
dim=-1, # concat along the last dimension to form the feature vector.
xy = torch.cat(
[dict_tensor[k].view(xy_input_shape[0], -1, 1) for k in ["x", "y"]],
dim=-1,
)
fc_out = self.trunk_net(xy)

# Pass k_prime to the branch network (FNO)
fno_out = self.branch_net(dict_tensor["k_prime"])

# reshape the fc_out
fc_out = fc_out.view(
xy_input_shape[0], -1, xy_input_shape[-2], xy_input_shape[-1]
)

# multiply the outputs of branch and trunk networks to get the final output
out = fc_out * fno_out

return self.split_output(
out, self.output_key_dict, dim=1
) # Split along the channel dimension to get a dictionary of tensors
chunks = torch.split(out, 1, dim=1)
return {k: chunks[i] for i, k in enumerate(self.output_keys)}


@hydra.main(version_base="1.3", config_path="conf", config_name="config_deeponet.yaml")
def main(cfg: DictConfig):
"""Main function for the Darcy physics-informed DeepONet."""

# CUDA support
if torch.cuda.is_available():
device = torch.device("cuda")
@@ -195,9 +156,8 @@ def main(cfg: DictConfig):
# Define k-prime as an auxiliary variable that is a copy of k.
# Having k as the output of the model will allow gradients of k (for pde loss)
# to be computed using Sym's gradient backend
model = MdlsSymWrapper(
input_keys=[Key("k_prime"), Key("x"), Key("y")],
output_keys=[Key("k"), Key("u")],
model = DeepONet(
output_keys=["k", "u"],
trunk_net=model_trunk,
branch_net=model_branch,
).to(device)
@@ -23,12 +23,11 @@
from physicsnemo.utils.logging import LaunchLogger
from physicsnemo.utils.checkpoint import save_checkpoint
from physicsnemo.models.fno import FNO
from physicsnemo.sym.eq.pdes.diffusion import Diffusion
from physicsnemo.sym.eq.phy_informer import PhysicsInformer
from omegaconf import DictConfig
from torch.utils.data import DataLoader

from utils import HDF5MapStyleDataset
from utils import Diffusion, HDF5MapStyleDataset


def validation_step(model, dataloader, epoch):
@@ -71,6 +70,7 @@ def validation_step(model, dataloader, epoch):

@hydra.main(version_base="1.3", config_path="conf", config_name="config_pino.yaml")
def main(cfg: DictConfig):
"""Main function for the Darcy physics-informed FNO."""
# CUDA support
if torch.cuda.is_available():
device = torch.device("cuda")
33 changes: 32 additions & 1 deletion examples/cfd/darcy_physics_informed/utils.py
@@ -28,9 +28,40 @@
import numpy as np
import scipy.io
import torch
from physicsnemo.sym.hydra import to_absolute_path
from hydra.utils import to_absolute_path
from sympy import Function, Number, Symbol
from torch.utils.data import Dataset

from physicsnemo.sym.eq.pde import PDE


class Diffusion(PDE):
"""Diffusion equation: ``dT/dt - div(D * grad(T)) = Q``.

Equivalent to ``physicsnemo-sym``'s ``Diffusion`` class for the 2-D,
steady-state case with variable diffusivity ``D`` as a SymPy Function.

Reference: https://en.wikipedia.org/wiki/Diffusion_equation
"""

def __init__(self, T="T", D="D", Q=0, dim=2, time=False):
"""Initialize with variable name *T*, diffusivity *D*, and source *Q*."""
self.dim = dim
x, y = Symbol("x"), Symbol("y")
iv = {"x": x, "y": y}
T_var = Function(T)(*iv.values())
D_var = Function(D)(*iv.values()) if isinstance(D, str) else Number(D)
Q_var = Number(Q) if isinstance(Q, (int, float)) else Q
self.equations = {
f"diffusion_{T}": (
(T_var.diff(Symbol("t")) if time else 0)
- (D_var * T_var.diff(x)).diff(x)
- (D_var * T_var.diff(y)).diff(y)
- Q_var
),
}


# list of FNO dataset url ids on drive: https://drive.google.com/drive/folders/1UnbQh2WWc6knEHbLn-ZaXrKUZhp7pjt-
_FNO_datatsets_ids = {
"Darcy_241": "1ViDqN7nc_VCnMackiXv_d7CHZANAFKzV",
2 changes: 1 addition & 1 deletion examples/cfd/datacenter/README.md
@@ -71,7 +71,7 @@ mpirun -np <#GPUs> python train.py

Once the model is trained, you can use the inference.py script to compute the
model inference. For generating the Signed Distance Field and geometry for the
inference, we make use of the utilities from PhysicsNeMo-Sym.
inference, we make use of the utilities from `physicsnemo.sym`.

### Training of Physics-Informed model
