diff --git a/CHANGELOG.md b/CHANGELOG.md index 5b557e55bc..142a8738a9 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -51,6 +51,10 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0 (`autodiff`, `finite_difference`, `spectral`, `meshless_finite_difference`, `least_squares`); spatial derivatives are computed automatically using the `nn.functional.derivatives` functionals. +- Ports all physics-informed examples (LDC PINNs, Darcy, Stokes MGN, DoMINO, + datacenter, xaeronet, MHD/SWE PINO) to the new `physicsnemo.sym` interface, + replacing the separate `physicsnemo-sym` package dependency. Geometry is now + handled via `physicsnemo.mesh` and PyVista. - Added geometry functionals in `physicsnemo.nn.functional` for `mesh_poisson_disk_sample`, `mesh_to_voxel_fraction`, and `signed_distance_field`. diff --git a/FAQ.md b/FAQ.md index 498ee33014..e2d8138751 100644 --- a/FAQ.md +++ b/FAQ.md @@ -4,7 +4,7 @@ - [What is the recommended hardware for training using PhysicsNeMo framework?](#what-is-the-recommended-hardware-for-training-using-physicsnemo-framework) - [What model architectures are in PhysicsNeMo?](#what-model-architectures-are-in-physicsnemo) -- [What is the difference between PhysicsNeMo Core and Symbolic?](#what-is-the-difference-between-physicsnemo-core-and-symbolic) +- [How do I use physics-informed training with PhysicsNeMo?](#how-do-i-use-physics-informed-training-with-physicsnemo) - [What can I do if I dont see a PDE in PhysicsNeMo?](#what-can-i-do-if-i-dont-see-a-pde-in-physicsnemo) - [What is the difference between the pip install and the container?](#what-is-the-difference-between-the-pip-install-and-the-container) @@ -24,33 +24,31 @@ model architecture can be applied to a specific problem. These are reference starting points for users to get started. 
You can find the list of built in model architectures -[here](https://github.com/NVIDIA/physicsnemo/tree/main/physicsnemo/models) and -[here](https://github.com/NVIDIA/physicsnemo-sym/tree/main/physicsnemo/sym/models) - -## What is the difference between PhysicsNeMo Core and Symbolic? - -PhysicsNeMo core is the foundational module that provides the core algorithms, network -architectures and utilities that cover a broad spectrum of Physics-ML approaches. -PhysicsNeMo Symbolic provides pythonic APIs, algorithms and utilities to be used with -PhysicsNeMo core, to explicitly physics inform the model training. This includes symbolic -APIs for PDEs, domain sampling and PDE-based residuals. It also provides higher level -abstraction to compose a training loop from specification of the geometry, PDEs and -constraints like boundary conditions using simple symbolic APIs. -So if you are familiar with PyTorch and want to train model from a dataset, you start -with PhysicsNeMo core and you import PhysicsNeMo symbolic to bring in explicit domain knowledge. -Please refer to the [DeepONet example](https://github.com/physicsnemo/tree/main/examples/cfd/darcy_deeponet_physics) -that illustrates the concept. -If you are an engineer or domain expert accustomed to using numerical solvers, you can -use PhysicsNeMo Symbolic to define your problem at a higher level of abstraction. Please -refer to the [Lid Driven cavity](https://docs.nvidia.com/deeplearning/physicsnemo/physicsnemo-sym/user_guide/basics/lid_driven_cavity_flow.html) -that illustrates the concept. +[here](https://github.com/NVIDIA/physicsnemo/tree/main/physicsnemo/models). + +## How do I use physics-informed training with PhysicsNeMo? + +PhysicsNeMo includes a `physicsnemo.sym` module (install with +`pip install "nvidia-physicsnemo[sym]"`) that provides symbolic PDE definition, +automatic spatial derivative computation, and physics-informed residual evaluation. 
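The pattern described here can be sketched without `physicsnemo` installed. `Burgers1D` below is a hypothetical stand-in that mimics how the `PDE` subclasses elsewhere in this diff populate an `equations` dict of SymPy expressions; the real base class is `physicsnemo.sym.eq.pde.PDE`.

```python
# Sketch of the symbolic-PDE pattern: a class storing its residual as a dict
# of SymPy expressions, as the PDE subclasses in this diff do. A plain class
# stands in for physicsnemo.sym.eq.pde.PDE here.
from sympy import Function, Symbol, simplify


class Burgers1D:
    """Viscous Burgers equation: u_t + u * u_x - nu * u_xx = 0."""

    def __init__(self, nu=0.01):
        x, t = Symbol("x"), Symbol("t")
        u = Function("u")(x, t)
        self.equations = {
            "burgers_u": u.diff(t) + u * u.diff(x) - nu * u.diff(x, 2)
        }


pde = Burgers1D(nu=0.1)
x, t = Symbol("x"), Symbol("t")
# The exact solution u = x / (1 + t) zeroes the residual identically
residual = pde.equations["burgers_u"].subs(Function("u")(x, t), x / (1 + t)).doit()
assert simplify(residual) == 0
```

Printing `pde.equations["burgers_u"]` is a quick correctness check before handing the expression to a residual evaluator.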
+Define your equations using SymPy, then use `PhysicsInformer` to compute PDE +residuals automatically. + +See the [LDC PINNs example](examples/cfd/ldc_pinns/) and the +[Darcy physics-informed example](examples/cfd/darcy_physics_informed/) for +complete training scripts. + +> **Note:** The separate [PhysicsNeMo-Sym](https://github.com/NVIDIA/physicsnemo-sym) +> repository is being archived. Its core functionality has been upstreamed into +> PhysicsNeMo. See the [migration guide](v2.0-MIGRATION-GUIDE.md#physicsnemo-sym--physicsnemosym) +> for details. ## What can I do if I dont see a PDE in PhysicsNeMo? -PhysicsNeMo Symbolic provides a well documented -[example](https://docs.nvidia.com/deeplearning/physicsnemo/physicsnemo-sym/user_guide/foundational/1d_wave_equation.html#writing-custom-pdes-and-boundary-initial-conditions) -that walks you through how to define a custom PDE. Please see the source [here](https://github.com/NVIDIA/physicsnemo-sym/tree/main/physicsnemo/sym/eq/pdes) -to see the built-in PDE implementation as an additional reference for your own implementation. +Define your PDE using SymPy and the `physicsnemo.sym.eq.pde.PDE` base class. +See the [LDC PINNs example](examples/cfd/ldc_pinns/train.py) for an inline +Navier-Stokes definition, or the +[MHD PINO example](examples/cfd/mhd_pino/losses/mhd_pde.py) for a custom MHD PDE. ## What is the difference between the pip install and the container? 
diff --git a/README.md b/README.md index 3d1277e82a..b6ba7d52a3 100644 --- a/README.md +++ b/README.md @@ -69,8 +69,7 @@ Component | Description | [**physicsnemo.datapipes**](https://docs.nvidia.com/deeplearning/physicsnemo/physicsnemo-core/api/physicsnemo.datapipes.html) | Optimized and scalable built-in data pipelines fine-tuned to handle engineering and scientific data structures like point clouds, meshes, etc.| [**physicsnemo.distributed**](https://docs.nvidia.com/deeplearning/physicsnemo/physicsnemo-core/api/physicsnemo.distributed.html) | A distributed computing sub-module built on top of `torch.distributed` to enable parallel training with just a few steps| [**physicsnemo.curator**](https://github.com/NVIDIA/physicsnemo-curator) | A sub-module to streamline and accelerate the process of data curation for engineering datasets| -[**physicsnemo.sym.geometry**](https://docs.nvidia.com/deeplearning/physicsnemo/physicsnemo-sym/user_guide/features/csg_and_tessellated_module.html) | A sub-module to handle geometry for DL training using Constructive Solid Geometry modeling and CAD files in STL format| -[**physicsnemo.sym.eq**](https://docs.nvidia.com/deeplearning/physicsnemo/physicsnemo-sym/user_guide/features/nodes.html) | A sub-module to use PDEs in your DL training with several implementations of commonly observed equations and easy ways for customization| +[**physicsnemo.sym**](docs/api/physicsnemo.sym.rst) | Symbolic PDE residual computation — define equations via SymPy and compute physics-informed losses with automatic spatial derivatives (install with `pip install "nvidia-physicsnemo[sym]"`)| For a complete list, refer to the PhysicsNeMo API documentation for @@ -110,7 +109,7 @@ physics-informed machine learning (ML) models can be trained quickly and effecti The framework includes support for advanced [optimization utilities](https://docs.nvidia.com/deeplearning/physicsnemo/physicsnemo-core/api/physicsnemo.utils.html#module-physicsnemo.utils.capture), 
[tailor-made datapipes](https://docs.nvidia.com/deeplearning/physicsnemo/physicsnemo-core/api/physicsnemo.datapipes.html), -and [validation utilities](https://github.com/NVIDIA/physicsnemo-sym/tree/main/physicsnemo/sym/eq) +and [symbolic PDE utilities](physicsnemo/sym/) to enhance end-to-end training speed. ### A Suite of Physics-Informed ML Models @@ -124,7 +123,7 @@ includes optimized implementations of families of model architectures such as Neural Operators: - [Fourier Neural Operators (FNOs)](physicsnemo/models/fno) -- [DeepONet](https://docs.nvidia.com/deeplearning/physicsnemo/physicsnemo-sym/user_guide/neural_operators/deeponet.html) +- [DeepONet](examples/cfd/darcy_physics_informed/) - [DoMINO](https://docs.nvidia.com/deeplearning/physicsnemo/physicsnemo-core/examples/cfd/external_aerodynamics/domino/readme.html) - [Graph Neural Networks (GNNs)](physicsnemo/nn/module/gnn_layers) - [MeshGraphNet](https://github.com/NVIDIA/physicsnemo/tree/main/examples/cfd/vortex_shedding_mgn) @@ -137,7 +136,7 @@ Neural Operators: - [Transsolver](https://github.com/NVIDIA/physicsnemo/tree/main/examples/cfd/darcy_transolver) - [RNNs](https://github.com/NVIDIA/physicsnemo/tree/main/physicsnemo/models) - [SwinVRNN](https://github.com/NVIDIA/physicsnemo/tree/main/physicsnemo/models/swinvrnn) -- [Physics-Informed Neural Networks (PINNs)](https://docs.nvidia.com/deeplearning/physicsnemo/physicsnemo-sym/user_guide/foundational/1d_wave_equation.html) +- [Physics-Informed Neural Networks (PINNs)](examples/cfd/ldc_pinns/) And many others. 
diff --git a/examples/README.md b/examples/README.md index f210eb2ba6..a48b9adff1 100644 --- a/examples/README.md +++ b/examples/README.md @@ -113,8 +113,8 @@ The several examples inside PhysicsNeMo can be classified based on their domains ## Additional examples -In addition to the examples in this repo, more Physics-ML usecases and examples -can be referenced from the [PhysicsNeMo-Sym examples](https://github.com/NVIDIA/physicsnemo-sym/blob/main/examples/README.md). +Physics-informed training examples (PINNs, PINO, physics-informed fine-tuning) +use the `physicsnemo.sym` module. Install with `pip install "nvidia-physicsnemo[sym]"`. ## NVIDIA support diff --git a/examples/cfd/darcy_physics_informed/README.md b/examples/cfd/darcy_physics_informed/README.md index 9a44d2c568..3390ab8473 100644 --- a/examples/cfd/darcy_physics_informed/README.md +++ b/examples/cfd/darcy_physics_informed/README.md @@ -8,7 +8,7 @@ Numerical derivatives (PINO). This is an extension of the 2D Darcy flow data-driven problem. In addition to the data loss, we will demonstrate the use of physics constraints, specifically -the equation residual loss. [PhysicsNeMo Sym](https://github.com/NVIDIA/physicsnemo-sym) +the equation residual loss. The `physicsnemo.sym` module (install with `pip install "nvidia-physicsnemo[sym]"`) has utilities tailored for physics-informed machine learning. It also presents abstracted APIs that allow users to think and model the problem from the lens of equations, constraints, etc. In this example, we will only leverage the physics-informed @@ -16,7 +16,7 @@ utilities to see how we can add physics to an existing data-driven model with ea still maintaining the flexibility to define our own training loop and other details.
For a more abstracted definition of these types of problems, where the training loop definition and other things are taken care of implicitly, you may refer -[PhysicsNeMo Sym](https://github.com/NVIDIA/physicsnemo-sym) +to the `physicsnemo.sym` module (install with `pip install "nvidia-physicsnemo[sym]"`). ## Dataset @@ -50,12 +50,12 @@ the loss function and the use of one over the other can change from case-to-case With this example, we intend to demonstrate both such cases so that the users can compare and contrast the two approaches. -In this example we will use the `PDE` class from PhysicsNeMo-Sym to symbolically define +In this example we will use the `PDE` class from `physicsnemo.sym` to symbolically define the PDEs and use the `PhysicsInformer` utility to introduce the PDE constraints. Defining the PDEs symbolically is very convenient and the most natural way to define these PDEs and allows us to print the equations to check for correctness. This also abstracts out the -complexity of converting the equation into a pytorch representation. PhysicsNeMo Sym also +complexity of converting the equation into a PyTorch representation. `physicsnemo.sym` also provides several complex, well-tested PDEs like 3D Navier-Stokes, linear elasticity, electromagnetics, etc. pre-defined which can be used directly in physics-informing applications.
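To illustrate what converting a symbolic equation into an array-evaluable form involves (plain SymPy and NumPy here; the names and the expanded-residual form are illustrative, not the `PhysicsInformer` API), the Darcy residual can be lambdified over precomputed derivative fields:

```python
# Illustrative sketch: the Darcy residual -div(k grad u) = f, expanded in
# terms of precomputed derivative fields and lambdified so it can be
# evaluated on NumPy arrays from any derivative backend.
import numpy as np
from sympy import lambdify, symbols

k, k_x, k_y, u_x, u_y, u_xx, u_yy, f = symbols("k k_x k_y u_x u_y u_xx u_yy f")
darcy = -(k_x * u_x + k * u_xx + k_y * u_y + k * u_yy) - f
residual_fn = lambdify((k, k_x, k_y, u_x, u_y, u_xx, u_yy, f), darcy, "numpy")

# Manufactured check: k = 1 (so k_x = k_y = 0) and u_xx = u_yy = 1 means
# f = -2 must give a zero residual at every grid point.
n = 8
zero, one = np.zeros((n, n)), np.ones((n, n))
res = residual_fn(one, zero, zero, zero, zero, one, one, -2 * one)
assert np.allclose(res, 0.0)
```

In training, the derivative fields come from autograd or finite differences on the model output rather than from constants.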
@@ -79,7 +79,7 @@ darcy_physics_informed_fno.py ### Note If you are running this example outside of the PhysicsNeMo container, install -PhysicsNeMo Sym using the instructions from [here](https://github.com/NVIDIA/physicsnemo-sym?tab=readme-ov-file#pypi) +PhysicsNeMo with the sym extra: `pip install "nvidia-physicsnemo[sym]"` ## References diff --git a/examples/cfd/darcy_physics_informed/darcy_physics_informed_deeponet.py b/examples/cfd/darcy_physics_informed/darcy_physics_informed_deeponet.py index a528a32174..eb735ba1bd 100644 --- a/examples/cfd/darcy_physics_informed/darcy_physics_informed_deeponet.py +++ b/examples/cfd/darcy_physics_informed/darcy_physics_informed_deeponet.py @@ -27,14 +27,11 @@ from physicsnemo.utils.checkpoint import save_checkpoint from physicsnemo.models.fno import FNO from physicsnemo.models.mlp import FullyConnected -from physicsnemo.sym.eq.pdes.diffusion import Diffusion from physicsnemo.sym.eq.phy_informer import PhysicsInformer -from physicsnemo.sym.key import Key -from physicsnemo.sym.models.arch import Arch from omegaconf import DictConfig from torch.utils.data import DataLoader -from utils import HDF5MapStyleDataset +from utils import Diffusion, HDF5MapStyleDataset def validation_step(graph, dataloader, epoch): @@ -78,78 +75,42 @@ def validation_step(graph, dataloader, epoch): return loss_epoch / len(dataloader) -class MdlsSymWrapper(Arch): - """ - Wrapper model to convert PhysicsNeMo model to PhysicsNeMo-Sym model. - - PhysicsNeMo Sym relies on the inputs/outputs of the model being dictionary of tensors. - This wrapper converts the input dictionary of tensors to a tensor inputs that can - be processed by the PhysicsNeMo model that operate on tensors. Appropriate - transformations are performed in the forward pass of the model to translate between - these two input/output definitions. - - These transformations can differ based on the models. For e.g. 
typically for a fully - connected network, the input tensors are combined by concatenating them along - appropriate dimension before passing them as an input to the PhysicsNeMo model. - During the output, the process is reversed, the output tensor from pytorch model is - split across appropriate dimensions and then converted to a dictionary with - appropriate keys to produce the final output. - - Having the model wrapped in a wrapper like this allows gradient computation using - the PhysicsNeMo Sym's optimized gradient computing backend. - - For more details on PhysicsNeMo Sym models, refer: - https://docs.nvidia.com/deeplearning/physicsnemo/physicsnemo-core/tutorials/simple_training_example.html#using-custom-models-in-physicsnemo - For more details on Key class, refer: - https://docs.nvidia.com/deeplearning/physicsnemo/physicsnemo-sym/api/physicsnemo.sym.html#module-physicsnemo.sym.key - """ +class DeepONet(torch.nn.Module): + """Dict-in/dict-out DeepONet (branch + trunk) model. - def __init__( - self, - input_keys=[Key("k"), Key("x"), Key("y")], - output_keys=[Key("k_prime"), Key("u")], - trunk_net=None, - branch_net=None, - ): - super().__init__( - input_keys=input_keys, - output_keys=output_keys, - ) + Translates between the dict-of-tensors interface that PhysicsInformer + expects and the raw tensor interface of the underlying FNO + MLP. + """ + def __init__(self, output_keys, trunk_net=None, branch_net=None): + super().__init__() + self.output_keys = output_keys self.branch_net = branch_net self.trunk_net = trunk_net def forward(self, dict_tensor: Dict[str, torch.Tensor]): - # Concatenate x, y inputs to feeed in the trunk network which has a MLP xy_input_shape = dict_tensor["x"].shape - xy = self.concat_input( - { - k: dict_tensor[k].view(xy_input_shape[0], -1, 1) for k in ["x", "y"] - }, # flatten the coordinate dimensions - ["x", "y"], - detach_dict=self.detach_key_dict, - dim=-1, # concat along the last dimension to form the feature vector. 
+ xy = torch.cat( + [dict_tensor[k].view(xy_input_shape[0], -1, 1) for k in ["x", "y"]], + dim=-1, ) fc_out = self.trunk_net(xy) - # Pass the k-prime for the FNO input fno_out = self.branch_net(dict_tensor["k_prime"]) - # reshape the fc_out fc_out = fc_out.view( xy_input_shape[0], -1, xy_input_shape[-2], xy_input_shape[-1] ) - - # multiply the outputs of branch and trunk networks to get the final output out = fc_out * fno_out - return self.split_output( - out, self.output_key_dict, dim=1 - ) # Split along the channel dimension to get a dictionary of tensors + chunks = torch.split(out, 1, dim=1) + return {k: chunks[i] for i, k in enumerate(self.output_keys)} @hydra.main(version_base="1.3", config_path="conf", config_name="config_deeponet.yaml") def main(cfg: DictConfig): + """Main function for the Darcy physics-informed DeepONet.""" + # CUDA support if torch.cuda.is_available(): device = torch.device("cuda") @@ -195,9 +156,8 @@ def main(cfg: DictConfig): # Define k-prime as an auxiliary variable that is a copy of k. 
# Having k as the output of the model will allow gradients of k (for pde loss) # to be computed using Sym's gradient backend - model = MdlsSymWrapper( - input_keys=[Key("k_prime"), Key("x"), Key("y")], - output_keys=[Key("k"), Key("u")], + model = DeepONet( + output_keys=["k", "u"], trunk_net=model_trunk, branch_net=model_branch, ).to(device) diff --git a/examples/cfd/darcy_physics_informed/darcy_physics_informed_fno.py b/examples/cfd/darcy_physics_informed/darcy_physics_informed_fno.py index 2035787452..1d15fbef05 100644 --- a/examples/cfd/darcy_physics_informed/darcy_physics_informed_fno.py +++ b/examples/cfd/darcy_physics_informed/darcy_physics_informed_fno.py @@ -23,12 +23,11 @@ from physicsnemo.utils.logging import LaunchLogger from physicsnemo.utils.checkpoint import save_checkpoint from physicsnemo.models.fno import FNO -from physicsnemo.sym.eq.pdes.diffusion import Diffusion from physicsnemo.sym.eq.phy_informer import PhysicsInformer from omegaconf import DictConfig from torch.utils.data import DataLoader -from utils import HDF5MapStyleDataset +from utils import Diffusion, HDF5MapStyleDataset def validation_step(model, dataloader, epoch): @@ -71,6 +70,7 @@ def validation_step(model, dataloader, epoch): @hydra.main(version_base="1.3", config_path="conf", config_name="config_pino.yaml") def main(cfg: DictConfig): + """Main function for the Darcy physics-informed FNO.""" # CUDA support if torch.cuda.is_available(): device = torch.device("cuda") diff --git a/examples/cfd/darcy_physics_informed/utils.py b/examples/cfd/darcy_physics_informed/utils.py index 6af2b91b71..b1f5ae05a8 100644 --- a/examples/cfd/darcy_physics_informed/utils.py +++ b/examples/cfd/darcy_physics_informed/utils.py @@ -28,9 +28,40 @@ import numpy as np import scipy.io import torch -from physicsnemo.sym.hydra import to_absolute_path +from hydra.utils import to_absolute_path +from sympy import Function, Number, Symbol from torch.utils.data import Dataset +from physicsnemo.sym.eq.pde import PDE 
+ + +class Diffusion(PDE): + """Diffusion equation: ``dT/dt - div(D * grad(T)) = Q``. + + Equivalent to ``physicsnemo-sym``'s ``Diffusion`` class in 2-D, with + variable diffusivity ``D`` as a SymPy Function (transient when ``time=True``). + + Reference: https://en.wikipedia.org/wiki/Diffusion_equation + """ + + def __init__(self, T="T", D="D", Q=0, dim=2, time=False): + """Initialize with variable name *T*, diffusivity *D*, and source *Q*.""" + self.dim = dim + x, y, t = Symbol("x"), Symbol("y"), Symbol("t") + iv = {"x": x, "y": y, "t": t} if time else {"x": x, "y": y} + T_var = Function(T)(*iv.values()) + D_var = Function(D)(*iv.values()) if isinstance(D, str) else Number(D) + Q_var = Number(Q) if isinstance(Q, (int, float)) else Q + self.equations = { + f"diffusion_{T}": ( + (T_var.diff(t) if time else 0) + - (D_var * T_var.diff(x)).diff(x) + - (D_var * T_var.diff(y)).diff(y) + - Q_var + ), + } + + # list of FNO dataset url ids on drive: https://drive.google.com/drive/folders/1UnbQh2WWc6knEHbLn-ZaXrKUZhp7pjt- _FNO_datatsets_ids = { "Darcy_241": "1ViDqN7nc_VCnMackiXv_d7CHZANAFKzV", diff --git a/examples/cfd/datacenter/README.md b/examples/cfd/datacenter/README.md index 4dd0e1819d..a941be3ab9 100644 --- a/examples/cfd/datacenter/README.md +++ b/examples/cfd/datacenter/README.md @@ -71,7 +71,7 @@ mpirun -np <#GPUs> python train.py Once the model is trained, you can use the inference.py script to compute the model inference. For generating the Signed Distance Field and geometry for the -inference, we make use of the utilities from PhysicsNeMo-Sym. +inference, we make use of the utilities from `physicsnemo.sym`.
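Running the `Diffusion` class added to `utils.py` above requires `physicsnemo.sym`, but the steady, source-free residual it stores can be reproduced and sanity-checked with plain SymPy alone (a manufactured harmonic solution must zero it):

```python
# Stand-alone check of the steady diffusion residual -(D*T_x)_x - (D*T_y)_y
# with D = 1 and Q = 0: any harmonic T must make it vanish.
from sympy import Function, Symbol, simplify

x, y = Symbol("x"), Symbol("y")
T = Function("T")(x, y)
residual = -(1 * T.diff(x)).diff(x) - (1 * T.diff(y)).diff(y)

# T = x^2 - y^2 is harmonic, so the residual evaluates to -(2 - 2) = 0
res = residual.subs(T, x**2 - y**2).doit()
assert simplify(res) == 0
```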
### Training of Physics-Informed model diff --git a/examples/cfd/datacenter/inference.py b/examples/cfd/datacenter/inference.py index d3bed356b5..e8b9a5c554 100644 --- a/examples/cfd/datacenter/inference.py +++ b/examples/cfd/datacenter/inference.py @@ -30,12 +30,58 @@ from torch.nn.parallel import DistributedDataParallel from physicsnemo.utils import StaticCaptureTraining, StaticCaptureEvaluateNoGrad from apex import optimizers +import itertools import os import numpy as np from vtk.util.numpy_support import vtk_to_numpy, numpy_to_vtk -from physicsnemo.sym.geometry.primitives_3d import Box, Channel -from physicsnemo.sym.utils.io.vtk import var_to_polyvtk -import itertools + + +def _box_sdf(points, lower, upper): + """Euclidean signed distance for an axis-aligned box (positive inside).""" + cx = 0.5 * (lower[0] + upper[0]) + cy = 0.5 * (lower[1] + upper[1]) + cz = 0.5 * (lower[2] + upper[2]) + hx = 0.5 * (upper[0] - lower[0]) + hy = 0.5 * (upper[1] - lower[1]) + hz = 0.5 * (upper[2] - lower[2]) + dx = np.abs(points[:, 0] - cx) - hx + dy = np.abs(points[:, 1] - cy) - hy + dz = np.abs(points[:, 2] - cz) - hz + outside = np.sqrt( + np.maximum(dx, 0) ** 2 + np.maximum(dy, 0) ** 2 + np.maximum(dz, 0) ** 2 + ) + inside = np.minimum(np.maximum(np.maximum(dx, dy), dz), 0) + return -(outside + inside) + + +def _sdf_union(*sdfs): + """CSG union: positive where any operand is positive.""" + return np.maximum.reduce(sdfs) + + +def _sdf_subtract(a, b): + """CSG subtraction (A - B): inside A and outside B.""" + return np.minimum(a, -b) + + +def _repeated_boxes_sdf( + points, lower, upper, spacing, repeat_lower, repeat_higher, center +): + """SDF for repeated boxes: evaluate each copy and take the union (max). + + Uses the Euclidean box SDF per copy, combined via ``max`` (CSG union). + The center parameter defines the center of the original (un-repeated) box; + copies are offset by ``i * spacing`` along x from that center. 
+ """ + combined = np.full(len(points), -np.inf) + cx = center[0] if center is not None else 0.5 * (lower[0] + upper[0]) + half_x = 0.5 * (upper[0] - lower[0]) + for i in range(repeat_lower, repeat_higher + 1): + offset = i * spacing + lo = (cx - half_x + offset, lower[1], lower[2]) + hi = (cx + half_x + offset, upper[1], upper[2]) + combined = np.maximum(combined, _box_sdf(points, lo, hi)) + return combined def reshape_fortran(x, shape): @@ -68,42 +114,26 @@ def generate_mask(points, sample): origin = (0, 0.05, 0) - w1_x = gap / 2 / 1000 # the x distance of the left wall - geo = Box( - (origin[0] + w1_x, origin[1], origin[2]), - (origin[0] + w1_x + rack_x, origin[1] + rack_y, origin[2] + rack_z), - ) - geo = geo.repeat( - gap / 1000 + rack_x, - repeat_lower=(0, 0, 0), - repeat_higher=(int(num_racks - 1), 0, 0), - center=( - origin[0] + w1_x + rack_x / 2, - origin[1] + rack_y / 2, - origin[2] + rack_z / 2, - ), - ) + w1_x = gap / 2 / 1000 + spacing = gap / 1000 + rack_x - geo_block_pos_y = Box( + # Wall blocks (pos_y and neg_y) repeated along x + sdf_block_pos_y = _repeated_boxes_sdf( + points, (origin[0] - w1_x, origin[1] - rack_y, origin[2]), (origin[0] + w1_x, origin[1] + 2, origin[2] + rack_z), + spacing=spacing, + repeat_lower=0, + repeat_higher=int(num_racks), + center=(origin[0], origin[1] - rack_y / 2 + 1, origin[2] + rack_z / 2), ) - geo_block_neg_y = Box( + sdf_block_neg_y = _repeated_boxes_sdf( + points, (origin[0] - w1_x, origin[1] - width - 2 * rack_y - 2, origin[2]), (origin[0] + w1_x, origin[1] - width - rack_y, origin[2] + rack_z), - ) - - geo_block_pos_y = geo_block_pos_y.repeat( - gap / 1000 + rack_x, - repeat_lower=(0, 0, 0), - repeat_higher=(int(num_racks), 0, 0), - center=(origin[0], origin[1] - rack_y / 2 + 1, origin[2] + rack_z / 2), - ) - - geo_block_neg_y = geo_block_neg_y.repeat( - gap / 1000 + rack_x, - repeat_lower=(0, 0, 0), - repeat_higher=(int(num_racks), 0, 0), + spacing=spacing, + repeat_lower=0, + repeat_higher=int(num_racks), 
center=( origin[0], origin[1] - width - 3 * rack_y / 2 - 1, @@ -111,38 +141,48 @@ def generate_mask(points, sample): ), ) - geo_block = geo_block_pos_y + geo_block_neg_y - - rack_top_pos_x = Box( + # Rack-top boxes + sdf_rack_top_pos = _box_sdf( + points, (origin[0] - 5, origin[1] - rack_y, origin[2] + rack_z), (origin[0] + length + 5, origin[1] + 2, origin[2] + height + 10), ) - rack_top_neg_x = Box( + sdf_rack_top_neg = _box_sdf( + points, (origin[0] - 5, origin[1] - width - 2 * rack_y - 2, origin[2] + rack_z), (origin[0] + length + 5, origin[1] - width - rack_y, origin[2] + height + 10), ) - geo_block = geo_block + rack_top_pos_x + rack_top_neg_x + # Union of wall blocks + rack tops (racks are NOT subtracted from the channel, + # matching the original code where the rack variable is unused in the CSG) + sdf_block = _sdf_union( + sdf_block_pos_y, sdf_block_neg_y, sdf_rack_top_pos, sdf_rack_top_neg + ) + + # Channel (no x-boundaries — Euclidean SDF on y and z only) + cy = 0.5 * ((origin[1] - width - 2) + (origin[1] + 2)) + cz = 0.5 * (origin[2] + (origin[2] + height + 10)) + hy = 0.5 * ((origin[1] + 2) - (origin[1] - width - 2)) + hz = 0.5 * ((origin[2] + height + 10) - origin[2]) + dy = np.abs(points[:, 1] - cy) - hy + dz = np.abs(points[:, 2] - cz) - hz + outside_ch = np.sqrt(np.maximum(dy, 0) ** 2 + np.maximum(dz, 0) ** 2) + inside_ch = np.minimum(np.maximum(dy, dz), 0) + sdf_channel = -(outside_ch + inside_ch) + + # hot_aisle = channel - blocks (inside channel AND outside blocks) + sdf_hot_aisle = _sdf_subtract(sdf_channel, sdf_block) hot_aisle_bounds = ( (origin[0], origin[1] - width - 2 * rack_y, origin[2]), (origin[0] + length, origin[1], origin[2] + height), ) - hot_aisle = Channel( - (origin[0] - 5, origin[1] - width - 2, origin[2]), - (origin[0] + length + 5, origin[1] + 2, origin[2] + height + 10), - ) - - hot_aisle = hot_aisle - geo_block - - # Compute SDF on the points - sdf = hot_aisle.sdf(points, params={}) - - return sdf["sdf"], hot_aisle_bounds + 
return sdf_hot_aisle, hot_aisle_bounds def save_to_vtu(data_dict, bounds, output_file): + """Save a dict of 3-D arrays to a VTU file on a rectilinear grid.""" num_cells_x, num_cells_y, num_cells_z = next(iter(data_dict.values())).shape x_min, x_max, y_min, y_max, z_min, z_max = bounds dx = (x_max - x_min) / (num_cells_x - 1) @@ -196,6 +236,7 @@ def save_to_vtu(data_dict, bounds, output_file): @hydra.main(version_base="1.2", config_path="conf", config_name="config_inference") def main(cfg: DictConfig) -> None: + """Run datacenter inference.""" print("Inference Started!") # initialize distributed manager diff --git a/examples/cfd/datacenter/requirements.txt b/examples/cfd/datacenter/requirements.txt index f2cdfd272d..ce298df5ee 100644 --- a/examples/cfd/datacenter/requirements.txt +++ b/examples/cfd/datacenter/requirements.txt @@ -1,4 +1,4 @@ vtk omegaconf hydra-core -nvidia-physicsnemo.sym \ No newline at end of file +nvidia-physicsnemo[sym] \ No newline at end of file diff --git a/examples/cfd/datacenter/train_physics_informed.py b/examples/cfd/datacenter/train_physics_informed.py index b6b527d771..444447f771 100644 --- a/examples/cfd/datacenter/train_physics_informed.py +++ b/examples/cfd/datacenter/train_physics_informed.py @@ -32,8 +32,49 @@ from apex import optimizers import os import numpy as np +from sympy import Function, Number, Symbol + +from physicsnemo.sym.eq.pde import PDE from physicsnemo.sym.eq.phy_informer import PhysicsInformer -from physicsnemo.sym.eq.pdes.navier_stokes import NavierStokes + + +class NavierStokes(PDE): + """Incompressible Navier-Stokes equations (steady, constant density). + + Simplified from the compressible form in physicsnemo-sym for the case + where ``rho`` is constant and ``time=False``. 
+ + Reference: https://turbmodels.larc.nasa.gov/implementrans.html + """ + + def __init__(self, nu=0.01, rho=1.0, dim=3, time=False): + self.dim = dim + x, y, z = Symbol("x"), Symbol("y"), Symbol("z") + iv = {"x": x, "y": y, "z": z} + if dim < 3: + iv.pop("z") + + u = Function("u")(*iv.values()) + v = Function("v")(*iv.values()) + w = Function("w")(*iv.values()) if dim == 3 else Number(0) + p = Function("p")(*iv.values()) + nu, rho = Number(nu), Number(rho) + + self.equations = { + "continuity": u.diff(x) + v.diff(y) + (w.diff(z) if dim == 3 else 0), + } + for label, vel, axis in [("momentum_x", u, x), ("momentum_y", v, y)] + ( + [("momentum_z", w, z)] if dim == 3 else [] + ): + self.equations[label] = ( + u * vel.diff(x) + + v * vel.diff(y) + + (w * vel.diff(z) if dim == 3 else 0) + + (1 / rho) * p.diff(axis) + - nu * vel.diff(x, 2) + - nu * vel.diff(y, 2) + - (nu * vel.diff(z, 2) if dim == 3 else 0) + ) def dilate_mask_3d(mask, padding_size): @@ -66,6 +107,7 @@ def reshape_fortran(x, shape): def validation_step( model, dataset, pos_embed_tensor, epoch, plotting=False, device=None, name="default" ): + """Validation step for the physics-informed training.""" loss_epoch = 0.0 num_samples = 0.0 @@ -138,6 +180,7 @@ def validation_step( version_base="1.2", config_path="conf", config_name="config_physics_informed" ) def main(cfg: DictConfig) -> None: + """Main function for the physics-informed training.""" logger = PythonLogger("main") # General python logger LaunchLogger.initialize() diff --git a/examples/cfd/external_aerodynamics/domino/README.md b/examples/cfd/external_aerodynamics/domino/README.md index f728c94a25..abbc07745b 100644 --- a/examples/cfd/external_aerodynamics/domino/README.md +++ b/examples/cfd/external_aerodynamics/domino/README.md @@ -335,12 +335,12 @@ Note, if you wish to modify the PDEs used for DoMINO, please edit the #### Prerequisites for PDE residuals -The computation of Physics residuals is supported using the PhysicsNeMo-Sym +The computation 
of Physics residuals is supported using the `physicsnemo.sym` library. Install it using ```bash pip install "Cython" -pip install "nvidia-physicsnemo.sym>2.1.0" --no-build-isolation +pip install "nvidia-physicsnemo[sym]" ``` To execute the training using physics losses, run the `train.py` with the diff --git a/examples/cfd/external_aerodynamics/domino/src/loss.py b/examples/cfd/external_aerodynamics/domino/src/loss.py index e328af4efc..61435ffe68 100644 --- a/examples/cfd/external_aerodynamics/domino/src/loss.py +++ b/examples/cfd/external_aerodynamics/domino/src/loss.py @@ -17,13 +17,26 @@ import torch from typing import Literal, Any -from physicsnemo.models.domino.utils import unnormalize - -from typing import Literal, Any - import torch.cuda.nvtx as nvtx +from physicsnemo.models.domino.utils import unnormalize from physicsnemo.models.domino.utils import * +from physicsnemo.nn.functional.derivatives import mesh_lsq_gradient + + +def _build_csr_from_neighbors(neighbors_list, device): + """Build CSR offsets/indices from a ``{node_id: [neighbor_ids]}`` dict.""" + num_nodes = max(neighbors_list.keys()) + 1 + offsets_list = [0] + indices_list = [] + for node_id in range(num_nodes): + if node_id in neighbors_list: + indices_list.extend(neighbors_list[node_id]) + offsets_list.append(len(indices_list)) + return ( + torch.tensor(offsets_list, dtype=torch.int64, device=device), + torch.tensor(indices_list, dtype=torch.int64, device=device), + ) def compute_physics_loss( @@ -32,20 +45,21 @@ def compute_physics_loss( mask: torch.Tensor, loss_type: Literal["mse", "rmse"], dims: tuple[int, ...] | None, - first_deriv: torch.nn.Module, eqn: Any, bounding_box: torch.Tensor, vol_factors: torch.Tensor, ) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]: """Compute physics-based loss terms for Navier-Stokes equations. + Spatial derivatives are computed using ``mesh_lsq_gradient`` from + ``physicsnemo.nn.functional.derivatives``. 
+ Args: output: Model output containing (output, coords_neighbors, output_neighbors, neighbors_list) target: Ground truth values mask: Mask for valid values loss_type: Type of loss to calculate ("mse" or "rmse") dims: Dimensions for loss calculation - first_deriv: First derivative calculator eqn: Equations bounding_box: Bounding box for normalization vol_factors: Volume factors for normalization @@ -66,49 +80,33 @@ def compute_physics_loss( coords_total, bounding_box[0], bounding_box[1] ) - # compute first order gradients on all the nodes from the neighbors_list - grad_list = {} - for parent_id, neighbor_ids in neighbors_list.items(): - neighbor_ids_tensor = torch.tensor(neighbor_ids).to( - output_total_unnormalized.device - ) - du = ( - output_total_unnormalized[:, [parent_id]] - - output_total_unnormalized[:, neighbor_ids_tensor] - ) - dv = ( - coords_total_unnormalized[:, [parent_id]] - - coords_total_unnormalized[:, neighbor_ids_tensor] - ) - grads = first_deriv.forward( - coords=None, connectivity_tensor=None, y=None, du=du, dv=dv - ) - grad = torch.cat(grads, dim=1) - grad_list[parent_id] = grad - - # compute second order gradients on only the center node - neighbor_ids_tensor = torch.tensor(neighbors_list[0]).to( - output_total_unnormalized.device - ) - grad_neighbors_center = torch.stack([v for v in grad_list.values()], dim=1) - grad_neighbors_center = grad_neighbors_center.reshape( - batch_size, len(neighbors_list[0]) + 1, -1 - ) - - du = grad_neighbors_center[:, [0]] - grad_neighbors_center[:, neighbor_ids_tensor] - dv = ( - coords_total_unnormalized[:, [0]] - - coords_total_unnormalized[:, neighbor_ids_tensor] - ) - - # second order gradients - ggrads_center = first_deriv.forward( - coords=None, connectivity_tensor=None, y=None, du=du, dv=dv - ) - ggrad_center = torch.cat(ggrads_center, dim=1) - grad_neighbors_center = grad_neighbors_center.reshape( - batch_size, len(neighbors_list[0]) + 1, 3, -1 - ) + # Build CSR adjacency from the neighbor graph + 
device = output_total_unnormalized.device + offsets, indices = _build_csr_from_neighbors(neighbors_list, device) + num_nodes = max(neighbors_list.keys()) + 1 + + # First-order gradients for all nodes using mesh_lsq_gradient + first_grads_list = [] + for b in range(batch_size): + coords_b = coords_total_unnormalized[b].detach() + values_b = output_total_unnormalized[b] + grads_b = mesh_lsq_gradient(coords_b, values_b, offsets, indices) + first_grads_list.append(grads_b) + grad_neighbors_center = torch.stack(first_grads_list) + + # Second-order gradients at center node (node 0) via mesh_lsq_gradient + # on the first-order gradient results (compose first-order twice) + grad_flat = grad_neighbors_center.reshape(batch_size, num_nodes, -1) + second_grads_list = [] + for b in range(batch_size): + coords_b = coords_total_unnormalized[b].detach() + values_b = grad_flat[b] + sg_b = mesh_lsq_gradient(coords_b, values_b, offsets, indices) + second_grads_list.append(sg_b) + ggrad_all = torch.stack(second_grads_list) + ggrad_center = ggrad_all[:, 0, :, :] + + grad_neighbors_center = grad_neighbors_center.reshape(batch_size, num_nodes, 3, -1) # Get the outputs on the original nodes fields_center_unnormalized = output_total_unnormalized[:, 0, :] @@ -242,7 +240,6 @@ def loss_fn_with_physics( target: torch.Tensor, loss_type: Literal["mse", "rmse"], padded_value: float = -10, - first_deriv: torch.nn.Module = None, eqn: Any = None, bounding_box: torch.Tensor = None, vol_factors: torch.Tensor = None, @@ -254,7 +251,6 @@ def loss_fn_with_physics( target: Ground truth values loss_type: Type of loss to calculate ("mse" or "rmse") padded_value: Value used for padding in the tensor - first_deriv: First derivative calculator eqn: Equations bounding_box: Bounding box for normalization vol_factors: Volume factors for normalization @@ -276,7 +272,6 @@ def loss_fn_with_physics( mask=mask, loss_type=loss_type, dims=dims, - first_deriv=first_deriv, eqn=eqn, bounding_box=bounding_box, 
vol_factors=vol_factors, @@ -381,6 +376,7 @@ def loss_fn_area( def integral_loss_fn( output, target, area, normals, stream_velocity=None, padded_value=-10 ): + """Compute combined drag + lift integral loss.""" drag_loss = drag_loss_fn( output, target, area, normals, stream_velocity=stream_velocity, padded_value=-10 ) @@ -391,6 +387,7 @@ def integral_loss_fn( def lift_loss_fn(output, target, area, normals, stream_velocity=None, padded_value=-10): + """Compute lift coefficient loss from surface pressure and wall shear.""" vel_inlet = stream_velocity # Get this from the dataset mask = abs(target - padded_value) > 1e-3 @@ -417,6 +414,7 @@ def lift_loss_fn(output, target, area, normals, stream_velocity=None, padded_val def drag_loss_fn(output, target, area, normals, stream_velocity=None, padded_value=-10): + """Compute drag coefficient loss from surface pressure and wall shear.""" vel_inlet = stream_velocity # Get this from the dataset mask = abs(target - padded_value) > 1e-3 output_true = target * mask * area * (vel_inlet) ** 2.0 @@ -444,7 +442,6 @@ def compute_loss_dict( integral_scaling_factor: float, surf_loss_scaling: float, vol_loss_scaling: float, - first_deriv: torch.nn.Module | None = None, eqn: Any = None, bounding_box: torch.Tensor | None = None, vol_factors: torch.Tensor | None = None, @@ -476,7 +473,6 @@ def compute_loss_dict( target_vol, loss_fn_type.loss_type, padded_value=-10, - first_deriv=first_deriv, eqn=eqn, bounding_box=bounding_box, vol_factors=vol_factors, diff --git a/examples/cfd/external_aerodynamics/domino/src/train.py b/examples/cfd/external_aerodynamics/domino/src/train.py index 613662c279..524c926c79 100644 --- a/examples/cfd/external_aerodynamics/domino/src/train.py +++ b/examples/cfd/external_aerodynamics/domino/src/train.py @@ -98,13 +98,13 @@ def validation_step( loss_fn_type=None, vol_loss_scaling=None, surf_loss_scaling=None, - first_deriv: torch.nn.Module | None = None, eqn: Any = None, bounding_box: torch.Tensor | None = None, 
vol_factors: torch.Tensor | None = None, add_physics_loss=False, autocast_enabled=None, ): + """Run one validation epoch and return aggregate metrics.""" dm = DistributedManager() running_vloss = 0.0 with torch.no_grad(): @@ -127,7 +127,6 @@ def validation_step( integral_scaling_factor, surf_loss_scaling, vol_loss_scaling, - first_deriv, eqn, bounding_box, vol_factors, @@ -185,7 +184,6 @@ def train_epoch( loss_fn_type, vol_loss_scaling=None, surf_loss_scaling=None, - first_deriv: torch.nn.Module | None = None, eqn: Any = None, bounding_box: torch.Tensor | None = None, vol_factors: torch.Tensor | None = None, @@ -195,6 +193,7 @@ def train_epoch( grad_clip_enabled=None, grad_max_norm=None, ): + """Run one training epoch with optional physics loss.""" dm = DistributedManager() running_loss = 0.0 @@ -228,7 +227,6 @@ def train_epoch( integral_scaling_factor, surf_loss_scaling, vol_loss_scaling, - first_deriv, eqn, bounding_box, vol_factors, @@ -324,6 +322,7 @@ def train_epoch( @hydra.main(version_base="1.3", config_path="conf", config_name="config") def main(cfg: DictConfig) -> None: + """Entry point for DoMINO training.""" ###################################################### # initialize distributed manager ###################################################### @@ -385,18 +384,80 @@ def main(cfg: DictConfig) -> None: if add_physics_loss: from physicsnemo.sym.eq.pde import PDE - from physicsnemo.sym.eq.ls.grads import FirstDeriv - from physicsnemo.sym.eq.pdes.navier_stokes import IncompressibleNavierStokes - else: - PDE = FirstDeriv = IncompressibleNavierStokes = None + from sympy import Function, Number, Symbol + + class IncompressibleNavierStokes(PDE): + """Incompressible Navier-Stokes with variable viscosity (stress tensor form). 
+ + Reference: https://web.stanford.edu/class/me469b/handouts/incompressible.pdf + """ + + def __init__(self, rho=1.0, nu="nu", dim=3, time=False): + """Initialize with density *rho* and viscosity *nu*.""" + self.dim = dim + x, y, z = Symbol("x"), Symbol("y"), Symbol("z") + iv = {"x": x, "y": y, "z": z} + if dim == 2: + iv.pop("z") + u = Function("u")(*iv.values()) + v = Function("v")(*iv.values()) + w = Function("w")(*iv.values()) if dim == 3 else Number(0) + p = Function("p")(*iv.values()) + if isinstance(nu, str): + nu = Function(nu)(*iv.values()) + elif isinstance(nu, (float, int)): + nu = Number(nu) + mu = rho * nu + + tau_xx__x = 2 * mu * u.diff(x, 2) + 2 * mu.diff(x) * u.diff(x) + tau_xy__y = mu * (u.diff(y, 2) + v.diff(x).diff(y)) + mu.diff(y) * ( + u.diff(y) + v.diff(x) + ) + tau_xz__z = mu * (u.diff(z, 2) + w.diff(x).diff(z)) + mu.diff(z) * ( + u.diff(z) + w.diff(x) + ) + tau_xy__x = mu * (u.diff(y).diff(x) + v.diff(x, 2)) + mu.diff(x) * ( + u.diff(y) + v.diff(x) + ) + tau_yy__y = 2 * mu * v.diff(y, 2) + 2 * mu.diff(y) * v.diff(y) + tau_yz__z = mu * (v.diff(z, 2) + w.diff(y).diff(z)) + mu.diff(z) * ( + v.diff(z) + w.diff(y) + ) + tau_xz__x = mu * (u.diff(z).diff(x) + w.diff(x, 2)) + mu.diff(x) * ( + u.diff(z) + w.diff(x) + ) + tau_yz__y = mu * (v.diff(z).diff(y) + w.diff(y, 2)) + mu.diff(y) * ( + v.diff(z) + w.diff(y) + ) + tau_zz__z = 2 * mu * w.diff(z, 2) + 2 * mu.diff(z) * w.diff(z) + + self.equations = { + "continuity": u.diff(x) + v.diff(y) + w.diff(z), + "momentum_x": rho * (u * u.diff(x) + v * u.diff(y) + w * u.diff(z)) + + p.diff(x) + - tau_xx__x + - tau_xy__y + - tau_xz__z, + "momentum_y": rho * (u * v.diff(x) + v * v.diff(y) + w * v.diff(z)) + + p.diff(y) + - tau_xy__x + - tau_yy__y + - tau_yz__z, + "momentum_z": rho * (u * w.diff(x) + v * w.diff(y) + w * w.diff(z)) + + p.diff(z) + - tau_xz__x + - tau_yz__y + - tau_zz__z, + } + if dim == 2: + self.equations.pop("momentum_z") # Initialize physics components conditionally - first_deriv = None eqn 
= None if add_physics_loss: - first_deriv = FirstDeriv(dim=3, direct_input=True) - eqn = IncompressibleNavierStokes(rho=1.226, nu="nu", dim=3, time=False) - eqn = eqn.make_nodes(return_as_dict=True) + ns = IncompressibleNavierStokes(rho=1.226, nu="nu", dim=3, time=False) + computations = ns.make_computations() + eqn = {c.outputs[0]: c for c in computations} # The bounding box is used in calculating the physics loss: bounding_box = None @@ -627,7 +688,6 @@ def main(cfg: DictConfig) -> None: loss_fn_type=cfg.model.loss_function, vol_loss_scaling=cfg.model.vol_loss_scaling, surf_loss_scaling=surface_scaling_loss, - first_deriv=first_deriv, eqn=eqn, bounding_box=bounding_box, vol_factors=vol_factors, @@ -656,7 +716,6 @@ def main(cfg: DictConfig) -> None: loss_fn_type=cfg.model.loss_function, vol_loss_scaling=cfg.model.vol_loss_scaling, surf_loss_scaling=surface_scaling_loss, - first_deriv=first_deriv, eqn=eqn, bounding_box=bounding_box, vol_factors=vol_factors, diff --git a/examples/cfd/external_aerodynamics/domino_nim_finetuning/src/train.py b/examples/cfd/external_aerodynamics/domino_nim_finetuning/src/train.py index 278b3de8bf..9f9c30ff7b 100644 --- a/examples/cfd/external_aerodynamics/domino_nim_finetuning/src/train.py +++ b/examples/cfd/external_aerodynamics/domino_nim_finetuning/src/train.py @@ -77,13 +77,30 @@ # Profiler().initialize() +from physicsnemo.nn.functional.derivatives import mesh_lsq_gradient + + +def _build_csr_from_neighbors(neighbors_list, device): + """Build CSR offsets/indices from a ``{node_id: [neighbor_ids]}`` dict.""" + num_nodes = max(neighbors_list.keys()) + 1 + offsets_list = [0] + indices_list = [] + for node_id in range(num_nodes): + if node_id in neighbors_list: + indices_list.extend(neighbors_list[node_id]) + offsets_list.append(len(indices_list)) + return ( + torch.tensor(offsets_list, dtype=torch.int64, device=device), + torch.tensor(indices_list, dtype=torch.int64, device=device), + ) + + def compute_physics_loss( output: 
torch.Tensor, target: torch.Tensor, mask: torch.Tensor, loss_type: Literal["mse", "rmse"], dims: tuple[int, ...] | None, - first_deriv: torch.nn.Module, eqn: Any, bounding_box: torch.Tensor, vol_factors: torch.Tensor, @@ -96,7 +113,6 @@ def compute_physics_loss( mask: Mask for valid values loss_type: Type of loss to calculate ("mse" or "rmse") dims: Dimensions for loss calculation - first_deriv: First derivative calculator eqn: Equations bounding_box: Bounding box for normalization vol_factors: Volume factors for normalization @@ -117,49 +133,32 @@ def compute_physics_loss( coords_total, bounding_box[0], bounding_box[1] ) - # compute first order gradients on all the nodes from the neighbors_list - grad_list = {} - for parent_id, neighbor_ids in neighbors_list.items(): - neighbor_ids_tensor = torch.tensor(neighbor_ids).to( - output_total_unnormalized.device - ) - du = ( - output_total_unnormalized[:, [parent_id]] - - output_total_unnormalized[:, neighbor_ids_tensor] - ) - dv = ( - coords_total_unnormalized[:, [parent_id]] - - coords_total_unnormalized[:, neighbor_ids_tensor] - ) - grads = first_deriv.forward( - coords=None, connectivity_tensor=None, y=None, du=du, dv=dv - ) - grad = torch.cat(grads, dim=1) - grad_list[parent_id] = grad - - # compute second order gradients on only the center node - neighbor_ids_tensor = torch.tensor(neighbors_list[0]).to( - output_total_unnormalized.device - ) - grad_neighbors_center = torch.stack([v for v in grad_list.values()], dim=1) - grad_neighbors_center = grad_neighbors_center.reshape( - batch_size, len(neighbors_list[0]) + 1, -1 - ) - - du = grad_neighbors_center[:, [0]] - grad_neighbors_center[:, neighbor_ids_tensor] - dv = ( - coords_total_unnormalized[:, [0]] - - coords_total_unnormalized[:, neighbor_ids_tensor] - ) - - # second order gradients - ggrads_center = first_deriv.forward( - coords=None, connectivity_tensor=None, y=None, du=du, dv=dv - ) - ggrad_center = torch.cat(ggrads_center, dim=1) - grad_neighbors_center = 
grad_neighbors_center.reshape( - batch_size, len(neighbors_list[0]) + 1, 3, -1 - ) + # Build CSR adjacency from the neighbor graph + device = output_total_unnormalized.device + offsets, indices = _build_csr_from_neighbors(neighbors_list, device) + num_nodes = max(neighbors_list.keys()) + 1 + + # First-order gradients for all nodes using mesh_lsq_gradient + first_grads_list = [] + for b in range(batch_size): + coords_b = coords_total_unnormalized[b].detach() + values_b = output_total_unnormalized[b] + grads_b = mesh_lsq_gradient(coords_b, values_b, offsets, indices) + first_grads_list.append(grads_b) + grad_neighbors_center = torch.stack(first_grads_list) + + # Second-order gradients via mesh_lsq_gradient on first-order results + grad_flat = grad_neighbors_center.reshape(batch_size, num_nodes, -1) + second_grads_list = [] + for b in range(batch_size): + coords_b = coords_total_unnormalized[b].detach() + values_b = grad_flat[b] + sg_b = mesh_lsq_gradient(coords_b, values_b, offsets, indices) + second_grads_list.append(sg_b) + ggrad_all = torch.stack(second_grads_list) + ggrad_center = ggrad_all[:, 0, :, :] + + grad_neighbors_center = grad_neighbors_center.reshape(batch_size, num_nodes, 3, -1) # Get the outputs on the original nodes fields_center_unnormalized = output_total_unnormalized[:, 0, :] @@ -293,7 +292,6 @@ def loss_fn_with_physics( target: torch.Tensor, loss_type: Literal["mse", "rmse"], padded_value: float = -10, - first_deriv: torch.nn.Module = None, eqn: Any = None, bounding_box: torch.Tensor = None, vol_factors: torch.Tensor = None, @@ -305,7 +303,6 @@ def loss_fn_with_physics( target: Ground truth values loss_type: Type of loss to calculate ("mse" or "rmse") padded_value: Value used for padding in the tensor - first_deriv: First derivative calculator eqn: Equations bounding_box: Bounding box for normalization vol_factors: Volume factors for normalization @@ -327,7 +324,6 @@ def loss_fn_with_physics( mask=mask, loss_type=loss_type, dims=dims, - 
first_deriv=first_deriv, eqn=eqn, bounding_box=bounding_box, vol_factors=vol_factors, @@ -427,6 +423,7 @@ def loss_fn_area( def integral_loss_fn( output, target, area, normals, stream_velocity=None, padded_value=-10 ): + """Compute combined drag + lift integral loss.""" drag_loss = drag_loss_fn( output, target, area, normals, stream_velocity=stream_velocity, padded_value=-10 ) @@ -437,6 +434,7 @@ def integral_loss_fn( def lift_loss_fn(output, target, area, normals, stream_velocity=None, padded_value=-10): + """Compute lift coefficient loss from surface pressure and wall shear.""" vel_inlet = stream_velocity # Get this from the dataset mask = abs(target - padded_value) > 1e-3 @@ -463,6 +461,7 @@ def lift_loss_fn(output, target, area, normals, stream_velocity=None, padded_val def drag_loss_fn(output, target, area, normals, stream_velocity=None, padded_value=-10): + """Compute drag coefficient loss from surface pressure and wall shear.""" vel_inlet = stream_velocity # Get this from the dataset mask = abs(target - padded_value) > 1e-3 output_true = target * mask * area * (vel_inlet) ** 2.0 @@ -490,7 +489,6 @@ def compute_loss_dict( integral_scaling_factor: float, surf_loss_scaling: float, vol_loss_scaling: float, - first_deriv: torch.nn.Module | None = None, eqn: Any = None, bounding_box: torch.Tensor | None = None, vol_factors: torch.Tensor | None = None, @@ -522,7 +520,6 @@ def compute_loss_dict( target_vol, loss_fn_type.loss_type, padded_value=-10, - first_deriv=first_deriv, eqn=eqn, bounding_box=bounding_box, vol_factors=vol_factors, @@ -610,12 +607,12 @@ def validation_step( loss_fn_type=None, vol_loss_scaling=None, surf_loss_scaling=None, - first_deriv: torch.nn.Module | None = None, eqn: Any = None, bounding_box: torch.Tensor | None = None, vol_factors: torch.Tensor | None = None, add_physics_loss=False, ): + """Run one validation epoch and return aggregate metrics.""" running_vloss = 0.0 with torch.no_grad(): for i_batch, sample_batched in 
enumerate(dataloader): @@ -637,7 +634,6 @@ def validation_step( integral_scaling_factor, surf_loss_scaling, vol_loss_scaling, - first_deriv, eqn, bounding_box, vol_factors, @@ -666,12 +662,12 @@ def train_epoch( loss_fn_type, vol_loss_scaling=None, surf_loss_scaling=None, - first_deriv: torch.nn.Module | None = None, eqn: Any = None, bounding_box: torch.Tensor | None = None, vol_factors: torch.Tensor | None = None, add_physics_loss=False, ): + """Run one training epoch with optional physics loss.""" dist = DistributedManager() running_loss = 0.0 @@ -704,7 +700,6 @@ def train_epoch( integral_scaling_factor, surf_loss_scaling, vol_loss_scaling, - first_deriv, eqn, bounding_box, vol_factors, @@ -758,6 +753,7 @@ def train_epoch( @hydra.main(version_base="1.3", config_path="conf", config_name="config") def main(cfg: DictConfig) -> None: + """Entry point for DoMINO NIM fine-tuning.""" # initialize distributed manager DistributedManager.initialize() dist = DistributedManager() @@ -784,10 +780,9 @@ def main(cfg: DictConfig) -> None: if add_physics_loss: from physicsnemo.sym.eq.pde import PDE - from physicsnemo.sym.eq.ls.grads import FirstDeriv - from physicsnemo.sym.eq.pdes.navier_stokes import IncompressibleNavierStokes + from sympy import Function, Number, Symbol else: - PDE = FirstDeriv = IncompressibleNavierStokes = None + PDE = None num_vol_vars = 0 volume_variable_names = [] @@ -927,12 +922,73 @@ def main(cfg: DictConfig) -> None: ) # Initialize physics components conditionally - first_deriv = None eqn = None if add_physics_loss: - first_deriv = FirstDeriv(dim=3, direct_input=True) - eqn = IncompressibleNavierStokes(rho=1.226, nu="nu", dim=3, time=False) - eqn = eqn.make_nodes(return_as_dict=True) + + class IncompressibleNavierStokes(PDE): + """Incompressible Navier-Stokes with variable viscosity.""" + + def __init__(self, rho=1.0, nu="nu", dim=3, time=False): + """Initialize with density *rho* and viscosity *nu*.""" + self.dim = dim + x, y, z = Symbol("x"), 
Symbol("y"), Symbol("z") + iv = {"x": x, "y": y, "z": z} + if dim == 2: + iv.pop("z") + u = Function("u")(*iv.values()) + v = Function("v")(*iv.values()) + w = Function("w")(*iv.values()) if dim == 3 else Number(0) + p = Function("p")(*iv.values()) + if isinstance(nu, str): + nu = Function(nu)(*iv.values()) + elif isinstance(nu, (float, int)): + nu = Number(nu) + mu = rho * nu + tau_xx__x = 2 * mu * u.diff(x, 2) + 2 * mu.diff(x) * u.diff(x) + tau_xy__y = mu * (u.diff(y, 2) + v.diff(x).diff(y)) + mu.diff(y) * ( + u.diff(y) + v.diff(x) + ) + tau_xz__z = mu * (u.diff(z, 2) + w.diff(x).diff(z)) + mu.diff(z) * ( + u.diff(z) + w.diff(x) + ) + tau_xy__x = mu * (u.diff(y).diff(x) + v.diff(x, 2)) + mu.diff(x) * ( + u.diff(y) + v.diff(x) + ) + tau_yy__y = 2 * mu * v.diff(y, 2) + 2 * mu.diff(y) * v.diff(y) + tau_yz__z = mu * (v.diff(z, 2) + w.diff(y).diff(z)) + mu.diff(z) * ( + v.diff(z) + w.diff(y) + ) + tau_xz__x = mu * (u.diff(z).diff(x) + w.diff(x, 2)) + mu.diff(x) * ( + u.diff(z) + w.diff(x) + ) + tau_yz__y = mu * (v.diff(z).diff(y) + w.diff(y, 2)) + mu.diff(y) * ( + v.diff(z) + w.diff(y) + ) + tau_zz__z = 2 * mu * w.diff(z, 2) + 2 * mu.diff(z) * w.diff(z) + self.equations = { + "continuity": u.diff(x) + v.diff(y) + w.diff(z), + "momentum_x": rho * (u * u.diff(x) + v * u.diff(y) + w * u.diff(z)) + + p.diff(x) + - tau_xx__x + - tau_xy__y + - tau_xz__z, + "momentum_y": rho * (u * v.diff(x) + v * v.diff(y) + w * v.diff(z)) + + p.diff(y) + - tau_xy__x + - tau_yy__y + - tau_yz__z, + "momentum_z": rho * (u * w.diff(x) + v * w.diff(y) + w * w.diff(z)) + + p.diff(z) + - tau_xz__x + - tau_yz__y + - tau_zz__z, + } + if dim == 2: + self.equations.pop("momentum_z") + + ns = IncompressibleNavierStokes(rho=1.226, nu="nu", dim=3, time=False) + computations = ns.make_computations() + eqn = {c.outputs[0]: c for c in computations} # Initialize the scaler for mixed precision scaler = GradScaler() @@ -1012,7 +1068,6 @@ def main(cfg: DictConfig) -> None: loss_fn_type=cfg.model.loss_function, 
vol_loss_scaling=cfg.model.vol_loss_scaling, surf_loss_scaling=surface_scaling_loss, - first_deriv=first_deriv, eqn=eqn, bounding_box=bounding_box, vol_factors=vol_factors_tensor, @@ -1036,7 +1091,6 @@ def main(cfg: DictConfig) -> None: loss_fn_type=cfg.model.loss_function, vol_loss_scaling=cfg.model.vol_loss_scaling, surf_loss_scaling=surface_scaling_loss, - first_deriv=first_deriv, eqn=eqn, bounding_box=bounding_box, vol_factors=vol_factors_tensor, diff --git a/examples/cfd/external_aerodynamics/xaeronet/surface/preprocessor.py b/examples/cfd/external_aerodynamics/xaeronet/surface/preprocessor.py index 04c6f060dd..d5bc3492a3 100644 --- a/examples/cfd/external_aerodynamics/xaeronet/surface/preprocessor.py +++ b/examples/cfd/external_aerodynamics/xaeronet/surface/preprocessor.py @@ -43,7 +43,49 @@ from omegaconf import DictConfig from physicsnemo.datapipes.cae.readers import read_vtp -from physicsnemo.sym.geometry.tessellation import Tessellation +from physicsnemo.mesh.io import from_pyvista +from physicsnemo.mesh.sampling import sample_random_points_on_cells + + +def load_stl_mesh(stl_file): + """Load an STL file and return a PyVista triangular surface mesh.""" + return pv.read(stl_file) + + +def sample_boundary_from_mesh(pv_mesh, num_points): + """Area-weighted sampling on a triangulated surface using physicsnemo.mesh. + + Returns dict with ``x, y, z, normal_x, normal_y, normal_z, area`` arrays, + matching the interface of the former ``Tessellation.sample_boundary``. 
+ """ + pv_mesh = pv_mesh.triangulate() + pv_mesh = pv_mesh.compute_normals( + cell_normals=True, point_normals=False, auto_orient_normals=True + ) + cell_normals = pv_mesh.cell_data["Normals"] + areas = pv_mesh.compute_cell_sizes(length=False, volume=False)["Area"] + total_area = areas.sum() + + mesh = from_pyvista(pv_mesh, manifold_dim=2) + + probs = torch.tensor(areas / total_area, dtype=torch.float32) + cell_indices = torch.multinomial(probs, num_points, replacement=True) + + pts = sample_random_points_on_cells(mesh, cell_indices).numpy() + + normals = cell_normals[cell_indices.numpy()] + area_per_point = np.full((num_points, 1), total_area / num_points) + + return { + "x": pts[:, 0:1], + "y": pts[:, 1:2], + "z": pts[:, 2:3], + "normal_x": normals[:, 0:1], + "normal_y": normals[:, 1:2], + "normal_z": normals[:, 2:3], + "area": area_per_point, + } + from dataloader import PartitionedGraph @@ -151,7 +193,7 @@ def process_run( try: # Load the STL and VTP files - obj = Tessellation.from_stl(stl_file, airtight=False) + obj = load_stl_mesh(stl_file) surface_mesh = read_vtp(vtp_file) surface_mesh = convert_to_triangular_mesh(surface_mesh) surface_vertices = fetch_mesh_vertices(surface_mesh) @@ -177,7 +219,7 @@ def process_run( for num_points in sorted_points: # Sample the boundary points for the current level - boundary = obj.sample_boundary(num_points) + boundary = sample_boundary_from_mesh(obj, num_points) points = np.concatenate( [boundary["x"], boundary["y"], boundary["z"]], axis=1 ) @@ -312,6 +354,7 @@ def process_all_runs( @hydra.main(version_base="1.3", config_path="conf", config_name="config") def main(cfg: DictConfig) -> None: + """Entry point for xaeronet surface preprocessing.""" process_all_runs( base_path=to_absolute_path(cfg.data_path), num_points=cfg.num_nodes, diff --git a/examples/cfd/ldc_pinns/README.md b/examples/cfd/ldc_pinns/README.md index c198af0caf..89c461f819 100644 --- a/examples/cfd/ldc_pinns/README.md +++ b/examples/cfd/ldc_pinns/README.md 
@@ -2,12 +2,12 @@ This example demonstrates how to set up a purely physics-driven model for solving a Lid Driven Cavity (LDC) flow using PINNs. The goal of this example is to demonstrate the -interoperability of PhysicsNeMo, PhysicsNeMo-Sym and PyTorch. This example adopts a workflow +interoperability of PhysicsNeMo, `physicsnemo.sym` and PyTorch. This example adopts a workflow where appropriate utilities are imported from `physicsnemo`, `physicsnemo.sym` and `torch` to define the training pipeline. Specifically, this example demonstrates how the geometry and physics utilities from -PhysicsNeMo-Sym can be used in custom training pipelines to handle geometry objects +`physicsnemo.sym` can be used in custom training pipelines to handle geometry objects (typically found in Computer Aided Engineering (CAE)) workflows and introduce physics residual and boundary condition losses. @@ -15,14 +15,14 @@ This example takes a non-abstracted way to define the problem. The boundary condition constraints, residual constraints, and the subsequent physics loss computation are defined explicitly. For a more abstracted version of this workflow, where some of these steps are automated and abstracted, we recommend the -[Introductory example tutorial from PhysicsNeMo-Sym](https://docs.nvidia.com/deeplearning/physicsnemo/physicsnemo-sym/user_guide/basics/lid_driven_cavity_flow.html). +[Introductory example tutorial from `physicsnemo.sym`](v2.0-MIGRATION-GUIDE.md#physicsnemo-sym--physicsnemosym). ## Getting Started ### Prerequisites If you are running this example outside of the PhysicsNeMo container, install -PhysicsNeMo Sym using the instructions from [here](https://github.com/NVIDIA/physicsnemo-sym?tab=readme-ov-file#pypi) +PhysicsNeMo with the sym extra: `pip install "nvidia-physicsnemo[sym]"` ### Training @@ -35,13 +35,13 @@ python train.py This should start training the model. Since this is training in a purely Physics based fashion, there is no dataset required. 
-Instead, we generate the geometry using the PhysicsNeMo Sym's geometry module and sample
+Instead, we generate the geometry using the `physicsnemo.mesh` module and sample
 point cloud using `GeometryDatapipe` utility. For more details refer documentation
-[here](https://docs.nvidia.com/deeplearning/physicsnemo/physicsnemo-sym/api/physicsnemo.sym.geometry.html#physicsnemo.sym.geometry.geometry_dataloader.GeometryDatapipe)
+[here](v2.0-MIGRATION-GUIDE.md#physicsnemo-sym--physicsnemosym)

 For computing the physics losses, we will use the `PhysicsInformer` utility from
-PhysicsNeMo-Sym. For more details, refer documentation
-[here](https://docs.nvidia.com/deeplearning/physicsnemo/physicsnemo-sym/api/physicsnemo.sym.eq.html#physicsnemo.sym.eq.phy_informer.PhysicsInformer)
+`physicsnemo.sym`. For more details, refer to the documentation
+[here](v2.0-MIGRATION-GUIDE.md#physicsnemo-sym--physicsnemosym)

 The results would get saved in the `./outputs/` directory.
diff --git a/examples/cfd/ldc_pinns/train.py b/examples/cfd/ldc_pinns/train.py
index 63471bfe06..5e58efcbc0 100644
--- a/examples/cfd/ldc_pinns/train.py
+++ b/examples/cfd/ldc_pinns/train.py
@@ -22,19 +22,60 @@
 from physicsnemo.utils.logging import PythonLogger
 from physicsnemo.models.fno import FNO
 from physicsnemo.models.mlp.fully_connected import FullyConnected
-from physicsnemo.sym.eq.pdes.navier_stokes import NavierStokes
+from sympy import Function, Number, Symbol
+
+from physicsnemo.mesh import Mesh
+from physicsnemo.mesh.primitives.planar.structured_grid import (
+    load as load_structured_grid,
+)
+from physicsnemo.mesh.sampling import sample_random_points_on_cells
+from physicsnemo.sym.eq.pde import PDE
 from physicsnemo.sym.eq.phy_informer import PhysicsInformer
-from physicsnemo.sym.geometry.geometry_dataloader import GeometryDatapipe
-from physicsnemo.sym.geometry.primitives_2d import Rectangle
 from physicsnemo.utils import StaticCaptureEvaluateNoGrad, StaticCaptureTraining
 from omegaconf import DictConfig
-from sympy import Abs, Eq, Symbol from torch.nn import MSELoss from torch.optim import Adam, lr_scheduler +class NavierStokes(PDE): + """Incompressible Navier-Stokes equations (steady, 2D). + + Simplified from the compressible form in physicsnemo-sym for the case + where ``rho`` is constant and ``time=False``. + + Reference: https://turbmodels.larc.nasa.gov/implementrans.html + """ + + def __init__(self, nu=0.01, rho=1.0, dim=2, time=False): + self.dim = dim + x, y = Symbol("x"), Symbol("y") + iv = {"x": x, "y": y} + u = Function("u")(*iv.values()) + v = Function("v")(*iv.values()) + p = Function("p")(*iv.values()) + nu, rho = Number(nu), Number(rho) + self.equations = { + "continuity": u.diff(x) + v.diff(y), + "momentum_x": ( + u * u.diff(x) + + v * u.diff(y) + + (1 / rho) * p.diff(x) + - nu * u.diff(x, 2) + - nu * u.diff(y, 2) + ), + "momentum_y": ( + u * v.diff(x) + + v * v.diff(y) + + (1 / rho) * p.diff(y) + - nu * v.diff(x, 2) + - nu * v.diff(y, 2) + ), + } + + @hydra.main(version_base="1.3", config_path=".", config_name="config.yaml") def ldc_trainer(cfg: DictConfig) -> None: + """Main function for the LDC PINNs.""" DistributedManager.initialize() # Only call this once in the entire script! 
dist = DistributedManager() # call if required elsewhere @@ -42,10 +83,43 @@ def ldc_trainer(cfg: DictConfig) -> None: log = PythonLogger(name="ldc") log.file_logging() - # make geometry + # domain geometry using physicsnemo.mesh height = 0.1 width = 0.1 - rec = Rectangle((-width / 2, -height / 2), (width / 2, height / 2)) + x_min, x_max = -width / 2, width / 2 + y_min, y_max = -height / 2, height / 2 + + interior_mesh = load_structured_grid( + x_min=x_min, + x_max=x_max, + y_min=y_min, + y_max=y_max, + n_x=50, + n_y=50, + device=dist.device, + ) + boundary_mesh = interior_mesh.get_boundary_mesh() + + def sample_boundary(n_points, device): + """Sample on the rectangle boundary using physicsnemo.mesh.""" + cell_indices = torch.randint( + 0, boundary_mesh.n_cells, (n_points,), device=device + ) + pts = sample_random_points_on_cells(boundary_mesh, cell_indices) + return {"x": pts[:, 0], "y": pts[:, 1]} + + def sample_interior(n_points, device): + """Sample inside the rectangle using physicsnemo.mesh, with analytical SDF.""" + cell_indices = torch.randint( + 0, interior_mesh.n_cells, (n_points,), device=device + ) + pts = sample_random_points_on_cells(interior_mesh, cell_indices) + x, y = pts[:, 0], pts[:, 1] + sdf = torch.min( + torch.stack([x - x_min, x_max - x, y - y_min, y_max - y], dim=-1), + dim=-1, + ).values + return {"x": x, "y": y, "sdf": sdf} model = FullyConnected( in_features=2, out_features=3, num_layers=6, layer_size=512 @@ -73,92 +147,65 @@ def ldc_trainer(cfg: DictConfig) -> None: torch.from_numpy(yy).to(torch.float).to(dist.device), ) - # bc dataloader - bc_dataloader = GeometryDatapipe( - geom_objects=[rec], - batch_size=1, - num_points=2000, - sample_type="surface", - device=dist.device, - num_workers=1, - requested_vars=["x", "y"], - ) - - # interior dataloader - interior_dataloader = GeometryDatapipe( - geom_objects=[rec], - batch_size=1, - num_points=4000, - sample_type="volume", - device=dist.device, - num_workers=1, - requested_vars=["x", "y", 
"sdf"], - ) - for i in range(10000): - for bc_data, int_data in zip(bc_dataloader, interior_dataloader): - optimizer.zero_grad() - - # subsample points: - no_slip = {} - top_wall = {} - y_vals = bc_data[0]["y"] - mask_no_slip = y_vals < height / 2 - mask_top_wall = y_vals == height / 2 - - for k in bc_data[0].keys(): - no_slip[k] = (bc_data[0][k][mask_no_slip]).reshape(-1, 1) - top_wall[k] = (bc_data[0][k][mask_top_wall]).reshape(-1, 1) - - interior = {} - for k, v in int_data[0].items(): - # set requires_grad to true to enable gradient computation using autodiff - if k in ["x", "y"]: - requires_grad = True - else: - requires_grad = False - interior[k] = v.reshape(-1, 1).requires_grad_(requires_grad) - - # apply BC constraints - coords = torch.cat([interior["x"], interior["y"]], dim=1) - no_slip_out = model(torch.cat([no_slip["x"], no_slip["y"]], dim=1)) - top_wall_out = model(torch.cat([top_wall["x"], top_wall["y"]], dim=1)) - interior_out = model(coords) - - v_no_slip = torch.mean(no_slip_out[:, 1:2] ** 2) - u_no_slip = torch.mean(no_slip_out[:, 0:1] ** 2) - u_slip = torch.mean( - ((top_wall_out[:, 0:1] - 1.0) ** 2) - * (1 - 20 * torch.abs(top_wall["x"])) - ) # weight the edges zero. 
- v_slip = torch.mean(top_wall_out[:, 1:2] ** 2) - - # apply interior constraints - phy_loss_dict = phy_inf.forward( - { - "coordinates": coords, - "u": interior_out[:, 0:1], - "v": interior_out[:, 1:2], - "p": interior_out[:, 2:3], - } - ) - - cont = phy_loss_dict["continuity"] * interior["sdf"] - mom_x = phy_loss_dict["momentum_x"] * interior["sdf"] - mom_y = phy_loss_dict["momentum_y"] * interior["sdf"] - - phy_loss = ( - 1 * torch.mean(cont**2) - + 1 * torch.mean(mom_x**2) - + 1 * torch.mean(mom_y**2) - + u_no_slip - + v_no_slip - + u_slip - + v_slip - ) - phy_loss.backward() - optimizer.step() - scheduler.step() + optimizer.zero_grad() + + bc_data = sample_boundary(2000, dist.device) + int_data = sample_interior(4000, dist.device) + + y_vals = bc_data["y"] + mask_top_wall = y_vals >= height / 2 - 1e-7 + mask_no_slip = ~mask_top_wall + + no_slip_xy = torch.stack( + [bc_data["x"][mask_no_slip], bc_data["y"][mask_no_slip]], dim=-1 + ) + top_wall_x = bc_data["x"][mask_top_wall].unsqueeze(-1) + top_wall_xy = torch.stack( + [bc_data["x"][mask_top_wall], bc_data["y"][mask_top_wall]], dim=-1 + ) + + int_x = int_data["x"].unsqueeze(-1).requires_grad_(True) + int_y = int_data["y"].unsqueeze(-1).requires_grad_(True) + int_sdf = int_data["sdf"].unsqueeze(-1) + coords = torch.cat([int_x, int_y], dim=1) + + no_slip_out = model(no_slip_xy) + top_wall_out = model(top_wall_xy) + interior_out = model(coords) + + u_no_slip = torch.mean(no_slip_out[:, 0:1] ** 2) + v_no_slip = torch.mean(no_slip_out[:, 1:2] ** 2) + u_slip = torch.mean( + ((top_wall_out[:, 0:1] - 1.0) ** 2) * (1 - 20 * torch.abs(top_wall_x)) + ) + v_slip = torch.mean(top_wall_out[:, 1:2] ** 2) + + phy_loss_dict = phy_inf.forward( + { + "coordinates": coords, + "u": interior_out[:, 0:1], + "v": interior_out[:, 1:2], + "p": interior_out[:, 2:3], + } + ) + + cont = phy_loss_dict["continuity"] * int_sdf + mom_x = phy_loss_dict["momentum_x"] * int_sdf + mom_y = phy_loss_dict["momentum_y"] * int_sdf + + phy_loss = ( + 
torch.mean(cont**2)
+            + torch.mean(mom_x**2)
+            + torch.mean(mom_y**2)
+            + u_no_slip
+            + v_no_slip
+            + u_slip
+            + v_slip
+        )
+        phy_loss.backward()
+        optimizer.step()
+        scheduler.step()
 
         if i % 1000 == 0:
             with torch.no_grad():
diff --git a/examples/cfd/mhd_pino/README.md b/examples/cfd/mhd_pino/README.md
index b4f0b60c4a..9dc2a04e07 100644
--- a/examples/cfd/mhd_pino/README.md
+++ b/examples/cfd/mhd_pino/README.md
@@ -57,7 +57,7 @@ simulated data. We observe that the error in each of these cases is relatively
 We will demonstrate the use of data loss and physics constraints, specifically the
 equation residual loss, to create accurate predictions.
 
-[PhysicsNeMo Sym](https://github.com/NVIDIA/physicsnemo-sym)
+The `physicsnemo.sym` module (install with `pip install "nvidia-physicsnemo[sym]"`)
 has utilities tailored for physics-informed machine learning. It also presents
 abstracted APIs that allow users to think and model the problem from the lens of
 equations, constraints, etc. In this example, we will only leverage the physics-informed
@@ -65,7 +65,7 @@ utilities to see how we can add physics to an existing data-driven model with ea
 still maintaining the flexibility to define our own training loop and other details.
 For a more abstracted definition of these type of problems, where the training loop
 definition and other things is taken care of implicitly, you may refer
-[PhysicsNeMo Sym](https://github.com/NVIDIA/physicsnemo-sym)
+to the `physicsnemo.sym` module (install with `pip install "nvidia-physicsnemo[sym]"`).
 
 ## Dataset
 
@@ -107,14 +107,15 @@ equations into the loss function. We will also use a tensor factorized Fourier N
 Operator (TFNO) in the same pipeline. The only difference with a TFNO model is that the
 weights are factorized using TensorLy.
 
-In this example, we will also use the `PDE` class from PhysicsNeMo-Sym to symbolically define
-the PDEs. 
This is very convenient and most natural way to define these PDEs and allows +In this example, we will also use the `PDE` class from +`physicsnemo.sym` to symbolically define the PDEs. +This is a convenient and natural way to define PDEs and allows us to print the equations to check for correctness. This also abstracts out the -complexity of converting the equation into a pytorch representation. PhysicsNeMo Sym also +complexity of converting the equation into a pytorch representation. `physicsnemo.sym` also provides several complex, well-tested PDEs like 3D Navier-Stokes, Linear elasticity, Electromagnetics, etc. pre-defined which can be used directly in physics-informing applications. We will also give you the option to choose between the -derivative functions from PhysicsNeMo-Sym or from the original paper. +derivative functions from `physicsnemo.sym` or from the original paper. ## Getting Started diff --git a/examples/cfd/mhd_pino/losses/mhd_pde.py b/examples/cfd/mhd_pino/losses/mhd_pde.py index 9a65c8a163..015927d5db 100644 --- a/examples/cfd/mhd_pino/losses/mhd_pde.py +++ b/examples/cfd/mhd_pino/losses/mhd_pde.py @@ -19,7 +19,7 @@ class MHD_PDE(PDE): - """MHD PDEs using PhysicsNeMo Sym""" + """MHD PDEs using physicsnemo.sym""" name = "MHD_PDE" diff --git a/examples/cfd/mhd_pino/train_mhd.py b/examples/cfd/mhd_pino/train_mhd.py index 8f30a54825..77a363571d 100644 --- a/examples/cfd/mhd_pino/train_mhd.py +++ b/examples/cfd/mhd_pino/train_mhd.py @@ -33,7 +33,7 @@ LaunchLogger, ) from physicsnemo.utils.logging.wandb import initialize_wandb -from physicsnemo.sym.hydra import to_absolute_path +from hydra.utils import to_absolute_path from losses import LossMHD, LossMHD_PhysicsNeMo from torch.optim import AdamW diff --git a/examples/cfd/mhd_pino/train_mhd_vec_pot.py b/examples/cfd/mhd_pino/train_mhd_vec_pot.py index b9abea4aea..d8b1c516c7 100644 --- a/examples/cfd/mhd_pino/train_mhd_vec_pot.py +++ b/examples/cfd/mhd_pino/train_mhd_vec_pot.py @@ -33,7 +33,7 @@ 
LaunchLogger, ) from physicsnemo.utils.logging.wandb import initialize_wandb -from physicsnemo.sym.hydra import to_absolute_path +from hydra.utils import to_absolute_path from losses import LossMHDVecPot, LossMHDVecPot_PhysicsNeMo from torch.optim import AdamW diff --git a/examples/cfd/mhd_pino/train_mhd_vec_pot_tfno.py b/examples/cfd/mhd_pino/train_mhd_vec_pot_tfno.py index be221a3649..5beb802ad3 100644 --- a/examples/cfd/mhd_pino/train_mhd_vec_pot_tfno.py +++ b/examples/cfd/mhd_pino/train_mhd_vec_pot_tfno.py @@ -33,7 +33,7 @@ LaunchLogger, ) from physicsnemo.utils.logging.wandb import initialize_wandb -from physicsnemo.sym.hydra import to_absolute_path +from hydra.utils import to_absolute_path from losses import LossMHDVecPot, LossMHDVecPot_PhysicsNeMo from torch.optim import AdamW diff --git a/examples/cfd/stokes_mgn/README.md b/examples/cfd/stokes_mgn/README.md index 481d4f9d27..294eb68151 100644 --- a/examples/cfd/stokes_mgn/README.md +++ b/examples/cfd/stokes_mgn/README.md @@ -4,8 +4,9 @@ This example demonstrates how to train the MeshGraphNet model to learn the flow of Stokes flow and further improve the accuracy of the model predictions by physics-informed inference. This example also demonstrates how to use physics utilities from -[PhysicsNeMo-Sym](https://github.com/NVIDIA/physicsnemo-sym) to introduce physics-based -constraints. +the `physicsnemo.sym` module +(install with `pip install "nvidia-physicsnemo[sym]"`) +to introduce physics-based constraints. ## Problem overview @@ -92,7 +93,7 @@ Install the requirements using: ```bash pip install -r requirements.txt -pip install nvidia-physicsnemo.sym --no-build-isolation +pip install "nvidia-physicsnemo[sym]" ``` ## Getting Started @@ -160,7 +161,7 @@ The fine-tuning step involves training of a PINN model to first refine the predictions of the MeshGraphNet model followed by an inference of the PINN model. 
If you are running this fine-tuning outside of the PhysicsNeMo container, install
-PhysicsNeMo Sym using the instructions from [here](https://github.com/NVIDIA/physicsnemo-sym?tab=readme-ov-file#pypi)
+PhysicsNeMo with the sym extra: `pip install "nvidia-physicsnemo[sym]"`.
 
 This will save the predictions for the test dataset in `.vtp` format in the
 `results` directory. Use ParaView to open and explore the results.
 
diff --git a/examples/cfd/stokes_mgn/pi_fine_tuning.py b/examples/cfd/stokes_mgn/pi_fine_tuning.py
index 167b01ec88..00cdd29a62 100644
--- a/examples/cfd/stokes_mgn/pi_fine_tuning.py
+++ b/examples/cfd/stokes_mgn/pi_fine_tuning.py
@@ -48,8 +48,6 @@
 from physicsnemo.models.mlp.fully_connected import FullyConnected
 from physicsnemo.sym.eq.pde import PDE
 from physicsnemo.sym.eq.phy_informer import PhysicsInformer
-from physicsnemo.sym.key import Key
-from physicsnemo.sym.models.arch import Arch
 from sympy import Function, Number, Symbol
 from utils import get_dataset, relative_lp_error
 
@@ -149,53 +147,30 @@ def forward(self, x):
         return out
 
 
-class MdlsSymDNN(Arch):
-    """
-    Wrapper model to convert PyTorch model to PhysicsNeMo-Sym model.
-
-    PhysicsNeMo Sym relies on the inputs/outputs of the model being dictionary of tensors.
-    This wrapper converts the input dictionary of tensors to a single tensor by
-    concatenating them along appropriate dimension before passing them as an input to
-    the pytorch model. During the output, the process is reversed,
-    the output tensor from pytorch model is split across appropriate dimensions and then
-    converted to a dictionary with appropriate keys to produce the final output.
-
-    The model arguments thus become a list of `Key` objects that informs the model
-    about the input and output dimensionality of the pytorch model. 
- - For more details on PhysicsNeMo Sym models, refer: - https://docs.nvidia.com/deeplearning/physicsnemo/physicsnemo-core/tutorials/simple_training_example.html#using-custom-models-in-physicsnemo - For more details on Key class, refer: - https://docs.nvidia.com/deeplearning/physicsnemo/physicsnemo-sym/api/physicsnemo.sym.html#module-physicsnemo.sym.key +class FourierDNN(torch.nn.Module): + """Dict-in/dict-out Fourier-feature DNN for physics-informed fine-tuning. + + Translates between the dict-of-tensors interface that PhysicsInformer + expects and the raw tensor interface of the underlying DNN. """ def __init__( self, - input_keys=[Key("x"), Key("y")], - output_keys=[Key("u"), Key("v"), Key("p")], + input_keys=("x", "y"), + output_keys=("u", "v", "p"), layers=[2, 128, 128, 128, 128, 3], fourier_features=64, ): - super().__init__( - input_keys=input_keys, - output_keys=output_keys, - ) - + super().__init__() + self.input_keys = list(input_keys) + self.output_keys = list(output_keys) self.mdls_model = DNN(layers, fourier_features) def forward(self, dict_tensor: Dict[str, torch.Tensor]): - # Use concat_input method of the Arch class to convert dict of tensors to - # a single multi-dimensional tensor. Ref: https://github.com/NVIDIA/physicsnemo-sym/blob/main/physicsnemo/sym/models/arch.py#L251 - x = self.concat_input( - dict_tensor, - self.input_key_dict, - detach_dict=self.detach_key_dict, - dim=-1, - ) + x = torch.cat([dict_tensor[k] for k in self.input_keys], dim=-1) out = self.mdls_model(x) - # Use split_output method of the Arch class to convert a single muli-dimensional - # tensor to a dict of tensors. 
Ref: https://github.com/NVIDIA/physicsnemo-sym/blob/main/physicsnemo/sym/models/arch.py#L381 - return self.split_output(out, self.output_key_dict, dim=1) + chunks = torch.split(out, 1, dim=-1) + return {k: chunks[i] for i, k in enumerate(self.output_keys)} class PhysicsInformedFineTuner: @@ -238,17 +213,17 @@ def __init__( torch.tensor(coords_noslip, requires_grad=True).float().to(self.device) ) - self.model = MdlsSymDNN( - input_keys=[Key("x"), Key("y")], - output_keys=[Key("u"), Key("v"), Key("p")], + self.model = FourierDNN( + input_keys=["x", "y"], + output_keys=["u", "v", "p"], layers=[2, 128, 128, 128, 128, 3], fourier_features=64, ).to(self.device) self.node_pde = Stokes(nu=self.nu, dim=2) - # note: this example uses the PhysicsInformer class from PhysicsNeMo Sym to - # construct the computational graph. This allows you to leverage PhysicsNeMo Sym's + # note: this example uses the PhysicsInformer class from `physicsnemo.sym` to + # construct the computational graph. This allows you to leverage physicsnemo.sym's # optimized derivative backend to compute the derivatives, along with other # benefits like symbolic definition of PDEs and leveraging the PDEs from PhysicsNeMo # Sym's PDE module. 
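The `FourierDNN` wrapper above replaces the old `Arch.concat_input`/`split_output` helpers with plain tensor operations. The pattern itself is framework-agnostic; here is a minimal NumPy sketch, where `fake_model` is a hypothetical stand-in for the underlying `(N, 2) -> (N, 3)` DNN:

```python
import numpy as np

# Dict-in/dict-out pattern from FourierDNN.forward, sketched in NumPy.
input_keys, output_keys = ["x", "y"], ["u", "v", "p"]

def fake_model(arr):
    # hypothetical stand-in for the DNN: maps (N, 2) -> (N, 3)
    return arr @ np.ones((2, 3))

inputs = {"x": np.array([[1.0], [2.0]]), "y": np.array([[3.0], [4.0]])}

# concatenate the named inputs along the feature axis ...
x = np.concatenate([inputs[k] for k in input_keys], axis=-1)
out = fake_model(x)
# ... and split the output back into one named (N, 1) column per key
chunks = np.split(out, out.shape[-1], axis=-1)
result = {k: chunks[i] for i, k in enumerate(output_keys)}
```

Each value in `result` keeps shape `(N, 1)`, mirroring the `(N, 1)` slices the fine-tuning script feeds to `PhysicsInformer`.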
@@ -267,11 +242,13 @@ def __init__( ) def parabolic_inflow(self, y, U_max=0.3): + """Compute the parabolic inflow velocity.""" u = 4 * U_max * y * (0.4 - y) / (0.4**2) v = torch.zeros_like(y) return u, v def loss(self): + """Compute the loss for the physics-informed fine-tuning.""" # inflow points x_in, y_in = self.coords_inflow[:, 0:1], self.coords_inflow[:, 1:2] results_inflow = self.model({"x": x_in, "y": y_in}) @@ -413,6 +390,7 @@ def validation(self): @hydra.main(version_base="1.3", config_path="conf", config_name="config") def main(cfg: DictConfig) -> None: + """Main function for the Stokes physics-informed fine-tuning.""" # CUDA support if torch.cuda.is_available(): device = torch.device("cuda") diff --git a/examples/cfd/stokes_mgn/pi_fine_tuning_gnn.py b/examples/cfd/stokes_mgn/pi_fine_tuning_gnn.py index dca2523147..385ddfd0cf 100644 --- a/examples/cfd/stokes_mgn/pi_fine_tuning_gnn.py +++ b/examples/cfd/stokes_mgn/pi_fine_tuning_gnn.py @@ -48,7 +48,7 @@ from physicsnemo.models.meshgraphnet import MeshGraphNet from physicsnemo.sym.eq.pde import PDE from physicsnemo.sym.eq.phy_informer import PhysicsInformer -from physicsnemo.sym.eq.spatial_grads.spatial_grads import compute_connectivity_tensor +from physicsnemo.sym.eq.gradients import compute_connectivity_tensor from sympy import Function, Number, Symbol from utils import get_dataset, relative_lp_error @@ -166,8 +166,8 @@ def __init__( self.node_pde = Stokes(nu=self.nu, dim=2) - # note: this example uses the PhysicsInformer class from PhysicsNeMo Sym to - # construct the computational graph. This allows you to leverage PhysicsNeMo Sym's + # note: this example uses the PhysicsInformer class from `physicsnemo.sym` to + # construct the computational graph. This allows you to leverage physicsnemo.sym's # optimized derivative backend to compute the derivatives, along with other # benefits like symbolic definition of PDEs and leveraging the PDEs from PhysicsNeMo # Sym's PDE module. 
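The `parabolic_inflow` helper above is a standard Poiseuille profile for a channel spanning `y` in `[0, 0.4]`: zero at both walls, peaking at `U_max` mid-channel. A quick torch-free sanity check of the closed form (scalar `u` component only):

```python
def parabolic_inflow_u(y, U_max=0.3):
    # same closed form as in the example: 4 * U_max * y * (0.4 - y) / 0.4**2
    return 4 * U_max * y * (0.4 - y) / (0.4**2)

# zero at both walls, peak U_max at mid-channel y = 0.2
assert parabolic_inflow_u(0.0) == 0.0
assert parabolic_inflow_u(0.4) == 0.0
assert abs(parabolic_inflow_u(0.2) - 0.3) < 1e-12
```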
@@ -191,11 +191,13 @@ def __init__(
         )
 
     def parabolic_inflow(self, y, U_max=0.3):
+        """Compute the parabolic inflow velocity."""
         u = 4 * U_max * y * (0.4 - y) / (0.4**2)
         v = torch.zeros_like(y)
         return u, v
 
     def loss(self):
+        """Compute the loss for the physics-informed fine-tuning."""
         out = self.model(self.pyg_graph.x, self.pyg_graph.edge_attr, self.pyg_graph)
 
         # inflow points
@@ -358,6 +360,7 @@ def validation(self):
 
 @hydra.main(version_base="1.3", config_path="conf", config_name="config")
 def main(cfg: DictConfig) -> None:
+    """Main function for the Stokes physics-informed fine-tuning."""
     # CUDA support
     if torch.cuda.is_available():
         device = torch.device("cuda")
diff --git a/examples/cfd/swe_nonlinear_pino/README.md b/examples/cfd/swe_nonlinear_pino/README.md
index 31937fc488..2d27f73119 100644
--- a/examples/cfd/swe_nonlinear_pino/README.md
+++ b/examples/cfd/swe_nonlinear_pino/README.md
@@ -41,7 +41,7 @@ simulated data. We observe that the error in each of these cases is relatively
 We will demonstrate the use of data loss and physics constraints, specifically the
 equation residual loss, to create accurate predictions.
 
-[PhysicsNeMo Sym](https://github.com/NVIDIA/physicsnemo-sym)
+The `physicsnemo.sym` module (install with `pip install "nvidia-physicsnemo[sym]"`)
 has utilities tailored for physics-informed machine learning. It also presents
 abstracted APIs that allow users to think and model the problem from the lens of
 equations, constraints, etc. In this example, we will only leverage the physics-informed
@@ -49,7 +49,7 @@ utilities to see how we can add physics to an existing data-driven model with ea
 still maintaining the flexibility to define our own training loop and other details.
For a more abstracted definition of these type of problems, where the training loop
 definition and other things is taken care of implicitly, you may refer
-[PhysicsNeMo Sym](https://github.com/NVIDIA/physicsnemo-sym)
+to the `physicsnemo.sym` module (install with `pip install "nvidia-physicsnemo[sym]"`).
 
 ## Dataset
 
@@ -67,14 +67,15 @@ derivatives in a PINO style, using Numerical differentiation with Fourier deriva
 With this example, we intend to demonstrate how to implement multiple
 equations into the loss function.
 
-In this example, we will also use the `PDE` class from PhysicsNeMo-Sym to symbolically define
-the PDEs. This is very convenient and most natural way to define these PDEs and allows
+In this example, we will also use the `PDE` class from
+`physicsnemo.sym` to symbolically define the PDEs.
+This is a convenient and natural way to define PDEs and allows
 us to print the equations to check for correctness. This also abstracts out the
-complexity of converting the equation into a pytorch representation. PhysicsNeMo Sym also
+complexity of converting the equation into a pytorch representation. `physicsnemo.sym` also
 provides several complex, well-tested PDEs like 3D Navier-Stokes, Linear elasticity,
 Electromagnetics, etc. pre-defined which can be used directly in physics-informing
 applications. We will also give you the option to choose between the
-derivative functions from PhysicsNeMo-Sym or from the original paper.
+derivative functions from `physicsnemo.sym` or from the original paper.
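The workflow the README paragraph describes — write the equation with SymPy, then print it to check correctness — can be sketched with plain SymPy, independent of physicsnemo. The 1D advection equation below is a hypothetical stand-in for the SWE system:

```python
from sympy import Symbol, Function

# Plain-SymPy sketch of the symbolic form a `PDE` subclass would hold.
# Hypothetical example: 1D advection, u_t + c * u_x = 0.
x, t = Symbol("x"), Symbol("t")
u = Function("u")(x, t)
c = 2.0
equations = {"advection": u.diff(t) + c * u.diff(x)}

# printing the symbolic residual is an easy correctness check
print(equations["advection"])
```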
## Prerequisites @@ -82,7 +83,7 @@ Install the requirements using: ```bash pip install -r requirements.txt -pip install nvidia-physicsnemo.sym --no-build-isolation +pip install "nvidia-physicsnemo[sym]" ``` ## Getting Started diff --git a/examples/cfd/swe_nonlinear_pino/swe_nl_pde.py b/examples/cfd/swe_nonlinear_pino/swe_nl_pde.py index 4faec36d01..ec7329a048 100644 --- a/examples/cfd/swe_nonlinear_pino/swe_nl_pde.py +++ b/examples/cfd/swe_nonlinear_pino/swe_nl_pde.py @@ -19,7 +19,7 @@ class SWE_NL(PDE): - """SWE Nonlinear PDE using PhysicsNeMo Sym""" + """SWE Nonlinear PDE using physicsnemo.sym""" name = "SWE_NL" diff --git a/examples/cfd/swe_nonlinear_pino/train_utils/losses.py b/examples/cfd/swe_nonlinear_pino/train_utils/losses.py index 722585e2b2..c1a7357c1d 100644 --- a/examples/cfd/swe_nonlinear_pino/train_utils/losses.py +++ b/examples/cfd/swe_nonlinear_pino/train_utils/losses.py @@ -35,6 +35,7 @@ def __init__(self, d=2, p=2, size_average=True, reduction=True): self.size_average = size_average def rel(self, x, y): + """Compute relative Lp error between *x* and *y*.""" num_examples = x.size()[0] diff_norms = torch.norm( @@ -270,7 +271,7 @@ def physicsnemo_fdm_swe_nonlin(h, u, v, pde_node, D=1, device=0): huv_x = f_dhuv[:, 1 : nt - 1, :nx, :ny] huv_y = f_dhuv[:, nt + 1 : 2 * nt - 1, :nx, :ny] - # Compute PDEs using PhysicsNeMo-Sym + # Compute PDEs using physicsnemo.sym pde_Dh = pde_node[0].evaluate({"h__t": h_t, "hu__x": hu_x, "hv__y": hv_y}) pde_Du = pde_node[1].evaluate( { diff --git a/examples/cfd/vortex_shedding_mgn/inference_analysis/custom_primitives.py b/examples/cfd/vortex_shedding_mgn/inference_analysis/custom_primitives.py index 4c5feda98d..df52edc391 100644 --- a/examples/cfd/vortex_shedding_mgn/inference_analysis/custom_primitives.py +++ b/examples/cfd/vortex_shedding_mgn/inference_analysis/custom_primitives.py @@ -14,64 +14,41 @@ # See the License for the specific language governing permissions and # limitations under the License. 
-from sympy import Symbol, Abs, sign +"""Simple 2D point primitive with signed distance and boundary sampling.""" + import numpy as np -from physicsnemo.sym.geometry.geometry import Geometry, csg_curve_naming -from physicsnemo.sym.geometry.curve import SympyCurve -from physicsnemo.sym.geometry.parameterization import ( - Parameterization, - Parameter, - Bounds, -) -from physicsnemo.sym.geometry.helper import _sympy_sdf_to_sdf -class Point2D(Geometry): - """ - 2D Point along x and y axis +class Point2D: + """A 2D point with signed distance field and boundary sampling. Parameters ---------- - point : Tuple of int or float - x and y coordinates of the point - parameterization : Parameterization - Parameterization of geometry. + point : tuple[float, float] + (x, y) coordinates of the point. """ - def __init__(self, point, parameterization=Parameterization()): - # make sympy symbols to use - x = Symbol("x") - y = Symbol("y") - - # curves for each side - curve_parameterization = Parameterization({Symbol(csg_curve_naming(0)): (0, 1)}) - curve_parameterization = Parameterization.combine( - curve_parameterization, parameterization - ) - pt_1 = SympyCurve( - functions={"x": point[0], "y": point[1], "normal_x": 1.0, "normal_y": 0}, - area=1.0, - parameterization=curve_parameterization, - ) - curves = [pt_1] - - # calculate SDF - sdf = ((x - point[0]) ** 2 + (y - point[1]) ** 2) ** 0.5 * sign(x - point[0]) - - # calculate bounds - bounds = Bounds( - { - Parameter("x"): (point[0], point[0]), - Parameter("y"): (point[1], point[1]), - }, - parameterization=parameterization, - ) - - # initialize - super().__init__( - curves, - _sympy_sdf_to_sdf(sdf), - dims=1, - bounds=bounds, - parameterization=parameterization, - ) + def __init__(self, point): + """Initialize with (x, y) coordinates.""" + self.point = point + + def sdf(self, points, params=None): + """Signed distance from query points to this point. + + Sign is determined by ``sign(x - point_x)``. 
+ """ + dx = points[:, 0] - self.point[0] + dy = points[:, 1] - self.point[1] + dist = np.sqrt(dx**2 + dy**2) + sign = np.where(dx >= 0, 1.0, -1.0) + return {"sdf": dist * sign} + + def sample_boundary(self, num_points): + """Return the point coordinates repeated *num_points* times.""" + return { + "x": np.full((num_points, 1), self.point[0]), + "y": np.full((num_points, 1), self.point[1]), + "normal_x": np.ones((num_points, 1)), + "normal_y": np.zeros((num_points, 1)), + "area": np.ones((num_points, 1)), + } diff --git a/examples/healthcare/brain_anomaly_detection/invert.py b/examples/healthcare/brain_anomaly_detection/invert.py index 8a4e224c54..73edcaa979 100644 --- a/examples/healthcare/brain_anomaly_detection/invert.py +++ b/examples/healthcare/brain_anomaly_detection/invert.py @@ -15,8 +15,8 @@ # limitations under the License. import physicsnemo -from physicsnemo.sym.hydra import to_absolute_path -from physicsnemo.sym.distributed.manager import DistributedManager +from hydra.utils import to_absolute_path +from physicsnemo.distributed import DistributedManager import torch import torch.nn as nn import numpy as np diff --git a/test/ci_tests/interrogate_baseline.txt b/test/ci_tests/interrogate_baseline.txt index 5b04e08dd0..7a5c497ab1 100644 --- a/test/ci_tests/interrogate_baseline.txt +++ b/test/ci_tests/interrogate_baseline.txt @@ -260,12 +260,6 @@ examples/cfd/navier_stokes_dpot/train_dpot.py:main examples/cfd/navier_stokes_rnn/navier_stokes_rnn.py:main examples/cfd/stokes_mgn/inference.py:MGNRollout examples/cfd/stokes_mgn/inference.py:main -examples/cfd/stokes_mgn/pi_fine_tuning.py:PhysicsInformedFineTuner.loss -examples/cfd/stokes_mgn/pi_fine_tuning.py:PhysicsInformedFineTuner.parabolic_inflow -examples/cfd/stokes_mgn/pi_fine_tuning.py:main -examples/cfd/stokes_mgn/pi_fine_tuning_gnn.py:PhysicsInformedFineTuner.loss -examples/cfd/stokes_mgn/pi_fine_tuning_gnn.py:PhysicsInformedFineTuner.parabolic_inflow -examples/cfd/stokes_mgn/pi_fine_tuning_gnn.py:main 
examples/cfd/stokes_mgn/preprocess.py:copy_files examples/cfd/stokes_mgn/train.py:MGNTrainer examples/cfd/stokes_mgn/train.py:MGNTrainer.get_lr diff --git a/v2.0-MIGRATION-GUIDE.md b/v2.0-MIGRATION-GUIDE.md index 289b1cc29a..25a545078d 100644 --- a/v2.0-MIGRATION-GUIDE.md +++ b/v2.0-MIGRATION-GUIDE.md @@ -150,6 +150,66 @@ To update your code for physicsnemo v2.0, you will need to adjust several key import paths (like `logging`, see above) and potentially update datapipes and model checkpoints. +## PhysicsNeMo Sym → physicsnemo.sym + +The [PhysicsNeMo-Sym](https://github.com/NVIDIA/physicsnemo-sym) repository is +being archived. Its core functionality — symbolic PDE definition, automatic +spatial derivative computation, and physics-informed residual evaluation — has +been upstreamed into PhysicsNeMo as the `physicsnemo.sym` module. + +### What changed + + +| Before (physicsnemo-sym) | After (physicsnemo) | +|---|---| +| `pip install nvidia-physicsnemo.sym` | `pip install "nvidia-physicsnemo[sym]"` | +| `from physicsnemo.sym.eq.pdes.navier_stokes import NavierStokes` | Define your PDE inline using SymPy (see example below) | +| `from physicsnemo.sym.key import Key` | Use plain strings | +| `from physicsnemo.sym.models.arch import Arch` | Use `torch.nn.Module` | +| `from physicsnemo.sym.geometry.* import ...` | Use `physicsnemo.mesh` primitives + PyVista | +| `from physicsnemo.sym.eq.spatial_grads.spatial_grads import compute_connectivity_tensor` | `from physicsnemo.sym.eq.gradients import compute_connectivity_tensor` | + +### Defining PDEs + +Pre-built PDE classes (NavierStokes, Diffusion, etc.) are no longer shipped. 
+Instead, define your equations inline using SymPy:
+
+```python
+from sympy import Symbol, Function
+from physicsnemo.sym.eq.pde import PDE
+from physicsnemo.sym.eq.phy_informer import PhysicsInformer
+
+class NavierStokes(PDE):
+    def __init__(self, nu=0.01, dim=2):
+        self.dim = dim
+        x, y = Symbol("x"), Symbol("y")
+        u = Function("u")(x, y)
+        v = Function("v")(x, y)
+        p = Function("p")(x, y)
+        self.equations = {
+            "continuity": u.diff(x) + v.diff(y),
+            "momentum_x": (u * u.diff(x) + v * u.diff(y)
+                + p.diff(x) - nu * (u.diff(x, 2) + u.diff(y, 2))),
+            "momentum_y": (u * v.diff(x) + v * v.diff(y)
+                + p.diff(y) - nu * (v.diff(x, 2) + v.diff(y, 2))),
+        }
+
+ns = NavierStokes(nu=0.01, dim=2)
+pi = PhysicsInformer(["continuity", "momentum_x", "momentum_y"], ns, grad_method="autodiff")
+```
+
+See the [LDC PINNs example](examples/cfd/ldc_pinns/) for a complete working
+training script.
+
+### Getting help
+
+If you have a workflow that depends on PhysicsNeMo-Sym functionality not yet
+available in the upstreamed module, please reach out to
+physicsnemo-team@nvidia.com or open a
+[GitHub issue](https://github.com/NVIDIA/physicsnemo/issues).
+
+
 ## Reporting questions, concerns, or comments
 
 Please contact the development team via github issues on the physicsnemo repository.
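For the geometry row in the migration table, the ported examples pair `physicsnemo.mesh` sampling with a hand-written signed distance field, as the LDC script in this diff does for its rectangular cavity. A NumPy sketch of that wall-distance SDF (the ±0.05 bounds come from the example's 0.1 × 0.1 cavity):

```python
import numpy as np

# Analytical interior SDF for an axis-aligned rectangle: distance to the
# nearest of the four walls (positive inside, zero on the boundary).
x_min, x_max = -0.05, 0.05
y_min, y_max = -0.05, 0.05

def interior_sdf(x, y):
    walls = np.stack([x - x_min, x_max - x, y - y_min, y_max - y], axis=-1)
    return np.min(walls, axis=-1)

center = interior_sdf(np.array([0.0]), np.array([0.0]))     # farthest from all walls
on_wall = interior_sdf(np.array([x_min]), np.array([0.0]))  # on the boundary
```

The LDC training loop multiplies the PDE residuals by this SDF, so residual penalties fade to zero at the walls where the boundary-condition losses take over.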