Killer-Skills

pina: a guide to installing and using pina for Physics-Informed Neural Networks, Neural Operators, and scientific machine learning with PyTorch, including MLflow integration.

v1.0.0
GitHub

About this Skill

Perfect for scientific machine learning agents that need advanced Physics-Informed Neural Network (PINN) capabilities with PyTorch. PINA (Physics-Informed Neural networks for Advanced modeling) is a PyTorch-based library that combines PINNs, Neural Operators, and Data-Driven Modeling.

Features

Solves forward and inverse PDE problems using Physics-Informed Neural Networks (PINNs)
Utilizes Neural Operators such as FNO and DeepONet for operator learning
Supports Data-Driven Modeling for scientific machine learning applications
Integrates with MLflow for tracking and managing machine learning experiments
Pairs with marimo notebooks for interactive, reactive experiment dashboards

synapticore-io
Updated: 2/26/2026
Installation
> npx killer-skills add synapticore-io/marimo-flow/references/visualization.md

Agent Capability Analysis

The pina MCP Server by synapticore-io is an open-source community integration for Claude and other AI agents, enabling seamless task automation and capability expansion.

Ideal Agent Persona

Perfect for Scientific Machine Learning Agents needing advanced Physics-Informed Neural Networks (PINNs) capabilities with PyTorch.

Core Value

Empowers agents to solve forward and inverse partial differential equations (PDEs) using neural networks, leveraging Physics-Informed Neural Networks (PINNs), Neural Operators like FNO and DeepONet, and Data-Driven Modeling, all within the PyTorch framework.

Capabilities Granted for pina MCP Server

Solving complex partial differential equations (PDEs) for scientific simulations
Implementing Neural Operators for operator learning in various domains
Developing data-driven models for real-world applications using PINA's PyTorch-based library

Prerequisites & Limits

  • Requires PyTorch installation and compatibility
  • Limited to solving partial differential equations and related scientific machine learning tasks

PINA Development Skill

Expert guidance for Physics-Informed Neural Networks (PINNs) and Scientific Machine Learning with PINA.

What is PINA?

PINA (Physics-Informed Neural networks for Advanced modeling) is a PyTorch-based library for solving partial differential equations (PDEs) using neural networks. It combines:

  • Physics-Informed Neural Networks (PINNs): Solve forward and inverse PDE problems
  • Neural Operators: FNO, DeepONet for operator learning
  • Data-Driven Modeling: Supervised learning with physics constraints
  • Reduced Order Modeling: POD-NN for efficient simulations

Built on: PyTorch, PyTorch Lightning, PyTorch Geometric

Core Workflow

Every PINA project follows these 4 steps:

```python
from pina import Trainer
from pina.problem import SpatialProblem
from pina.solver import PINN
from pina.model import FeedForward

# Step 1: Define Problem
problem = MyProblem()
problem.discretise_domain(n=100, mode="grid")

# Step 2: Design Model
model = FeedForward(input_dimensions=1, output_dimensions=1, layers=[64, 64])

# Step 3: Define Solver
solver = PINN(problem, model)

# Step 4: Train
trainer = Trainer(solver, max_epochs=1000, accelerator='gpu')
trainer.train()
```

Simple ODE Example

```python
from pina.problem import SpatialProblem
from pina.domain import CartesianDomain
from pina.condition import Condition
from pina.equation import Equation, FixedValue
from pina.operator import grad
import torch

def ode_equation(input_, output_):
    """PDE residual: du/dx - u = 0"""
    u_x = grad(output_, input_, components=["u"], d=["x"])
    u = output_.extract(["u"])
    return u_x - u

class SimpleODE(SpatialProblem):
    output_variables = ["u"]
    spatial_domain = CartesianDomain({"x": [0, 1]})

    domains = {
        "x0": CartesianDomain({"x": 0.0}),   # Boundary
        "D": CartesianDomain({"x": [0, 1]})  # Interior
    }

    conditions = {
        "bound_cond": Condition(domain="x0", equation=FixedValue(1.0)),
        "phys_cond": Condition(domain="D", equation=Equation(ode_equation))
    }

    def solution(self, pts):
        """Analytical solution for validation."""
        return torch.exp(pts.extract(["x"]))

problem = SimpleODE()
```
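As a quick sanity check independent of PINA, plain PyTorch autograd confirms that the analytical solution u(x) = exp(x) really satisfies the residual du/dx - u = 0 that `ode_equation` encodes (this is conceptually what PINA's `grad` operator computes):

```python
import torch

# Plain-PyTorch check (no PINA): u(x) = exp(x) solves du/dx - u = 0, u(0) = 1
x = torch.linspace(0.0, 1.0, 50, requires_grad=True).unsqueeze(-1)
u = torch.exp(x)

# du/dx via autograd, one derivative per sample point
u_x, = torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u))

residual = u_x - u                  # should be (numerically) zero everywhere
max_err = residual.abs().max().item()
```

A trained PINN minimizes exactly this residual (plus the boundary loss) at the sampled points, instead of plugging in a known closed form.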

Models

FeedForward Networks

```python
from pina.model import FeedForward
import torch

# Basic network
model = FeedForward(
    input_dimensions=2,
    output_dimensions=1,
    layers=[64, 64, 64],     # Hidden layers
    func=torch.nn.Tanh       # Activation function
)

# Alternative activations
model = FeedForward(
    input_dimensions=1,
    output_dimensions=1,
    layers=[100, 100, 100],
    func=torch.nn.Softplus   # or torch.nn.SiLU
)
```
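Under the hood, `FeedForward` assembles a standard multilayer perceptron. A minimal plain-PyTorch sketch of the same shape of architecture, assuming (not guaranteed to match PINA exactly) linear layers with the activation between hidden layers and a linear output:

```python
import torch

def mlp(in_dim, out_dim, hidden, act=torch.nn.Tanh):
    # Sketch of the architecture FeedForward builds: Linear layers with the
    # chosen activation between hidden layers and a linear output layer.
    dims = [in_dim, *hidden, out_dim]
    layers = []
    for i in range(len(dims) - 1):
        layers.append(torch.nn.Linear(dims[i], dims[i + 1]))
        if i < len(dims) - 2:          # no activation after the output layer
            layers.append(act())
    return torch.nn.Sequential(*layers)

model = mlp(2, 1, [64, 64, 64])
out = model(torch.rand(8, 2))          # batch of 8 points in 2D
```

Smooth activations such as Tanh or Softplus matter for PINNs because the PDE residual differentiates the network's output with respect to its inputs.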

See Custom Models Reference for advanced architectures including:

  • Hard constraints
  • Fourier feature embeddings
  • Periodic boundary embeddings
  • POD-NN
  • Graph neural networks
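The Fourier feature idea can be sketched independently of PINA's API: project inputs through a fixed random matrix B and map through sin/cos, which helps MLPs fit high-frequency solution components (Tancik et al., 2020). The class below is an illustrative sketch, not PINA's own implementation:

```python
import torch

class FourierFeatures(torch.nn.Module):
    """Illustrative random Fourier feature embedding (not PINA's API)."""

    def __init__(self, in_dim, n_features, scale=1.0):
        super().__init__()
        # Fixed (untrained) random projection matrix B
        self.register_buffer("B", scale * torch.randn(in_dim, n_features))

    def forward(self, x):
        proj = 2.0 * torch.pi * (x @ self.B)
        # Embedding dimension doubles: [sin(2*pi*xB), cos(2*pi*xB)]
        return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)

embed = FourierFeatures(in_dim=2, n_features=32)
features = embed(torch.rand(10, 2))    # shape: (10, 64)
```

Such an embedding is typically prepended to a FeedForward-style network; the `scale` of B controls which frequencies the network learns easily.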

See Neural Operators Reference for operator learning with FNO, DeepONet, and more.

PINN Solver

```python
from pina.solver import PINN
from pina.optim import TorchOptimizer
import torch

pinn = PINN(
    problem=problem,
    model=model,
    optimizer=TorchOptimizer(torch.optim.Adam, lr=0.001)
)
```

See Advanced Solvers Reference for:

  • Self-Adaptive PINN (SAPINN)
  • Supervised Solver
  • Custom solvers
  • Training strategies

Training

Basic Training

```python
from pina import Trainer
from pina.callbacks import MetricTracker

# Discretize domain
problem.discretise_domain(n=1000, mode="random", domains="all")

# Create trainer
trainer = Trainer(
    solver=pinn,
    max_epochs=1500,
    accelerator="cpu",           # or "gpu"
    enable_model_summary=False,
    callbacks=[MetricTracker()]
)

# Train
trainer.train()
```

Training Configuration

```python
trainer = Trainer(
    solver=solver,
    max_epochs=1000,
    accelerator="gpu",
    devices=1,
    batch_size=32,
    gradient_clip_val=0.1,       # Gradient clipping
    callbacks=[MetricTracker()]
)
trainer.train()
```
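Since PINA's Trainer builds on PyTorch Lightning, `gradient_clip_val` corresponds (by Lightning's default algorithm) to clipping the global gradient norm. A plain-PyTorch sketch of the same operation on a throwaway linear model:

```python
import torch

model = torch.nn.Linear(4, 1)
loss = 1e6 * model(torch.rand(16, 4)).pow(2).mean()   # deliberately huge loss
loss.backward()

# What gradient_clip_val=0.1 does under the hood: rescale all gradients so
# their combined norm is at most 0.1. Returns the norm before clipping.
pre_clip_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=0.1)
post_clip_norm = torch.sqrt(sum(p.grad.norm() ** 2 for p in model.parameters()))
```

Clipping helps with the loss spikes PINNs often show early in training, when the physics residual dominates the gradient.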

Testing

```python
# Test the model
test_results = trainer.test()

# Manual evaluation
with torch.no_grad():
    test_pts = problem.spatial_domain.sample(100, "grid")
    prediction = solver(test_pts)
    true_solution = problem.solution(test_pts)
    error = torch.abs(prediction - true_solution)
```
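A common scalar summary of the pointwise error above is the relative L2 error, standard in the PINN literature (the helper name here is ours, not a PINA function):

```python
import torch

def relative_l2_error(prediction, truth):
    # ||prediction - truth||_2 / ||truth||_2: a scale-free accuracy measure
    return torch.linalg.norm(prediction - truth) / torch.linalg.norm(truth)

truth = torch.exp(torch.linspace(0.0, 1.0, 100))   # exact ODE solution
noisy = truth + 0.01 * torch.randn_like(truth)     # stand-in "prediction"
err = relative_l2_error(noisy, truth)              # small but nonzero
```

Reporting a relative error makes runs comparable across problems whose solutions have very different magnitudes.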

Domain Discretization

Sampling Modes

```python
# Grid sampling (uniform points)
problem.discretise_domain(n=100, mode="grid", domains=["D", "x0"])

# Random sampling (Monte Carlo)
problem.discretise_domain(n=1000, mode="random", domains="all")

# Latin Hypercube Sampling
problem.discretise_domain(n=500, mode="lh", domains=["D"])

# Manual sampling
pts = problem.spatial_domain.sample(256, "grid", variables="x")
```

Best Practice: Start with grid for testing, use random/LH for training with more points.
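The three modes differ in how they spread points over the domain. Here is a 1-D illustration independent of PINA; the Latin hypercube construction is a minimal textbook version (one point per equal-width stratum), not PINA's actual sampler:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

grid = np.linspace(0.0, 1.0, n)           # "grid": evenly spaced points
random = rng.uniform(0.0, 1.0, size=n)    # "random": plain Monte Carlo

# "lh": one point per equal-width stratum, strata visited in random order
strata = rng.permutation(n)
latin = (strata + rng.uniform(0.0, 1.0, size=n)) / n

# Latin hypercube guarantees exactly one sample per stratum [k/n, (k+1)/n)
occupied = sorted(np.floor(latin * n).astype(int))
```

Random sampling can leave gaps by chance; the stratification is why LH sampling often trains PINNs more stably than plain Monte Carlo at the same point count.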

Visualization

```python
import matplotlib.pyplot as plt
import torch

@torch.no_grad()
def plot_solution(solver, n_points=256):
    # Sample points
    pts = solver.problem.spatial_domain.sample(n_points, "grid")

    # Get predictions
    predicted = solver(pts).extract("u").detach()
    true = solver.problem.solution(pts).detach()

    # Plot comparison
    fig, axes = plt.subplots(1, 3, figsize=(15, 5))

    axes[0].plot(pts.extract(["x"]), true, label="True", color="blue")
    axes[0].set_title("True Solution")
    axes[0].legend()

    axes[1].plot(pts.extract(["x"]), predicted, label="PINN", color="green")
    axes[1].set_title("PINN Solution")
    axes[1].legend()

    diff = torch.abs(true - predicted)
    axes[2].plot(pts.extract(["x"]), diff, label="Error", color="red")
    axes[2].set_title("Absolute Error")
    axes[2].legend()

    plt.tight_layout()
    plt.show()
```

See Visualization Reference for comprehensive plotting techniques.

Best Practices

1. Start Simple

```python
# Begin with small network
model = FeedForward(input_dimensions=2, output_dimensions=1, layers=[20, 20])

# Gradually increase complexity
model = FeedForward(input_dimensions=2, output_dimensions=1, layers=[64, 64, 64])
```

2. Monitor Losses

```python
from pina.callbacks import MetricTracker

trainer = Trainer(
    solver=pinn,
    max_epochs=1000,
    callbacks=[MetricTracker(["train_loss", "bound_cond_loss", "phys_cond_loss"])]
)
```

3. Two-Phase Training

```python
# Phase 1: Rough solution (high LR)
pinn = PINN(problem, model, optimizer=TorchOptimizer(torch.optim.Adam, lr=0.01))
trainer = Trainer(pinn, max_epochs=500)
trainer.train()

# Phase 2: Refinement (low LR)
pinn.optimizer.param_groups[0]['lr'] = 0.001
trainer = Trainer(pinn, max_epochs=1500)
trainer.train()
```
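The in-place learning-rate change in Phase 2 is standard PyTorch: every torch optimizer exposes `param_groups`, and mutating `'lr'` there affects subsequent steps. A minimal plain-torch illustration:

```python
import torch

weights = torch.nn.Parameter(torch.ones(3))
optimizer = torch.optim.Adam([weights], lr=0.01)

# Same trick as Phase 2 above: mutate 'lr' in place between training phases
optimizer.param_groups[0]["lr"] = 0.001

loss = (weights ** 2).sum()
loss.backward()
optimizer.step()               # this step already uses the reduced rate
```

For gradual decay within a single phase, `torch.optim.lr_scheduler` (e.g. `StepLR`) is the more idiomatic tool.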

MLflow Integration

Track PINA experiments with MLflow for reproducibility and comparison:

```python
import mlflow
from pina import Trainer
from pina.solver import PINN

# Set experiment
mlflow.set_experiment("pina-poisson-solver")

with mlflow.start_run(run_name="baseline"):
    # Log hyperparameters
    mlflow.log_params({
        "layers": [64, 64, 64],
        "activation": "Tanh",
        "learning_rate": 0.001,
        "n_points": 1000,
        "epochs": 1500
    })

    # Setup and train
    problem.discretise_domain(n=1000, mode="random")
    trainer = Trainer(solver, max_epochs=1500)
    trainer.train()

    # Log final metrics
    mlflow.log_metric("final_loss", trainer.callback_metrics["train_loss"])

    # Log model
    mlflow.pytorch.log_model(solver.model, "pinn_model")
```

Marimo Dashboard Integration

Create interactive PINA dashboards with marimo:

```python
import marimo as mo
from pina.solver import PINN

# UI controls for hyperparameters
layers = mo.ui.slider(1, 5, value=3, label="Hidden Layers")
neurons = mo.ui.slider(16, 128, value=64, step=16, label="Neurons/Layer")
lr = mo.ui.number(value=0.001, start=0.0001, stop=0.1, label="Learning Rate")

# Train button
train_btn = mo.ui.run_button(label="Train PINN")

# In another cell: run training when button clicked
if train_btn.value:
    model = FeedForward(
        input_dimensions=2,
        output_dimensions=1,
        layers=[neurons.value] * layers.value
    )
    # ... train and visualize
```

Using context7 for Documentation

Query up-to-date PINA documentation directly:

# context7 Library ID (no resolve needed):
# - /mathlab/pina (official docs, 2345 snippets)

# Example: query-docs("/mathlab/pina", "FeedForward model parameters")

When to Use This Skill

Use PINA when:

  • Solving PDEs with neural networks
  • Need to incorporate physics constraints
  • Working with inverse problems
  • Building neural operators (FNO, DeepONet)
  • Reduced order modeling
  • Scientific ML research

Don't use PINA when:

  • Pure data-driven tasks (use standard PyTorch)
  • Not dealing with differential equations
  • Need classical numerical solvers (FEM, FVM)

Reference Documentation

Detailed documentation organized by topic:

  • Problem Types: ODE, Poisson, Wave, Inverse problems, custom equations
  • Neural Operators: FNO, DeepONet, Kernel Neural Operator
  • Custom Models: Hard constraints, Fourier features, periodic embeddings, POD-NN, GNNs
  • Advanced Solvers: SAPINN, supervised solver, custom solvers, training strategies
  • Visualization: Plotting techniques, error analysis, animations

Complete Examples

Ready-to-run example scripts:
