Solving a First-Order ODE with Physics-Informed Neural Networks

Vitality Learning
Sep 2, 2024


In recent years, deep learning has revolutionized many fields, from computer vision to natural language processing. One of the exciting developments is the application of deep learning to solve differential equations, which are fundamental in modeling various physical systems. This post explores the concept of Physics-Informed Neural Networks (PINNs), a powerful technique that leverages the physics governing a system to enhance the learning process. Specifically, we’ll use PINNs to solve a simple first-order Ordinary Differential Equation (ODE).

The code for this post is available at this GitHub page.

The Problem: Solving a First-Order ODE

Consider the following first-order ODE:

dy/dx + y = 0, for x in [0, 2]

This equation is a simple model of exponential decay, where the rate of change of y(x) is proportional to the negative of y(x). The exact solution to this ODE, given the initial condition y(0) = 1, is:

y(x) = e^{-x}

Traditionally, such ODEs are solved using analytical methods or numerical techniques like Euler’s method. However, we can also approach this problem using neural networks, specifically through the framework of Physics-Informed Neural Networks.
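
For comparison, here is a minimal sketch of the forward Euler scheme mentioned above (the step size h = 0.01 is an arbitrary illustrative choice, not taken from the PINN code below):

import numpy as np

# Forward Euler for dy/dx = -y with y(0) = 1 on [0, 2]
h = 0.01
x = np.arange(0.0, 2.0 + h, h)
y = np.empty_like(x)
y[0] = 1.0                              # initial condition y(0) = 1
for n in range(len(x) - 1):
    y[n + 1] = y[n] + h * (-y[n])       # Euler step for y' = -y

print(np.max(np.abs(y - np.exp(-x))))   # maximum error vs. the exact solution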

Introducing Physics-Informed Neural Networks (PINNs)

Physics-Informed Neural Networks (PINNs) were first introduced by Raissi, Perdikaris, and Karniadakis in 2017, with the archival journal paper appearing in 2019. The main idea behind PINNs is to incorporate the underlying physical laws directly into the training of neural networks, thereby guiding the learning process.

Instead of relying solely on data, PINNs also minimize a loss function that includes a term derived from the differential equation. This regularization ensures that the network’s predictions not only fit the data but also satisfy the governing physical laws.

Mathematical Formulation

The total loss L minimized during training of a PINN consists of two parts:

L_data = (1/N) Σᵢ (ŷ(xᵢ) − yᵢ)² is the traditional Mean Squared Error (MSE) between the network’s predictions ŷ(xᵢ) and the training data yᵢ;

L_physics = (1/N) Σᵢ (dŷ/dx(xᵢ) + ŷ(xᵢ))² is a regularization term enforcing the differential equation dy/dx + y = 0 at the sampled points.

The total loss is a combination of both:

L = L_data + λ L_physics

where the weight λ balances the two terms (the implementation below uses λ = 1).
By minimizing this combined loss, the neural network not only fits the data but also adheres to the physical laws described by the ODE.
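
In code, this combination is just a weighted sum. As a minimal sketch (the name total_loss and the keyword lam are illustrative; the implementation below simply adds the two terms, i.e. lam = 1):

def total_loss(data_loss, physics_loss, lam=1.0):
    # L = L_data + lambda * L_physics
    return data_loss + lam * physics_loss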

Applications of PINNs

Since their introduction, PINNs have been successfully applied in various fields:

  1. Fluid Dynamics: PINNs have been used to solve the Navier–Stokes equations, modeling fluid flow around objects.
  2. Heat Transfer: They’ve been applied to solve heat equations, predicting temperature distributions over time.
  3. Structural Mechanics: PINNs help solve elasticity equations to predict stress and strain in materials.
  4. Quantum Mechanics: They’ve been used to solve the Schrödinger equation for quantum systems.

The flexibility of PINNs allows them to be applied to a wide range of physical systems governed by differential equations.

The Code Implementation

Now, let’s dive into the code to implement a PINN using TensorFlow/Keras to solve our first-order ODE.

import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam
import matplotlib.pyplot as plt

# Step 1: Define the Neural Network model
def build_model():
    model = Sequential([
        Dense(100, activation='relu', input_shape=(1,)),
        Dense(100, activation='relu'),
        Dense(100, activation='relu'),
        Dense(1, activation='linear')
    ])
    return model

# Step 2: Define the physics-informed loss function
def physics_informed_loss(x):
    # Compute the derivative of the model's output y with respect to x
    with tf.GradientTape() as tape:
        tape.watch(x)
        y = model(x)
    dy_dx = tape.gradient(y, x)

    # Physics-informed residual (ODE constraint): dy/dx + y = 0
    residual = dy_dx + y

    # Return the Mean Squared Error of the residual
    return tf.reduce_mean(tf.square(residual))

# Step 3: Generate training data
x_train = np.random.uniform(0, 2, 100).reshape(-1, 1) # Sample points from the domain [0, 2]
y_train = np.exp(-x_train) # True solution y = e^{-x}

# Convert training data to TensorFlow tensors
x_train_tensor = tf.convert_to_tensor(x_train, dtype=tf.float32)
y_train_tensor = tf.convert_to_tensor(y_train, dtype=tf.float32)

# Step 4: Build and compile the model
model = build_model()
optimizer = Adam(learning_rate=0.001)

# Custom training loop
epochs = 6000
for epoch in range(epochs):
    with tf.GradientTape() as tape:
        y_pred = model(x_train_tensor)
        # Data loss: mean squared error against the training data
        data_loss = tf.reduce_mean(tf.square(y_train_tensor - y_pred))
        # Physics-informed loss evaluated at the training points
        pde_loss = physics_informed_loss(x_train_tensor)
        # Total loss is the (equally weighted) sum of both losses
        loss = pde_loss + data_loss

    # Compute gradients and update model weights
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))

    # Print the loss value periodically
    if epoch % 500 == 0:
        print(f"Epoch {epoch}: Loss = {loss.numpy()}")

# Step 5: Test the trained model
x_test = np.linspace(0, 2, 100).reshape(-1, 1)
y_test = np.exp(-x_test)

# Predict the solution with the trained model
y_pred = model.predict(x_test)

# Plot the results
plt.figure(figsize=(8, 6))
plt.plot(x_test, y_test, label='Exact Solution $y = e^{-x}$', color='blue')
plt.plot(x_test, y_pred, label='PINN Prediction', color='red', linestyle='dashed')
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.title('PINN for solving ODE: dy/dx = -y')
plt.show()

Code Breakdown

  1. Building the Model: The neural network consists of 3 hidden layers with 100 neurons each, using the ReLU activation function. The final layer is linear since we’re predicting a continuous variable.
  2. Physics-Informed Loss Function: We use TensorFlow’s GradientTape to compute the derivative of the network's output with respect to the input, enforcing the ODE constraint dy/dx + y = 0 (see the standalone sketch after this list).
  3. Training Data: We generate synthetic training data using the exact solution y(x)=exp(-x) over the interval [0, 2].
  4. Training the Model: The model is trained using a custom loop where we minimize both the data loss (MSE) and the physics-informed loss.
  5. Testing the Model: After training, we test the model by predicting the solution over a grid of x values and compare it with the exact solution.
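
To make item 2 concrete, here is a standalone example of differentiating with GradientTape; tf.exp(-x) stands in for the trained network only because its derivative is known in closed form:

import tensorflow as tf

x = tf.constant([[0.5]])
with tf.GradientTape() as tape:
    tape.watch(x)        # x is a constant, not a Variable, so it must be watched
    y = tf.exp(-x)       # stand-in for model(x)
dy_dx = tape.gradient(y, x)
print(dy_dx.numpy())     # ≈ -exp(-0.5) ≈ -0.6065

Here dy_dx + y evaluates to (numerically) zero, which is exactly the residual the physics-informed loss drives to zero.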

Results

The trained PINN provides a good approximation to the exact solution y(x)=exp(-x), as shown in the plot. The red dashed line (PINN prediction) closely follows the blue line (exact solution), indicating that the model has learned both from the data and the underlying physical law.

Conclusion

Physics-Informed Neural Networks (PINNs) offer a powerful method for solving differential equations by combining data-driven learning with physics-based regularization. This approach has broad applications across various scientific and engineering disciplines, from fluid dynamics to quantum mechanics.

If you’re interested in further exploring the capabilities of PINNs, consider extending this example to more complex differential equations or experimenting with different network architectures. The flexibility and power of deep learning, combined with the constraints imposed by physical laws, make PINNs a promising tool for the future of scientific computing.
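
As one concrete starting point, the initial condition could be enforced with its own loss term instead of relying on the data loss alone. A minimal sketch (the helper ic_loss is illustrative and not part of the code above):

def ic_loss(model):
    # Penalize deviation from the initial condition y(0) = 1
    x0 = tf.zeros((1, 1))
    return tf.reduce_mean(tf.square(model(x0) - 1.0))

# Inside the training loop, the total loss would then read, e.g.:
# loss = data_loss + pde_loss + ic_loss(model)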

References

  1. Raissi, M., Perdikaris, P., & Karniadakis, G. E. (2019). “Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations.” Journal of Computational Physics, 378, 686–707.
  2. Karniadakis, G. E., Kevrekidis, I. G., Lu, L., Perdikaris, P., Wang, S., & Yang, L. (2021). “Physics-informed machine learning.” Nature Reviews Physics, 3, 422–440.
