Physics-informed Neural Networks (PINN)
Table of Contents
Why do data-driven ‘black-box’ methods fail?
Physics-Informed Neural Networks (PINNs)
PINNs embed the governing physics into the training objective. As an example, consider the one-dimensional diffusion equation with diffusivity $\lambda$,
$$
\frac{\partial u}{\partial t} - \lambda \frac{\partial^2 u}{\partial x^2} = 0
$$
A network approximating $u(x,t)$ is then trained with a loss that combines a data-fitting term and a PDE-residual term, weighted by $\omega_{\text{data}}$ and $\omega_{\text{PDE}}$:
$$
\begin{align*}
\mathcal{L} &= \omega_{\text{data}} \mathcal{L}_{\text{data}} + \omega_{\text{PDE}} \mathcal{L}_{\text{PDE}}\\
\mathcal{L}_{\text{data}} &= \frac{1}{N_{\text{data}}} \sum_{i=1}^{N_{\text{data}}} \left(u(x_i,t_i) - u_i \right)^2 \\
\mathcal{L}_{\text{PDE}} &= \frac{1}{N_{\text{PDE}}} \sum_{j=1}^{N_{\text{PDE}}} \left(\frac{\partial u}{\partial t}(x_j,t_j) - \lambda \frac{\partial^2 u}{\partial x^2}(x_j,t_j) \right)^2
\end{align*}
$$
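As a concrete illustration of these two terms, here is a minimal sketch (not part of the original text) of how the loss for the diffusion equation above could be evaluated in TensorFlow. It assumes a Keras model net that maps stacked $(x,t)$ inputs to $u$, observation tensors x_data, t_data, u_data, and collocation tensors x_pde, t_pde; all names and weights are illustrative.
import tensorflow as tf

def pinn_loss(net, x_data, t_data, u_data, x_pde, t_pde,
              lam = 1.0, w_data = 1.0, w_pde = 1.0):
    # Data loss: mean squared mismatch between the network and the measurements
    u_pred = net(tf.concat([x_data, t_data], axis = 1))
    data_loss = tf.reduce_mean(tf.square(u_pred - u_data))

    # PDE loss: residual u_t - lam * u_xx at the collocation points,
    # with derivatives obtained by nested automatic differentiation
    with tf.GradientTape() as tape2:
        tape2.watch(x_pde)
        with tf.GradientTape(persistent = True) as tape1:
            tape1.watch([x_pde, t_pde])
            u = net(tf.concat([x_pde, t_pde], axis = 1))
        u_t = tape1.gradient(u, t_pde)
        u_x = tape1.gradient(u, x_pde)
    u_xx = tape2.gradient(u_x, x_pde)
    pde_loss = tf.reduce_mean(tf.square(u_t - lam * u_xx))

    # Weighted multi-task loss
    return w_data * data_loss + w_pde * pde_loss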
This is a result first due to Lagaris et al. (1998). The idea is to solve differential equations with neural networks by representing the solution as a neural network and training that network to satisfy the conditions imposed by the differential equation.
Consider a system of ordinary differential equations
$$ u^\prime = f(u,t) $$
with $t \in [0, 1]$ and a known initial condition $u(0) = u_0$.
To solve this, we approximate the solution by a neural network:
$$ \text{NN}(t) \approx u(t) $$
If $\text{NN}(t)$ were the true solution, it would satisfy $\text{NN}'(t) = f\left(\text{NN}(t),t \right)$ for all $t$. Turning this condition into a training objective motivates the loss function.
$${L(\omega)} = \sum_{i} \left(\frac{d \text{NN}(t_i)}{dt}-f\left(\text{NN}(t_i),t_i \right)\right)^2$$
The points $t_i$ can be chosen in many ways: sampled at random, placed on a grid, etc. In any case, once this loss function is minimized (with gradients computed by standard reverse-mode automatic differentiation), we have $\frac{d\text{NN}(t_i)}{dt} \approx f(\text{NN}(t_i),t_i)$, and thus $\text{NN}(t)$ approximately solves the differential equation.
Note that we still have to handle the initial condition. One simple way to do this is to add an initial condition term to the cost function.
$${L(\omega)} = \sum_{i} \left(\frac{d \text{NN}(t_i)}{dt}-f(\text{NN}(t_i),t_i)\right)^2 + (\text{NN}(0)-u_0)^2 $$
where $\omega$ denotes the parameters of the neural network $\text{NN}$ that approximates $u$. Once again, the problem reduces to finding the weights that minimize a loss function!
As a concrete example, consider the ODE
$$\frac{du}{dt} = \cos 2\pi t$$
with initial condition
$$u(0) = 1,$$
whose exact solution is
$$u(t) = \frac{1}{2\pi}\sin 2\pi t + 1.$$
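A quick check confirms that this candidate satisfies both the equation and the initial condition:
$$\frac{d}{dt}\left(\frac{1}{2\pi}\sin 2\pi t + 1\right) = \cos 2\pi t, \qquad u(0) = \frac{1}{2\pi}\sin 0 + 1 = 1.$$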
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf

# Network approximating u(t): one hidden layer with 32 tanh units, linear output
NN = tf.keras.models.Sequential([
    tf.keras.layers.Input((1,)),
    tf.keras.layers.Dense(units = 32, activation = 'tanh'),
    tf.keras.layers.Dense(units = 1)
])

NN.summary()
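With one hidden layer of 32 tanh units, the summary should report $(1\times 32 + 32) + (32\times 1 + 1) = 97$ trainable parameters.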
optm = tf.keras.optimizers.Adam(learning_rate = 0.001)
$$L_{\text{ODE}} = \frac{1}{n}\sum_{i=1}^{n} \left(\frac{d \text{NN}(t_i)}{dt}- \cos2\pi t_i\right)^2$$
$$L_{\text{IC}} = \left(\text{NN}(0)- 1\right)^2$$
$$L_{\text{Total}} = L_{\text{ODE}} + L_{\text{IC}}$$
def ode_system(t, net):
    # Collocation points as a float32 column vector
    t = t.reshape(-1, 1)
    t = tf.constant(t, dtype = tf.float32)
    t_0 = tf.zeros((1, 1))
    one = tf.ones((1, 1))

    # d(NN)/dt via automatic differentiation
    with tf.GradientTape() as tape:
        tape.watch(t)
        u = net(t)
    u_t = tape.gradient(u, t)

    # ODE residual and initial-condition residual
    ode_loss = u_t - tf.math.cos(2*np.pi*t)
    IC_loss = net(t_0) - one

    # Broadcasting adds the IC penalty to every collocation point before averaging
    square_loss = tf.square(ode_loss) + tf.square(IC_loss)
    total_loss = tf.reduce_mean(square_loss)

    return total_loss
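As a quick sanity check (an illustrative addition, not part of the original notebook), the loss can be evaluated once on a handful of points before training; it should return a single scalar.
sample_t = np.linspace(0, 2, 5)
print(ode_system(sample_t, NN).numpy())  # one scalar: mean ODE residual^2 plus IC residual^2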
# 30 random collocation points in [0, 2]
train_t = (np.random.rand(30)*2).reshape(-1, 1)
train_loss_record = []

for itr in range(6000):
    with tf.GradientTape() as tape:
        train_loss = ode_system(train_t, NN)
        train_loss_record.append(train_loss.numpy())

    # Gradient of the loss with respect to the weights, then one Adam step
    grad_w = tape.gradient(train_loss, NN.trainable_variables)
    optm.apply_gradients(zip(grad_w, NN.trainable_variables))

    if itr % 1000 == 0:
        print(train_loss.numpy())
plt.figure(figsize = (10,8))
plt.plot(train_loss_record)
plt.show()
test_t = np.linspace(0, 2, 100)
train_u = np.sin(2*np.pi*train_t)/(2*np.pi) + 1
true_u = np.sin(2*np.pi*test_t)/(2*np.pi) + 1
pred_u = NN.predict(test_t.reshape(-1, 1)).ravel()
plt.figure(figsize = (10,8))
plt.plot(train_t, train_u, 'ok', label = 'Train')
plt.plot(test_t, true_u, '-k',label = 'True')
plt.plot(test_t, pred_u, '--r', label = 'Prediction')
plt.legend(fontsize = 15)
plt.xlabel('t', fontsize = 15)
plt.ylabel('u', fontsize = 15)
plt.show()
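A simple quantitative check (an addition, not in the original notebook) is the relative $L_2$ error of the prediction on the test grid:
rel_l2 = np.linalg.norm(pred_u - true_u) / np.linalg.norm(true_u)
print('Relative L2 error:', rel_l2)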
# !pip install deepxde
# from deepxde.backend.set_default_backend import set_default_backend
# set_default_backend("pytorch")
We now solve the same problem with the DeepXDE library:
$$\frac{du}{dt} = \cos 2\pi t$$
$$u(0) = 1$$
$$u(t) = \frac{1}{2\pi}\sin 2\pi t + 1$$
import torch
import deepxde as dde
import numpy as np
import matplotlib.pyplot as plt

pi = torch.pi

# ODE residual d(u)/dt - cos(2*pi*t), using DeepXDE's automatic differentiation
def ode_system(t, u):
    du_t = dde.grad.jacobian(u, t)
    return du_t - torch.cos(2*pi*t)

# Selects the point t = 0 where the initial condition is imposed
def boundary(t, on_initial):
    return on_initial and np.isclose(t[0], 0)

# Time domain [0, 2] and initial condition u(0) = 1
geom = dde.geometry.TimeDomain(0, 2)
ic = dde.IC(geom, lambda t: 1, boundary)
# Reference solution to compute the error
def true_solution(t):
return np.sin(2*np.pi*t)/(2*np.pi) + 1
data = dde.data.PDE(geom,
ode_system,
ic,
num_domain = 30,
num_boundary = 2,
solution = true_solution,
num_test = 100)
# Fully connected network: input t, one hidden layer of 32 tanh units, output u
layer_size = [1] + [32] + [1]
activation = "tanh"
initializer = "Glorot uniform"
NN = dde.maps.FNN(layer_size, activation, initializer)

# Wrap data and network in a Model, then train with Adam
model = dde.Model(data, NN)
model.compile("adam", lr = 0.001)
losshistory, train_state = model.train(epochs = 6000)
dde.saveplot(losshistory, train_state, issave = False, isplot = True)
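To compare the DeepXDE solution with the reference directly, the trained model can be evaluated on a grid with model.predict; this plotting cell is an illustrative addition rather than part of the original notebook.
t_test = np.linspace(0, 2, 100).reshape(-1, 1)
u_pred = model.predict(t_test)

plt.figure(figsize = (10,8))
plt.plot(t_test, true_solution(t_test), '-k', label = 'True')
plt.plot(t_test, u_pred, '--r', label = 'DeepXDE prediction')
plt.legend(fontsize = 15)
plt.xlabel('t', fontsize = 15)
plt.ylabel('u', fontsize = 15)
plt.show()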