Physics-informed Neural Networks (PINN)


By Prof. Seungchul Lee
http://iai.postech.ac.kr/
Industrial AI Lab at POSTECH

Table of Contents

1. Why Does Deep Learning Need Physics?
2. Architecture of Physics-informed Neural Networks (PINN)
3. Method for Solving ODEs with Neural Networks
4. Lab 2: Solve Lab 1 Again using DeepXDE
5. Lab 3: Spring and Mass System

1. Why Does Deep Learning Need Physics?

Why do data-driven 'black-box' methods fail?

  • May output results that are physically inconsistent
  • Easily learn spurious relationships that look good only on training and test data
    • Can lead to poor generalization outside the available data (out-of-sample prediction tasks)
  • Lack interpretability
    • Discovering the mechanism of an underlying process is crucial for scientific advances

Physics-Informed Neural Networks (PINNs)

  • Take full advantage of data science methods together with the accumulated prior knowledge of scientific theories $\rightarrow$ improve predictive performance
  • Integrate domain knowledge to overcome imbalanced data and data shortages

1.1. Taxonomy of Informed Deep Learning



1.2. Multilayer Feedforward Networks are Universal Approximators

  • The Universal Approximation Theorem
    • Neural Networks are capable of approximating any Borel measurable function
    • Neural Networks (1989)



1.3. Neural Networks for Solving Differential Equations

  • Neural Algorithm for Solving Differential Equations
    • Journal of Computational Physics (1990)
    • Neural minimization for finite difference equations



  • ANNs for ODEs and PDEs
    • IEEE Transactions on Neural Networks (1998)



1.4. Journal of Computational Physics (2019)






1.5. Nature Reviews Physics (2021)




2. Architecture of Physics-informed Neural Networks (PINN)

  • NN as a universal function approximator
  • Given
    • Some measured data from initial and boundary conditions
    • ODE or PDE


$$ \frac{\partial u}{\partial t} - \lambda \frac{\partial^2 u}{\partial x^2} = 0 $$

  • Adding constraints for regularization
    • Regularized by physics, but matched with sparse data


$$ \begin{align*} \mathcal{L} &= \omega_{\text{data}} \mathcal{L}_{\text{data}} + \omega_{\text{PDE}} \mathcal{L}_{\text{PDE}}\\ \mathcal{L}_{\text{data}} &= \frac{1}{N_{\text{data}}} \sum \left(u(x,t) - u \right)^2 \\ \mathcal{L}_{\text{PDE}} &= \frac{1}{N_{\text{PDE}}} \sum \left(\frac{\partial u}{\partial t} - \lambda \frac{\partial^2 u}{\partial x^2} \right)^2 \end{align*} $$
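To make the two terms concrete, here is a minimal sketch of how such a composite loss could be computed with TensorFlow for the heat equation above. The names are illustrative assumptions (a Keras model u_net taking (x, t) pairs, measurement tensors x_d, t_d, u_d, collocation tensors x_c, t_c, and a known lam), not the reference implementation:

import tensorflow as tf

lam = 1.0                   # assumed diffusivity (illustrative)
w_data, w_pde = 1.0, 1.0    # loss weights

def pinn_loss(u_net, x_d, t_d, u_d, x_c, t_c):
    # Data loss: match the network to the sparse measurements
    u_pred = u_net(tf.concat([x_d, t_d], axis = 1))
    L_data = tf.reduce_mean(tf.square(u_pred - u_d))

    # PDE loss: residual u_t - lam * u_xx at the collocation points
    with tf.GradientTape() as t2:
        t2.watch(x_c)
        with tf.GradientTape(persistent = True) as t1:
            t1.watch([x_c, t_c])
            u = u_net(tf.concat([x_c, t_c], axis = 1))
        u_x = t1.gradient(u, x_c)    # du/dx
        u_t = t1.gradient(u, t_c)    # du/dt
    u_xx = t2.gradient(u_x, x_c)     # d2u/dx2

    L_pde = tf.reduce_mean(tf.square(u_t - lam * u_xx))
    return w_data * L_data + w_pde * L_pde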




3. Method for Solving ODEs with Neural Networks

3.1. Background

This result is first due to Lagaris et al. (1998). The idea is to solve differential equations using neural networks by representing the solution as a neural network and training the network to satisfy the conditions required by the differential equation.

Consider a system of ordinary differential equations

$$ {u^\prime} = f(u,t) $$

with $t \in [0, 1]$ and a known initial condition $u(0) = u_0$.

To solve this, we approximate the solution by a neural network:

$$ {\text{NN}(t)} \approx {u(t)} $$


If $\text{NN}(t)$ were the true solution, it would satisfy $\text{NN}'(t) = f\left(\text{NN}(t),t \right)$ for all $t$. Turning this condition into a penalty motivates the loss function:


$${L(\omega)} = \sum_{i} \left(\frac{d \text{NN}(t_i)}{dt}-f\left(\text{NN}(t_i),t_i \right)\right)^2$$

The points $t_i$ can be chosen in many ways: randomly, on a grid, etc. In any case, when this loss function is minimized (with gradients computed by standard reverse-mode automatic differentiation), we have $\frac{d \text{NN}(t_i)}{dt} \approx f(\text{NN}(t_i),t_i)$, and thus $\text{NN}(t)$ approximately solves the differential equation.
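For instance, a quick sketch of the two common sampling choices:

import numpy as np

n = 50
t_grid = np.linspace(0.0, 1.0, n)          # evenly spaced grid points
t_rand = np.random.uniform(0.0, 1.0, n)    # random collocation points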

Note that we still have to handle the initial condition. One simple way to do this is to add an initial condition term to the cost function.


$${L(\omega)} = \sum_{i} \left(\frac{d \text{NN}(t_i)}{dt}-f(\text{NN}(t_i),t_i)\right)^2 + (\text{NN}(0)-u_0)^2 $$

where $\omega$ denotes the parameters that define the neural network $\text{NN}$ approximating $u$. Thus the problem reduces, once again, to finding weights that minimize a loss function!
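As a sketch in code (with illustrative names: a differentiable model NN built elsewhere, a right-hand side function f(u, t), sample points ts, and initial value u0):

import numpy as np
import tensorflow as tf

def ode_loss(NN, f, ts, u0):
    t = tf.reshape(tf.constant(ts, dtype = tf.float32), (-1, 1))
    with tf.GradientTape() as tape:
        tape.watch(t)
        u = NN(t)
    du_dt = tape.gradient(u, t)        # dNN/dt by automatic differentiation
    residual = du_dt - f(u, t)         # how far NN is from satisfying the ODE
    ic = NN(tf.zeros((1, 1))) - u0     # initial-condition mismatch
    return tf.reduce_mean(tf.square(residual)) + tf.reduce_mean(tf.square(ic))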

3.2. Lab 1: Simple Example

  • Let's look at the ODE:


$$\frac{du}{dt} = \cos2\pi t$$

  • Initial condition:


$$u(0) = 1$$

  • The exact solution:


$$u(t) = \frac{1}{2\pi}\sin2\pi t + 1$$

  • Make a neural network and loss functions like below:



3.2.0. Setup

  • Using TensorFlow
  • Install and setup
    • TensorFlow >= 2.2.0 and TensorFlow Probability >= 0.10.0
    • If you have TensorFlow 2.2.0, TensorFlow Probability == 0.10.0 should be installed
      • (TensorFlow 2.3.0 matches TensorFlow Probability 0.11.0, and so on)

3.2.1. Import Library

In [ ]:
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import random

3.2.2. Define Network and Hyper-parameters

In [ ]:
NN = tf.keras.models.Sequential([
    tf.keras.layers.Input((1,)),
    tf.keras.layers.Dense(units = 32, activation = 'tanh'),
    tf.keras.layers.Dense(units = 32, activation = 'tanh'),
    tf.keras.layers.Dense(units = 32, activation = 'tanh'),
    tf.keras.layers.Dense(units = 1)
])

NN.summary()
Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 dense (Dense)               (None, 32)                64        
                                                                 
 dense_1 (Dense)             (None, 32)                1056      
                                                                 
 dense_2 (Dense)             (None, 32)                1056      
                                                                 
 dense_3 (Dense)             (None, 1)                 33        
                                                                 
=================================================================
Total params: 2,209
Trainable params: 2,209
Non-trainable params: 0
_________________________________________________________________
In [ ]:
optm = tf.keras.optimizers.Adam(learning_rate = 0.001)

3.2.3. Define ODE System

  • ODE loss:


$$L_{\text{ODE}} = \frac{1}{n}\sum_{i=1}^{n} \left(\frac{d \text{NN}(t_i)}{dt}- \cos2\pi t_i\right)^2$$

  • Initial condition loss:


$$L_{\text{IC}} = \left({\text{NN}(0)}- 1\right)^2$$

  • Total loss:


$$L_{\text{Total}} = L_{\text{ODE}} + L_{\text{IC}}$$

In [ ]:
def ode_system(t, net):
    t = t.reshape(-1,1)
    t = tf.constant(t, dtype = tf.float32)
    t_0 = tf.zeros((1,1))
    one = tf.ones((1,1))

    # Record the forward pass so u can be differentiated with respect to t
    with tf.GradientTape() as tape:
        tape.watch(t)
        u = net(t)
    u_t = tape.gradient(u, t)

    ode_loss = u_t - tf.math.cos(2*np.pi*t)   # ODE residual
    IC_loss = net(t_0) - one                  # initial-condition residual

    square_loss = tf.square(ode_loss) + tf.square(IC_loss)
    total_loss = tf.reduce_mean(square_loss)

    return total_loss
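A quick usage check (a sketch; the sample points here are arbitrary): the function returns a scalar loss tensor, which we can evaluate once before training.

t_check = np.linspace(0., 2., 10)          # arbitrary sample points
print(ode_system(t_check, NN).numpy())     # scalar loss of the untrained network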

3.2.4. Train & Prediction

In [ ]:
train_t = (np.array([0., 0.025, 0.475, 0.5, 0.525, 0.9, 0.95, 1., 1.05, 1.1, 1.4, 1.45, 1.5, 1.55, 1.6, 1.95, 2.])).reshape(-1, 1)
train_loss_record = []

for itr in range(6000):
    with tf.GradientTape() as tape:
        train_loss = ode_system(train_t, NN)

    train_loss_record.append(train_loss.numpy())

    # Gradient step on the network weights
    grad_w = tape.gradient(train_loss, NN.trainable_variables)
    optm.apply_gradients(zip(grad_w, NN.trainable_variables))

    if itr % 1000 == 0:
        print(train_loss.numpy())

plt.figure(figsize = (10,8))
plt.plot(train_loss_record)
plt.show()
1.7445493
0.0003114283
0.0002242499
0.00014402653
8.462762e-05
7.19602e-05
In [ ]:
test_t = np.linspace(0, 2, 100).reshape(-1, 1)

train_u = np.sin(2*np.pi*train_t)/(2*np.pi) + 1
true_u = np.sin(2*np.pi*test_t)/(2*np.pi) + 1
pred_u = NN.predict(test_t).ravel()

plt.figure(figsize = (10,8))
plt.plot(train_t, train_u, 'ok', label = 'Train')
plt.plot(test_t, true_u, '-k',label = 'True')
plt.plot(test_t, pred_u, '--r', label = 'Prediction')
plt.legend(fontsize = 15)
plt.xlabel('t', fontsize = 15)
plt.ylabel('u', fontsize = 15)
plt.show()

4. Lab 2: Solve Lab 1 Again using DeepXDE

4.1. DeepXDE

  • Using the DeepXDE library
  • DeepXDE is a library for solving forward and inverse problems of differential equations via physics-informed neural networks (PINNs)
In [ ]:
!pip install deepxde
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Collecting deepxde
  Downloading DeepXDE-1.4.0-py3-none-any.whl (132 kB)
     |████████████████████████████████| 132 kB 5.1 MB/s
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from deepxde) (1.21.6)
Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from deepxde) (1.4.1)
Collecting scikit-optimize
  Downloading scikit_optimize-0.9.0-py2.py3-none-any.whl (100 kB)
     |████████████████████████████████| 100 kB 10.4 MB/s
Requirement already satisfied: matplotlib in /usr/local/lib/python3.7/dist-packages (from deepxde) (3.2.2)
Requirement already satisfied: scikit-learn in /usr/local/lib/python3.7/dist-packages (from deepxde) (1.0.2)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->deepxde) (1.4.2)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->deepxde) (3.0.9)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib->deepxde) (0.11.0)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->deepxde) (2.8.2)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from kiwisolver>=1.0.1->matplotlib->deepxde) (4.2.0)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.1->matplotlib->deepxde) (1.15.0)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/dist-packages (from scikit-learn->deepxde) (1.1.0)
Requirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from scikit-learn->deepxde) (3.1.0)
Collecting pyaml>=16.9
  Downloading pyaml-21.10.1-py2.py3-none-any.whl (24 kB)
Requirement already satisfied: PyYAML in /usr/local/lib/python3.7/dist-packages (from pyaml>=16.9->scikit-optimize->deepxde) (3.13)
Installing collected packages: pyaml, scikit-optimize, deepxde
Successfully installed deepxde-1.4.0 pyaml-21.10.1 scikit-optimize-0.9.0
  • Change DeepXDE backends
    • DeepXDE supports TensorFlow 1.x, TensorFlow 2.x, PyTorch, JAX, and Paddle backends. Here we keep the default (tensorflow.compat.v1); Lab 3 will use the PyTorch backend.
In [ ]:
# from deepxde.backend.set_default_backend import set_default_backend
# set_default_backend("tensorflow")
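For example, one programmatic way to select a backend (a sketch: DeepXDE reads the DDEBACKEND environment variable at import time, as the log below also notes, so it must be set before the import):

import os
os.environ["DDEBACKEND"] = "tensorflow.compat.v1"   # or "tensorflow", "pytorch", "jax", "paddle"
import deepxde as dde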

4.2. Problem Setup Again

  • ODE


$$\frac{du}{dt} = \cos2\pi t$$

  • Initial condition:


$$u(0) = 1$$

  • The exact solution:


$$u(t) = \frac{1}{2\pi}\sin2\pi t + 1$$

4.2.1. Import Library

In [ ]:
import tensorflow as tf
import deepxde as dde
import numpy as np
import matplotlib.pyplot as plt
import math as m
Deepxde backend not selected or invalid. Assuming tensorflow.compat.v1 for now.
Using backend: tensorflow.compat.v1

Setting the default backend to "tensorflow.compat.v1". You can change it in the ~/.deepxde/config.json file or export the DDEBACKEND environment variable. Valid options are: tensorflow.compat.v1, tensorflow, pytorch, jax, paddle (all lowercase)
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow/python/compat/v2_compat.py:107: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
Instructions for updating:
non-resource variables are not supported in the long term
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/deepxde/nn/initializers.py:118: The name tf.keras.initializers.he_normal is deprecated. Please use tf.compat.v1.keras.initializers.he_normal instead.

4.2.2. Define ODE System

In [ ]:
pi = tf.constant(m.pi)

def ode_system(t, u):
    # du/dt via automatic differentiation
    du_t = dde.grad.jacobian(u, t)
    # ODE residual: du/dt - cos(2*pi*t)
    return du_t - tf.math.cos(2*pi*t)

4.2.3. Define Initial Condition

In [ ]:
def boundary(t, on_initial):
    # Flag the initial point t = 0
    return on_initial and np.isclose(t[0], 0)

4.2.4. Define Geometry & Implement Initial Condition

In [ ]:
geom = dde.geometry.TimeDomain(0, 2)

# Initial condition u(0) = 1
ic = dde.IC(geom, lambda t: 1, boundary)

# Reference solution to compute the error
def true_solution(t):
    return np.sin(2*np.pi*t)/(2*np.pi) + 1

data = dde.data.PDE(geom,
                    ode_system,
                    ic,
                    num_domain = 30,        # collocation points inside the domain
                    num_boundary = 2,       # points on the domain boundary
                    solution = true_solution,
                    num_test = 100)

4.2.5. Define Network and Hyper-parameters

In [ ]:
layer_size = [1] + [32] + [1]
activation = "tanh"
initializer = "Glorot uniform"

NN = dde.maps.FNN(layer_size, activation, initializer)
In [ ]:
model = dde.Model(data, NN)
model.compile("adam", lr = 0.001)
Compiling model...
Building feed-forward neural network...
'build' took 0.036146 s

'compile' took 0.199044 s

/usr/local/lib/python3.7/dist-packages/deepxde/nn/tensorflow_compat_v1/fnn.py:110: UserWarning: `tf.layers.dense` is deprecated and will be removed in a future version. Please use `tf.keras.layers.Dense` instead.
  kernel_constraint=self.kernel_constraint,
/usr/local/lib/python3.7/dist-packages/keras/legacy_tf_layers/core.py:261: UserWarning: `layer.apply` is deprecated and will be removed in a future version. Please use `layer.__call__` method instead.
  return layer.apply(inputs)

4.2.6. Train & Prediction

In [ ]:
losshistory, train_state = model.train(epochs = 6000)
dde.saveplot(losshistory, train_state, issave = False, isplot = True)
Initializing variables...
Training model...

Step      Train loss              Test loss               Test metric
0         [5.17e-01, 1.00e+00]    [5.11e-01, 1.00e+00]    []  
1000      [3.87e-01, 4.56e-04]    [4.15e-01, 4.56e-04]    []  
2000      [2.56e-01, 3.67e-05]    [2.56e-01, 3.67e-05]    []  
3000      [5.72e-02, 1.75e-06]    [5.49e-02, 1.75e-06]    []  
4000      [7.51e-03, 3.48e-08]    [6.90e-03, 3.48e-08]    []  
5000      [3.88e-03, 6.03e-09]    [2.87e-03, 6.03e-09]    []  
6000      [3.02e-03, 2.29e-08]    [2.10e-03, 2.29e-08]    []  

Best model at step 6000:
  train loss: 3.02e-03
  test loss: 2.10e-03
  test metric: []

'train' took 5.522096 s

5. Lab 3: Spring and Mass System

5.1. Problem Setup

  • We will solve a coupled spring-mass problem:



  • Problem properties
$$m_1 = 1 \operatorname{kg}, \quad m_2 = 1 \operatorname{kg}, \quad k_1 = 5 \operatorname{N/m}, \quad k_2 = 2 \operatorname{N/m}, \quad L_1 = 0.5 \operatorname{m}, \quad L_2 = 0.5 \operatorname{m}$$


  • Initial properties
$$d_1 = 0.5 \operatorname{m}, \quad d_2 = 3.25 \operatorname{m}, \quad v_1 = 0 \operatorname{m/s}, \quad v_2 = 0 \operatorname{m/s}$$


  • Equations of motion (second-order ODEs)


$$m_1 \ddot{x_1} + k_1 (x_1 - L_1) - k_2 (x_2 -x_1 - L_2) = 0$$

$$m_2 \ddot{x_2} + k_2 (x_2 -x_1 - L_2) = 0$$

  • Dirichlet initial conditions:


$$x_1(0) = 0.5, \quad x_2(0) = 3.25$$

  • Neumann initial conditions:


$$\dot{x_1}(0) = 0, \quad \dot{x_2}(0) = 0$$

  • Make a neural network and loss functions like below:
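Putting these together, DeepXDE will assemble a total loss with six components, matching the six columns of the training log in section 5.2.6 (a sketch; overbars denote means over the sampled points):

$$\mathcal{L} = \underbrace{\overline{\text{pde}_1^{\,2}} + \overline{\text{pde}_2^{\,2}}}_{\text{equation residuals}} + \underbrace{\left(x_1(0) - 0.5\right)^2 + \left(x_2(0) - 3.25\right)^2}_{\text{Dirichlet ICs}} + \underbrace{\left(\dot{x}_1(0)\right)^2 + \left(\dot{x}_2(0)\right)^2}_{\text{Neumann ICs}}$$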



5.2. Solve the Coupled Spring-Mass Problem

5.2.1. Import Library

In [1]:
import deepxde as dde
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
Using backend: pytorch
Other available backends: tensorflow.compat.v1, tensorflow, jax, paddle.
paddle supports more examples now and is recommended.
 

5.2.2. Initial Constants

In [2]:
# Masses
m1 = 1.0
m2 = 1.0
# Spring constants
k1 = 5.0
k2 = 2.0
# Natural lengths
L1 = 0.5
L2 = 0.5

# Initial conditions
# d1_0 and d2_0 are the initial displacements; v1_0 and v2_0 are the initial velocities
d1_0 = 0.5
d2_0 = 3.25

v1_0 = 0.0
v2_0 = 0.0

# Damping coefficients (undamped system)
b1 = 0
b2 = 0

# maximum time to simulate
t_max = 5

5.2.3. Define PDE System

In [3]:
def dy(t, x):
    # First time derivative dx/dt
    return dde.grad.jacobian(x, t)
In [4]:
def pde(t, x):
    # mass 1 location
    x_1 = x[:, [0]]
    # mass 2 location
    x_2 = x[:, [1]]

    # Second time derivatives of the two displacements
    dx1_tt = dde.grad.hessian(x, t, i = 0, j = 0, component = 0)
    dx2_tt = dde.grad.hessian(x, t, i = 0, j = 0, component = 1)

    # Residuals of the equations of motion (natural length L2 for spring 2;
    # here L1 == L2 = 0.5, so the numerical results are unchanged)
    pde1 = m1 * dx1_tt + k1 * (x_1 - L1) - k2 * (x_2 - x_1 - L2)
    pde2 = m2 * dx2_tt + k2 * (x_2 - x_1 - L2)

    return [pde1, pde2]

5.2.4. Define Initial Conditions

In [5]:
def boundary_init(t, on_boundary):
    # Flag the boundary point t = 0, where the initial conditions apply
    return on_boundary and np.isclose(t[0], 0)
In [6]:
geom = dde.geometry.Interval(0, t_max)

# Dirichlet ICs: x1(0) = d1_0 and x2(0) = d2_0
init_d1 = dde.icbc.PointSetBC(np.array([0]), np.array([d1_0]).reshape(-1, 1), component = 0)
init_d2 = dde.icbc.PointSetBC(np.array([0]), np.array([d2_0]).reshape(-1, 1), component = 1)
# Neumann ICs: dx1/dt(0) = 0 and dx2/dt(0) = 0, enforced via OperatorBC
init_v1 = dde.OperatorBC(geom, lambda x, y, _: dy(x, y[:, [0]]), boundary_init)
init_v2 = dde.OperatorBC(geom, lambda x, y, _: dy(x, y[:, [1]]), boundary_init)

data = dde.data.PDE(geom,
                    pde,
                    [init_d1, init_d2, init_v1, init_v2],
                    num_domain = 2000,      # collocation points inside (0, t_max)
                    num_boundary = 100,     # boundary points
                    num_test = 1000)

5.2.5. Define Network and Hyper-parameters

In [7]:
layer_size = [1] + [20] * 3 + [2]
activation = "tanh"
initializer = "Glorot uniform"

net = dde.maps.FNN(layer_size, activation, initializer)
In [8]:
model = dde.Model(data, net)
model.compile("adam", lr = 1e-3)
Compiling model...
'compile' took 0.000123 s

5.2.6. Train & Prediction

In [9]:
losshistory, train_state = model.train(epochs = 5000)
dde.saveplot(losshistory, train_state, issave = False, isplot = False)
Warning: epochs is deprecated and will be removed in a future version. Use iterations instead.
Training model...

Step      Train loss                                                      Test loss                                                       Test metric
0         [2.72e+01, 2.49e+00, 2.50e-01, 1.06e+01, 2.12e-02, 7.08e-01]    [2.74e+01, 2.46e+00, 2.50e-01, 1.06e+01, 2.12e-02, 7.08e-01]    []  
1000      [1.22e-01, 2.49e-01, 1.88e-01, 1.22e-01, 6.08e-06, 7.16e-04]    [1.23e-01, 2.59e-01, 1.88e-01, 1.22e-01, 6.08e-06, 7.16e-04]    []  
2000      [1.01e-01, 3.52e-02, 1.74e-01, 2.86e-02, 3.71e-05, 1.69e-05]    [1.04e-01, 3.07e-02, 1.74e-01, 2.86e-02, 3.71e-05, 1.69e-05]    []  
3000      [9.80e-02, 1.75e-02, 9.86e-02, 1.20e-02, 2.27e-07, 1.23e-05]    [1.02e-01, 1.63e-02, 9.86e-02, 1.20e-02, 2.27e-07, 1.23e-05]    []  
4000      [5.66e-02, 1.16e-02, 4.80e-02, 4.87e-03, 1.12e-04, 2.80e-05]    [5.92e-02, 1.19e-02, 4.80e-02, 4.87e-03, 1.12e-04, 2.80e-05]    []  
5000      [7.55e-03, 2.38e-03, 4.16e-03, 3.49e-04, 4.11e-07, 4.08e-07]    [7.83e-03, 2.31e-03, 4.16e-03, 3.49e-04, 4.11e-07, 4.08e-07]    []  

Best model at step 5000:
  train loss: 1.44e-02
  test loss: 1.47e-02
  test metric: []

'train' took 73.364264 s

In [10]:
dde.optimizers.config.set_LBFGS_options(maxiter=5000)
model.compile("L-BFGS")
losshistory, train_state = model.train()
dde.saveplot(losshistory, train_state, issave = False, isplot = True)
Compiling model...
'compile' took 0.000187 s

Training model...

Step      Train loss                                                      Test loss                                                       Test metric
5000      [7.55e-03, 2.38e-03, 4.16e-03, 3.49e-04, 4.11e-07, 4.08e-07]    [7.83e-03, 2.31e-03, 4.16e-03, 3.49e-04, 4.11e-07, 4.08e-07]    []  
6000      [6.94e-06, 2.48e-06, 1.88e-09, 3.47e-09, 8.10e-11, 1.90e-10]    [6.71e-06, 2.60e-06, 1.88e-09, 3.47e-09, 8.10e-11, 1.90e-10]    []  
7000      [2.20e-06, 3.76e-07, 9.98e-12, 5.46e-11, 6.88e-12, 7.85e-12]    [2.13e-06, 3.79e-07, 9.98e-12, 5.46e-11, 6.88e-12, 7.80e-12]    []  
8000      [2.20e-06, 3.72e-07, 8.88e-12, 3.27e-11, 5.68e-12, 8.88e-12]    [2.12e-06, 3.74e-07, 8.88e-12, 3.27e-11, 5.68e-12, 9.04e-12]    []  
9000      [2.19e-06, 3.72e-07, 8.53e-12, 2.51e-11, 1.20e-11, 8.19e-12]    [2.12e-06, 3.74e-07, 8.53e-12, 2.51e-11, 1.20e-11, 8.15e-12]    []  
10000     [2.19e-06, 3.73e-07, 8.88e-12, 1.46e-11, 1.20e-11, 9.24e-12]    [2.11e-06, 3.75e-07, 8.88e-12, 1.46e-11, 1.20e-11, 9.27e-12]    []  

Best model at step 10000:
  train loss: 2.56e-06
  test loss: 2.49e-06
  test metric: []

'train' took 98.359580 s

5.2.7. Evaluation

To generate a ground-truth reference, we rewrite each second-order equation as two first-order equations (introducing the velocities $y_1 = \dot{x}_1$ and $y_2 = \dot{x}_2$) and integrate the system with SciPy's odeint:

In [11]:
def vectorfield(w, t, p):
    """
    Defines the differential equations for the coupled spring-mass system.
 
    Arguments:
        w :  vector of the state variables:
                  w = [x1,y1,x2,y2]
        t :  time
        p :  vector of the parameters:
                  p = [m1,m2,k1,k2,L1,L2,b1,b2]
    """
    x1, y1, x2, y2 = w
    m1, m2, k1, k2, L1, L2, b1, b2 = p
 
    # Create f = (x1',y1',x2',y2'):
    f = [y1,
         (-b1 * y1 - k1 * (x1 - L1) + k2 * (x2 - x1 - L2)) / m1,
         y2,
         (-b2 * y2 - k2 * (x2 - x1 - L2)) / m2]
    return f

# Use odeint to solve the differential equations defined by the vector field

# ODE solver tolerances
abserr = 1.0e-8
relerr = 1.0e-6

# Time samples: many sorted points so the reference solution plots smoothly
t = geom.random_points(5000)
t[:, 0].sort()

# Pack up the parameters and initial conditions
p = [m1, m2, k1, k2, L1, L2, b1, b2]
w0 = [d1_0, v1_0, d2_0, v2_0]

# Call the ODE solver
wsol = odeint(vectorfield, w0, t[:, 0], args = (p,),
              atol = abserr, rtol = relerr)
In [13]:
result = model.predict(t)

usol1 = np.array(result[:, 0])
usol2 = np.array(result[:, 1])

lw = 2
plt.figure(figsize = (12, 6))
plt.plot(t, wsol[:, 0], alpha = 1, label = 'Mass1 GT', c = 'r', lw = lw)
plt.plot(t, wsol[:, 2], alpha = 1, label = 'Mass2 GT', c = 'b', lw = lw)
plt.plot(t[:, 0], usol1, alpha = 1, label = 'Mass1 Prediction', linestyle = 'dashed', c = 'k', lw = lw)
plt.plot(t[:, 0], usol2, alpha = 1, label = 'Mass2 Prediction', linestyle = 'dashed', c = 'k', lw = lw)
plt.legend(fontsize = 15)
plt.xticks(fontsize = 25)
plt.yticks(fontsize = 25)
plt.xlabel('Time (s)', fontsize = 30)
plt.ylabel('Displacement (m)', fontsize = 30)
plt.show()