PINN with Data

Solid Mechanics Example


By Prof. Seungchul Lee
http://iai.postech.ac.kr/
Industrial AI Lab at POSTECH

Table of Contents

1. Thin Plate (Solid Mechanics)

1.1. Problem Setup

  • We will solve the thin plate equations to find the displacement and stress distributions of a thin plate
  • Based on Kirchhoff-Love plate theory, three hypotheses are used:
    • straight lines normal to the mid-surface remain straight after deformation
    • straight lines normal to the mid-surface remain normal to the mid-surface after deformation
    • the thickness of the plate does not change during deformation
  • A non-uniform stretching force is applied to a square elastic plate
  • Only one quarter of the plate is considered since the geometry and in-plane forces are symmetric (yellow domain)





  • Problem properties


$$ E = 50 \operatorname{MPa}, \quad \nu = 0.3, \quad \omega = 20 \operatorname{mm}, \quad h = 1 \operatorname{mm}, \quad f = 1 \operatorname{MPa} $$

  • Governing equations (Föppl–von Kármán equations) for the isotropic elastic plate:


$$ \begin{align*} &{E \over 1 - \nu^2}\left({\partial^2 u \over \partial x^2} + {1 - \nu \over 2}{\partial^2 u \over \partial y^2} + {1 + \nu \over 2}{\partial^2 v \over \partial x \partial y} \right) = 0\\\\ &{E \over 1 - \nu^2}\left({\partial^2 v \over \partial y^2} + {1 - \nu \over 2}{\partial^2 v \over \partial x^2} + {1 + \nu \over 2}{\partial^2 u \over \partial x \partial y} \right) = 0 \end{align*} $$
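
  • These follow from substituting the plane-stress Hooke's law (stresses written in terms of the displacements $u$, $v$) into the in-plane equilibrium equations $\partial \sigma_{xx} / \partial x + \partial \sigma_{xy} / \partial y = 0$ and $\partial \sigma_{xy} / \partial x + \partial \sigma_{yy} / \partial y = 0$:


$$ \sigma_{xx} = {E \over 1 - \nu^2}\left({\partial u \over \partial x} + \nu {\partial v \over \partial y} \right), \quad \sigma_{yy} = {E \over 1 - \nu^2}\left({\partial v \over \partial y} + \nu {\partial u \over \partial x} \right), \quad \sigma_{xy} = {E \over 2(1 + \nu)}\left({\partial u \over \partial y} + {\partial v \over \partial x} \right) $$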

  • Two Dirichlet boundary conditions at $x = 0,\, y = 0$ (B.C. ①, B.C. ②):


$$ v(x,y) = 0 \qquad \text{at} \quad y = 0\\\\ u(x,y) = 0 \qquad \text{at} \quad x = 0 $$

  • Two free boundary conditions at $y = \omega / 2$ (B.C. ③):


$$ \sigma_{yy} = 0,\quad \sigma_{yx} = 0 $$

  • Free boundary condition and in-plane force boundary condition at $x = \omega / 2$ (B.C. ④):


$$ \sigma_{xx} = P \cdot h,\quad \sigma_{xy} = 0, \qquad \text{where} \quad P = f \cos\left({\pi y \over \omega}\right) $$

  • Construct a neural network and the loss function as summarized below:
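
  • With $r_1, r_2$ denoting the residuals of the two governing equations, the training objective is the standard composite PINN loss: the sum of mean-squared PDE residuals and mean-squared boundary-condition mismatches over the sampled points


$$ \mathcal{L} = \underbrace{\mathcal{L}_{r_1} + \mathcal{L}_{r_2}}_{\text{PDE}} + \underbrace{\mathcal{L}_{\sigma_{xx}} + \mathcal{L}_{\sigma_{xy}} + \mathcal{L}_{\sigma_{yx}} + \mathcal{L}_{\sigma_{yy}} + \mathcal{L}_{u} + \mathcal{L}_{v}}_{\text{B.C.}} $$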



1.2. Numerical Solution

  • The numerical (FEM) solution of this problem is illustrated in the figures below
    • the x- and y-direction displacements and stresses: $u$, $v$, $\sigma_{xx}$, $\sigma_{yy}$, respectively
  • We will solve this problem using a PINN and then compare the result with the numerical solution



In [ ]:
!pip install deepxde
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Collecting deepxde
  Downloading DeepXDE-1.4.0-py3-none-any.whl (132 kB)
     |████████████████████████████████| 132 kB 29.6 MB/s 
Requirement already satisfied: matplotlib in /usr/local/lib/python3.7/dist-packages (from deepxde) (3.2.2)
Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from deepxde) (1.4.1)
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from deepxde) (1.21.6)
Requirement already satisfied: scikit-learn in /usr/local/lib/python3.7/dist-packages (from deepxde) (1.0.2)
Collecting scikit-optimize
  Downloading scikit_optimize-0.9.0-py2.py3-none-any.whl (100 kB)
     |████████████████████████████████| 100 kB 12.3 MB/s 
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->deepxde) (1.4.2)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib->deepxde) (0.11.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->deepxde) (3.0.9)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->deepxde) (2.8.2)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from kiwisolver>=1.0.1->matplotlib->deepxde) (4.2.0)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.1->matplotlib->deepxde) (1.15.0)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/dist-packages (from scikit-learn->deepxde) (1.1.0)
Requirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from scikit-learn->deepxde) (3.1.0)
Collecting pyaml>=16.9
  Downloading pyaml-21.10.1-py2.py3-none-any.whl (24 kB)
Requirement already satisfied: PyYAML in /usr/local/lib/python3.7/dist-packages (from pyaml>=16.9->scikit-optimize->deepxde) (3.13)
Installing collected packages: pyaml, scikit-optimize, deepxde
Successfully installed deepxde-1.4.0 pyaml-21.10.1 scikit-optimize-0.9.0
In [ ]:
from deepxde.backend.set_default_backend import set_default_backend
set_default_backend("tensorflow")
Using backend: tensorflow

Setting the default backend to "tensorflow". You can change it in the ~/.deepxde/config.json file or export the DDEBACKEND environment variable. Valid options are: tensorflow.compat.v1, tensorflow, pytorch, jax, paddle (all lowercase)
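
As the message above notes, the backend can instead be selected with the DDEBACKEND environment variable, as long as it is set before deepxde is first imported; a minimal sketch for a fresh session:

import os
os.environ["DDEBACKEND"] = "tensorflow"   # must be set before `import deepxde`
import deepxde as dde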

1.3. PINN as PDE Solver

1.3.1. Define Parameters

In [ ]:
import deepxde as dde
import tensorflow as tf
import numpy as np
import random
import matplotlib.pyplot as plt
import math as m
In [ ]:
# Properties
E = 50     # Young's modulus (MPa)
nu = 0.3   # Poisson's ratio
a = 10     # width of the quarter plate, a = ω/2 (mm)
b = 10     # height of the quarter plate, b = ω/2 (mm)
f = 1      # magnitude of the stretching traction (MPa)
h = 1      # plate thickness (mm)

1.3.2. Define PDE with Boundary Conditions

In [ ]:
def boundary_x(X, on_boundary):    # right edge, x = a (B.C. ④)
    return on_boundary and np.isclose(X[0], a)

def boundary_y(X, on_boundary):    # top edge, y = b (B.C. ③)
    return on_boundary and np.isclose(X[1], b)

def boundary_y0(X, on_boundary):   # bottom edge, y = 0 (B.C. ①)
    return on_boundary and np.isclose(X[1], 0)

def boundary_x0(X, on_boundary):   # left edge, x = 0 (B.C. ②)
    return on_boundary and np.isclose(X[0], 0)
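
Each predicate receives a single point X and DeepXDE's on_boundary flag, and returns True only on its own edge; a quick sanity check with hypothetical test points (assuming a = b = 10):

print(boundary_x(np.array([10.0, 3.0]), True))    # True: right edge, x = a
print(boundary_x(np.array([3.0, 10.0]), True))    # False: this point is on the top edge
print(boundary_y0(np.array([3.0, 0.0]), True))    # True: bottom edge, y = 0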
In [ ]:
def pde(X, Y):
    # second derivatives of u (component 0) and v (component 1)
    du_xx = dde.grad.hessian(Y, X, i = 0, j = 0, component = 0)
    du_yy = dde.grad.hessian(Y, X, i = 1, j = 1, component = 0)
    du_xy = dde.grad.hessian(Y, X, i = 0, j = 1, component = 0)
    dv_xx = dde.grad.hessian(Y, X, i = 0, j = 0, component = 1)
    dv_yy = dde.grad.hessian(Y, X, i = 1, j = 1, component = 1)
    dv_xy = dde.grad.hessian(Y, X, i = 0, j = 1, component = 1)

    # residuals of the two in-plane equilibrium equations
    force_eq_x = (du_xx + 0.5 * (1 - nu) * du_yy + 0.5 * (1 + nu) * dv_xy) * E / (1 - nu**2)
    force_eq_y = (dv_yy + 0.5 * (1 - nu) * dv_xx + 0.5 * (1 + nu) * du_xy) * E / (1 - nu**2)

    return [force_eq_x, force_eq_y]
In [ ]:
def BC_xy(X, Y):
    # first derivatives of u and v
    du_x = dde.grad.jacobian(Y, X, i = 0, j = 0)
    du_y = dde.grad.jacobian(Y, X, i = 0, j = 1)
    dv_x = dde.grad.jacobian(Y, X, i = 1, j = 0)
    dv_y = dde.grad.jacobian(Y, X, i = 1, j = 1)

    # plane-stress Hooke's law
    sig_xx = (du_x + nu * dv_y) * E / (1 - nu**2)
    sig_yy = (dv_y + nu * du_x) * E / (1 - nu**2)
    sig_xy = (dv_x + du_y) * E / (1 + nu) / 2

    # applied traction P*h = f*h*cos(pi*y / (2b)) on the edge x = a
    sig_ex = f * h * tf.reshape(tf.math.cos(np.pi / (2 * b) * X[:,1]), [-1, 1])

    return sig_xx - sig_ex, sig_xy, sig_yy

1.3.3. Define Geometry and Implement Boundary Conditions

In [ ]:
geom = dde.geometry.Rectangle(xmin = [0, 0], xmax = [a, b])

bc_Y0 = dde.DirichletBC(geom, lambda x:0, boundary_y0, component = 1)
bc_X0 = dde.DirichletBC(geom, lambda x:0, boundary_x0, component = 0)

bc_free_yy = dde.OperatorBC(geom, lambda x, y, _ : BC_xy(x,y)[2], boundary_y)
bc_free_yx = dde.OperatorBC(geom, lambda x, y, _ : BC_xy(x,y)[1], boundary_y)

bc_traction = dde.OperatorBC(geom, lambda x, y, _ : BC_xy(x,y)[0], boundary_x)
bc_free_xy = dde.OperatorBC(geom, lambda x, y, _ : BC_xy(x,y)[1], boundary_x)
In [ ]:
data = dde.data.PDE(geom,
                    pde,
                    [bc_traction, bc_free_xy, bc_free_yx, bc_free_yy, bc_Y0, bc_X0],
                    num_domain = 9000,
                    num_boundary = 1000,
                    num_test = 100)
/usr/local/lib/python3.7/dist-packages/skopt/sampler/sobol.py:250: UserWarning: The balance properties of Sobol' points require n to be a power of 2. 0 points have been previously generated, then: n=0+9002=9002. 
  total_n_samples))
/usr/local/lib/python3.7/dist-packages/skopt/sampler/sobol.py:250: UserWarning: The balance properties of Sobol' points require n to be a power of 2. 0 points have been previously generated, then: n=0+1004=1004. 
  total_n_samples))
In [ ]:
plt.figure(figsize = (5,5))
plt.scatter(data.train_x[:,0], data.train_x[:,1], s = 1)
plt.xlabel('x (mm)')
plt.ylabel('y (mm)')
plt.show()

1.3.4. Define Network and Hyper-parameters

In [ ]:
layer_size = [2] + [64] * 5 + [2]
activation = "tanh"
initializer = "Glorot uniform"

net = dde.nn.tensorflow.fnn.FNN(layer_size, activation, initializer)

model = dde.Model(data, net)
model.compile("adam", lr = 1e-3)
Compiling model...
'compile' took 0.001698 s

1.3.5. Train (Adam Optimizer)

In [ ]:
losshistory, train_state = model.train(epochs = 10000)
dde.saveplot(losshistory, train_state, issave = False, isplot = True)
Training model...

WARNING:tensorflow:AutoGraph could not transform <function <lambda> at 0x7fae75516b00> and will run it as-is.
Cause: could not parse the source code of <function <lambda> at 0x7fae75516b00>: no matching AST found among candidates:
# coding=utf-8
(lambda x, on: np.array([on_boundary(x[i], on[i]) for i in range(len(x))]))
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING:tensorflow:AutoGraph could not transform <function <lambda> at 0x7fae754ba320> and will run it as-is.
Cause: could not parse the source code of <function <lambda> at 0x7fae754ba320>: no matching AST found among candidates:
# coding=utf-8
(lambda x, on: np.array([on_boundary(x[i], on[i]) for i in range(len(x))]))
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
Step      Train loss                                                                          Test loss                                                                           Test metric
0         [9.07e-01, 1.31e+00, 1.45e-01, 3.17e-01, 6.04e-01, 1.87e+00, 9.71e-02, 1.52e-02]    [7.61e-01, 1.20e+00, 1.45e-01, 3.17e-01, 6.04e-01, 1.87e+00, 9.71e-02, 1.52e-02]    []  
1000      [5.46e-04, 4.29e-04, 1.53e-04, 1.11e-04, 3.64e-05, 5.53e-05, 9.20e-05, 1.54e-04]    [3.68e-04, 3.72e-04, 1.53e-04, 1.11e-04, 3.64e-05, 5.53e-05, 9.20e-05, 1.54e-04]    []  
2000      [3.39e-04, 2.92e-04, 9.89e-05, 7.09e-05, 2.83e-05, 4.32e-05, 3.95e-05, 1.03e-04]    [2.20e-04, 2.21e-04, 9.89e-05, 7.09e-05, 2.83e-05, 4.32e-05, 3.95e-05, 1.03e-04]    []  
3000      [2.32e-04, 2.42e-04, 7.07e-05, 5.26e-05, 2.25e-05, 3.63e-05, 1.57e-05, 7.99e-05]    [1.44e-04, 1.73e-04, 7.07e-05, 5.26e-05, 2.25e-05, 3.63e-05, 1.57e-05, 7.99e-05]    []  
4000      [1.80e-04, 2.02e-04, 4.67e-05, 4.36e-05, 2.20e-05, 3.53e-05, 9.68e-06, 6.39e-05]    [1.07e-04, 1.41e-04, 4.67e-05, 4.36e-05, 2.20e-05, 3.53e-05, 9.68e-06, 6.39e-05]    []  
5000      [1.12e-04, 1.59e-04, 3.00e-05, 3.17e-05, 1.74e-05, 3.14e-05, 3.60e-06, 5.03e-05]    [6.20e-05, 1.07e-04, 3.00e-05, 3.17e-05, 1.74e-05, 3.14e-05, 3.60e-06, 5.03e-05]    []  
6000      [7.75e-05, 1.15e-04, 2.70e-05, 2.83e-05, 1.05e-05, 2.90e-05, 2.79e-06, 4.15e-05]    [4.14e-05, 7.78e-05, 2.70e-05, 2.83e-05, 1.05e-05, 2.90e-05, 2.79e-06, 4.15e-05]    []  
7000      [3.41e-05, 5.83e-05, 1.02e-05, 1.32e-05, 5.15e-06, 1.38e-05, 2.53e-06, 2.51e-05]    [1.98e-05, 3.57e-05, 1.02e-05, 1.32e-05, 5.15e-06, 1.38e-05, 2.53e-06, 2.51e-05]    []  
8000      [2.59e-05, 5.08e-05, 9.50e-05, 1.26e-05, 3.92e-06, 6.15e-05, 3.84e-06, 1.78e-05]    [1.58e-05, 3.33e-05, 9.50e-05, 1.26e-05, 3.92e-06, 6.15e-05, 3.84e-06, 1.78e-05]    []  
9000      [2.29e-05, 2.55e-05, 4.30e-05, 1.07e-05, 3.78e-06, 8.87e-05, 4.88e-06, 1.85e-05]    [1.33e-05, 1.49e-05, 4.30e-05, 1.07e-05, 3.78e-06, 8.87e-05, 4.88e-06, 1.85e-05]    []  
10000     [2.08e-05, 2.17e-05, 2.14e-06, 8.15e-06, 2.41e-06, 3.27e-05, 4.13e-06, 1.25e-05]    [1.30e-05, 1.25e-05, 2.14e-06, 8.15e-06, 2.41e-06, 3.27e-05, 4.13e-06, 1.25e-05]    []  

Best model at step 10000:
  train loss: 1.05e-04
  test loss: 8.75e-05
  test metric: []

'train' took 195.200952 s

1.3.6. Plot Displacement (Adam Optimizer)

In [ ]:
color_legend = [[0, 0.182], [-0.06, 0.011], [-0.0022, 1.0], [-0.15, 0.45]]
title = ['x-displacement ($u$)', 'y-displacement ($v$)', r'$\sigma_{xx}$', r'$\sigma_{yy}$']
In [ ]:
samples = geom.uniform_points(10201)
pde_disp = model.predict(samples)

plt.figure(figsize = (10, 4))
for idx in range(2):
    plt.subplot(1,2,idx+1)
    plt.scatter(samples[:, 0],
                samples[:, 1],
                c = pde_disp[:, idx],
                cmap = 'rainbow',
                s = 5)
    plt.clim(color_legend[idx])
    plt.title(title[idx])
    plt.xlim((0, a))
    plt.ylim((0, b))
    plt.colorbar()
plt.tight_layout()
plt.show()

1.3.7. Train More (L-BFGS Optimizer)

In [ ]:
dde.optimizers.config.set_LBFGS_options(maxiter = 3000)
model.compile("L-BFGS")
losshistory, train_state = model.train()
dde.saveplot(losshistory, train_state, issave = False, isplot = True)
Compiling model...
'compile' took 0.060927 s

Training model...

Step      Train loss                                                                          Test loss                                                                           Test metric
10000     [2.08e-05, 2.17e-05, 2.14e-06, 8.15e-06, 2.41e-06, 3.27e-05, 4.13e-06, 1.25e-05]    [1.30e-05, 1.25e-05, 2.14e-06, 8.15e-06, 2.41e-06, 3.27e-05, 4.13e-06, 1.25e-05]    []  
13000     [9.99e-07, 6.80e-07, 1.37e-07, 1.41e-07, 3.69e-08, 2.39e-07, 1.65e-07, 1.80e-07]    [7.23e-07, 3.93e-07, 1.37e-07, 1.41e-07, 3.69e-08, 2.39e-07, 1.65e-07, 1.80e-07]    []  

Best model at step 13000:
  train loss: 2.58e-06
  test loss: 2.02e-06
  test metric: []

'train' took 770.618288 s

1.3.8. Plot Displacement (Adam + L-BFGS)

In [ ]:
pde_disp = model.predict(samples)

plt.figure(figsize = (10, 4))
for idx in range(2):
    plt.subplot(1,2,idx+1)
    plt.scatter(samples[:, 0],
                samples[:, 1],
                c = pde_disp[:, idx],
                cmap = 'rainbow',
                s = 5)
    plt.clim(color_legend[idx])
    plt.title(title[idx], fontsize = 15)
    plt.xlabel('x (mm)', fontsize = 14)
    plt.ylabel('y (mm)', fontsize = 14)
    plt.axis('square')
    plt.xlim((0, a))
    plt.ylim((0, b))
    plt.colorbar()
plt.tight_layout()
plt.show()

1.3.9. Plot Stress

  • Since the neural network outputs only the x and y displacements, we use automatic differentiation (autograd) to compute the stress
    • If we assume that the Cauchy stress tensor components are linearly related to the von Kármán strains by Hooke's law, the stress can be written as:


$$ \sigma_{xx} = {E \over 1 - \nu^2}\left({\partial u \over \partial x} + \nu {\partial v \over \partial y} \right)\\\\ \sigma_{yy} = {E \over 1 - \nu^2}\left({\partial v \over \partial y} + \nu {\partial u \over \partial x} \right) $$

In [ ]:
def check_stress(net, X):
    X = tf.constant(X)

    # watch X so the displacements can be differentiated w.r.t. the inputs
    with tf.GradientTape(persistent = True) as tape:
        tape.watch(X)
        disp = net(X)
        u_pred = tf.reshape(disp[:,0], [-1, 1])
        v_pred = tf.reshape(disp[:,1], [-1, 1])

    du_xy = tape.gradient(u_pred, X)
    dv_xy = tape.gradient(v_pred, X)
    del tape   # release the persistent tape

    du_x, du_y = du_xy[:,0], du_xy[:,1]
    dv_x, dv_y = dv_xy[:,0], dv_xy[:,1]

    # plane-stress Hooke's law
    sig_xx = tf.reshape(((du_x + nu * dv_y) * E / (1 - nu**2)), [-1, 1])
    sig_yy = tf.reshape(((dv_y + nu * du_x) * E / (1 - nu**2)), [-1, 1])

    return sig_xx, sig_yy
In [ ]:
sig_x, sig_y = check_stress(model.net, samples)
pde_sig = np.hstack([sig_x, sig_y])

plt.figure(figsize = (10, 4))
for idx in range(2):
    plt.subplot(1,2,idx + 1)
    plt.scatter(samples[:, 0],
                samples[:, 1],
                c = pde_sig[:, idx],
                cmap = 'rainbow',
                s = 5)
    plt.clim(color_legend[idx + 2])
    plt.title(title[idx + 2], fontsize = 15)
    plt.xlabel('x (mm)', fontsize = 14)
    plt.ylabel('y (mm)', fontsize = 14)
    plt.axis('square')
    plt.xlim((0, a))
    plt.ylim((0, b))
    plt.colorbar()
plt.tight_layout()
plt.show()

2. Data-driven Approach with Big Data

2.1. Load and Sample Data

Data Download

In [ ]:
from google.colab import drive
drive.mount('/content/drive/')
Mounted at /content/drive/
In [ ]:
Plate_data = np.load('/content/drive/MyDrive/postech/KSNVE/data_files/Plate_data.npy')
loc = Plate_data[:, 0:2]
u = Plate_data[:, 2:3]
v = Plate_data[:, 3:4]
stress = Plate_data[:, 4:6]
In [ ]:
# training data sampled on the 11 vertical lines x = 0, 1, ..., 10
tr_sample = np.array([np.where(loc[:,0] == i) for i in range(11)]).reshape(-1,)
x_sample = loc[tr_sample,:]
u_sample = u[tr_sample,:]
v_sample = v[tr_sample,:]
In [ ]:
observe_u = dde.icbc.PointSetBC(x_sample, u_sample, component = 0)
observe_v = dde.icbc.PointSetBC(x_sample, v_sample, component = 1)
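
Each PointSetBC penalizes the mismatch between the network output and the observed values at the given points, so (with DeepXDE's default MSE loss) each observation set contributes a data term of the form:

$$ \mathcal{L}_{u} = {1 \over N} \sum_{i=1}^{N} \left( u_{NN}(x_i, y_i) - u_i \right)^2 $$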

2.2. Define Geometry

In [ ]:
geom = dde.geometry.Rectangle(xmin = [0, 0], xmax = [a, b])
data = dde.data.PDE(geom,
                    None,
                    [observe_u, observe_v],
                    num_domain = 0,
                    num_boundary = 0,
                    num_test = 100)
In [ ]:
plt.figure(figsize = (5,5))
plt.scatter(data.train_x[:,0], data.train_x[:,1], s = 1)
plt.xlim([-0.5,10.5])
plt.xlabel('x (mm)')
plt.ylabel('y (mm)')
plt.show()

2.3. Define Network and Hyper-parameters

In [ ]:
layer_size = [2] + [64] * 5 + [2]
activation = "tanh"
initializer = "Glorot uniform"

net = dde.maps.FNN(layer_size, activation, initializer)

model = dde.Model(data, net)
model.compile("adam", lr = 1e-3)
Compiling model...
'compile' took 0.000606 s

2.4. Train (Adam Optimizer)

In [ ]:
losshistory, train_state = model.train(epochs = 10000)
dde.saveplot(losshistory, train_state, issave = False, isplot = False)
Training model...

Step      Train loss              Test loss               Test metric
0         [1.43e-02, 6.80e-01]    [1.43e-02, 6.80e-01]    []  
1000      [5.62e-06, 1.24e-05]    [5.62e-06, 1.24e-05]    []  
2000      [3.22e-06, 4.58e-06]    [3.22e-06, 4.58e-06]    []  
3000      [2.12e-06, 3.40e-06]    [2.12e-06, 3.40e-06]    []  
4000      [2.14e-06, 2.47e-06]    [2.14e-06, 2.47e-06]    []  
5000      [3.87e-05, 7.92e-05]    [3.87e-05, 7.92e-05]    []  
6000      [1.79e-05, 2.35e-05]    [1.79e-05, 2.35e-05]    []  
7000      [2.95e-04, 2.21e-04]    [2.95e-04, 2.21e-04]    []  
8000      [2.00e-07, 5.65e-07]    [2.00e-07, 5.65e-07]    []  
9000      [1.34e-07, 3.09e-07]    [1.34e-07, 3.09e-07]    []  
10000     [9.49e-08, 1.95e-07]    [9.49e-08, 1.95e-07]    []  

Best model at step 10000:
  train loss: 2.90e-07
  test loss: 2.90e-07
  test metric: []

'train' took 13.659126 s

2.5. Train More (L-BFGS Optimizer)

In [ ]:
dde.optimizers.config.set_LBFGS_options(maxiter = 5000)
model.compile("L-BFGS")
losshistory, train_state = model.train()
dde.saveplot(losshistory, train_state, issave = False, isplot = True)
Compiling model...
'compile' took 0.013830 s

Training model...

Step      Train loss              Test loss               Test metric
10000     [9.49e-08, 1.95e-07]    [9.49e-08, 1.95e-07]    []  
11354     [2.90e-09, 3.26e-09]    [2.90e-09, 3.26e-09]    []  

Best model at step 11354:
  train loss: 6.15e-09
  test loss: 6.15e-09
  test metric: []

'train' took 268.365061 s

2.6. Plot Results (Adam + L-BFGS)

In [ ]:
big_disp = model.predict(samples)
sig_x, sig_y = np.array(check_stress(model.net, samples))
big_sig = np.hstack([sig_x, sig_y])
big_result = np.hstack([big_disp, big_sig])

plt.figure(figsize = (10, 8))
for idx in range(4):
    plt.subplot(2,2,idx + 1)
    plt.scatter(samples[:, 0],
                samples[:, 1],
                c = big_result[:,idx],
                cmap = 'rainbow',
                s = 5)
    plt.title(title[idx], fontsize = 15)
    plt.xlabel('x (mm)', fontsize = 14)
    plt.ylabel('y (mm)', fontsize = 14)
    plt.axis('square')
    plt.xlim((0, a))
    plt.ylim((0, b))
    plt.colorbar()
plt.tight_layout()
plt.show()

3. Data-driven Approach with Small Data

3.1. Sample Data

In [ ]:
# training data sampled on only three vertical lines: x = 2, 5, 8
tr_sample = np.array([np.where(loc[:,0] == i) for i in [2,5,8]]).reshape(-1,)
x_sample = loc[tr_sample,:]
u_sample = u[tr_sample,:]
v_sample = v[tr_sample,:]
In [ ]:
observe_u = dde.icbc.PointSetBC(x_sample, u_sample, component = 0)
observe_v = dde.icbc.PointSetBC(x_sample, v_sample, component = 1)

3.2. Define Geometry

In [ ]:
geom = dde.geometry.Rectangle(xmin = [0, 0], xmax = [a, b])
data = dde.data.PDE(geom,
                    None,
                    [observe_u, observe_v],
                    num_domain = 0,
                    num_boundary = 0,
                    num_test = 100)
In [ ]:
plt.figure(figsize = (5,5))
plt.scatter(data.train_x[:,0], data.train_x[:,1], s = 1)
plt.xlim([-0.5,10.5])
plt.xlabel('x (mm)')
plt.ylabel('y (mm)')
plt.show()

3.3. Define Network and Hyper-parameters

In [ ]:
layer_size = [2] + [64] * 5 + [2]
activation = "tanh"
initializer = "Glorot uniform"

net = dde.maps.FNN(layer_size, activation, initializer)

model = dde.Model(data, net)
model.compile("adam", lr = 1e-3)
Compiling model...
'compile' took 0.000466 s

3.4. Train (Adam Optimizer)

In [ ]:
losshistory, train_state = model.train(epochs = 10000)
dde.saveplot(losshistory, train_state, issave = False, isplot = False)
Training model...

Step      Train loss              Test loss               Test metric
0         [5.66e-01, 1.17e-01]    [5.66e-01, 1.17e-01]    []  
1000      [7.23e-06, 1.72e-06]    [7.23e-06, 1.72e-06]    []  
2000      [1.99e-06, 1.45e-07]    [1.99e-06, 1.45e-07]    []  
3000      [4.59e-07, 6.24e-08]    [4.59e-07, 6.24e-08]    []  
4000      [1.82e-07, 1.23e-06]    [1.82e-07, 1.23e-06]    []  
5000      [1.46e-07, 5.34e-08]    [1.46e-07, 5.34e-08]    []  
6000      [4.39e-07, 5.56e-07]    [4.39e-07, 5.56e-07]    []  
7000      [3.22e-08, 4.66e-08]    [3.22e-08, 4.66e-08]    []  
8000      [5.46e-07, 8.47e-08]    [5.46e-07, 8.47e-08]    []  
9000      [5.77e-05, 7.88e-05]    [5.77e-05, 7.88e-05]    []  
10000     [2.72e-08, 9.30e-09]    [2.72e-08, 9.30e-09]    []  

Best model at step 10000:
  train loss: 3.65e-08
  test loss: 3.65e-08
  test metric: []

'train' took 13.839770 s

3.5. Train More (L-BFGS Optimizer)

In [ ]:
dde.optimizers.config.set_LBFGS_options(maxiter = 5000)
model.compile("L-BFGS")
losshistory, train_state = model.train()
dde.saveplot(losshistory, train_state, issave = False, isplot = True)
Compiling model...
'compile' took 0.015548 s

Training model...

Step      Train loss              Test loss               Test metric
10000     [2.72e-08, 9.30e-09]    [2.72e-08, 9.30e-09]    []  
10102     [3.10e-09, 1.79e-09]    [3.10e-09, 1.79e-09]    []  

Best model at step 10102:
  train loss: 4.89e-09
  test loss: 4.89e-09
  test metric: []

'train' took 21.782147 s

3.6. Plot Results (Adam + L-BFGS)

In [ ]:
small_disp = model.predict(samples)
sig_x, sig_y = np.array(check_stress(model.net, samples))
small_sig = np.hstack([sig_x, sig_y])
small_result = np.hstack([small_disp, small_sig])

plt.figure(figsize = (10, 8))
for idx in range(4):
    plt.subplot(2,2,idx + 1)
    plt.scatter(samples[:, 0],
                samples[:, 1],
                c = small_result[:,idx],
                cmap = 'rainbow',
                s = 5)
    plt.title(title[idx], fontsize = 15)
    plt.xlabel('x (mm)', fontsize = 14)
    plt.ylabel('y (mm)', fontsize = 14)
    plt.axis('square')
    plt.xlim((0, a))
    plt.ylim((0, b))
    plt.colorbar()
plt.tight_layout()
plt.show()

4. PINN with Small Data

In [ ]:
data = dde.data.PDE(geom,
                    pde,
                    [bc_traction, bc_free_xy, bc_free_yx, bc_free_yy, bc_Y0, bc_X0, observe_u, observe_v],
                    num_domain = 100,
                    num_boundary = 100,
                    num_test = 100,
                    train_distribution = 'LHS')
In [ ]:
plt.figure(figsize = (5,5))
plt.scatter(data.train_x[:,0], data.train_x[:,1], s = 1)
plt.xlabel('x (mm)')
plt.ylabel('y (mm)')
plt.show()

4.1. Define Network and Hyper-parameters

In [ ]:
layer_size = [2] + [64] * 5 + [2]
activation = "tanh"
initializer = "Glorot uniform"

net = dde.maps.FNN(layer_size, activation, initializer)

model = dde.Model(data, net)
model.compile("adam", lr = 1e-3, loss_weights = [1, 1, 1, 1, 1, 1, 1, 1, 9, 9])   # weight the two data terms (observe_u, observe_v) 9x relative to the PDE/BC terms
Compiling model...
'compile' took 0.001331 s

4.2. Train (Adam Optimizer)

In [ ]:
losshistory, train_state = model.train(epochs = 10000)
dde.saveplot(losshistory, train_state, issave = False, isplot = False)
Training model...

Step      Train loss                                                                                              Test loss                                                                                               Test metric
0         [1.64e+00, 6.02e+00, 6.92e-01, 1.31e+00, 7.07e+00, 2.03e+01, 6.89e-01, 1.83e-03, 1.92e-01, 3.02e+00]    [1.28e+00, 5.31e+00, 6.92e-01, 1.31e+00, 7.07e+00, 2.03e+01, 6.89e-01, 1.83e-03, 1.92e-01, 3.02e+00]    []  
1000      [1.06e-03, 1.09e-03, 4.79e-04, 3.19e-04, 7.37e-05, 1.20e-04, 3.42e-04, 2.19e-05, 8.15e-04, 4.29e-04]    [1.09e-03, 1.01e-03, 4.79e-04, 3.19e-04, 7.37e-05, 1.20e-04, 3.42e-04, 2.19e-05, 8.15e-04, 4.29e-04]    []  
2000      [1.86e-03, 2.19e-03, 2.42e-04, 1.97e-04, 6.35e-05, 7.73e-05, 2.44e-04, 1.06e-04, 1.46e-03, 1.14e-03]    [1.10e-03, 1.74e-03, 2.42e-04, 1.97e-04, 6.35e-05, 7.73e-05, 2.44e-04, 1.06e-04, 1.46e-03, 1.14e-03]    []  
3000      [2.31e-04, 5.24e-04, 1.17e-04, 9.65e-05, 3.28e-05, 6.53e-05, 4.54e-05, 2.05e-06, 9.69e-05, 5.51e-05]    [1.88e-04, 9.30e-04, 1.17e-04, 9.65e-05, 3.28e-05, 6.53e-05, 4.54e-05, 2.05e-06, 9.69e-05, 5.51e-05]    []  
4000      [1.15e-04, 3.37e-04, 7.68e-05, 5.48e-05, 2.26e-05, 3.82e-05, 1.99e-05, 2.21e-06, 5.05e-05, 2.17e-05]    [1.04e-04, 4.80e-04, 7.68e-05, 5.48e-05, 2.26e-05, 3.82e-05, 1.99e-05, 2.21e-06, 5.05e-05, 2.17e-05]    []  
5000      [5.88e-05, 2.33e-04, 4.06e-05, 3.27e-05, 1.50e-05, 2.60e-05, 1.22e-05, 2.37e-06, 2.68e-05, 1.07e-05]    [7.61e-05, 2.87e-04, 4.06e-05, 3.27e-05, 1.50e-05, 2.60e-05, 1.22e-05, 2.37e-06, 2.68e-05, 1.07e-05]    []  
6000      [2.98e-03, 4.72e-03, 8.24e-03, 4.67e-04, 7.76e-04, 3.98e-03, 7.60e-04, 4.74e-04, 4.99e-03, 8.39e-03]    [2.15e-03, 3.79e-03, 8.24e-03, 4.67e-04, 7.76e-04, 3.98e-03, 7.60e-04, 4.74e-04, 4.99e-03, 8.39e-03]    []  
7000      [3.56e-04, 7.79e-04, 1.92e-03, 6.05e-05, 3.09e-04, 1.46e-03, 1.95e-04, 7.28e-05, 3.79e-04, 1.52e-03]    [2.78e-04, 8.87e-04, 1.92e-03, 6.05e-05, 3.09e-04, 1.46e-03, 1.95e-04, 7.28e-05, 3.79e-04, 1.52e-03]    []  
8000      [2.53e-04, 2.52e-04, 6.71e-05, 1.01e-05, 1.59e-05, 3.97e-05, 4.69e-05, 5.06e-05, 7.61e-04, 5.88e-04]    [1.97e-04, 1.72e-04, 6.71e-05, 1.01e-05, 1.59e-05, 3.97e-05, 4.69e-05, 5.06e-05, 7.61e-04, 5.88e-04]    []  
9000      [1.96e-05, 3.62e-05, 2.95e-06, 6.67e-06, 4.48e-06, 6.07e-06, 3.00e-06, 1.39e-06, 9.07e-06, 1.79e-06]    [3.40e-05, 4.83e-05, 2.95e-06, 6.67e-06, 4.48e-06, 6.07e-06, 3.00e-06, 1.39e-06, 9.07e-06, 1.79e-06]    []  
10000     [1.17e-04, 3.21e-05, 1.65e-05, 2.57e-05, 2.00e-06, 7.02e-05, 8.63e-06, 6.37e-05, 8.37e-04, 8.57e-05]    [9.81e-05, 4.67e-05, 1.65e-05, 2.57e-05, 2.00e-06, 7.02e-05, 8.63e-06, 6.37e-05, 8.37e-04, 8.57e-05]    []  

Best model at step 9000:
  train loss: 9.12e-05
  test loss: 1.18e-04
  test metric: []

'train' took 35.418515 s

4.3. Train More (L-BFGS Optimizer)

In [ ]:
dde.optimizers.config.set_LBFGS_options(maxiter = 5000)
model.compile("L-BFGS")
losshistory, train_state = model.train()
dde.saveplot(losshistory, train_state, issave = False, isplot = True)
Compiling model...
'compile' took 0.062198 s

Training model...

Step      Train loss                                                                                              Test loss                                                                                               Test metric
10000     [1.17e-04, 3.21e-05, 1.65e-05, 2.57e-05, 2.00e-06, 7.02e-05, 8.63e-06, 6.37e-05, 9.30e-05, 9.52e-06]    [9.81e-05, 4.67e-05, 1.65e-05, 2.57e-05, 2.00e-06, 7.02e-05, 8.63e-06, 6.37e-05, 9.30e-05, 9.52e-06]    []  
11433     [1.90e-06, 1.58e-06, 1.77e-07, 9.64e-08, 5.10e-08, 2.71e-07, 1.62e-08, 7.50e-08, 1.07e-07, 7.71e-08]    [3.59e-06, 3.15e-06, 1.77e-07, 9.65e-08, 5.10e-08, 2.71e-07, 1.62e-08, 7.50e-08, 1.07e-07, 7.71e-08]    []  

Best model at step 11433:
  train loss: 4.36e-06
  test loss: 7.61e-06
  test metric: []

'train' took 307.998509 s

4.4. Plot Results (Adam + L-BFGS)

In [ ]:
pde_small_disp = model.predict(samples)
sig_x, sig_y = np.array(check_stress(model.net, samples))
pde_small_sig = np.hstack([sig_x, sig_y])
pde_small_result = np.hstack([pde_small_disp, pde_small_sig])

diag_x = np.arange(100, -1, -1)/10

plt.figure(figsize = (10, 8))
for idx in range(4):
    plt.subplot(2,2,idx+1)
    plt.scatter(samples[:, 0],
                samples[:, 1],
                c = pde_small_result[:,idx],
                cmap = 'rainbow',
                s = 5)
    plt.plot(diag_x, diag_x[::-1], '--', c = 'k', linewidth = 2)
    plt.title(title[idx], fontsize = 15)
    plt.xlabel('x (mm)', fontsize = 14)
    plt.ylabel('y (mm)', fontsize = 14)
    plt.axis('square')
    plt.xlim((0, a))
    plt.ylim((0, b))
    plt.colorbar()
plt.tight_layout()
plt.show()
WARNING:tensorflow:5 out of the last 5 calls to <function Model._compile_tensorflow.<locals>.outputs at 0x7fadf6051320> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for  more details.

4.5. Plot with All Results

In [ ]:
diag_ = [i for i in range(u.shape[0]) if loc[i,0] + loc[i,1] == 10]

diag_result_true = [u[diag_], v[diag_], stress[diag_, 0], stress[diag_, 1]]
diag_result_pde = [pde_disp[diag_, 0], pde_disp[diag_, 1], pde_sig[diag_, 0], pde_sig[diag_, 1]]
diag_result_big = [big_disp[diag_, 0], big_disp[diag_, 1], big_sig[diag_, 0], big_sig[diag_, 1]]
diag_result_small = [small_disp[diag_, 0], small_disp[diag_, 1], small_sig[diag_, 0], small_sig[diag_, 1]]
diag_result_pde_small = [pde_small_disp[diag_, 0], pde_small_disp[diag_, 1], pde_small_sig[diag_, 0], pde_small_sig[diag_, 1]]

plt.figure(figsize = (10, 9))
for idx in range(4):
    plt.subplot(2,2,idx+1)
    plt.plot(loc[diag_,0], diag_result_true[idx], 'k' , linewidth = 3, label = 'FEM')
    plt.plot(loc[diag_,0], diag_result_pde[idx], '--' , linewidth = 3, label = 'PDE')
    plt.plot(loc[diag_,0], diag_result_big[idx], '-.', linewidth = 3, label = 'Big data')
    plt.plot(loc[diag_,0], diag_result_small[idx], ':', linewidth = 3, label = 'Small data')
    plt.plot(loc[diag_,0], diag_result_pde_small[idx], '.', linewidth = 3, label = 'PDE+small data')
    plt.xlabel('x (mm)', fontsize = 15)
    plt.ylabel(title[idx], fontsize = 15)
    plt.xlim((0, a))
    plt.legend(fontsize = 11)
plt.tight_layout()
plt.show()
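
To attach numbers to this comparison, one can compute a relative $L_2$ error along the plotted diagonal for each approach; a minimal sketch, reusing the arrays defined above:

def rel_l2(pred, true):
    # relative L2 error of a prediction against the FEM reference
    pred = np.asarray(pred).reshape(-1)
    true = np.asarray(true).reshape(-1)
    return np.linalg.norm(pred - true) / np.linalg.norm(true)

for name, result in [('PDE', diag_result_pde),
                     ('Big data', diag_result_big),
                     ('Small data', diag_result_small),
                     ('PDE+small data', diag_result_pde_small)]:
    errs = [rel_l2(result[idx], diag_result_true[idx]) for idx in range(4)]
    print(name, ['%.2e' % e for e in errs])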
In [1]:
%%javascript
$.getScript('https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js')