PINN with Data

Solid Mechanics Example


By Prof. Seungchul Lee
http://iai.postech.ac.kr/
Industrial AI Lab at POSTECH

Table of Contents

1. Thin Plate (Solid Mechanics)

1.1. Problem Setup

  • We will solve the thin-plate equations to find the displacement and stress distributions of a thin plate
  • Based on the Kirchhoff-Love plate theory, three hypotheses are used:
    • straight lines normal to the mid-surface remain straight after deformation
    • straight lines normal to the mid-surface remain normal to the mid-surface after deformation
    • the thickness of the plate does not change during deformation
  • A non-uniform stretching force is applied to a square elastic plate
  • Only one quarter of the plate is considered, since the geometry and in-plane forces are symmetric (yellow domain)





  • Problem properties


$$ E = 50 \operatorname{MPa}, \quad \nu = 0.3, \quad \omega = 20 \operatorname{mm}, \quad h = 1 \operatorname{mm}, \quad f = 1 \operatorname{MPa} $$

  • Governing equations (Föppl–von Kármán equations) for the isotropic elastic plate:


$$ \begin{align*} &{E \over 1 - \nu^2}\left({\partial^2 u \over \partial x^2} + {1 - \nu \over 2}{\partial^2 u \over \partial y^2} + {1 + \nu \over 2}{\partial^2 v \over \partial x \partial y} \right) = 0\\\\ &{E \over 1 - \nu^2}\left({\partial^2 v \over \partial y^2} + {1 - \nu \over 2}{\partial^2 v \over \partial x^2} + {1 + \nu \over 2}{\partial^2 u \over \partial x \partial y} \right) = 0 \end{align*} $$

  • Two Dirichlet boundary conditions at $x = 0,\, y = 0\; (B.C.①, B.C.②)$:


$$ v(x,y) = 0 \qquad \text{at} \quad y = 0\\\\ u(x,y) = 0 \qquad \text{at} \quad x = 0 $$

  • Two free boundary conditions at $y = \omega / 2\; (B.C.③)$:


$$ \sigma_{yy} = 0,\quad \sigma_{yx} = 0 $$

  • Free boundary condition and in-plane force boundary condition at $x = \omega / 2\; (B.C.④)$:


$$ \sigma_{xx} = f \, h \cos\left({\pi y \over \omega}\right),\quad \sigma_{xy} = 0 $$

  • Construct a neural network and the loss functions as sketched below:
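  • The architecture figure is omitted in this export. As a rough sketch, DeepXDE builds one mean-squared term per governing equation and per boundary condition, which gives the eight loss columns in the training log below:


$$ \mathcal{L} = {1 \over N_f}\sum_{i=1}^{N_f}\Big( r_x(x_i, y_i)^2 + r_y(x_i, y_i)^2 \Big) + \sum_{k=1}^{6} {1 \over N_k}\sum_{j=1}^{N_k} \Big(\text{B.C. residual}_k(x_j, y_j)\Big)^2 $$

where $r_x$ and $r_y$ denote the left-hand sides of the two governing equations above.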



1.2. Numerical Solution

  • The numerical (FEM) solution of this problem is illustrated in the figures below
    • $x$- and $y$-direction displacements and stresses: $u$, $v$, $\sigma_{xx}$, $\sigma_{yy}$, respectively
  • We will solve this problem using a PINN and then compare it with the numerical solution



In [1]:
!pip install deepxde
Requirement already satisfied: deepxde in /usr/local/lib/python3.7/dist-packages (1.4.0)
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from deepxde) (1.21.6)
Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from deepxde) (1.4.1)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.7/dist-packages (from deepxde) (3.2.2)
Requirement already satisfied: scikit-optimize in /usr/local/lib/python3.7/dist-packages (from deepxde) (0.9.0)
Requirement already satisfied: scikit-learn in /usr/local/lib/python3.7/dist-packages (from deepxde) (1.0.2)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib->deepxde) (0.11.0)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->deepxde) (2.8.2)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->deepxde) (1.4.2)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->deepxde) (3.0.9)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from kiwisolver>=1.0.1->matplotlib->deepxde) (4.2.0)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.1->matplotlib->deepxde) (1.15.0)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/dist-packages (from scikit-learn->deepxde) (1.1.0)
Requirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from scikit-learn->deepxde) (3.1.0)
Requirement already satisfied: pyaml>=16.9 in /usr/local/lib/python3.7/dist-packages (from scikit-optimize->deepxde) (21.10.1)
Requirement already satisfied: PyYAML in /usr/local/lib/python3.7/dist-packages (from pyaml>=16.9->scikit-optimize->deepxde) (3.13)
In [2]:
from deepxde.backend.set_default_backend import set_default_backend
set_default_backend("tensorflow")
Using backend: tensorflow

Setting the default backend to "tensorflow". You can change it in the ~/.deepxde/config.json file or export the DDEBACKEND environment variable. Valid options are: tensorflow.compat.v1, tensorflow, pytorch, jax, paddle (all lowercase)

1.3. PINN as PDE Solver

1.3.1. Define Parameters

In [3]:
import deepxde as dde
import tensorflow as tf
import numpy as np
import random
import matplotlib.pyplot as plt
import math as m
In [4]:
# Material and geometry properties (units as in the problem setup)
E = 50     # Young's modulus (MPa)
nu = 0.3   # Poisson's ratio
a = 10     # quarter-plate width, omega/2 (mm)
b = 10     # quarter-plate height, omega/2 (mm)
f = 1      # traction magnitude (MPa)
h = 1      # plate thickness (mm)

1.3.2. Define PDE with Boundary Conditions

In [5]:
def boundary_x(X, on_boundary):
    return on_boundary and np.isclose(X[0], a)

def boundary_y(X, on_boundary):
    return on_boundary and np.isclose(X[1], b)

def boundary_y0(X, on_boundary):
    return on_boundary and np.isclose(X[1], 0)

def boundary_x0(X, on_boundary):
    return on_boundary and np.isclose(X[0], 0)
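
  • DeepXDE calls each predicate with a point X = (x, y) and an on_boundary flag that is True whenever X lies on any edge of the geometry. As an optional sanity check (not part of the original notebook), the predicates can be evaluated directly:

# Optional sanity check: each predicate should flag only its own edge
print(boundary_x(np.array([a, 5.0]), True))     # True: on the x = a edge
print(boundary_x(np.array([5.0, b]), True))     # False: this is the y = b edge
print(boundary_y0(np.array([5.0, 0.0]), True))  # True: on the y = 0 edge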
In [6]:
def pde(X, Y):
    du_xx = dde.grad.hessian(Y, X, i = 0, j = 0, component = 0)
    du_yy = dde.grad.hessian(Y, X, i = 1, j = 1, component = 0)
    du_xy = dde.grad.hessian(Y, X, i = 0, j = 1, component = 0)
    dv_xx = dde.grad.hessian(Y, X, i = 0, j = 0, component = 1)
    dv_yy = dde.grad.hessian(Y, X, i = 1, j = 1, component = 1)
    dv_xy = dde.grad.hessian(Y, X, i = 0, j = 1, component = 1)
    
    force_eq_x = (du_xx + 0.5 * (1 - nu) * du_yy + 0.5 * (1 + nu) * dv_xy) * E / (1 - nu**2)
    force_eq_y = (dv_yy + 0.5 * (1 - nu) * dv_xx + 0.5 * (1 + nu) * du_xy) * E / (1 - nu**2)
    
    return [force_eq_x, force_eq_y]
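
  • Here dde.grad.hessian(Y, X, i = p, j = q, component = k) returns $\partial^2 Y_k / \partial X_p \partial X_q$, so du_xx is $\partial^2 u / \partial x^2$, dv_xy is $\partial^2 v / \partial x \partial y$, and so on; the two returned residuals are the left-hand sides of the governing equations.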
In [7]:
def BC_xy(X, Y):
    du_x = dde.grad.jacobian(Y, X, i = 0, j = 0)
    du_y = dde.grad.jacobian(Y, X, i = 0, j = 1)
    dv_x = dde.grad.jacobian(Y, X, i = 1, j = 0)
    dv_y = dde.grad.jacobian(Y, X, i = 1, j = 1)
    
    sig_xx = (du_x + nu * dv_y) * E / (1 - nu**2)
    sig_yy = (dv_y + nu * du_x) * E / (1 - nu**2)
    sig_xy = (dv_x + du_y) * E / (1 + nu) / 2
    
    sig_ex = f * h * tf.reshape(tf.math.cos(np.pi / (2 * b) * X[:,1]), [-1, 1])
    
    return sig_xx-sig_ex, sig_xy, sig_yy

1.3.3. Define Geometry and Implement Boundary Conditions

In [8]:
geom = dde.geometry.Rectangle(xmin = [0, 0], xmax = [a, b])

bc_Y0 = dde.DirichletBC(geom, lambda x:0, boundary_y0, component = 1)
bc_X0 = dde.DirichletBC(geom, lambda x:0, boundary_x0, component = 0)

bc_free_yy = dde.OperatorBC(geom, lambda x, y, _ : BC_xy(x,y)[2], boundary_y)
bc_free_yx = dde.OperatorBC(geom, lambda x, y, _ : BC_xy(x,y)[1], boundary_y)

bc_traction = dde.OperatorBC(geom, lambda x, y, _ : BC_xy(x,y)[0], boundary_x)
bc_free_xy = dde.OperatorBC(geom, lambda x, y, _ : BC_xy(x,y)[1], boundary_x)
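
  • Each OperatorBC drives one stress residual from BC_xy to zero on its edge: bc_traction and bc_free_xy enforce $\sigma_{xx} = f h \cos(\pi y / \omega)$ and $\sigma_{xy} = 0$ on $x = a$, while bc_free_yy and bc_free_yx enforce $\sigma_{yy} = 0$ and $\sigma_{yx} = 0$ on $y = b$.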
In [9]:
data = dde.data.PDE(geom, 
                    pde, 
                    [bc_traction, bc_free_xy, bc_free_yx, bc_free_yy, bc_Y0, bc_X0], 
                    num_domain = 9000,
                    num_boundary = 1000,
                    num_test = 100)
/usr/local/lib/python3.7/dist-packages/skopt/sampler/sobol.py:250: UserWarning: The balance properties of Sobol' points require n to be a power of 2. 0 points have been previously generated, then: n=0+9002=9002. 
  total_n_samples))
/usr/local/lib/python3.7/dist-packages/skopt/sampler/sobol.py:250: UserWarning: The balance properties of Sobol' points require n to be a power of 2. 0 points have been previously generated, then: n=0+1004=1004. 
  total_n_samples))
In [10]:
plt.figure(figsize = (5,5))
plt.scatter(data.train_x[:,0], data.train_x[:,1], s = 1)
plt.xlabel('x (mm)')
plt.ylabel('y (mm)')
plt.show()

1.3.4. Define Network and Hyper-parameters

In [11]:
layer_size = [2] + [64] * 5 + [2]
activation = "tanh"
initializer = "Glorot uniform"

net = dde.nn.tensorflow.fnn.FNN(layer_size, activation, initializer)

model = dde.Model(data, net)
model.compile("adam", lr = 1e-3)
Compiling model...
'compile' took 0.000484 s

1.3.5. Train (Adam Optimizer)

In [12]:
losshistory, train_state = model.train(epochs = 10000)
dde.saveplot(losshistory, train_state, issave = False, isplot = True)
Training model...

WARNING:tensorflow:AutoGraph could not transform <function <lambda> at 0x7f043ef675f0> and will run it as-is.
Cause: could not parse the source code of <function <lambda> at 0x7f043ef675f0>: no matching AST found among candidates:
# coding=utf-8
(lambda x, on: np.array([on_boundary(x[i], on[i]) for i in range(len(x))]))
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING:tensorflow:AutoGraph could not transform <function <lambda> at 0x7f043ef67b00> and will run it as-is.
Cause: could not parse the source code of <function <lambda> at 0x7f043ef67b00>: no matching AST found among candidates:
# coding=utf-8
(lambda x, on: np.array([on_boundary(x[i], on[i]) for i in range(len(x))]))
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
Step      Train loss                                                                          Test loss                                                                           Test metric
0         [7.83e+00, 1.13e+00, 1.56e+00, 1.73e+00, 9.55e-01, 1.07e+00, 1.26e-02, 5.28e-01]    [5.09e+00, 4.83e-01, 1.56e+00, 1.73e+00, 9.55e-01, 1.07e+00, 1.26e-02, 5.28e-01]    []  
1000      [6.91e-04, 4.59e-04, 2.59e-04, 1.04e-04, 3.57e-05, 3.78e-05, 1.02e-04, 1.41e-04]    [5.81e-04, 3.80e-04, 2.59e-04, 1.04e-04, 3.57e-05, 3.78e-05, 1.02e-04, 1.41e-04]    []  
2000      [3.21e-04, 3.52e-04, 1.12e-04, 5.58e-05, 2.71e-05, 4.35e-05, 5.20e-05, 8.19e-05]    [2.57e-04, 2.72e-04, 1.12e-04, 5.58e-05, 2.71e-05, 4.35e-05, 5.20e-05, 8.19e-05]    []  
3000      [2.11e-04, 2.76e-04, 7.34e-05, 4.35e-05, 2.42e-05, 3.93e-05, 2.63e-05, 6.73e-05]    [1.59e-04, 2.07e-04, 7.34e-05, 4.35e-05, 2.42e-05, 3.93e-05, 2.63e-05, 6.73e-05]    []  
4000      [1.58e-04, 2.21e-04, 5.32e-05, 3.89e-05, 2.03e-05, 3.60e-05, 1.48e-05, 6.24e-05]    [1.16e-04, 1.64e-04, 5.32e-05, 3.89e-05, 2.03e-05, 3.60e-05, 1.48e-05, 6.24e-05]    []  
5000      [7.32e-04, 9.15e-04, 2.72e-04, 3.18e-05, 2.68e-05, 5.38e-05, 4.73e-05, 1.08e-04]    [4.71e-04, 6.13e-04, 2.72e-04, 3.18e-05, 2.68e-05, 5.38e-05, 4.73e-05, 1.08e-04]    []  
6000      [1.11e-04, 1.65e-04, 3.77e-05, 3.51e-05, 1.77e-05, 3.15e-05, 6.83e-06, 5.69e-05]    [8.17e-05, 1.17e-04, 3.77e-05, 3.51e-05, 1.77e-05, 3.15e-05, 6.83e-06, 5.69e-05]    []  
7000      [1.00e-04, 1.59e-04, 3.38e-05, 3.64e-05, 1.89e-05, 3.19e-05, 6.13e-06, 5.02e-05]    [7.34e-05, 1.16e-04, 3.38e-05, 3.64e-05, 1.89e-05, 3.19e-05, 6.13e-06, 5.02e-05]    []  
8000      [4.13e-04, 3.11e-04, 6.38e-04, 1.43e-04, 1.04e-04, 5.29e-04, 5.66e-05, 1.82e-04]    [4.34e-04, 1.97e-04, 6.38e-04, 1.43e-04, 1.04e-04, 5.29e-04, 5.66e-05, 1.82e-04]    []  
9000      [5.23e-05, 1.16e-04, 1.61e-05, 2.41e-05, 1.62e-05, 2.20e-05, 4.61e-06, 3.67e-05]    [3.63e-05, 7.85e-05, 1.61e-05, 2.41e-05, 1.62e-05, 2.20e-05, 4.61e-06, 3.67e-05]    []  
10000     [2.86e-05, 7.52e-05, 2.42e-05, 1.14e-05, 1.95e-05, 7.09e-05, 4.71e-06, 2.64e-05]    [1.91e-05, 4.48e-05, 2.42e-05, 1.14e-05, 1.95e-05, 7.09e-05, 4.71e-06, 2.64e-05]    []  

Best model at step 10000:
  train loss: 2.61e-04
  test loss: 2.21e-04
  test metric: []

'train' took 190.501425 s

1.3.6. Plot Displacement (Adam Optimizer)

In [13]:
color_legend = [[0, 0.182], [-0.06, 0.011], [-0.0022, 1.0], [-0.15, 0.45]]
title = ['x-displacement ($u$)', 'y-displacement ($v$)', r'$\sigma_{xx}$', r'$\sigma_{yy}$']
In [14]:
samples = geom.uniform_points(10201)
pde_disp = model.predict(samples)

plt.figure(figsize = (10, 4))
for idx in range(2):
    plt.subplot(1,2,idx+1)
    plt.scatter(samples[:, 0],
                samples[:, 1],
                c = pde_disp[:, idx],
                cmap = 'rainbow',
                s = 5)
    plt.clim(color_legend[idx])
    plt.title(title[idx])
    plt.xlim((0, a))
    plt.ylim((0, b))
    plt.colorbar()
plt.tight_layout()
plt.show()

1.3.7. Train More (L-BFGS Optimizer)

In [15]:
dde.optimizers.config.set_LBFGS_options(maxiter = 3000)
model.compile("L-BFGS")
losshistory, train_state = model.train()
dde.saveplot(losshistory, train_state, issave = False, isplot = True)
Compiling model...
'compile' took 0.061669 s

Training model...

Step      Train loss                                                                          Test loss                                                                           Test metric
10000     [2.86e-05, 7.52e-05, 2.42e-05, 1.14e-05, 1.95e-05, 7.09e-05, 4.71e-06, 2.64e-05]    [1.91e-05, 4.48e-05, 2.42e-05, 1.14e-05, 1.95e-05, 7.09e-05, 4.71e-06, 2.64e-05]    []  
11845     [2.01e-06, 1.90e-06, 4.77e-07, 3.82e-07, 2.05e-07, 5.41e-07, 5.52e-07, 6.17e-07]    [1.61e-06, 1.53e-06, 4.77e-07, 3.82e-07, 2.05e-07, 5.41e-07, 5.52e-07, 6.17e-07]    []  

Best model at step 11845:
  train loss: 6.69e-06
  test loss: 5.91e-06
  test metric: []

'train' took 487.722181 s

1.3.8. Plot Displacement (Adam + L-BFGS)

In [16]:
pde_disp = model.predict(samples)

plt.figure(figsize = (10, 4))
for idx in range(2):
    plt.subplot(1,2,idx+1)
    plt.scatter(samples[:, 0],
                samples[:, 1],
                c = pde_disp[:, idx],
                cmap = 'rainbow',
                s = 5)
    plt.clim(color_legend[idx])
    plt.title(title[idx], fontsize = 15)
    plt.xlabel('x (mm)', fontsize = 14)
    plt.ylabel('y (mm)', fontsize = 14)
    plt.axis('square')
    plt.xlim((0, a))
    plt.ylim((0, b))
    plt.colorbar()
plt.tight_layout()
plt.show()

1.3.9. Plot Stress

  • Since the neural network outputs only the x- and y-displacements, we must use automatic differentiation (autograd) to compute the stresses
    • Assuming the Cauchy stress tensor components are linearly related to the von Kármán strains by Hooke's law, the stresses can be written as:


$$ \sigma_{xx} = {E \over 1 - \nu^2}\left({\partial u \over \partial x} + \nu {\partial v \over \partial y} \right)\\\\ \sigma_{yy} = {E \over 1 - \nu^2}\left({\partial v \over \partial y} + \nu {\partial u \over \partial x} \right) $$
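
  • For completeness, the shear stress used in the free-edge conditions (sig_xy in BC_xy above) follows from the same Hooke's law:


$$ \sigma_{xy} = {E \over 2(1 + \nu)}\left({\partial u \over \partial y} + {\partial v \over \partial x} \right) $$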

In [17]:
def check_stress(net, X):
    X = tf.constant(X)

    # record the forward pass so the displacements can be differentiated w.r.t. X
    with tf.GradientTape(persistent = True) as tape:
        tape.watch(X)
        disp = net(X)
        u_pred = tf.reshape(disp[:, 0], [-1, 1])
        v_pred = tf.reshape(disp[:, 1], [-1, 1])

    du_xy = tape.gradient(u_pred, X)
    dv_xy = tape.gradient(v_pred, X)
    
    du_x, du_y = du_xy[:,0], du_xy[:,1]
    dv_x, dv_y = dv_xy[:,0], dv_xy[:,1]
    
    sig_xx = tf.reshape(((du_x + nu * dv_y) * E / (1 - nu**2)), [-1, 1])
    sig_yy = tf.reshape(((dv_y + nu * du_x) * E / (1 - nu**2)), [-1, 1])
    
    return sig_xx, sig_yy
In [18]:
sig_x, sig_y = check_stress(model.net, samples)
pde_sig = np.hstack([sig_x, sig_y])

plt.figure(figsize = (10, 4))
for idx in range(2):
    plt.subplot(1,2,idx + 1)
    plt.scatter(samples[:, 0],
                samples[:, 1],
                c = pde_sig[:, idx],
                cmap = 'rainbow',
                s = 5)
    plt.clim(color_legend[idx + 2])
    plt.title(title[idx + 2], fontsize = 15)
    plt.xlabel('x (mm)', fontsize = 14)
    plt.ylabel('y (mm)', fontsize = 14)
    plt.axis('square')
    plt.xlim((0, a))
    plt.ylim((0, b))
    plt.colorbar()
plt.tight_layout()
plt.show()

2. Data-driven Approach with Big Data

2.1. Load and Sample Data

Data Download

In [19]:
from google.colab import drive
drive.mount('/content/drive/')
Mounted at /content/drive/
In [20]:
# FEM data columns: [x, y, u, v, sig_xx, sig_yy]
Plate_data = np.load('/content/drive/MyDrive/Colab Notebooks/data_files/Plate_data.npy')
loc = Plate_data[:, 0:2]
u = Plate_data[:, 2:3]
v = Plate_data[:, 3:4]
stress = Plate_data[:, 4:6]
In [21]:
# Sample the FEM grid columns at integer x-coordinates 0, 1, ..., 10 (mm)
tr_sample = np.array([np.where(loc[:,0] == i) for i in range(11)]).reshape(-1,)
x_sample = loc[tr_sample,:]
u_sample = u[tr_sample,:]
v_sample = v[tr_sample,:]
In [22]:
observe_u = dde.icbc.PointSetBC(x_sample, u_sample, component = 0)
observe_v = dde.icbc.PointSetBC(x_sample, v_sample, component = 1)

2.2. Define Geometry

In [23]:
geom = dde.geometry.Rectangle(xmin = [0, 0], xmax = [a, b])
data = dde.data.PDE(geom, 
                    None, 
                    [observe_u, observe_v], 
                    num_domain = 0, 
                    num_boundary = 0, 
                    num_test = 100)
In [24]:
plt.figure(figsize = (5,5))
plt.scatter(data.train_x[:,0], data.train_x[:,1], s = 1)
plt.xlim([-0.5,10.5])
plt.xlabel('x (mm)')
plt.ylabel('y (mm)')
plt.show()

2.3. Define Network and Hyper-parameters

In [25]:
layer_size = [2] + [64] * 5 + [2]
activation = "tanh"
initializer = "Glorot uniform"

net = dde.maps.FNN(layer_size, activation, initializer)

model = dde.Model(data, net)
model.compile("adam", lr = 1e-3)
Compiling model...
'compile' took 0.000571 s

2.4. Train (Adam Optimizer)

In [26]:
losshistory, train_state = model.train(epochs = 10000)
dde.saveplot(losshistory, train_state, issave = False, isplot = False)
Training model...

Step      Train loss              Test loss               Test metric
0         [1.59e-01, 2.42e-02]    [1.59e-01, 2.42e-02]    []  
1000      [4.53e-06, 8.47e-06]    [4.53e-06, 8.47e-06]    []  
2000      [1.38e-06, 2.61e-06]    [1.38e-06, 2.61e-06]    []  
3000      [1.17e-06, 2.61e-06]    [1.17e-06, 2.61e-06]    []  
4000      [1.39e-06, 1.91e-06]    [1.39e-06, 1.91e-06]    []  
5000      [6.86e-07, 9.91e-07]    [6.86e-07, 9.91e-07]    []  
6000      [5.49e-07, 6.23e-07]    [5.49e-07, 6.23e-07]    []  
7000      [1.29e-06, 1.57e-06]    [1.29e-06, 1.57e-06]    []  
8000      [2.54e-07, 3.74e-07]    [2.54e-07, 3.74e-07]    []  
9000      [2.44e-07, 4.57e-07]    [2.44e-07, 4.57e-07]    []  
10000     [1.65e-07, 3.83e-07]    [1.65e-07, 3.83e-07]    []  

Best model at step 10000:
  train loss: 5.47e-07
  test loss: 5.47e-07
  test metric: []

'train' took 13.647047 s

2.5. Train More (L-BFGS Optimizer)

In [27]:
dde.optimizers.config.set_LBFGS_options(maxiter = 5000)
model.compile("L-BFGS")
losshistory, train_state = model.train()
dde.saveplot(losshistory, train_state, issave = False, isplot = True)
Compiling model...
'compile' took 0.013668 s

Training model...

Step      Train loss              Test loss               Test metric
10000     [1.65e-07, 3.83e-07]    [1.65e-07, 3.83e-07]    []  
11241     [2.99e-09, 3.91e-09]    [2.99e-09, 3.91e-09]    []  

Best model at step 11241:
  train loss: 6.90e-09
  test loss: 6.90e-09
  test metric: []

'train' took 251.334709 s

2.6. Plot Results (Adam + L-BFGS)

In [28]:
big_disp = model.predict(samples)
sig_x, sig_y = np.array(check_stress(model.net, samples))
big_sig = np.hstack([sig_x, sig_y])
big_result = np.hstack([big_disp, big_sig])

plt.figure(figsize = (10, 8))
for idx in range(4):
    plt.subplot(2,2,idx + 1)
    plt.scatter(samples[:, 0],
                samples[:, 1],
                c = big_result[:,idx],
                cmap = 'rainbow',
                s = 5)
    plt.title(title[idx], fontsize = 15)
    plt.xlabel('x (mm)', fontsize = 14)
    plt.ylabel('y (mm)', fontsize = 14)
    plt.axis('square')
    plt.xlim((0, a))
    plt.ylim((0, b))
    plt.colorbar()
plt.tight_layout()
plt.show()
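
  • Beyond the visual comparison, the agreement with the FEM data loaded in Section 2.1 can be quantified. The snippet below is a sketch that is not part of the original notebook; it assumes the FEM locations loc can be fed to the network directly (cast to float32, DeepXDE's default dtype):

# Relative L2 error per field (u, v, sig_xx, sig_yy) against the FEM reference
fem = np.hstack([u, v, stress])
loc32 = loc.astype(np.float32)
sx, sy = check_stress(model.net, loc32)
pred = np.hstack([model.predict(loc32), sx, sy])
rel_l2 = np.linalg.norm(pred - fem, axis = 0) / np.linalg.norm(fem, axis = 0)
print(rel_l2)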

3. Data-driven Approach with Small Data

3.1. Sample Data

In [29]:
# Keep only three FEM grid columns (x = 2, 5, 8 mm)
tr_sample = np.array([np.where(loc[:,0] == i) for i in [2,5,8]]).reshape(-1,)
x_sample = loc[tr_sample,:]
u_sample = u[tr_sample,:]
v_sample = v[tr_sample,:]
In [30]:
observe_u = dde.icbc.PointSetBC(x_sample, u_sample, component = 0)
observe_v = dde.icbc.PointSetBC(x_sample, v_sample, component = 1)

3.2. Define Geometry

In [31]:
geom = dde.geometry.Rectangle(xmin = [0, 0], xmax = [a, b])
data = dde.data.PDE(geom, 
                    None, 
                    [observe_u, observe_v], 
                    num_domain = 0, 
                    num_boundary = 0, 
                    num_test = 100)
In [32]:
plt.figure(figsize = (5,5))
plt.scatter(data.train_x[:,0], data.train_x[:,1], s = 1)
plt.xlim([-0.5,10.5])
plt.xlabel('x (mm)')
plt.ylabel('y (mm)')
plt.show()

3.3. Define Network and Hyper-parameters

In [33]:
layer_size = [2] + [64] * 5 + [2]
activation = "tanh"
initializer = "Glorot uniform"

net = dde.maps.FNN(layer_size, activation, initializer)

model = dde.Model(data, net)
model.compile("adam", lr = 1e-3)
Compiling model...
'compile' took 0.000519 s

3.4. Train (Adam Optimizer)

In [34]:
losshistory, train_state = model.train(epochs = 10000)
dde.saveplot(losshistory, train_state, issave = False, isplot = False)
Training model...

Step      Train loss              Test loss               Test metric
0         [2.97e-01, 1.21e-01]    [2.97e-01, 1.21e-01]    []  
1000      [1.08e-06, 1.86e-06]    [1.08e-06, 1.86e-06]    []  
2000      [4.63e-07, 3.86e-07]    [4.63e-07, 3.86e-07]    []  
3000      [4.39e-07, 2.20e-07]    [4.39e-07, 2.20e-07]    []  
4000      [1.72e-07, 1.09e-07]    [1.72e-07, 1.09e-07]    []  
5000      [1.82e-07, 1.18e-07]    [1.82e-07, 1.18e-07]    []  
6000      [4.51e-08, 3.86e-08]    [4.51e-08, 3.86e-08]    []  
7000      [6.11e-08, 3.05e-08]    [6.11e-08, 3.05e-08]    []  
8000      [3.36e-08, 1.38e-08]    [3.36e-08, 1.38e-08]    []  
9000      [1.74e-08, 9.04e-09]    [1.74e-08, 9.04e-09]    []  
10000     [2.42e-06, 6.06e-07]    [2.42e-06, 6.06e-07]    []  

Best model at step 9000:
  train loss: 2.64e-08
  test loss: 2.64e-08
  test metric: []

'train' took 14.176561 s

3.5. Train More (L-BFGS Optimizer)

In [35]:
dde.optimizers.config.set_LBFGS_options(maxiter = 5000)
model.compile("L-BFGS")
losshistory, train_state = model.train()
dde.saveplot(losshistory, train_state, issave = False, isplot = True)
Compiling model...
'compile' took 0.015152 s

Training model...

Step      Train loss              Test loss               Test metric
10000     [2.42e-06, 6.06e-07]    [2.42e-06, 6.06e-07]    []  
10233     [3.53e-09, 2.98e-09]    [3.53e-09, 2.98e-09]    []  

Best model at step 10233:
  train loss: 6.51e-09
  test loss: 6.51e-09
  test metric: []

'train' took 49.376678 s

3.6. Plot Results (Adam + L-BFGS)

In [36]:
small_disp = model.predict(samples)
sig_x, sig_y = np.array(check_stress(model.net, samples))
small_sig = np.hstack([sig_x, sig_y])
small_result = np.hstack([small_disp, small_sig])

plt.figure(figsize = (10, 8))
for idx in range(4):
    plt.subplot(2,2,idx + 1)
    plt.scatter(samples[:, 0],
                samples[:, 1],
                c = small_result[:,idx],
                cmap = 'rainbow',
                s = 5)
    plt.title(title[idx], fontsize = 15)
    plt.xlabel('x (mm)', fontsize = 14)
    plt.ylabel('y (mm)', fontsize = 14)
    plt.axis('square')
    plt.xlim((0, a))
    plt.ylim((0, b))
    plt.colorbar()
plt.tight_layout()
plt.show()

4. PINN with Small Data

In [37]:
data = dde.data.PDE(geom, 
                    pde, 
                    [bc_traction, bc_free_xy, bc_free_yx, bc_free_yy, bc_Y0, bc_X0, observe_u, observe_v], 
                    num_domain = 100, 
                    num_boundary = 100,
                    num_test = 100,
                    train_distribution = 'LHS')
In [38]:
plt.figure(figsize = (5,5))
plt.scatter(data.train_x[:,0], data.train_x[:,1], s = 1)
plt.xlabel('x (mm)')
plt.ylabel('y (mm)')
plt.show()

4.1. Define Network and Hyper-parameters

In [39]:
layer_size = [2] + [64] * 5 + [2]
activation = "tanh"
initializer = "Glorot uniform"

net = dde.maps.FNN(layer_size, activation, initializer)

model = dde.Model(data, net)
model.compile("adam", lr = 1e-3, loss_weights=[1, 1, 1, 1, 1, 1, 1, 1, 9, 9])
Compiling model...
'compile' took 0.000449 s

4.2. Train (Adam Optimizer)

In [40]:
losshistory, train_state = model.train(epochs = 10000)
dde.saveplot(losshistory, train_state, issave = False, isplot = False)
Training model...

Step      Train loss                                                                                              Test loss                                                                                               Test metric
0         [1.17e+00, 7.98e-01, 7.78e-01, 2.30e+00, 5.54e-01, 1.29e+00, 3.63e-03, 2.34e-01, 1.67e+00, 1.54e-02]    [3.42e-01, 4.91e-01, 7.78e-01, 2.30e+00, 5.54e-01, 1.29e+00, 3.63e-03, 2.34e-01, 1.67e+00, 1.54e-02]    []  
1000      [5.96e-04, 6.56e-04, 2.22e-04, 9.00e-05, 3.54e-05, 5.39e-05, 8.53e-05, 2.34e-05, 2.69e-04, 1.84e-04]    [5.34e-04, 9.55e-04, 2.22e-04, 9.00e-05, 3.54e-05, 5.39e-05, 8.53e-05, 2.34e-05, 2.69e-04, 1.84e-04]    []  
2000      [2.33e-04, 3.09e-04, 8.81e-05, 5.92e-05, 1.81e-05, 3.28e-05, 2.33e-05, 4.71e-06, 9.17e-05, 2.34e-05]    [2.92e-04, 3.76e-04, 8.81e-05, 5.92e-05, 1.81e-05, 3.28e-05, 2.33e-05, 4.71e-06, 9.17e-05, 2.34e-05]    []  
3000      [2.28e-04, 2.19e-04, 8.99e-05, 6.13e-05, 3.80e-05, 6.85e-05, 1.68e-05, 2.65e-05, 3.21e-04, 7.05e-05]    [1.50e-04, 2.35e-04, 8.99e-05, 6.13e-05, 3.80e-05, 6.85e-05, 1.68e-05, 2.65e-05, 3.21e-04, 7.05e-05]    []  
4000      [7.01e-05, 1.39e-04, 2.64e-05, 3.11e-05, 9.59e-06, 2.32e-05, 7.36e-06, 3.76e-06, 3.49e-05, 5.87e-06]    [9.17e-05, 1.61e-04, 2.64e-05, 3.11e-05, 9.59e-06, 2.32e-05, 7.36e-06, 3.76e-06, 3.49e-05, 5.87e-06]    []  
5000      [4.29e-05, 1.14e-04, 1.44e-05, 2.27e-05, 1.01e-05, 1.92e-05, 5.99e-06, 3.39e-06, 2.43e-05, 4.89e-06]    [5.17e-05, 1.09e-04, 1.44e-05, 2.27e-05, 1.01e-05, 1.92e-05, 5.99e-06, 3.39e-06, 2.43e-05, 4.89e-06]    []  
6000      [2.06e-05, 5.84e-05, 5.11e-06, 1.19e-05, 7.06e-06, 9.13e-06, 3.05e-06, 2.59e-06, 1.32e-05, 2.87e-06]    [3.66e-05, 6.16e-05, 5.11e-06, 1.19e-05, 7.06e-06, 9.13e-06, 3.05e-06, 2.59e-06, 1.32e-05, 2.87e-06]    []  
7000      [1.87e-04, 8.14e-05, 9.08e-04, 1.76e-05, 9.15e-06, 2.63e-04, 6.00e-06, 8.46e-05, 8.47e-04, 1.05e-05]    [2.31e-04, 6.14e-05, 9.08e-04, 1.76e-05, 9.15e-06, 2.63e-04, 6.00e-06, 8.46e-05, 8.47e-04, 1.05e-05]    []  
8000      [1.82e-05, 1.81e-04, 3.04e-04, 3.88e-05, 5.11e-05, 2.34e-05, 6.37e-05, 7.70e-06, 9.62e-05, 1.07e-03]    [3.23e-05, 1.68e-04, 3.04e-04, 3.88e-05, 5.11e-05, 2.34e-05, 6.37e-05, 7.70e-06, 9.62e-05, 1.07e-03]    []  
9000      [1.50e-05, 2.37e-05, 1.79e-06, 6.13e-06, 5.14e-06, 3.55e-06, 1.37e-06, 1.36e-06, 6.87e-06, 1.75e-06]    [2.30e-05, 2.59e-05, 1.79e-06, 6.13e-06, 5.14e-06, 3.55e-06, 1.37e-06, 1.36e-06, 6.87e-06, 1.75e-06]    []  
10000     [4.81e-05, 7.37e-05, 2.26e-04, 4.43e-05, 3.52e-05, 7.69e-05, 2.92e-05, 1.06e-05, 1.05e-04, 3.56e-04]    [7.22e-05, 5.39e-05, 2.26e-04, 4.43e-05, 3.52e-05, 7.69e-05, 2.92e-05, 1.06e-05, 1.05e-04, 3.56e-04]    []  

Best model at step 9000:
  train loss: 6.66e-05
  test loss: 7.69e-05
  test metric: []

'train' took 35.974494 s

4.3. Train More (L-BFGS Optimizer)

In [41]:
dde.optimizers.config.set_LBFGS_options(maxiter = 5000)
model.compile("L-BFGS")
losshistory, train_state = model.train()
dde.saveplot(losshistory, train_state, issave = False, isplot = True)
Compiling model...
'compile' took 0.063803 s

Training model...

Step      Train loss                                                                                              Test loss                                                                                               Test metric
10000     [4.81e-05, 7.37e-05, 2.26e-04, 4.43e-05, 3.52e-05, 7.69e-05, 2.92e-05, 1.06e-05, 1.17e-05, 3.96e-05]    [7.22e-05, 5.39e-05, 2.26e-04, 4.43e-05, 3.52e-05, 7.69e-05, 2.92e-05, 1.06e-05, 1.17e-05, 3.96e-05]    []  
11310     [2.03e-06, 1.42e-06, 2.60e-07, 1.83e-07, 1.07e-07, 4.87e-07, 5.01e-08, 8.72e-08, 1.45e-07, 9.32e-08]    [4.18e-06, 1.98e-06, 2.60e-07, 1.83e-07, 1.07e-07, 4.87e-07, 5.01e-08, 8.72e-08, 1.45e-07, 9.32e-08]    []  

Best model at step 11310:
  train loss: 4.86e-06
  test loss: 7.57e-06
  test metric: []

'train' took 281.476495 s

4.4. Plot Results (Adam + L-BFGS)

In [42]:
pde_small_disp = model.predict(samples)
sig_x, sig_y = np.array(check_stress(model.net, samples))
pde_small_sig = np.hstack([sig_x, sig_y])
pde_small_result = np.hstack([pde_small_disp, pde_small_sig])

# anti-diagonal line y = a - x, overlaid on the plots and used for the comparison in 4.5
diag_x = np.arange(100, -1, -1)/10

plt.figure(figsize = (10, 8))
for idx in range(4):
    plt.subplot(2,2,idx+1)
    plt.scatter(samples[:, 0],
                samples[:, 1],
                c = pde_small_result[:,idx],
                cmap = 'rainbow',
                s = 5)
    plt.plot(diag_x, diag_x[::-1], '--', c = 'k', linewidth = 2)
    plt.title(title[idx], fontsize = 15)
    plt.xlabel('x (mm)', fontsize = 14)
    plt.ylabel('y (mm)', fontsize = 14)
    plt.axis('square')
    plt.xlim((0, a))
    plt.ylim((0, b))
    plt.colorbar()
plt.tight_layout()
plt.show()
WARNING:tensorflow:5 out of the last 5 calls to <function Model._compile_tensorflow.<locals>.outputs at 0x7f0367a51320> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for  more details.

4.5. Plot with All Results

In [43]:
# FEM data points on the anti-diagonal x + y = 10
diag_ = [i for i in range(u.shape[0]) if loc[i,0] + loc[i,1] == 10]

diag_result_true = [u[diag_], v[diag_], stress[diag_, 0], stress[diag_, 1]]
diag_result_pde = [pde_disp[diag_, 0], pde_disp[diag_, 1], pde_sig[diag_, 0], pde_sig[diag_, 1]]
diag_result_big = [big_disp[diag_, 0], big_disp[diag_, 1], big_sig[diag_, 0], big_sig[diag_, 1]]
diag_result_small = [small_disp[diag_, 0], small_disp[diag_, 1], small_sig[diag_, 0], small_sig[diag_, 1]]
diag_result_pde_small = [pde_small_disp[diag_, 0], pde_small_disp[diag_, 1], pde_small_sig[diag_, 0], pde_small_sig[diag_, 1]]

plt.figure(figsize = (10, 9))
for idx in range(4):
    plt.subplot(2,2,idx+1)
    plt.plot(loc[diag_,0], diag_result_true[idx], 'k' , linewidth = 3, label = 'FEM')
    plt.plot(loc[diag_,0], diag_result_pde[idx], '--' , linewidth = 3, label = 'PDE')
    plt.plot(loc[diag_,0], diag_result_big[idx], '-.', linewidth = 3, label = 'Big data')
    plt.plot(loc[diag_,0], diag_result_small[idx], ':', linewidth = 3, label = 'Small data')
    plt.plot(loc[diag_,0], diag_result_pde_small[idx], '.', linewidth = 3, label = 'PDE+small data')
    plt.xlabel('x (mm)', fontsize = 15)
    plt.ylabel(title[idx], fontsize = 15)
    plt.xlim((0, a))
    plt.legend(fontsize = 11)
plt.tight_layout()
plt.show()
In [44]:
%%javascript
$.getScript('https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js')