PINN with Data

Fluid Mechanics Example


By Prof. Seungchul Lee
http://iai.postech.ac.kr/
Industrial AI Lab at POSTECH

Table of Contents

1. Data-driven Approach with Big Data

1.1. Load and Sample Data

Fluid_bigdata Download

In [ ]:
import deepxde as dde
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
Deepxde backend not selected or invalid. Assuming tensorflow.compat.v1 for now.
Using backend: tensorflow.compat.v1

Setting the default backend to "tensorflow.compat.v1". You can change it in the ~/.deepxde/config.json file or export the DDEBACKEND environment variable. Valid options are: tensorflow.compat.v1, tensorflow, pytorch, jax, paddle (all lowercase)
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow/python/compat/v2_compat.py:107: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
Instructions for updating:
non-resource variables are not supported in the long term
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/deepxde/nn/initializers.py:118: The name tf.keras.initializers.he_normal is deprecated. Please use tf.compat.v1.keras.initializers.he_normal instead.

In [ ]:
from google.colab import drive
drive.mount('/content/drive/')
Mounted at /content/drive/
In [ ]:
fluid_bigdata = np.load('/content/drive/MyDrive/postech/KSNVE/data_files/fluid_bigdata.npy')

observe_x = fluid_bigdata[:, :2]
observe_y = fluid_bigdata[:, 2:]
In [ ]:
observe_u = dde.icbc.PointSetBC(observe_x, observe_y[:, 0].reshape(-1, 1), component=0)
observe_v = dde.icbc.PointSetBC(observe_x, observe_y[:, 1].reshape(-1, 1), component=1)
observe_p = dde.icbc.PointSetBC(observe_x, observe_y[:, 2].reshape(-1, 1), component=2)
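
Each dde.icbc.PointSetBC ties one network output component to observed values at fixed points, so these three constraints supervise u, v, and p (columns 2-4 of the data file) at the measurement locations.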

1.2. Define Parameters

In [ ]:
# Properties (unit values)
rho = 1     # fluid density
mu = 1      # dynamic viscosity
u_in = 1    # inlet velocity
D = 1       # distance between the plates
L = 2       # channel length
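
With these values the Reynolds number is Re = rho * u_in * D / mu = 1, so the flow is laminar.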

1.3. Define Geometry

In [ ]:
geom = dde.geometry.Rectangle(xmin = [-L/2, -D/2], xmax = [L/2, D/2])
data = dde.data.PDE(geom,
                    None,
                    [observe_u, observe_v, observe_p],
                    num_domain = 0,
                    num_boundary = 0,
                    num_test = 100)
Warning: 100 points required, but 120 points sampled.
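
Note that the PDE argument is None and num_domain = num_boundary = 0: the network is fit to the observation points alone, i.e., this is plain supervised regression rather than a PINN.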
In [ ]:
plt.figure(figsize = (20,4))
plt.scatter(data.train_x_all[:,0], data.train_x_all[:,1], s = 0.5)
plt.scatter(observe_x[:, 0], observe_x[:, 1], c = observe_y[:, 0], s = 6.5, cmap = 'jet')
plt.scatter(observe_x[:, 0], observe_x[:, 1], s = 0.5, color='k', alpha = 0.5)
plt.xlim((0-L/2, L-L/2))
plt.ylim((0-D/2, D-D/2))
plt.xlabel('x-direction length (m)')
plt.ylabel('Distance from middle of plates (m)')
plt.title('Velocity (u)')
plt.show()

1.4. Define Network and Hyper-parameters

In [ ]:
layer_size = [2] + [64] * 5 + [3]    # inputs (x, y) -> 5 hidden layers of 64 -> outputs (u, v, p)
activation = "tanh"
initializer = "Glorot uniform"

net = dde.maps.FNN(layer_size, activation, initializer)

model = dde.Model(data, net)
model.compile("adam", lr = 1e-3)
Compiling model...
Building feed-forward neural network...
'build' took 0.092175 s

/usr/local/lib/python3.7/dist-packages/deepxde/nn/tensorflow_compat_v1/fnn.py:110: UserWarning: `tf.layers.dense` is deprecated and will be removed in a future version. Please use `tf.keras.layers.Dense` instead.
  kernel_constraint=self.kernel_constraint,
/usr/local/lib/python3.7/dist-packages/keras/legacy_tf_layers/core.py:261: UserWarning: `layer.apply` is deprecated and will be removed in a future version. Please use `layer.__call__` method instead.
  return layer.apply(inputs)
'compile' took 3.501978 s

1.5. Train (Adam Optimizer)

In [ ]:
losshistory, train_state = model.train(epochs = 10000)
dde.saveplot(losshistory, train_state, issave = False, isplot = False)
Initializing variables...
Training model...

Step      Train loss                        Test loss                         Test metric
0         [1.18e+00, 2.81e-02, 2.01e+02]    [1.18e+00, 2.81e-02, 2.01e+02]    []  
1000      [8.32e-02, 7.61e-03, 4.87e-01]    [8.32e-02, 7.61e-03, 4.87e-01]    []  
2000      [3.26e-03, 2.92e-03, 3.97e-02]    [3.26e-03, 2.92e-03, 3.97e-02]    []  
3000      [1.28e-03, 1.67e-03, 1.85e-02]    [1.28e-03, 1.67e-03, 1.85e-02]    []  
4000      [5.72e-04, 5.96e-04, 2.62e-02]    [5.72e-04, 5.96e-04, 2.62e-02]    []  
5000      [3.22e-04, 3.20e-04, 4.66e-03]    [3.22e-04, 3.20e-04, 4.66e-03]    []  
6000      [2.47e-04, 2.22e-04, 3.97e-03]    [2.47e-04, 2.22e-04, 3.97e-03]    []  
7000      [1.68e-04, 1.28e-04, 2.89e-03]    [1.68e-04, 1.28e-04, 2.89e-03]    []  
8000      [1.32e-04, 8.76e-05, 1.68e-03]    [1.32e-04, 8.76e-05, 1.68e-03]    []  
9000      [8.23e-05, 6.97e-05, 8.99e-04]    [8.23e-05, 6.97e-05, 8.99e-04]    []  
10000     [6.16e-05, 5.54e-05, 8.38e-04]    [6.16e-05, 5.54e-05, 8.38e-04]    []  

Best model at step 10000:
  train loss: 9.55e-04
  test loss: 9.55e-04
  test metric: []

'train' took 18.581455 s

1.6. Train More (L-BFGS Optimizer)
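
Switching to L-BFGS after Adam is a common recipe: Adam brings the loss close to a minimum, and the quasi-Newton L-BFGS steps then reduce it by a further one to two orders of magnitude.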

In [ ]:
dde.optimizers.config.set_LBFGS_options(maxiter=3000)
model.compile("L-BFGS")
losshistory, train_state = model.train()
dde.saveplot(losshistory, train_state, issave = False, isplot = True)
Compiling model...
'compile' took 0.152857 s

Training model...

Step      Train loss                        Test loss                         Test metric
10000     [6.16e-05, 5.54e-05, 8.38e-04]    [6.16e-05, 5.54e-05, 8.38e-04]    []  
11000     [6.13e-06, 5.24e-06, 6.89e-05]                                          
12000     [4.42e-06, 1.44e-06, 5.44e-05]                                          
INFO:tensorflow:Optimization terminated with:
  Message: b'CONVERGENCE: REL_REDUCTION_OF_F_<=_FACTR*EPSMCH'
  Objective function value: 0.000057
  Number of iterations: 2083
  Number of functions evaluations: 2257
12257     [3.66e-06, 1.33e-06, 5.23e-05]    [3.66e-06, 1.33e-06, 5.23e-05]    []  

Best model at step 12257:
  train loss: 5.73e-05
  test loss: 5.73e-05
  test metric: []

'train' took 42.100872 s

1.7. Plot Results (Adam + L-BFGS)

In [ ]:
samples = geom.random_points(500000)
result = model.predict(samples)
color_legend = [[0, 1.5], [-0.3, 0.3], [0, 35]]    # color ranges for u, v, p

for idx in range(3):
    plt.figure(figsize = (20, 4))
    plt.scatter(samples[:, 0],
                samples[:, 1],
                c = result[:, idx],
                s = 2,
                cmap = 'jet')
    plt.colorbar()
    plt.clim(color_legend[idx])
    plt.xlim((0-L/2, L-L/2))
    plt.ylim((0-D/2, D-D/2))
    plt.tight_layout()
    plt.show()

2. Data-driven Approach with Small Data
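
This section repeats the workflow of Section 1 with the same network and training schedule; only the data file changes, providing far fewer observation points.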

2.1. Load and Sample Data

Fluid_smalldata Download

In [ ]:
fluid_smalldata = np.load('/content/drive/MyDrive/KSME_CAE/notebooks/data_files/fluid_smalldata.npy')

observe_x = fluid_smalldata[:, :2]
observe_y = fluid_smalldata[:, 2:]
In [ ]:
observe_u = dde.icbc.PointSetBC(observe_x, observe_y[:, 0].reshape(-1, 1), component=0)
observe_v = dde.icbc.PointSetBC(observe_x, observe_y[:, 1].reshape(-1, 1), component=1)
observe_p = dde.icbc.PointSetBC(observe_x, observe_y[:, 2].reshape(-1, 1), component=2)

2.2. Define Geometry

In [ ]:
geom = dde.geometry.Rectangle(xmin = [-L/2, -D/2], xmax = [L/2, D/2])
data = dde.data.PDE(geom,
                    None,
                    [observe_u, observe_v, observe_p],
                    num_domain = 0,
                    num_boundary = 0,
                    num_test = 120)
Warning: 120 points required, but 128 points sampled.
In [ ]:
plt.figure(figsize = (20,4))
plt.scatter(data.train_x_all[:,0], data.train_x_all[:,1], s = 0.5)
plt.scatter(observe_x[:, 0], observe_x[:, 1], c = observe_y[:, 0], s = 6.5, cmap = 'jet')
plt.scatter(observe_x[:, 0], observe_x[:, 1], s = 0.5, color='k', alpha = 0.5)
plt.xlim((0-L/2, L-L/2))
plt.ylim((0-D/2, D-D/2))
plt.xlabel('x-direction length (m)')
plt.ylabel('Distance from middle of plates (m)')
plt.title('Velocity (u)')
plt.show()

2.3. Define Network and Hyper-parameters

In [ ]:
layer_size = [2] + [64] * 5 + [3]
activation = "tanh"
initializer = "Glorot uniform"

net = dde.maps.FNN(layer_size, activation, initializer)

model = dde.Model(data, net)
model.compile("adam", lr = 1e-3)
Compiling model...
Building feed-forward neural network...
'build' took 0.261709 s

'compile' took 0.527379 s

2.4. Train (Adam Optimizer)

In [ ]:
losshistory, train_state = model.train(epochs = 10000)
dde.saveplot(losshistory, train_state, issave = False, isplot = False)
Initializing variables...
Training model...

Step      Train loss                        Test loss                         Test metric
0         [1.20e+00, 1.30e-02, 1.95e+02]    [1.20e+00, 1.30e-02, 1.95e+02]    []  
1000      [1.80e-01, 5.40e-03, 1.58e-01]    [1.80e-01, 5.40e-03, 1.58e-01]    []  
2000      [6.14e-03, 4.61e-03, 3.52e-02]    [6.14e-03, 4.61e-03, 3.52e-02]    []  
3000      [7.58e-04, 5.51e-04, 4.19e-03]    [7.58e-04, 5.51e-04, 4.19e-03]    []  
4000      [2.66e-04, 1.18e-04, 5.05e-04]    [2.66e-04, 1.18e-04, 5.05e-04]    []  
5000      [1.11e-04, 6.62e-05, 8.58e-05]    [1.11e-04, 6.62e-05, 8.58e-05]    []  
6000      [6.09e-05, 4.26e-05, 5.94e-05]    [6.09e-05, 4.26e-05, 5.94e-05]    []  
7000      [3.93e-05, 3.12e-05, 1.67e-04]    [3.93e-05, 3.12e-05, 1.67e-04]    []  
8000      [3.35e-05, 2.60e-05, 2.41e-04]    [3.35e-05, 2.60e-05, 2.41e-04]    []  
9000      [2.35e-05, 2.15e-05, 2.52e-04]    [2.35e-05, 2.15e-05, 2.52e-04]    []  
10000     [1.57e-05, 1.32e-05, 3.60e-05]    [1.57e-05, 1.32e-05, 3.60e-05]    []  

Best model at step 10000:
  train loss: 6.49e-05
  test loss: 6.49e-05
  test metric: []

'train' took 12.289989 s

2.5. Train More (L-BFGS Optimizer)

In [ ]:
dde.optimizers.config.set_LBFGS_options(maxiter=3000)
model.compile("L-BFGS")
losshistory, train_state = model.train()
dde.saveplot(losshistory, train_state, issave = False, isplot = True)
Compiling model...
'compile' took 0.175097 s

Training model...

Step      Train loss                        Test loss                         Test metric
10000     [1.57e-05, 1.32e-05, 3.60e-05]    [1.57e-05, 1.32e-05, 3.60e-05]    []  
11000     [9.41e-07, 3.98e-07, 1.72e-05]                                          
12000     [6.56e-07, 4.52e-07, 1.13e-05]                                          
13000     [4.65e-07, 2.96e-07, 8.21e-06]                                          
INFO:tensorflow:Optimization terminated with:
  Message: b'CONVERGENCE: REL_REDUCTION_OF_F_<=_FACTR*EPSMCH'
  Objective function value: 0.000009
  Number of iterations: 2918
  Number of functions evaluations: 3141
13141     [4.23e-07, 2.48e-07, 7.94e-06]    [4.23e-07, 2.48e-07, 7.94e-06]    []  

Best model at step 13141:
  train loss: 8.61e-06
  test loss: 8.61e-06
  test metric: []

'train' took 102.445192 s

2.6. Plot Results (Adam + L-BFGS)

In [ ]:
samples = geom.random_points(500000)
result = model.predict(samples)
color_legend = [[0, 1.5], [-0.3, 0.3], [0, 35]]

for idx in range(3):
    plt.figure(figsize = (20, 4))
    plt.scatter(samples[:, 0],
                samples[:, 1],
                c = result[:, idx],
                s = 2,
                cmap = 'jet')
    plt.colorbar()
    plt.clim(color_legend[idx])
    plt.xlim((0-L/2, L-L/2))
    plt.ylim((0-D/2, D-D/2))
    plt.tight_layout()
    plt.show()

3. PINN with Small Data
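
So far the model has only fit data. Here the Navier-Stokes residuals and boundary conditions are added to the loss, so the same sparse data set is now regularized by the physics.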

3.1. Define PDE and Boundary Conditions

In [ ]:
def boundary_wall(X, on_boundary):
    # Top and bottom plates: y = -D/2 or y = D/2
    on_wall = np.logical_and(np.logical_or(np.isclose(X[1], -D/2), np.isclose(X[1], D/2)), on_boundary)
    return on_wall

def boundary_inlet(X, on_boundary):
    # Inlet: x = -L/2
    return on_boundary and np.isclose(X[0], -L/2)

def boundary_outlet(X, on_boundary):
    # Outlet: x = L/2
    return on_boundary and np.isclose(X[0], L/2)
In [ ]:
def pde(X, Y):
    # First derivatives of u, v, p with respect to x (j = 0) and y (j = 1)
    du_x = dde.grad.jacobian(Y, X, i = 0, j = 0)
    du_y = dde.grad.jacobian(Y, X, i = 0, j = 1)
    dv_x = dde.grad.jacobian(Y, X, i = 1, j = 0)
    dv_y = dde.grad.jacobian(Y, X, i = 1, j = 1)
    dp_x = dde.grad.jacobian(Y, X, i = 2, j = 0)
    dp_y = dde.grad.jacobian(Y, X, i = 2, j = 1)

    # Second derivatives for the viscous terms
    du_xx = dde.grad.hessian(Y, X, i = 0, j = 0, component = 0)
    du_yy = dde.grad.hessian(Y, X, i = 1, j = 1, component = 0)
    dv_xx = dde.grad.hessian(Y, X, i = 0, j = 0, component = 1)
    dv_yy = dde.grad.hessian(Y, X, i = 1, j = 1, component = 1)

    # Steady incompressible Navier-Stokes: x-momentum, y-momentum, continuity
    pde_u = Y[:,0:1] * du_x + Y[:,1:2] * du_y + 1/rho * dp_x - (mu/rho) * (du_xx + du_yy)
    pde_v = Y[:,0:1] * dv_x + Y[:,1:2] * dv_y + 1/rho * dp_y - (mu/rho) * (dv_xx + dv_yy)
    pde_cont = du_x + dv_y

    return [pde_u, pde_v, pde_cont]
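
For reference, the three residuals returned above are the steady, incompressible Navier-Stokes momentum equations and the continuity equation:

$$u \frac{\partial u}{\partial x} + v \frac{\partial u}{\partial y} = -\frac{1}{\rho}\frac{\partial p}{\partial x} + \frac{\mu}{\rho}\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right)$$

$$u \frac{\partial v}{\partial x} + v \frac{\partial v}{\partial y} = -\frac{1}{\rho}\frac{\partial p}{\partial y} + \frac{\mu}{\rho}\left(\frac{\partial^2 v}{\partial x^2} + \frac{\partial^2 v}{\partial y^2}\right)$$

$$\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} = 0$$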

3.2. Define Geometry and Implement Boundary Conditions

In [ ]:
geom = dde.geometry.Rectangle(xmin=[-L/2, -D/2], xmax=[L/2, D/2])

# No-slip walls: u = v = 0
bc_wall_u = dde.DirichletBC(geom, lambda X: 0., boundary_wall, component = 0)
bc_wall_v = dde.DirichletBC(geom, lambda X: 0., boundary_wall, component = 1)

# Uniform inlet: u = u_in, v = 0
bc_inlet_u = dde.DirichletBC(geom, lambda X: u_in, boundary_inlet, component = 0)
bc_inlet_v = dde.DirichletBC(geom, lambda X: 0., boundary_inlet, component = 1)

# Outlet: v = 0 and p = 0 (pins the pressure level, which the equations
# otherwise determine only up to a constant)
bc_outlet_p = dde.DirichletBC(geom, lambda X: 0., boundary_outlet, component = 2)
bc_outlet_v = dde.DirichletBC(geom, lambda X: 0., boundary_outlet, component = 1)
In [ ]:
data = dde.data.PDE(geom,
                    pde,
                    [bc_wall_u, bc_wall_v, bc_inlet_u, bc_inlet_v, bc_outlet_p, bc_outlet_v, observe_u, observe_v, observe_p],
                    num_domain = 1000,
                    num_boundary = 500,
                    num_test = 1000,
                    train_distribution = 'LHS')    # Latin hypercube sampling
Warning: 1000 points required, but 1035 points sampled.
In [ ]:
plt.figure(figsize = (20,4))
plt.scatter(data.train_x_all[:,0], data.train_x_all[:,1], s = 0.5)
plt.scatter(observe_x[:, 0], observe_x[:, 1], c = observe_y[:, 0], s = 6.5, cmap = 'jet')
plt.scatter(observe_x[:, 0], observe_x[:, 1], s = 0.5, color='k', alpha = 0.5)
plt.xlim((0-L/2, L-L/2))
plt.ylim((0-D/2, D-D/2))
plt.xlabel('x-direction length (m)')
plt.ylabel('Distance from middle of plates (m)')
plt.title('Velocity (u)')
plt.show()

3.3. Define Network and Hyper-parameters

In [ ]:
layer_size = [2] + [64] * 5 + [3]
activation = "tanh"
initializer = "Glorot uniform"

net = dde.maps.FNN(layer_size, activation, initializer)

model = dde.Model(data, net)
# 12 loss terms: 3 PDE residuals, 6 boundary conditions, then the 3 observation
# losses (observe_u, observe_v, observe_p); the data terms are weighted 9x
model.compile("adam", lr = 1e-3, loss_weights = [1, 1, 1, 1, 1, 1, 1, 1, 1, 9, 9, 9])
Compiling model...
Building feed-forward neural network...
'build' took 0.087828 s

'compile' took 1.969883 s

3.4. Train (Adam Optimizer)

In [ ]:
losshistory, train_state = model.train(epochs = 10000)
dde.saveplot(losshistory, train_state, issave = False, isplot = False)
Initializing variables...
Training model...

Step      Train loss                                                                                                                  Test loss                                                                                                                   Test metric
0         [1.78e-02, 1.15e-01, 3.57e-03, 2.76e-02, 4.75e-04, 1.16e+00, 8.45e-04, 1.35e-02, 8.43e-04, 1.06e+01, 4.90e-02, 1.78e+03]    [1.56e-02, 1.20e-01, 3.60e-03, 2.76e-02, 4.75e-04, 1.16e+00, 8.45e-04, 1.35e-02, 8.43e-04, 1.06e+01, 4.90e-02, 1.78e+03]    []  
1000      [5.46e-02, 8.70e-02, 1.11e-01, 5.35e-02, 3.75e-03, 4.73e-02, 9.23e-05, 2.43e-03, 7.06e-05, 4.37e-02, 4.02e-02, 7.17e-01]    [5.67e-02, 1.03e-01, 1.23e-01, 5.35e-02, 3.75e-03, 4.73e-02, 9.23e-05, 2.43e-03, 7.06e-05, 4.37e-02, 4.02e-02, 7.17e-01]    []  
2000      [1.52e-02, 1.73e-02, 5.53e-02, 1.57e-02, 5.15e-04, 6.24e-02, 9.86e-04, 3.90e-04, 3.93e-04, 2.72e-02, 1.10e-02, 2.00e-01]    [2.15e-02, 2.01e-02, 5.19e-02, 1.57e-02, 5.15e-04, 6.24e-02, 9.86e-04, 3.90e-04, 3.93e-04, 2.72e-02, 1.10e-02, 2.00e-01]    []  
3000      [4.15e-02, 1.43e-02, 6.90e-02, 1.26e-02, 4.71e-04, 6.93e-02, 1.20e-03, 9.98e-03, 1.37e-04, 2.58e-02, 1.35e-02, 1.24e-01]    [3.24e-02, 1.73e-02, 5.63e-02, 1.26e-02, 4.71e-04, 6.93e-02, 1.20e-03, 9.98e-03, 1.37e-04, 2.58e-02, 1.35e-02, 1.24e-01]    []  
4000      [6.81e-03, 5.76e-03, 7.93e-02, 1.29e-02, 4.90e-04, 6.60e-02, 2.22e-03, 5.32e-04, 1.33e-04, 2.37e-02, 1.40e-02, 5.27e-02]    [9.24e-03, 9.91e-03, 6.19e-02, 1.29e-02, 4.90e-04, 6.60e-02, 2.22e-03, 5.32e-04, 1.33e-04, 2.37e-02, 1.40e-02, 5.27e-02]    []  
5000      [4.15e-03, 4.45e-03, 8.18e-02, 1.35e-02, 4.69e-04, 6.21e-02, 3.44e-03, 1.37e-04, 9.83e-05, 2.27e-02, 1.34e-02, 3.58e-02]    [6.55e-03, 8.45e-03, 6.54e-02, 1.35e-02, 4.69e-04, 6.21e-02, 3.44e-03, 1.37e-04, 9.83e-05, 2.27e-02, 1.34e-02, 3.58e-02]    []  
6000      [3.42e-03, 4.02e-03, 7.91e-02, 1.40e-02, 4.36e-04, 5.91e-02, 4.96e-03, 1.25e-04, 1.03e-04, 2.16e-02, 1.26e-02, 2.83e-02]    [5.36e-03, 7.54e-03, 6.60e-02, 1.40e-02, 4.36e-04, 5.91e-02, 4.96e-03, 1.25e-04, 1.03e-04, 2.16e-02, 1.26e-02, 2.83e-02]    []  
7000      [3.57e-03, 3.98e-03, 7.43e-02, 1.44e-02, 4.29e-04, 5.60e-02, 6.88e-03, 7.45e-05, 9.62e-05, 2.07e-02, 1.20e-02, 2.30e-02]    [5.40e-03, 7.35e-03, 6.54e-02, 1.44e-02, 4.29e-04, 5.60e-02, 6.88e-03, 7.45e-05, 9.62e-05, 2.07e-02, 1.20e-02, 2.30e-02]    []  
8000      [4.31e-03, 3.93e-03, 6.95e-02, 1.43e-02, 4.57e-04, 5.38e-02, 8.42e-03, 1.89e-04, 7.33e-05, 2.00e-02, 1.20e-02, 1.98e-02]    [6.15e-03, 7.28e-03, 6.33e-02, 1.43e-02, 4.57e-04, 5.38e-02, 8.42e-03, 1.89e-04, 7.33e-05, 2.00e-02, 1.20e-02, 1.98e-02]    []  
9000      [5.70e-03, 4.27e-03, 6.56e-02, 1.45e-02, 5.18e-04, 5.12e-02, 9.41e-03, 7.66e-04, 7.04e-05, 1.93e-02, 1.22e-02, 1.99e-02]    [6.90e-03, 7.20e-03, 6.14e-02, 1.45e-02, 5.18e-04, 5.12e-02, 9.41e-03, 7.66e-04, 7.04e-05, 1.93e-02, 1.22e-02, 1.99e-02]    []  
10000     [4.43e-03, 3.76e-03, 6.28e-02, 1.46e-02, 5.73e-04, 4.90e-02, 1.01e-02, 5.68e-05, 3.82e-05, 1.85e-02, 1.23e-02, 1.68e-02]    [6.54e-03, 6.76e-03, 5.95e-02, 1.46e-02, 5.73e-04, 4.90e-02, 1.01e-02, 5.68e-05, 3.82e-05, 1.85e-02, 1.23e-02, 1.68e-02]    []  

Best model at step 10000:
  train loss: 1.93e-01
  test loss: 1.95e-01
  test metric: []

'train' took 85.422519 s

3.5. Train More (L-BFGS Optimizer)

In [ ]:
dde.optimizers.config.set_LBFGS_options(maxiter=3000)
model.compile("L-BFGS", loss_weights = [1, 1, 1, 1, 1, 1, 1, 1, 1, 9, 9, 9])
losshistory, train_state = model.train()
dde.saveplot(losshistory, train_state, issave = False, isplot = True)
Compiling model...
'compile' took 1.503600 s

Training model...

Step      Train loss                                                                                                                  Test loss                                                                                                                   Test metric
10000     [4.43e-03, 3.76e-03, 6.28e-02, 1.46e-02, 5.73e-04, 4.90e-02, 1.01e-02, 5.68e-05, 3.82e-05, 1.85e-02, 1.23e-02, 1.68e-02]    [6.54e-03, 6.76e-03, 5.95e-02, 1.46e-02, 5.73e-04, 4.90e-02, 1.01e-02, 5.68e-05, 3.82e-05, 1.85e-02, 1.23e-02, 1.68e-02]    []  
11000     [3.78e-03, 3.96e-03, 3.07e-02, 1.18e-02, 1.17e-03, 1.45e-02, 1.86e-02, 1.70e-05, 3.26e-05, 8.61e-03, 5.59e-03, 5.02e-03]                                                                                                                                    
12000     [3.04e-03, 2.79e-03, 8.35e-03, 8.85e-03, 4.59e-04, 1.09e-02, 6.52e-03, 3.81e-05, 1.69e-05, 1.86e-03, 1.27e-03, 1.23e-03]                                                                                                                                    
13000     [1.53e-03, 2.29e-03, 2.03e-03, 8.29e-03, 4.01e-04, 6.95e-03, 2.78e-03, 1.57e-05, 1.72e-06, 3.61e-04, 3.51e-04, 6.21e-04]                                                                                                                                    
INFO:tensorflow:Optimization terminated with:
  Message: b'STOP: TOTAL NO. of ITERATIONS REACHED LIMIT'
  Objective function value: 0.025109
  Number of iterations: 3000
  Number of functions evaluations: 3061
13061     [1.42e-03, 2.25e-03, 1.87e-03, 8.29e-03, 4.12e-04, 6.75e-03, 2.78e-03, 2.16e-05, 5.67e-06, 3.59e-04, 3.30e-04, 6.19e-04]    [5.02e-03, 4.44e-03, 2.19e-03, 8.29e-03, 4.12e-04, 6.75e-03, 2.78e-03, 2.16e-05, 5.67e-06, 3.59e-04, 3.30e-04, 6.19e-04]    []  

Best model at step 13061:
  train loss: 2.51e-02
  test loss: 3.12e-02
  test metric: []

'train' took 188.013299 s

3.6. Plot Results (Adam + L-BFGS)

In [ ]:
samples = geom.random_points(500000)
result = model.predict(samples)
color_legend = [[0, 1.5], [-0.3, 0.3], [0, 35]]

for idx in range(3):
    plt.figure(figsize = (20, 4))
    plt.scatter(samples[:, 0],
                samples[:, 1],
                c = result[:, idx],
                s = 2,
                cmap = 'jet')
    plt.colorbar()
    plt.clim(color_legend[idx])
    plt.xlim((0-L/2, L-L/2))
    plt.ylim((0-D/2, D-D/2))
    plt.tight_layout()
    plt.show()
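
As an optional sanity check (not part of the original notebook): if the data correspond to fully developed plane Poiseuille flow, the exact profile is u(y) = (3/2) u_in (1 - (2y/D)^2), with peak velocity 1.5 u_in, which also matches the u color range of [0, 1.5] used above. A minimal sketch comparing the trained model against this assumed profile near the outlet:

# Sample a vertical line just upstream of the outlet (x = L/2)
y = np.linspace(-D/2, D/2, 100).reshape(-1, 1)
x = np.full_like(y, L/2 - 0.01)
u_pred = model.predict(np.hstack([x, y]))[:, 0]

# Analytic fully developed profile (assumption: plane Poiseuille flow)
u_exact = 1.5 * u_in * (1 - (2 * y[:, 0] / D)**2)

plt.plot(y[:, 0], u_exact, 'k-', label = 'Poiseuille (analytic)')
plt.plot(y[:, 0], u_pred, 'r--', label = 'PINN prediction')
plt.xlabel('Distance from middle of plates (m)')
plt.ylabel('u (m/s)')
plt.legend()
plt.show()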
In [1]:
%%javascript
$.getScript('https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js')