PINN with Data

Fluid Mechanics Example


By Prof. Seungchul Lee
http://iai.postech.ac.kr/
Industrial AI Lab at POSTECH

Table of Contents

1. Data-driven Approach with Big Data
2. Data-driven Approach with Small Data
3. PINN with Small Data

1. Data-driven Approach with Big Data

1.1. Load and Sample Data

Fluid_bigdata Download

In [ ]:
import deepxde as dde
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
In [ ]:
from google.colab import drive
drive.mount('/content/drive/')
Drive already mounted at /content/drive/; to attempt to forcibly remount, call drive.mount("/content/drive/", force_remount=True).
In [ ]:
# columns 0-1: (x, y) coordinates; columns 2-4: (u, v, p) field values
fluid_bigdata = np.load('/content/drive/MyDrive/Colab Notebooks/data_files/fluid_bigdata.npy')

observe_x = fluid_bigdata[:, :2]
observe_y = fluid_bigdata[:, 2:]
In [ ]:
# anchor each network output (u, v, p) to the measured values at observe_x
observe_u = dde.icbc.PointSetBC(observe_x, observe_y[:, 0].reshape(-1, 1), component=0)
observe_v = dde.icbc.PointSetBC(observe_x, observe_y[:, 1].reshape(-1, 1), component=1)
observe_p = dde.icbc.PointSetBC(observe_x, observe_y[:, 2].reshape(-1, 1), component=2)

1.2. Define Parameters

In [ ]:
# Properties (non-dimensional)
rho = 1     # density
mu = 1      # dynamic viscosity
u_in = 1    # inlet velocity
D = 1       # distance between the plates
L = 2       # channel length
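
For reference, if the Reynolds number is defined with the plate spacing $D$ as the length scale (an assumption, since the notebook does not state one), these properties give

$$\mathrm{Re} = \frac{\rho\, u_{in}\, D}{\mu} = 1,$$

i.e. a laminar flow.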

1.3. Define Geometry

In [ ]:
geom = dde.geometry.Rectangle(xmin = [-L/2, -D/2], xmax = [L/2, D/2])
# pde = None: no physics residual, so the network is fit to the observed data alone
data = dde.data.PDE(geom,
                    None,
                    [observe_u, observe_v, observe_p], 
                    num_domain = 0, 
                    num_boundary = 0, 
                    num_test = 100)
Warning: 100 points required, but 120 points sampled.
In [ ]:
plt.figure(figsize = (20,4))
plt.scatter(data.train_x_all[:,0], data.train_x_all[:,1], s = 0.5)
plt.scatter(observe_x[:, 0], observe_x[:, 1], c = observe_y[:, 0], s = 6.5, cmap = 'jet')
plt.scatter(observe_x[:, 0], observe_x[:, 1], s = 0.5, color='k', alpha = 0.5)
plt.xlim((0-L/2, L-L/2))
plt.ylim((0-D/2, D-D/2))
plt.xlabel('x-direction length (m)')
plt.ylabel('Distance from middle of plates (m)')
plt.title('Velocity (u)')
plt.show()

1.4. Define Network and Hyper-parameters

In [ ]:
layer_size = [2] + [64] * 5 + [3]   # inputs (x, y), 5 hidden layers of 64, outputs (u, v, p)
activation = "tanh"
initializer = "Glorot uniform"

net = dde.maps.FNN(layer_size, activation, initializer)

model = dde.Model(data, net)
model.compile("adam", lr = 1e-3)
Compiling model...
Building feed-forward neural network...
'build' took 0.104355 s

/usr/local/lib/python3.7/dist-packages/deepxde/nn/tensorflow_compat_v1/fnn.py:110: UserWarning: `tf.layers.dense` is deprecated and will be removed in a future version. Please use `tf.keras.layers.Dense` instead.
  kernel_constraint=self.kernel_constraint,
/usr/local/lib/python3.7/dist-packages/keras/legacy_tf_layers/core.py:261: UserWarning: `layer.apply` is deprecated and will be removed in a future version. Please use `layer.__call__` method instead.
  return layer.apply(inputs)
'compile' took 0.378851 s

1.5. Train (Adam Optimizer)

In [ ]:
losshistory, train_state = model.train(epochs = 10000)
dde.saveplot(losshistory, train_state, issave = False, isplot = False)
Initializing variables...
Training model...

Step      Train loss                        Test loss                         Test metric
0         [1.17e+00, 6.43e-03, 2.02e+02]    [1.17e+00, 6.43e-03, 2.02e+02]    []  
1000      [1.83e-01, 5.70e-03, 6.76e-01]    [1.83e-01, 5.70e-03, 6.76e-01]    []  
2000      [7.91e-03, 4.33e-03, 8.52e-02]    [7.91e-03, 4.33e-03, 8.52e-02]    []  
3000      [1.68e-03, 2.46e-03, 2.26e-02]    [1.68e-03, 2.46e-03, 2.26e-02]    []  
4000      [5.00e-04, 6.59e-04, 1.58e-02]    [5.00e-04, 6.59e-04, 1.58e-02]    []  
5000      [2.51e-04, 3.36e-04, 9.71e-03]    [2.51e-04, 3.36e-04, 9.71e-03]    []  
6000      [2.00e-04, 1.78e-04, 3.42e-03]    [2.00e-04, 1.78e-04, 3.42e-03]    []  
7000      [1.65e-04, 1.25e-04, 8.15e-03]    [1.65e-04, 1.25e-04, 8.15e-03]    []  
8000      [1.24e-04, 1.07e-04, 1.52e-02]    [1.24e-04, 1.07e-04, 1.52e-02]    []  
9000      [7.33e-05, 5.47e-05, 1.16e-04]    [7.33e-05, 5.47e-05, 1.16e-04]    []  
10000     [5.68e-05, 4.45e-05, 1.30e-04]    [5.68e-05, 4.45e-05, 1.30e-04]    []  

Best model at step 10000:
  train loss: 2.31e-04
  test loss: 2.31e-04
  test metric: []

'train' took 18.869112 s

1.6. Train More (L-BFGS Optimizer)

In [ ]:
dde.optimizers.config.set_LBFGS_options(maxiter=3000)
model.compile("L-BFGS")
losshistory, train_state = model.train()
dde.saveplot(losshistory, train_state, issave = False, isplot = True)
Compiling model...
'compile' took 0.375285 s

Training model...

Step      Train loss                        Test loss                         Test metric
10000     [5.68e-05, 4.45e-05, 1.30e-04]    [5.68e-05, 4.45e-05, 1.30e-04]    []  
11000     [4.70e-06, 2.77e-06, 2.75e-05]                                          
12000     [3.22e-06, 9.92e-07, 1.41e-05]                                          
13000     [2.41e-06, 4.91e-07, 9.34e-06]                                          
INFO:tensorflow:Optimization terminated with:
  Message: b'STOP: TOTAL NO. of ITERATIONS REACHED LIMIT'
  Objective function value: 0.000011
  Number of iterations: 3000
  Number of functions evaluations: 3189
13189     [2.15e-06, 4.98e-07, 8.57e-06]    [2.15e-06, 4.98e-07, 8.57e-06]    []  

Best model at step 13189:
  train loss: 1.12e-05
  test loss: 1.12e-05
  test metric: []

'train' took 109.374331 s

1.7. Plot Results (Adam + L-BFGS)

In [ ]:
samples = geom.random_points(500000)
result = model.predict(samples)
color_legend = [[0, 1.5], [-0.3, 0.3], [0, 35]]   # color ranges for u, v, p

for idx in range(3):
    plt.figure(figsize = (20, 4))
    plt.scatter(samples[:, 0],
                samples[:, 1],
                c = result[:, idx],
                s = 2,
                cmap = 'jet')
    plt.colorbar()
    plt.clim(color_legend[idx])
    plt.xlim((0-L/2, L-L/2))
    plt.ylim((0-D/2, D-D/2))
plt.tight_layout()
plt.show()

2. Data-driven Approach with Small Data

2.1. Load and Sample Data

Fluid_smalldata Download

In [ ]:
fluid_smalldata = np.load('/content/drive/MyDrive/Colab Notebooks/data_files/fluid_smalldata.npy')

observe_x = fluid_smalldata[:, :2]
observe_y = fluid_smalldata[:, 2:]
In [ ]:
observe_u = dde.icbc.PointSetBC(observe_x, observe_y[:, 0].reshape(-1, 1), component=0)
observe_v = dde.icbc.PointSetBC(observe_x, observe_y[:, 1].reshape(-1, 1), component=1)
observe_p = dde.icbc.PointSetBC(observe_x, observe_y[:, 2].reshape(-1, 1), component=2)

2.2. Define Geometry

In [ ]:
geom = dde.geometry.Rectangle(xmin = [-L/2, -D/2], xmax = [L/2, D/2])
data = dde.data.PDE(geom,
                    None,
                    [observe_u, observe_v, observe_p], 
                    num_domain = 0, 
                    num_boundary = 0, 
                    num_test = 120)
Warning: 120 points required, but 128 points sampled.
In [ ]:
plt.figure(figsize = (20,4))
plt.scatter(data.train_x_all[:,0], data.train_x_all[:,1], s = 0.5)
plt.scatter(observe_x[:, 0], observe_x[:, 1], c = observe_y[:, 0], s = 6.5, cmap = 'jet')
plt.scatter(observe_x[:, 0], observe_x[:, 1], s = 0.5, color='k', alpha = 0.5)
plt.xlim((0-L/2, L-L/2))
plt.ylim((0-D/2, D-D/2))
plt.xlabel('x-direction length (m)')
plt.ylabel('Distance from middle of plates (m)')
plt.title('Velocity (u)')
plt.show()

2.3. Define Network and Hyper-parameters

In [ ]:
layer_size = [2] + [64] * 5 + [3]
activation = "tanh"
initializer = "Glorot uniform"

net = dde.maps.FNN(layer_size, activation, initializer)

model = dde.Model(data, net)
model.compile("adam", lr = 1e-3)
Compiling model...
Building feed-forward neural network...
'build' took 0.081095 s

/usr/local/lib/python3.7/dist-packages/deepxde/nn/tensorflow_compat_v1/fnn.py:110: UserWarning: `tf.layers.dense` is deprecated and will be removed in a future version. Please use `tf.keras.layers.Dense` instead.
  kernel_constraint=self.kernel_constraint,
/usr/local/lib/python3.7/dist-packages/keras/legacy_tf_layers/core.py:261: UserWarning: `layer.apply` is deprecated and will be removed in a future version. Please use `layer.__call__` method instead.
  return layer.apply(inputs)
'compile' took 0.365724 s

2.4. Train (Adam Optimizer)

In [ ]:
losshistory, train_state = model.train(epochs = 10000)
dde.saveplot(losshistory, train_state, issave = False, isplot = False)
Initializing variables...
Training model...

Step      Train loss                        Test loss                         Test metric
0         [1.17e+00, 9.51e-03, 1.96e+02]    [1.17e+00, 9.51e-03, 1.96e+02]    []  
1000      [1.80e-01, 5.38e-03, 1.58e-01]    [1.80e-01, 5.38e-03, 1.58e-01]    []  
2000      [1.47e-02, 4.12e-03, 7.18e-02]    [1.47e-02, 4.12e-03, 7.18e-02]    []  
3000      [1.66e-03, 3.83e-04, 8.27e-03]    [1.66e-03, 3.83e-04, 8.27e-03]    []  
4000      [2.05e-04, 9.62e-05, 6.63e-04]    [2.05e-04, 9.62e-05, 6.63e-04]    []  
5000      [1.21e-04, 4.02e-05, 4.19e-04]    [1.21e-04, 4.02e-05, 4.19e-04]    []  
6000      [6.11e-05, 2.92e-05, 6.89e-05]    [6.11e-05, 2.92e-05, 6.89e-05]    []  
7000      [3.16e-05, 1.93e-05, 5.86e-05]    [3.16e-05, 1.93e-05, 5.86e-05]    []  
8000      [2.71e-04, 5.21e-05, 1.29e-02]    [2.71e-04, 5.21e-05, 1.29e-02]    []  
9000      [1.31e-05, 1.03e-05, 3.12e-05]    [1.31e-05, 1.03e-05, 3.12e-05]    []  
10000     [1.05e-05, 7.77e-06, 2.87e-05]    [1.05e-05, 7.77e-06, 2.87e-05]    []  

Best model at step 10000:
  train loss: 4.69e-05
  test loss: 4.69e-05
  test metric: []

'train' took 13.253190 s

2.5. Train More (L-BFGS Optimizer)

In [ ]:
dde.optimizers.config.set_LBFGS_options(maxiter=3000)
model.compile("L-BFGS")
losshistory, train_state = model.train()
dde.saveplot(losshistory, train_state, issave = False, isplot = True)
Compiling model...
'compile' took 0.231557 s

Training model...

Step      Train loss                        Test loss                         Test metric
10000     [1.05e-05, 7.77e-06, 2.87e-05]    [1.05e-05, 7.77e-06, 2.87e-05]    []  
11000     [1.39e-06, 3.73e-07, 1.65e-05]                                          
12000     [5.68e-07, 4.20e-07, 1.11e-05]                                          
INFO:tensorflow:Optimization terminated with:
  Message: b'CONVERGENCE: REL_REDUCTION_OF_F_<=_FACTR*EPSMCH'
  Objective function value: 0.000008
  Number of iterations: 2784
  Number of functions evaluations: 2901
12901     [4.12e-07, 3.33e-07, 7.09e-06]    [4.12e-07, 3.33e-07, 7.09e-06]    []  

Best model at step 12901:
  train loss: 7.83e-06
  test loss: 7.83e-06
  test metric: []

'train' took 143.774919 s

2.6. Plot Results (Adam + L-BFGS)

In [ ]:
samples = geom.random_points(500000)
result = model.predict(samples)
color_legend = [[0, 1.5], [-0.3, 0.3], [0, 35]]

for idx in range(3):
    plt.figure(figsize = (20, 4))
    plt.scatter(samples[:, 0],
                samples[:, 1],
                c = result[:, idx],
                s = 2,
                cmap = 'jet')
    plt.colorbar()
    plt.clim(color_legend[idx])
    plt.xlim((0-L/2, L-L/2))
    plt.ylim((0-D/2, D-D/2))
plt.tight_layout()
plt.show()

3. PINN with Small Data

3.1. Define PDE with Boundary Conditions

In [ ]:
def boundary_wall(X, on_boundary):
    # top and bottom plates: y = -D/2 or y = D/2
    on_wall = np.logical_and(np.logical_or(np.isclose(X[1], -D/2), np.isclose(X[1], D/2)), on_boundary)
    return on_wall

def boundary_inlet(X, on_boundary):
    # inlet: x = -L/2
    return on_boundary and np.isclose(X[0], -L/2)

def boundary_outlet(X, on_boundary):
    # outlet: x = L/2
    return on_boundary and np.isclose(X[0], L/2)
In [ ]:
def pde(X, Y):
    # first derivatives of u, v, p with respect to x and y
    du_x = dde.grad.jacobian(Y, X, i = 0, j = 0)
    du_y = dde.grad.jacobian(Y, X, i = 0, j = 1)
    dv_x = dde.grad.jacobian(Y, X, i = 1, j = 0)
    dv_y = dde.grad.jacobian(Y, X, i = 1, j = 1)
    dp_x = dde.grad.jacobian(Y, X, i = 2, j = 0)
    dp_y = dde.grad.jacobian(Y, X, i = 2, j = 1)
    # second derivatives of u and v
    du_xx = dde.grad.hessian(Y, X, i = 0, j = 0, component = 0)
    du_yy = dde.grad.hessian(Y, X, i = 1, j = 1, component = 0)
    dv_xx = dde.grad.hessian(Y, X, i = 0, j = 0, component = 1)
    dv_yy = dde.grad.hessian(Y, X, i = 1, j = 1, component = 1)

    # steady incompressible Navier-Stokes: x-momentum, y-momentum, continuity
    pde_u = Y[:,0:1] * du_x + Y[:,1:2] * du_y + 1/rho * dp_x - (mu/rho) * (du_xx + du_yy)
    pde_v = Y[:,0:1] * dv_x + Y[:,1:2] * dv_y + 1/rho * dp_y - (mu/rho) * (dv_xx + dv_yy)
    pde_cont = du_x + dv_y

    return [pde_u, pde_v, pde_cont]
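
The three residuals returned above are, respectively, the x-momentum, y-momentum, and continuity equations of the steady incompressible Navier-Stokes system (momentum divided through by $\rho$):

$$u\,\frac{\partial u}{\partial x} + v\,\frac{\partial u}{\partial y} + \frac{1}{\rho}\frac{\partial p}{\partial x} - \frac{\mu}{\rho}\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right) = 0$$

$$u\,\frac{\partial v}{\partial x} + v\,\frac{\partial v}{\partial y} + \frac{1}{\rho}\frac{\partial p}{\partial y} - \frac{\mu}{\rho}\left(\frac{\partial^2 v}{\partial x^2} + \frac{\partial^2 v}{\partial y^2}\right) = 0$$

$$\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} = 0$$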

3.2. Define Geometry and Implement Boundary Condition

In [ ]:
geom = dde.geometry.Rectangle(xmin=[-L/2, -D/2], xmax=[L/2, D/2])

# no-slip condition on both plates
bc_wall_u = dde.DirichletBC(geom, lambda X: 0., boundary_wall, component = 0)
bc_wall_v = dde.DirichletBC(geom, lambda X: 0., boundary_wall, component = 1)

# uniform horizontal velocity at the inlet
bc_inlet_u = dde.DirichletBC(geom, lambda X: u_in, boundary_inlet, component = 0)
bc_inlet_v = dde.DirichletBC(geom, lambda X: 0., boundary_inlet, component = 1)

# zero gauge pressure and no vertical velocity at the outlet
bc_outlet_p = dde.DirichletBC(geom, lambda X: 0., boundary_outlet, component = 2)
bc_outlet_v = dde.DirichletBC(geom, lambda X: 0., boundary_outlet, component = 1)
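
Taken together, these Dirichlet conditions encode no-slip walls, a uniform inlet, and a zero-gauge-pressure outlet:

$$u = v = 0 \ \text{ on } y = \pm\frac{D}{2}, \qquad u = u_{in},\ v = 0 \ \text{ at } x = -\frac{L}{2}, \qquad p = 0,\ v = 0 \ \text{ at } x = \frac{L}{2}$$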
In [ ]:
data = dde.data.PDE(geom,
                    pde,
                    [bc_wall_u, bc_wall_v, bc_inlet_u, bc_inlet_v, bc_outlet_p, bc_outlet_v, observe_u, observe_v, observe_p], 
                    num_domain = 1000, 
                    num_boundary = 500, 
                    num_test = 1000,
                    train_distribution = 'LHS')
Warning: 1000 points required, but 1035 points sampled.
In [ ]:
plt.figure(figsize = (20,4))
plt.scatter(data.train_x_all[:,0], data.train_x_all[:,1], s = 0.5)
plt.scatter(observe_x[:, 0], observe_x[:, 1], c = observe_y[:, 0], s = 6.5, cmap = 'jet')
plt.scatter(observe_x[:, 0], observe_x[:, 1], s = 0.5, color='k', alpha = 0.5)
plt.xlim((0-L/2, L-L/2))
plt.ylim((0-D/2, D-D/2))
plt.xlabel('x-direction length (m)')
plt.ylabel('Distance from middle of plates (m)')
plt.title('Velocity (u)')
plt.show()

3.3. Define Network and Hyper-parameters

In [ ]:
layer_size = [2] + [64] * 5 + [3]
activation = "tanh"
initializer = "Glorot uniform"

net = dde.maps.FNN(layer_size, activation, initializer)

model = dde.Model(data, net)
model.compile("adam", lr = 1e-3, loss_weights = [1, 1, 1, 1, 1, 1, 1, 1, 1, 9, 9, 9])
Compiling model...
Building feed-forward neural network...
'build' took 0.105627 s

/usr/local/lib/python3.7/dist-packages/deepxde/nn/tensorflow_compat_v1/fnn.py:110: UserWarning: `tf.layers.dense` is deprecated and will be removed in a future version. Please use `tf.keras.layers.Dense` instead.
  kernel_constraint=self.kernel_constraint,
/usr/local/lib/python3.7/dist-packages/keras/legacy_tf_layers/core.py:261: UserWarning: `layer.apply` is deprecated and will be removed in a future version. Please use `layer.__call__` method instead.
  return layer.apply(inputs)
'compile' took 2.103200 s

3.4. Train (Adam Optimizer)

In [ ]:
losshistory, train_state = model.train(epochs = 10000)
dde.saveplot(losshistory, train_state, issave = False, isplot = False)
Initializing variables...
Training model...

Step      Train loss                                                                                                                  Test loss                                                                                                                   Test metric
0         [2.48e-02, 2.02e-01, 2.99e-04, 4.93e-02, 1.48e-02, 1.08e+00, 4.28e-02, 1.92e-02, 4.28e-02, 1.07e+01, 1.75e-01, 1.77e+03]    [1.90e-02, 2.09e-01, 2.27e-04, 4.93e-02, 1.48e-02, 1.08e+00, 4.28e-02, 1.92e-02, 4.28e-02, 1.07e+01, 1.75e-01, 1.77e+03]    []  
1000      [6.27e-02, 5.74e-02, 9.86e-02, 5.14e-02, 1.95e-03, 5.07e-02, 9.60e-04, 4.71e-03, 9.44e-05, 4.38e-02, 2.32e-02, 6.09e-01]    [6.26e-02, 6.90e-02, 1.04e-01, 5.14e-02, 1.95e-03, 5.07e-02, 9.60e-04, 4.71e-03, 9.44e-05, 4.38e-02, 2.32e-02, 6.09e-01]    []  
2000      [5.41e-02, 1.43e-02, 7.14e-02, 1.92e-02, 8.87e-04, 5.64e-02, 7.09e-04, 1.61e-03, 2.78e-04, 2.58e-02, 1.09e-02, 2.13e-01]    [4.49e-02, 1.44e-02, 6.41e-02, 1.92e-02, 8.87e-04, 5.64e-02, 7.09e-04, 1.61e-03, 2.78e-04, 2.58e-02, 1.09e-02, 2.13e-01]    []  
3000      [8.22e-03, 7.21e-03, 7.15e-02, 1.60e-02, 5.97e-04, 6.09e-02, 1.21e-03, 2.94e-04, 6.05e-05, 2.48e-02, 1.20e-02, 1.38e-01]    [1.12e-02, 9.50e-03, 5.41e-02, 1.60e-02, 5.97e-04, 6.09e-02, 1.21e-03, 2.94e-04, 6.05e-05, 2.48e-02, 1.20e-02, 1.38e-01]    []  
4000      [5.39e-03, 6.65e-03, 7.77e-02, 1.52e-02, 4.68e-04, 6.17e-02, 2.28e-03, 1.82e-04, 2.79e-05, 2.31e-02, 1.21e-02, 8.60e-02]    [8.32e-03, 1.03e-02, 5.58e-02, 1.52e-02, 4.68e-04, 6.17e-02, 2.28e-03, 1.82e-04, 2.79e-05, 2.31e-02, 1.21e-02, 8.60e-02]    []  
5000      [5.15e-03, 5.85e-03, 8.01e-02, 1.50e-02, 3.66e-04, 6.13e-02, 3.35e-03, 5.75e-04, 1.82e-05, 2.21e-02, 1.18e-02, 6.00e-02]    [7.44e-03, 1.01e-02, 5.90e-02, 1.50e-02, 3.66e-04, 6.13e-02, 3.35e-03, 5.75e-04, 1.82e-05, 2.21e-02, 1.18e-02, 6.00e-02]    []  
6000      [4.01e-03, 4.95e-03, 7.89e-02, 1.48e-02, 3.04e-04, 6.01e-02, 4.61e-03, 1.37e-04, 1.55e-05, 2.12e-02, 1.15e-02, 4.30e-02]    [5.89e-03, 9.95e-03, 6.06e-02, 1.48e-02, 3.04e-04, 6.01e-02, 4.61e-03, 1.37e-04, 1.55e-05, 2.12e-02, 1.15e-02, 4.30e-02]    []  
7000      [6.92e-03, 5.66e-03, 7.63e-02, 1.46e-02, 2.87e-04, 5.83e-02, 6.04e-03, 1.92e-04, 2.14e-05, 2.05e-02, 1.12e-02, 3.45e-02]    [8.10e-03, 1.16e-02, 6.16e-02, 1.46e-02, 2.87e-04, 5.83e-02, 6.04e-03, 1.92e-04, 2.14e-05, 2.05e-02, 1.12e-02, 3.45e-02]    []  
8000      [4.63e-03, 4.86e-03, 7.39e-02, 1.46e-02, 3.15e-04, 5.54e-02, 7.56e-03, 3.90e-04, 1.25e-05, 2.00e-02, 1.10e-02, 2.84e-02]    [6.88e-03, 1.07e-02, 6.13e-02, 1.46e-02, 3.15e-04, 5.54e-02, 7.56e-03, 3.90e-04, 1.25e-05, 2.00e-02, 1.10e-02, 2.84e-02]    []  
9000      [1.92e-02, 6.20e-03, 7.06e-02, 1.49e-02, 3.92e-04, 5.15e-02, 9.01e-03, 8.52e-03, 3.57e-05, 1.93e-02, 1.10e-02, 4.51e-02]    [1.66e-02, 1.01e-02, 6.06e-02, 1.49e-02, 3.92e-04, 5.15e-02, 9.01e-03, 8.52e-03, 3.57e-05, 1.93e-02, 1.10e-02, 4.51e-02]    []  
10000     [8.37e-03, 5.19e-03, 6.77e-02, 1.54e-02, 4.53e-04, 4.72e-02, 1.09e-02, 1.99e-03, 1.87e-05, 1.81e-02, 1.08e-02, 2.50e-02]    [1.18e-02, 1.04e-02, 5.97e-02, 1.54e-02, 4.53e-04, 4.72e-02, 1.09e-02, 1.99e-03, 1.87e-05, 1.81e-02, 1.08e-02, 2.50e-02]    []  

Best model at step 10000:
  train loss: 2.11e-01
  test loss: 2.12e-01
  test metric: []

'train' took 86.386176 s

3.5. Train More (L-BFGS Optimizer)

In [ ]:
dde.optimizers.config.set_LBFGS_options(maxiter=3000)
model.compile("L-BFGS", loss_weights = [1, 1, 1, 1, 1, 1, 1, 1, 1, 9, 9, 9])
losshistory, train_state = model.train()
dde.saveplot(losshistory, train_state, issave = False, isplot = True)
Compiling model...
'compile' took 1.195690 s

Training model...

Step      Train loss                                                                                                                  Test loss                                                                                                                   Test metric
10000     [8.37e-03, 5.19e-03, 6.77e-02, 1.54e-02, 4.53e-04, 4.72e-02, 1.09e-02, 1.99e-03, 1.87e-05, 1.81e-02, 1.08e-02, 2.50e-02]    [1.18e-02, 1.04e-02, 5.97e-02, 1.54e-02, 4.53e-04, 4.72e-02, 1.09e-02, 1.99e-03, 1.87e-05, 1.81e-02, 1.08e-02, 2.50e-02]    []  
11000     [4.29e-03, 4.54e-03, 3.02e-02, 1.00e-02, 1.68e-03, 1.98e-02, 1.47e-02, 1.79e-05, 3.44e-05, 5.96e-03, 3.51e-03, 3.83e-03]                                                                                                                                    
12000     [2.43e-03, 3.03e-03, 3.66e-03, 1.02e-02, 2.40e-04, 1.14e-02, 1.98e-03, 2.38e-05, 1.78e-05, 1.16e-03, 7.61e-04, 1.09e-03]                                                                                                                                    
13000     [1.26e-03, 8.84e-04, 1.73e-03, 1.01e-02, 1.87e-04, 7.47e-03, 1.12e-03, 2.38e-05, 1.23e-05, 3.59e-04, 1.89e-04, 3.40e-04]                                                                                                                                    
INFO:tensorflow:Optimization terminated with:
  Message: b'STOP: TOTAL NO. of ITERATIONS REACHED LIMIT'
  Objective function value: 0.023357
  Number of iterations: 3000
  Number of functions evaluations: 3064
13064     [1.18e-03, 9.02e-04, 1.71e-03, 1.01e-02, 1.73e-04, 7.31e-03, 1.11e-03, 2.88e-05, 5.30e-06, 2.75e-04, 1.96e-04, 3.31e-04]    [7.49e-03, 2.45e-03, 1.57e-03, 1.01e-02, 1.73e-04, 7.31e-03, 1.11e-03, 2.88e-05, 5.30e-06, 2.75e-04, 1.96e-04, 3.31e-04]    []  

Best model at step 13064:
  train loss: 2.34e-02
  test loss: 3.11e-02
  test metric: []

'train' took 236.184983 s

3.6. Plot Results (Adam + L-BFGS)

In [ ]:
samples = geom.random_points(500000)
result = model.predict(samples)
color_legend = [[0, 1.5], [-0.3, 0.3], [0, 35]]

for idx in range(3):
    plt.figure(figsize = (20, 4))
    plt.scatter(samples[:, 0],
                samples[:, 1],
                c = result[:, idx],
                s = 2,
                cmap = 'jet')
    plt.colorbar()
    plt.clim(color_legend[idx])
    plt.xlim((0-L/2, L-L/2))
    plt.ylim((0-D/2, D-D/2))
plt.tight_layout()
plt.show()
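
As an optional sanity check (not part of the original notebook), the predicted outlet profile can be compared against the analytic plane Poiseuille solution $u(y) = 1.5\,u_{in}\left(1 - (2y/D)^2\right)$, which assumes the flow is fully developed by the outlet; the names below (`outlet_pts`, `u_exact`) are illustrative.

In [ ]:
# sketch of a sanity check: PINN outlet profile vs. fully developed Poiseuille flow
y = np.linspace(-D/2, D/2, 101)
outlet_pts = np.column_stack([np.full_like(y, L/2), y])   # points along the outlet x = L/2

u_pred = model.predict(outlet_pts)[:, 0]                  # predicted u at the outlet
u_exact = 1.5 * u_in * (1 - (2*y/D)**2)                   # parabolic profile, max 1.5*u_in

plt.figure(figsize = (6, 4))
plt.plot(y, u_pred, label = 'PINN prediction')
plt.plot(y, u_exact, '--', label = 'Poiseuille (fully developed)')
plt.xlabel('Distance from middle of plates (m)')
plt.ylabel('u at outlet (m/s)')
plt.legend()
plt.show()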
In [ ]:
%%javascript
$.getScript('https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js')