PINN with Data

Fluid Mechanics Example


By Prof. Seungchul Lee
http://iai.postech.ac.kr/
Industrial AI Lab at POSTECH

Table of Contents

1. Data-driven Approach with Big Data
   1.1. Load and Sample Data
   1.2. Define Parameters
   1.3. Define Geometry
   1.4. Define Network and Hyper-parameters
   1.5. Train (Adam Optimizer)
   1.6. Train More (L-BFGS Optimizer)
   1.7. Plot Results (Adam + L-BFGS)

1.1. Load and Sample Data

Fluid_bigdata Download

In [ ]:
import deepxde as dde
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
Deepxde backend not selected or invalid. Assuming tensorflow.compat.v1 for now.
Using backend: tensorflow.compat.v1

Setting the default backend to "tensorflow.compat.v1". You can change it in the ~/.deepxde/config.json file or export the DDEBACKEND environment variable. Valid options are: tensorflow.compat.v1, tensorflow, pytorch, jax, paddle (all lowercase)

In [ ]:
from google.colab import drive
drive.mount('/content/drive/')
Mounted at /content/drive/
In [ ]:
fluid_bigdata = np.load('/content/drive/MyDrive/postech/KSNVE/data_files/fluid_bigdata.npy')

observe_x = fluid_bigdata[:, :2]
observe_y = fluid_bigdata[:, 2:]
In [ ]:
observe_u = dde.icbc.PointSetBC(observe_x, observe_y[:, 0].reshape(-1, 1), component=0)
observe_v = dde.icbc.PointSetBC(observe_x, observe_y[:, 1].reshape(-1, 1), component=1)
observe_p = dde.icbc.PointSetBC(observe_x, observe_y[:, 2].reshape(-1, 1), component=2)
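Each `PointSetBC` ties one output component of the network (u, v, or p) to the measured values at the observation points. The slicing above assumes the columns of `fluid_bigdata` are (x, y, u, v, p); a quick sanity check of that layout, using a tiny hypothetical stand-in array:

```python
import numpy as np

# Hypothetical two-row stand-in with the layout assumed for fluid_bigdata:
# columns (x, y, u, v, p)
fluid_bigdata = np.array([[0.0, 0.0, 1.4,  0.00, 20.0],
                          [0.5, 0.2, 1.1, -0.02, 15.0]])

observe_x = fluid_bigdata[:, :2]   # coordinates, shape (N, 2)
observe_y = fluid_bigdata[:, 2:]   # fields u, v, p, shape (N, 3)

print(observe_x.shape, observe_y.shape)      # (2, 2) (2, 3)
print(observe_y[:, 0].reshape(-1, 1).shape)  # (2, 1): column shape PointSetBC expects
```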

1.2. Define Parameters

In [ ]:
# Properties
rho = 1
mu = 1
u_in = 1
D = 1
L = 2

1.3. Define Geometry

In [ ]:
geom = dde.geometry.Rectangle(xmin=[-L/2, -D/2], xmax=[L/2, D/2])
data = dde.data.PDE(geom,
                    None,  # no PDE residual: training is purely data-driven
                    [observe_u, observe_v, observe_p],
                    num_domain=0,
                    num_boundary=0,
                    num_test=100)
Warning: 100 points required, but 120 points sampled.
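With `num_domain` and `num_boundary` set to zero, no collocation points are added to the training set; only the `num_test` evaluation points are sampled inside the rectangle. Interior sampling of a rectangle can be sketched in plain NumPy (an idealization of what `geom.random_points` does; DeepXDE's sampler may use a different pseudo-random scheme, hence warnings like the one above):

```python
import numpy as np

rng = np.random.default_rng(0)
L, D = 2.0, 1.0

# Uniform random points in the rectangle [-L/2, L/2] x [-D/2, D/2]
pts = rng.uniform(low=[-L/2, -D/2], high=[L/2, D/2], size=(1000, 2))

assert pts[:, 0].min() >= -L/2 and pts[:, 0].max() <= L/2
assert pts[:, 1].min() >= -D/2 and pts[:, 1].max() <= D/2
```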
In [ ]:
plt.figure(figsize=(20, 4))
plt.scatter(data.train_x_all[:, 0], data.train_x_all[:, 1], s=0.5)
plt.scatter(observe_x[:, 0], observe_x[:, 1], c=observe_y[:, 0], s=6.5, cmap='jet')
plt.scatter(observe_x[:, 0], observe_x[:, 1], s=0.5, color='k', alpha=0.5)
plt.xlim((-L/2, L/2))
plt.ylim((-D/2, D/2))
plt.xlabel('x-direction length (m)')
plt.ylabel('Distance from middle of plates (m)')
plt.title('Velocity (u)')
plt.show()

1.4. Define Network and Hyper-parameters

In [ ]:
layer_size = [2] + [64] * 5 + [3]
activation = "tanh"
initializer = "Glorot uniform"

net = dde.maps.FNN(layer_size, activation, initializer)

model = dde.Model(data, net)
model.compile("adam", lr=1e-3)
Compiling model...
Building feed-forward neural network...
'build' took 0.092175 s

'compile' took 3.501978 s
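The network maps an input (x, y) to an output (u, v, p) through five tanh hidden layers of width 64 and a linear output layer. Its forward pass with Glorot-uniform initialization can be sketched in plain NumPy (a simplified stand-in for illustration, not DeepXDE's actual `FNN` implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
layer_size = [2] + [64] * 5 + [3]

# Glorot-uniform weights and zero biases, mirroring the "Glorot uniform"
# initializer used above
params = []
for fan_in, fan_out in zip(layer_size[:-1], layer_size[1:]):
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    params.append((rng.uniform(-limit, limit, size=(fan_in, fan_out)),
                   np.zeros(fan_out)))

def forward(x):
    # tanh on every hidden layer, linear output layer
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

xy = np.array([[0.5, -0.2]])  # one (x, y) input point
print(forward(xy).shape)      # (1, 3): predictions for (u, v, p)
```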

1.5. Train (Adam Optimizer)

In [ ]:
losshistory, train_state = model.train(epochs=10000)
dde.saveplot(losshistory, train_state, issave=False, isplot=False)
Initializing variables...
Training model...

Step      Train loss                        Test loss                         Test metric
0         [1.18e+00, 2.81e-02, 2.01e+02]    [1.18e+00, 2.81e-02, 2.01e+02]    []  
1000      [8.32e-02, 7.61e-03, 4.87e-01]    [8.32e-02, 7.61e-03, 4.87e-01]    []  
2000      [3.26e-03, 2.92e-03, 3.97e-02]    [3.26e-03, 2.92e-03, 3.97e-02]    []  
3000      [1.28e-03, 1.67e-03, 1.85e-02]    [1.28e-03, 1.67e-03, 1.85e-02]    []  
4000      [5.72e-04, 5.96e-04, 2.62e-02]    [5.72e-04, 5.96e-04, 2.62e-02]    []  
5000      [3.22e-04, 3.20e-04, 4.66e-03]    [3.22e-04, 3.20e-04, 4.66e-03]    []  
6000      [2.47e-04, 2.22e-04, 3.97e-03]    [2.47e-04, 2.22e-04, 3.97e-03]    []  
7000      [1.68e-04, 1.28e-04, 2.89e-03]    [1.68e-04, 1.28e-04, 2.89e-03]    []  
8000      [1.32e-04, 8.76e-05, 1.68e-03]    [1.32e-04, 8.76e-05, 1.68e-03]    []  
9000      [8.23e-05, 6.97e-05, 8.99e-04]    [8.23e-05, 6.97e-05, 8.99e-04]    []  
10000     [6.16e-05, 5.54e-05, 8.38e-04]    [6.16e-05, 5.54e-05, 8.38e-04]    []  

Best model at step 10000:
  train loss: 9.55e-04
  test loss: 9.55e-04
  test metric: []

'train' took 18.581455 s

1.6. Train More (L-BFGS Optimizer)

In [ ]:
dde.optimizers.config.set_LBFGS_options(maxiter=3000)
model.compile("L-BFGS")
losshistory, train_state = model.train()
dde.saveplot(losshistory, train_state, issave=False, isplot=True)
Compiling model...
'compile' took 0.152857 s

Training model...

Step      Train loss                        Test loss                         Test metric
10000     [6.16e-05, 5.54e-05, 8.38e-04]    [6.16e-05, 5.54e-05, 8.38e-04]    []  
11000     [6.13e-06, 5.24e-06, 6.89e-05]                                          
12000     [4.42e-06, 1.44e-06, 5.44e-05]                                          
INFO:tensorflow:Optimization terminated with:
  Message: b'CONVERGENCE: REL_REDUCTION_OF_F_<=_FACTR*EPSMCH'
  Objective function value: 0.000057
  Number of iterations: 2083
  Number of functions evaluations: 2257
12257     [3.66e-06, 1.33e-06, 5.23e-05]    [3.66e-06, 1.33e-06, 5.23e-05]    []  

Best model at step 12257:
  train loss: 5.73e-05
  test loss: 5.73e-05
  test metric: []

'train' took 42.100872 s
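Adam takes cheap, noise-tolerant steps to get near a minimum; L-BFGS then uses curvature information to polish it, which is why the losses drop by roughly another order of magnitude above. The convergence message in the log appears to come from SciPy's L-BFGS-B routine. The hand-off can be illustrated on a toy quadratic loss (an illustrative sketch, not DeepXDE's internals):

```python
import numpy as np
from scipy.optimize import minimize

# Toy quadratic loss; its minimizer is w = (3, -1)
def loss(w):
    return (w[0] - 3.0) ** 2 + 10.0 * (w[1] + 1.0) ** 2

w0 = np.array([2.5, -0.8])  # pretend this rough guess came from the Adam phase
res = minimize(loss, w0, method="L-BFGS-B", options={"maxiter": 3000})
print(res.x)  # close to [3, -1]
```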

1.7. Plot Results (Adam + L-BFGS)

In [ ]:
samples = geom.random_points(500000)
result = model.predict(samples)
color_legend = [[0, 1.5], [-0.3, 0.3], [0, 35]]

for idx in range(3):
    plt.figure(figsize = (20, 4))
    plt.scatter(samples[:, 0],
                samples[:, 1],
                c = result[:, idx],
                s = 2,
                cmap = 'jet')
    plt.colorbar()
    plt.clim(color_legend[idx])
    plt.xlim((0-L/2, L-L/2))
    plt.ylim((0-D/2, D-D/2))
    plt.tight_layout()
    plt.show()
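If the data indeed come from laminar flow between parallel plates (plane Poiseuille flow, an assumption consistent with the geometry and parameters above), the fully developed analytic profile has centerline velocity 1.5 u_in, which matches the [0, 1.5] color scale used for u:

```python
import numpy as np

u_in, D = 1.0, 1.0
y = np.linspace(-D/2, D/2, 101)

# Fully developed plane Poiseuille profile with mean velocity u_in:
# parabolic, zero at the walls, 1.5 * u_in on the centerline
u = 1.5 * u_in * (1.0 - (2.0 * y / D) ** 2)

print(u.max())  # 1.5 on the centerline; u is 0 at both walls
```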