AI for Mechanical Engineering: Heat Transfer
http://iailab.kaist.ac.kr/
Industrial AI Lab at KAIST
- Fins are widely used to increase the rate of heat transfer from a wall.
- The selection of a suitable fin geometry requires a compromise among cost, weight, and other design factors.
- The heat transfer effectiveness and efficiency are determined by the fin geometry
- ex) $\eta_f = \frac{\tanh \sqrt{\bar{h}PL^2/kA}}{\sqrt{\bar{h}PL^2/kA}}$ for a fin of rectangular cross section (length $L$ and thickness $t$)
- However, for complex geometries, a numerical (FDM) calculation is required to obtain the heat transfer efficiency and effectiveness
- Goal: build and train a regression model that predicts heat transfer performance (efficiency and effectiveness) from the fin geometry parameters
- Inputs: fin parameters:
- Outer diameter of tube $(D_o)$
- Fin spacing $(\delta)$
- Fin thickness $(t)$
- Thermal conductivity of the material $(k)$
- Convective heat transfer coefficient $(h)$
- Outputs: fin heat transfer performance:
- Efficiency $(\eta)$
- Effectiveness $(\epsilon)$
- In order to dimensionally balance the equation, the input variables are converted into non-dimensional forms, which also reduces the number of input variables (a small numeric sketch follows the definitions below)
- $\delta^* = \frac{\delta}{D_o}$
- $t^* = \frac{t}{D_o}$
- $M = \frac{h}{8k}L_{exp}$
- $\eta = f_1(\delta^*, t^*, M)$
- $\epsilon = f_2(\delta^*, t^*, M)$
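As a quick illustration of this scaling (the numerical values here are hypothetical, chosen only to show the arithmetic):
# Hypothetical raw fin parameters, for illustration only (SI units)
D_o = 0.025      # outer tube diameter [m]
delta = 0.003    # fin spacing [m]
t = 0.0005       # fin thickness [m]
k = 16.7         # thermal conductivity [W/m-K]
h = 50.0         # convective coefficient [W/m^2-K]
L_exp = 0.02     # exposed length [m]

delta_star = delta / D_o     # 0.12
t_star = t / D_o             # 0.02
M = h * L_exp / (8 * k)      # ~0.0075
print(delta_star, t_star, M)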
We build regression models with two methods:
- K-Nearest Neighbor (KNN) Regression
- MLP Regression Model
1.1 K-Nearest Neighbor (KNN) Regression¶
Non-parametric method
We write our model as
$$y = f(x) + \varepsilon$$
where $\varepsilon$ captures measurement errors and other discrepancies.
Then, with a good $f$ we can make predictions of $y$ at new points $x_{\text{new}}$. One possible approach, the so-called "nearest neighbor" method, is:
$$\hat y = \text{avg} \left(y \mid x \in \mathcal{N}(x_{\text{new}}) \right)$$
where $\mathcal{N}(x)$ is some neighborhood of $x$.
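A minimal NumPy sketch of this averaging rule (Euclidean distance; the array names are hypothetical):
import numpy as np

def knn_predict(X_train, y_train, x_new, k=3):
    # distances from the query point to every training point
    d = np.linalg.norm(X_train - x_new, axis=1)
    # average the targets of the k nearest neighbors
    return y_train[np.argsort(d)[:k]].mean()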
import numpy as np
import matplotlib.pyplot as plt
from sklearn import neighbors
%matplotlib inline
Load Dataset¶
import pandas as pd
df = pd.read_csv('./data/kNN_2.csv')
df.head()
K_ss = 16.7  # thermal conductivity of stainless steel [W/m-K]

# Non-dimensional inputs: t* = t/D_o, delta* = delta/D_o, M = h*L_exp/(8k)
t_n = np.array(df['t*=t/do']).reshape(-1, 1)
del_n = np.array(df['del*=del/do']).reshape(-1, 1)
Mss = ((np.array(df['h']) * np.array(df['Lexposed'])) / (8 * K_ss)).reshape(-1, 1)
Xss = np.concatenate((t_n, del_n, Mss), 1)

# Targets: efficiency and effectiveness of the steel fin
qss = np.array(df['qss/qo (efficiency steel)']).reshape(-1, 1)
Ess = np.array(df['Ess (effectiveness steel)']).reshape(-1, 1)
Yss = np.concatenate((qss, Ess), 1)
# KNN regressors for efficiency with k = 1, 2, and 3 neighbors
Qss1 = neighbors.KNeighborsRegressor(n_neighbors=1)
Qss2 = neighbors.KNeighborsRegressor(n_neighbors=2)
Qss3 = neighbors.KNeighborsRegressor(n_neighbors=3)

# Shuffle, then hold out the last 20% as a test set
indices = np.arange(Xss.shape[0])
np.random.seed(41)
np.random.shuffle(indices)
shuffled_Xss = Xss[indices]
shuffled_Yss = Yss[indices]
N_train = int(0.8*len(shuffled_Xss))
Qss1.fit(shuffled_Xss[:N_train], shuffled_Yss[:N_train, 0:1])
Qss2.fit(shuffled_Xss[:N_train], shuffled_Yss[:N_train, 0:1])
Qss3.fit(shuffled_Xss[:N_train], shuffled_Yss[:N_train, 0:1])
yp1 = Qss1.predict(shuffled_Xss)
yp2 = Qss2.predict(shuffled_Xss)
yp3 = Qss3.predict(shuffled_Xss)
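As an optional quantitative check (not in the original notebook), we can compare the held-out test MAE for each k:
from sklearn.metrics import mean_absolute_error

for k, knn in [(1, Qss1), (2, Qss2), (3, Qss3)]:
    mae_k = mean_absolute_error(shuffled_Yss[N_train:, 0:1],
                                knn.predict(shuffled_Xss[N_train:]))
    print(f'k={k}: test MAE = {mae_k:.4f}')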
# Query the trained models at one random point inside the input ranges
t_ran = np.array(np.random.uniform(np.min(t_n), np.max(t_n))).reshape(-1, 1)
del_ran = np.array(np.random.uniform(np.min(del_n), np.max(del_n))).reshape(-1, 1)
M_ran = np.array(np.random.uniform(np.min(Mss), np.max(Mss))).reshape(-1, 1)
TEST_X = np.concatenate((t_ran, del_ran, M_ran), 1)
TEST_Y1 = Qss1.predict(TEST_X)
TEST_Y2 = Qss2.predict(TEST_X)
TEST_Y3 = Qss3.predict(TEST_X)
plt.figure(figsize=(15, 4), dpi=100)
plt.subplot(1, 3, 1)
plt.title('k-Nearest Neighbor Regression N=1')
plt.plot([0, 1.2], [0, 1.2], color='k', linestyle='--', alpha=0.7, zorder=0)
plt.scatter(yp1[:N_train], shuffled_Yss[:N_train, 0:1], color='gray', edgecolors='k', linewidths=0.5, alpha=0.7, label='Train')
plt.scatter(yp1[N_train:], shuffled_Yss[N_train:, 0:1], color='r', edgecolors='k', linewidths=0.5, alpha=0.7, label='Test')
plt.axis('square')
plt.xlim(0, 1.2)
plt.ylim(0, 1.2)
plt.xlabel('Efficiency $\\eta_{pred}$')
plt.ylabel('Efficiency $\\eta_{GT}$')
plt.legend()
plt.subplot(1, 3, 2)
plt.title('k-Nearest Neighbor Regression N=2')
plt.plot([0, 1.2], [0, 1.2], color='k', linestyle='--', alpha=0.7, zorder=0)
plt.scatter(yp2[:N_train], shuffled_Yss[:N_train, 0:1], color='gray', edgecolors='k', linewidths=0.5, alpha=0.7, label='Train')
plt.scatter(yp2[N_train:], shuffled_Yss[N_train:, 0:1], color='r', edgecolors='k', linewidths=0.5, alpha=0.7, label='Test')
plt.axis('square')
plt.xlim(0, 1.2)
plt.ylim(0, 1.2)
plt.xlabel('Efficiency $\\eta_{pred}$')
plt.ylabel('Efficiency $\\eta_{GT}$')
plt.legend()
plt.subplot(1, 3, 3)
plt.title('k-Nearest Neighbor Regression N=3')
plt.plot([0, 1.2], [0, 1.2], color='k', linestyle='--', alpha=0.7, zorder=0)
plt.scatter(yp3[:N_train], shuffled_Yss[:N_train, 0:1], color='gray', edgecolors='k', linewidths=0.5, alpha=0.7, label='Train')
plt.scatter(yp3[N_train:], shuffled_Yss[N_train:, 0:1], color='r', edgecolors='k', linewidths=0.5, alpha=0.7, label='Test')
plt.axis('square')
plt.xlim(0, 1.2)
plt.ylim(0, 1.2)
plt.xlabel('Efficiency $\\eta_{pred}$')
plt.ylabel('Efficiency $\\eta_{GT}$')
plt.legend()
plt.show()
Ess1 = neighbors.KNeighborsRegressor(n_neighbors=1)
Ess2 = neighbors.KNeighborsRegressor(n_neighbors=2)
Ess3 = neighbors.KNeighborsRegressor(n_neighbors=3)
Ess1.fit(shuffled_Xss[:N_train], shuffled_Yss[:N_train, 1:2])
Ess2.fit(shuffled_Xss[:N_train], shuffled_Yss[:N_train, 1:2])
Ess3.fit(shuffled_Xss[:N_train], shuffled_Yss[:N_train, 1:2])
yp1 = Ess1.predict(shuffled_Xss)
yp2 = Ess2.predict(shuffled_Xss)
yp3 = Ess3.predict(shuffled_Xss)
t_ran = np.array(np.random.uniform(np.min(t_n), np.max(t_n))).reshape(-1, 1)
del_ran = np.array(np.random.uniform(np.min(del_n), np.max(del_n))).reshape(-1, 1)
M_ran = np.array(np.random.uniform(np.min(Mss), np.max(Mss))).reshape(-1, 1)
TEST_X = np.concatenate((t_ran, del_ran, M_ran), 1)
TEST_Y1 = Ess1.predict(TEST_X)
TEST_Y2 = Ess2.predict(TEST_X)
TEST_Y3 = Ess3.predict(TEST_X)
plt.figure(figsize=(15, 4), dpi=100)
plt.subplot(1, 3, 1)
plt.title('k-Nearest Neighbor Regression N=1')
plt.plot([0, 110], [0, 110], color='k', linestyle='--', alpha=0.7, zorder=0)
plt.scatter(yp1[:N_train], shuffled_Yss[:N_train, 1:2], color='gray', edgecolors='k', linewidths=0.5, alpha=0.7, label='Train')
plt.scatter(yp1[N_train:], shuffled_Yss[N_train:, 1:2], color='r', edgecolors='k', linewidths=0.5, alpha=0.7, label='Test')
plt.axis('square')
plt.xlim(0, 110)
plt.ylim(0, 110)
plt.xlabel('Effectiveness $\\epsilon_{pred}$')
plt.ylabel('Effectiveness $\\epsilon_{GT}$')
plt.legend()
plt.subplot(1, 3, 2)
plt.title('k-Nearest Neighbor Regression N=2')
plt.plot([0, 110], [0, 110], color='k', linestyle='--', alpha=0.7, zorder=0)
plt.scatter(yp2[:N_train], shuffled_Yss[:N_train, 1:2], color='gray', edgecolors='k', linewidths=0.5, alpha=0.7, label='Train')
plt.scatter(yp2[N_train:], shuffled_Yss[N_train:, 1:2], color='r', edgecolors='k', linewidths=0.5, alpha=0.7, label='Test')
plt.axis('square')
plt.xlim(0, 110)
plt.ylim(0, 110)
plt.xlabel('Effectiveness $\\epsilon_{pred}$')
plt.ylabel('Effectiveness $\\epsilon_{GT}$')
plt.legend()
plt.subplot(1, 3, 3)
plt.title('k-Nearest Neighbor Regression N=3')
plt.plot([0, 110], [0, 110], color='k', linestyle='--', alpha=0.7, zorder=0)
plt.scatter(yp3[:N_train], shuffled_Yss[:N_train, 1:2], color='gray', edgecolors='k', linewidths=0.5, alpha=0.7, label='Train')
plt.scatter(yp3[N_train:], shuffled_Yss[N_train:, 1:2], color='r', edgecolors='k', linewidths=0.5, alpha=0.7, label='Test')
plt.axis('square')
plt.xlim(0, 110)
plt.ylim(0, 110)
plt.xlabel('Effectiveness $\\epsilon_{pred}$')
plt.ylabel('Effectiveness $\\epsilon_{GT}$')
plt.legend()
plt.show()
1.2 MLP Regression Model¶
import tensorflow as tf
model1 = tf.keras.models.Sequential([
tf.keras.layers.Dense(128, input_dim=3, activation='relu'),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(32, activation='relu'),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(1),
])
model1.summary()
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train = scaler.fit_transform(shuffled_Xss[:N_train])
X_test = scaler.transform(shuffled_Xss[N_train:])
optimizer1 = tf.keras.optimizers.Adam(learning_rate=0.001)
model1.compile(optimizer=optimizer1,
loss='mse',
metrics=['mae'])
model1.fit(X_train, shuffled_Yss[:N_train, 0:1], epochs=500, batch_size=32)
loss, mae = model1.evaluate(X_test, shuffled_Yss[N_train:, 0:1])
print(f'Test Loss: {loss}')
print(f'Test MAE: {mae}')
import tensorflow as tf
model2 = tf.keras.models.Sequential([
tf.keras.layers.Dense(128, input_dim=3, activation='relu'),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(32, activation='relu'),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(1),
])
model2.summary()
optimizer2 = tf.keras.optimizers.Adam(learning_rate=0.001)
model2.compile(optimizer=optimizer2,
loss='mse',
metrics=['mae'])
model2.fit(X_train, shuffled_Yss[:N_train, 1:2], epochs=500, batch_size=32)
loss2, mae2 = model2.evaluate(X_test, shuffled_Yss[N_train:, 1:2])
print(f'Test Loss: {loss2}')
print(f'Test MAE: {mae2}')
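Training for a fixed 500 epochs can over- or under-train. A common variant (a sketch only, not part of the original run; model3 is a hypothetical copy) holds out part of the training data for validation and stops early when the validation loss plateaus:
# Sketch: early stopping on a validation split
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=50,
                                              restore_best_weights=True)
model3 = tf.keras.models.clone_model(model2)  # same architecture, fresh weights
model3.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss='mse', metrics=['mae'])
model3.fit(X_train, shuffled_Yss[:N_train, 1:2],
           epochs=500, batch_size=32,
           validation_split=0.2, callbacks=[early_stop])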
yp1 = model1.predict(np.concatenate((X_train, X_test), 0))
yp2 = model2.predict(np.concatenate((X_train, X_test), 0))
plt.figure(figsize=(10, 4), dpi=100)
plt.subplot(1, 2, 1)
plt.title('MLP Regression of Efficiency')
plt.plot([0, 1.2], [0, 1.2], color='k', linestyle='--', alpha=0.7, zorder=0)
plt.scatter(yp1[:N_train, 0:1], shuffled_Yss[:N_train, 0:1], color='gray', edgecolors='k', linewidths=0.5, alpha=0.7, label='Train')
plt.scatter(yp1[N_train:, 0:1], shuffled_Yss[N_train:, 0:1], color='r', edgecolors='k', linewidths=0.5, alpha=0.7, label='Test')
plt.text(0.6, 0.05, 'Test MAE: {:.4f}'.format(mae), fontsize=12)
plt.axis('square')
plt.xlim(0, 1.2)
plt.ylim(0, 1.2)
plt.xlabel('Efficiency $\\eta_{pred}$')
plt.ylabel('Efficiency $\\eta_{GT}$')
plt.legend()
plt.subplot(1, 2, 2)
plt.title('MLP Regression of Effectiveness')
plt.plot([0, 120], [0, 120], color='k', linestyle='--', alpha=0.7, zorder=0)
plt.scatter(yp2[:N_train, 0:1], shuffled_Yss[:N_train, 1:2], color='gray', edgecolors='k', linewidths=0.5, alpha=0.7, label='Train')
plt.scatter(yp2[N_train:, 0:1], shuffled_Yss[N_train:, 1:2], color='r', edgecolors='k', linewidths=0.5, alpha=0.7, label='Test')
plt.text(60, 5, 'Test MAE: {:.4f}'.format(mae2), fontsize=12)
plt.axis('square')
plt.xlim(0, 120)
plt.ylim(0, 120)
plt.xlabel('Effectiveness $\\epsilon_{pred}$')
plt.ylabel('Effectiveness $\\epsilon_{GT}$')
plt.legend()
plt.show()
2. CNN-based Regression Model¶
2.1 Recap: Effective Thermal Conductivity¶
- Effective thermal conductivity $(k_{eff})$ represents the heat transfer characteristics of heterogeneous materials such as composites or porous materials
- These materials are widely used as wall insulation
- In thermodynamics class, we learned how to predict the effective thermal conductivity of materials as a function of temperature
- In porous materials, $k_{eff}$ varies with the distribution and size of the pores
- However, running a simulation every time the structure of the porous material changes is very time-consuming
- Therefore, in this problem, we implement a CNN-based regression model to predict the effective thermal conductivity from the pore distribution of the porous material
- This model immediately computes the effective thermal conductivity of materials with arbitrary structures
import tensorflow as tf
import keras
from tensorflow.keras import layers, models
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
import os
import csv
Load Dataset¶
# Pre-split dataset: 128x128 porous-structure images and their k_eff labels
train_x = np.load('./data/Eff_train_x.npy')
train_y = np.load('./data/Eff_train_y.npy')
test_x = np.load('./data/Eff_test_x.npy')
test_y = np.load('./data/Eff_test_y.npy')
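Before building the model, it helps to confirm the array shapes (the stored layout is an assumption; the Conv2D layer below expects a trailing channel axis):
print('train_x:', train_x.shape, 'train_y:', train_y.shape)
print('test_x:', test_x.shape, 'test_y:', test_y.shape)
if train_x.ndim == 3:  # add the channel axis if the arrays were saved without it
    train_x, test_x = train_x[..., np.newaxis], test_x[..., np.newaxis]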
plt.figure(figsize=(4, 4))
plt.imshow(train_x[3].squeeze(), cmap='gray')
plt.show()
Build Model¶
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(8, (3, 3), activation='relu', input_shape=(128, 128, 1)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(16, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1024, activation='relu'),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(1)
])
model.summary()
from tensorflow.keras import optimizers
optimizer = optimizers.Adam(learning_rate=0.001)
model.compile(optimizer=optimizer,
loss='mean_absolute_percentage_error',
metrics=['mean_absolute_percentage_error'])
model.fit(train_x, train_y, batch_size = 50, epochs = 200)
pred = model.predict(test_x)
plt.figure(dpi=100)
plt.scatter(pred, test_y, color='forestgreen', edgecolor='k', linewidths=0.5, alpha=0.5)
plt.plot([0, 360], [0, 360], color='k', linestyle='--', alpha=0.7)
plt.axis('square')
plt.xlabel('k$_{eff}$ Pred')
plt.ylabel('k$_{eff}$ GT')
plt.xlim(0, 360)
plt.ylim(0, 360)
plt.show()
plt.figure(figsize=(8, 8), dpi=100)
for i, idx in enumerate([4, 5, 6, 7]):
    pred = model.predict(test_x[[idx]])
    plt.subplot(2, 2, i+1)
    plt.title('GT: {:.2f}, Pred: {:.2f}'.format(test_y[idx], float(pred[0, 0])), fontsize=14)
    plt.imshow(test_x[idx].squeeze(), cmap='gray')  # show the test image that was predicted
plt.show()
3. Physics-informed Neural Network (PINN)¶
3.1 Recap: Finite Difference Method¶
The conventional method iteratively predicts the temperature using the overall heat balance on each control volume
Rate of conduction into the control volume = $-k \left. \frac{\partial T}{\partial x}\right|_{left} \Delta y - k \left. \frac{\partial T}{\partial y}\right|_{bottom} \Delta x$
Rate of conduction out the control volume = $-k \left. \frac{\partial T}{\partial x}\right|_{right} \Delta y - k \left. \frac{\partial T}{\partial y}\right|_{top} \Delta x$
$\frac{T_{i+1,j}-2T_{i,j}+T_{i-1,j}}{\Delta x^2} + \frac{T_{i,j+1}-2T_{i,j}+T_{i,j-1}}{\Delta y^2} = 0$
Solving for $T_{i,j}$ when $\Delta x = \Delta y$, we have $T_{i,j}=\frac{1}{4}\left(T_{i+1,j}+T_{i-1,j}+T_{i,j+1}+T_{i,j-1} \right)$
For example, the temperatures at the internal nodes of a $4\times 4$ grid can be calculated with these equations (the short Gauss-Seidel iteration after this list confirms the solution)
- $T_{2,2}=\frac{1}{4}\left(50+0+T_{2,3}+T_{3,2} \right)$
- $T_{2,3}=\frac{1}{4}\left(50+0+T_{2,2}+T_{3,3} \right)$
- $T_{3,2}=\frac{1}{4}\left(100+0+T_{2,2}+T_{3,3} \right)$
- $T_{3,3}=\frac{1}{4}\left(100+0+T_{2,3}+T_{3,2} \right)$
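A minimal sketch of that iteration, using the boundary values from the bullets above:
# Gauss-Seidel sweep over the four interior nodes of the 4x4 example
T22 = T23 = T32 = T33 = 0.0
for _ in range(100):
    T22 = 0.25 * (50 + 0 + T23 + T32)
    T23 = 0.25 * (50 + 0 + T22 + T33)
    T32 = 0.25 * (100 + 0 + T22 + T33)
    T33 = 0.25 * (100 + 0 + T23 + T32)
print(T22, T23, T32, T33)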
However, this method 1) cannot evaluate the temperature between grid nodes and 2) is difficult to apply to complex domains.
Therefore, we apply a PINN to solve 2D heat conduction problems
# !pip install deepxde
import deepxde as dde
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import time
Numerical Analysis¶
n = 100          # interior grid points per direction
l = 1.           # half-width of the square domain [-l, l] x [-l, l]
r = 2*l/(n+1)    # grid spacing
T = np.zeros([n*n, n*n])   # coefficient matrix of the linear system
bc = {
    "x=-l": 50.,
    "x=+l": 100.,
    "y=-l": 0.,
    "y=+l": 0.
}
computation_time = {}
B = np.zeros([n, n])       # right-hand side (boundary contributions)
k = 0
for i in range(n):
    x = i * r
    for j in range(n):
        y = j * r
        M = np.zeros([n, n])
        M[i, j] = -4
        if i != 0:
            M[i-1, j] = 1
        else:
            B[i, j] += -bc["y=-l"]  # Dirichlet b.c. at y = -l
        if i != n-1:
            M[i+1, j] = 1
        else:
            B[i, j] += -bc["y=+l"]  # Dirichlet b.c. at y = +l
        if j != 0:
            M[i, j-1] = 1
        else:
            B[i, j] += -bc["x=-l"]  # Dirichlet b.c. at x = -l
        if j != n-1:
            M[i, j+1] = 1
        else:
            B[i, j] += -bc["x=+l"]  # Dirichlet b.c. at x = +l
        # B[i, j] += -r**2 * q(x, y) * K(x, y)  # (optional source term, unused here)
        m = np.reshape(M, (1, n**2))
        T[k, :] = m
        k += 1
b = np.reshape(B, (n**2, 1))
start = time.time()
T = np.linalg.solve(T, b)  # solve the linear system (faster and stabler than inverting)
T = T.reshape([n, n])
end = time.time()
computation_time["fdm"] = end - start
print(f"\ncomputation time: {end-start:.3f}\n")
plt.figure()
x = np.linspace(-1, 1, 100)
y = np.linspace(-1, 1, 100)
x, y = np.meshgrid(x, y)
plt.pcolormesh(x, y, T, cmap='magma')
plt.colorbar()
plt.title('FDM')
plt.xlabel('x')
plt.ylabel('y')
plt.xlim(-1, 1)
plt.ylim(-1, 1)
plt.tight_layout()
plt.axis("square")
plt.show()
3.2 Physics-informed Neural Network for Square Domain¶
xmin, xmax = -1, 1
ymin, ymax = -1, 1
k = 1  # thermal conductivity

def pde(x, y):
    # Residual of the steady heat equation: k * (T_xx + T_yy) = 0
    dT_xx = dde.grad.hessian(y, x, i = 0, j = 0, component = 0)
    dT_yy = dde.grad.hessian(y, x, i = 1, j = 1, component = 0)
    return k*(dT_xx + dT_yy)
def boundary_left(x, on_boundary):
return on_boundary and np.isclose(x[0], xmin)
def boundary_right(x, on_boundary):
return on_boundary and np.isclose(x[0], xmax)
def boundary_bottom(x, on_boundary):
return on_boundary and np.isclose(x[1], ymin)
def boundary_top(x, on_boundary):
return on_boundary and np.isclose(x[1], ymax)
geom = dde.geometry.Rectangle([xmin, ymin], [xmax, ymax])

# Temperatures are scaled by the maximum (100) so the network output stays O(1);
# predictions are multiplied by 100 afterwards
bc_l = dde.DirichletBC(geom, lambda x: 50/100, boundary_left)
bc_r = dde.DirichletBC(geom, lambda x: 1, boundary_right)
bc_b = dde.DirichletBC(geom, lambda x: 0, boundary_bottom)
bc_t = dde.DirichletBC(geom, lambda x: 0, boundary_top)
data = dde.data.PDE(geom,
pde,
[bc_l, bc_r, bc_b, bc_t],
num_domain=10000,
num_boundary=100,
num_test=1000)
plt.figure()
plt.scatter(data.train_x_all[:,0], data.train_x_all[:,1], s = 0.5)
plt.xlabel('x-direction length (m)')
plt.ylabel('y-direction length (m)')
plt.axis('square')
plt.show()
layer_size = [2] + [64] * 5 + [1]
activation = "tanh"
initializer = "Glorot uniform"
net = dde.maps.FNN(layer_size, activation, initializer)
model = dde.Model(data, net)
model.compile("adam", lr=1e-3)
losshistory, train_state = model.train(epochs = 10000)
dde.saveplot(losshistory, train_state, issave = False, isplot = True)
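Optionally (a common DeepXDE refinement, not part of the original run), the Adam solution can be polished with L-BFGS for a lower final loss:
# Switch optimizers and continue training from the current weights
model.compile("L-BFGS")
losshistory, train_state = model.train()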
x = np.linspace(-1, 1, 100)
y = np.linspace(-1, 1, 100)
x, y = np.meshgrid(x, y)
samples = np.hstack([x.reshape(-1, 1), y.reshape(-1, 1)])
result = model.predict(samples)*100  # rescale back to physical temperature
plt.figure(figsize=(12, 3))
plt.subplot(1, 3, 1)
plt.pcolormesh(x, y, T, cmap='magma')
plt.title('FDM')
plt.colorbar()
plt.xlabel('x')
plt.ylabel('y')
plt.xlim(-1, 1)
plt.ylim(-1, 1)
# plt.tight_layout()
plt.axis("square")
plt.subplot(1, 3, 2)
plt.pcolormesh(x, y, result.reshape(100, 100), cmap='magma')
plt.title('PINN')
plt.colorbar()
plt.xlabel('x')
plt.ylabel('y')
plt.xlim(-1, 1)
plt.ylim(-1, 1)
# plt.tight_layout()
plt.axis("square")
plt.subplot(1, 3, 3)
plt.pcolormesh(x, y, np.abs(T - result.reshape(100, 100)), cmap='magma')
plt.title('Error')
plt.colorbar()
plt.clim(0, 10)
plt.xlabel('x')
plt.ylabel('y')
plt.xlim(-1, 1)
plt.ylim(-1, 1)
plt.axis("square")
plt.tight_layout()
plt.show()
x = np.linspace(-1, 1, 100).reshape(-1, 1)
y = -0.2 * np.ones((100, 1))
line = np.concatenate((x, y), 1)
plt.figure(figsize=(3, 3))
result = model.predict(line)*100
plt.title('y = -0.2', fontsize=11)
plt.plot(x, T[40, :], color='b', linewidth=2, label='FDM')  # FDM grid row closest to y = -0.2
plt.plot(x, result, linestyle='--', color='r', linewidth=2, label='PINN')
plt.xlabel('x')
plt.ylabel('T')
plt.xlim(-1, 1)
plt.ylim(0, 100)
plt.legend()
plt.show()
3.3 Physics-informed Neural Network for Disk Domain¶
- Cylindrical coordinates are widely used in various industrial systems
- ex) pipes, wires, heat exchanger shells, reactors, etc.
- However, the numerical analysis of such systems is complex
- Complex energy balance equation for the control volume
- Complex treatment of irregular boundary conditions
r_in = 0.25
r_out = 1.0
k = 1
def pde(x, y):
dT_xx = dde.grad.hessian(y, x, i = 0, j = 0, component = 0)
dT_yy = dde.grad.hessian(y, x, i = 1, j = 1, component = 0)
return k*(dT_xx + dT_yy)
def boundary_inner(x, on_boundary):
return on_boundary and np.isclose(x[0]**2 + x[1]**2, r_in**2)
def boundary_outer(x, on_boundary):
return on_boundary and np.isclose(x[0]**2 + x[1]**2, r_out**2)
# Annular domain: outer disk of radius 1 minus inner disk of radius 0.25
geom = dde.geometry.Disk([0, 0], r_out) - dde.geometry.Disk([0, 0], r_in)
# Temperatures scaled by 50, so the inner wall (50) maps to 1 and the outer wall to 0
bc_i = dde.DirichletBC(geom, lambda x: 50/50, boundary_inner)
bc_o = dde.DirichletBC(geom, lambda x: 0, boundary_outer)
data = dde.data.PDE(geom,
pde,
[bc_i, bc_o],
num_domain=7000,
num_boundary=1000,
num_test=10000)
plt.figure(dpi=100)
plt.scatter(data.train_x_all[:,0], data.train_x_all[:,1], s = 0.5)
plt.xlabel('x-direction length (m)')
plt.ylabel('y-direction length (m)')
plt.axis('square')
plt.show()
layer_size = [2] + [64] * 5 + [1]
activation = "tanh"
initializer = "Glorot uniform"
net = dde.maps.FNN(layer_size, activation, initializer)
model = dde.Model(data, net)
model.compile("adam", lr=1e-3)
losshistory, train_state = model.train(epochs = 3000)
dde.saveplot(losshistory, train_state, issave = False, isplot = True)
test = data.test_points()
result = model.predict(test)*50  # rescale back to physical temperature
plt.figure(dpi=100)
plt.scatter(test[:, 0], test[:, 1], c=result, cmap='magma')
plt.title('PINN')
plt.colorbar()
plt.xlabel('x')
plt.ylabel('y')
plt.xlim(-1, 1)
plt.ylim(-1, 1)
plt.axis("square")
# plt.tight_layout()
plt.show()
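Because steady conduction in an annulus has the closed-form solution $T(r) = T_{in} \ln(r_{out}/r)/\ln(r_{out}/r_{in})$ for these boundary conditions, we can check the PINN against it (an added verification, not in the original notebook):
# Analytic logarithmic solution for the annulus: T(r_in) = 50, T(r_out) = 0
r = np.sqrt(test[:, 0]**2 + test[:, 1]**2)
T_exact = 50 * np.log(r_out / r) / np.log(r_out / r_in)
print('max |PINN - analytic|:', np.max(np.abs(result.flatten() - T_exact)))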
4. Thermal Fluid Dynamics¶
- Fluid flow and heat transfer are often interconnected phenomena in engineering and scientific applications
- ex) cooling systems, power plants, heat exchangers, etc.
- It is important to solve the fluid flow (Navier-Stokes equations) and heat transfer (convection equation) simultaneously
Load Dataset¶
import deepxde as dde
import numpy as np
import matplotlib.pyplot as plt
cor_fluid = np.loadtxt('./data/Convection/xy_cor.txt')
u_fluid = np.loadtxt('./data/Convection/u.txt')
v_fluid = np.loadtxt('./data/Convection/v.txt')
p_fluid = np.loadtxt('./data/Convection/p.txt')
T_fluid = np.loadtxt('./data/Convection/T.txt') - 273.15
CFD_results = [u_fluid, v_fluid, p_fluid, T_fluid]
u_fluid_max, u_fluid_min = np.max(u_fluid), np.min(u_fluid)
v_fluid_max, v_fluid_min = np.max(v_fluid), np.min(v_fluid)
p_fluid_max, p_fluid_min = np.max(p_fluid), np.min(p_fluid)
T_fluid_max, T_fluid_min = np.max(T_fluid), np.min(T_fluid)
# Properties
rho = 1
mu = 0.01
k_fluid = 0.1  # thermal conductivity
cp = 1         # specific heat capacity
thermal_diffusivity = k_fluid / (rho * cp)

# Inlet and geometry parameters
u_in = 1  # inlet velocity
T = 1     # inlet temperature (non-dimensional)
D = 1     # channel height
L = 2     # channel length
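A quick check (added here, not in the original notebook) of the flow regime these properties imply:
Re = rho * u_in * D / mu                 # Reynolds number = 100 (laminar)
Pe = u_in * D / thermal_diffusivity      # Peclet number = 10
print(f'Re = {Re:.0f}, Pe = {Pe:.0f}')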
def boundary_wall(X, on_boundary):
on_wall = np.logical_and(np.logical_or(np.isclose(X[1], -D/2),
np.isclose(X[1], D/2)), on_boundary)
return on_wall
def boundary_inlet(X, on_boundary):
return on_boundary and np.isclose(X[0], -L/2)
def boundary_outlet(X, on_boundary):
return on_boundary and np.isclose(X[0], L/2)
def pde(X, Y):
du_x = dde.grad.jacobian(Y, X, i = 0, j = 0)
du_y = dde.grad.jacobian(Y, X, i = 0, j = 1)
dv_x = dde.grad.jacobian(Y, X, i = 1, j = 0)
dv_y = dde.grad.jacobian(Y, X, i = 1, j = 1)
dp_x = dde.grad.jacobian(Y, X, i = 2, j = 0)
dp_y = dde.grad.jacobian(Y, X, i = 2, j = 1)
dT_x = dde.grad.jacobian(Y, X, i = 3, j = 0)
dT_y = dde.grad.jacobian(Y, X, i = 3, j = 1)
du_xx = dde.grad.hessian(Y, X, i = 0, j = 0, component = 0)
du_yy = dde.grad.hessian(Y, X, i = 1, j = 1, component = 0)
dv_xx = dde.grad.hessian(Y, X, i = 0, j = 0, component = 1)
dv_yy = dde.grad.hessian(Y, X, i = 1, j = 1, component = 1)
dT_xx = dde.grad.jacobian(dT_x, X, i = 0, j = 0)
dT_yy = dde.grad.jacobian(dT_y, X, i = 0, j = 1)
pde_u = Y[:,0:1]*du_x + Y[:,1:2]*du_y + 1/rho * dp_x - (mu/rho)*(du_xx + du_yy)
pde_v = Y[:,0:1]*dv_x + Y[:,1:2]*dv_y + 1/rho * dp_y - (mu/rho)*(dv_xx + dv_yy)
pde_cont = du_x + dv_y
pde_T = (Y[:,0:1]*dT_x + Y[:,1:2]*dT_y) - thermal_diffusivity * (dT_xx + dT_yy)
return [pde_u, pde_v, pde_cont, pde_T]
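For reference, the four residuals returned above correspond to the steady incompressible Navier-Stokes equations (x- and y-momentum), the continuity equation, and the energy (convection-diffusion) equation, with $\alpha = k/(\rho c_p)$:
$$\begin{aligned} u\,u_x + v\,u_y &= -\tfrac{1}{\rho}\,p_x + \tfrac{\mu}{\rho}\left(u_{xx} + u_{yy}\right) \\ u\,v_x + v\,v_y &= -\tfrac{1}{\rho}\,p_y + \tfrac{\mu}{\rho}\left(v_{xx} + v_{yy}\right) \\ u_x + v_y &= 0 \\ u\,T_x + v\,T_y &= \alpha\left(T_{xx} + T_{yy}\right) \end{aligned}$$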
geom = dde.geometry.Rectangle(xmin=[-L/2, -D/2], xmax=[L/2, D/2])
bc_wall_u = dde.DirichletBC(geom, lambda X: 0., boundary_wall, component = 0)
bc_wall_v = dde.DirichletBC(geom, lambda X: 0., boundary_wall, component = 1)
bc_wall_T = dde.DirichletBC(geom, lambda X: 0., boundary_wall, component = 3)
bc_inlet_u = dde.DirichletBC(geom, lambda X: u_in, boundary_inlet, component = 0)
bc_inlet_v = dde.DirichletBC(geom, lambda X: 0., boundary_inlet, component = 1)
bc_inlet_T = dde.DirichletBC(geom, lambda X: T, boundary_inlet, component = 3)
bc_outlet_p = dde.DirichletBC(geom, lambda X: 0., boundary_outlet, component = 2)
bc_outlet_v = dde.DirichletBC(geom, lambda X: 0., boundary_outlet, component = 1)
bc_outlet_T = dde.NeumannBC(geom, lambda X: 0., boundary_outlet, component = 3)
data = dde.data.PDE(geom,
                    pde,
                    [bc_wall_u, bc_wall_v, bc_wall_T, bc_inlet_u, bc_inlet_v, bc_inlet_T, bc_outlet_p, bc_outlet_v, bc_outlet_T],
                    num_domain = 2000,
                    num_boundary = 200,
                    num_test = 100)
plt.figure(figsize = (6, 4))
plt.scatter(data.train_x_all[:,0], data.train_x_all[:,1], s = 0.5)
plt.xlabel('x-direction length')
plt.ylabel('Distance from the middle of plates (m)')
plt.show()
layer_size = [2] + [64] * 5 + [4]
activation = "tanh"
initializer = "Glorot uniform"
net = dde.maps.FNN(layer_size, activation, initializer)
model = dde.Model(data, net)
model.compile("adam", lr = 1e-3)
losshistory, train_state = model.train(epochs = 10000)
dde.saveplot(losshistory, train_state, issave = False, isplot = True)
color_legend = [[u_fluid_min, u_fluid_max], [v_fluid_min, v_fluid_max],
[p_fluid_min, 1.5], [T_fluid_min, T_fluid_max]]
titles = ['U-velocity', 'V-velocity', 'Pressure', 'Temperature']
result = model.predict(cor_fluid)
plt.figure(figsize=(9, 4), dpi=300)
for idx in range(4):
plt.subplot(2, 2, idx+1)
plt.title(titles[idx], fontsize=13)
plt.scatter(cor_fluid[:, 0],
cor_fluid[:, 1],
c = result[:, idx],
cmap = 'jet',
s = 0.3)
plt.colorbar()
plt.clim(color_legend[idx])
plt.axis('scaled')
plt.xlim((0-L/2, L-L/2))
plt.ylim((0-D/2, D-D/2))
plt.tight_layout()
plt.show()
# samples = geom.random_points(500000)
result = model.predict(cor_fluid)
plt.figure(figsize=(12, 7), dpi=300)
for idx in range(4):
plt.subplot(4, 3, 3*idx+1)
plt.title('CFD', fontsize=13)
plt.scatter(cor_fluid[:, 0],
cor_fluid[:, 1],
c = CFD_results[idx],
cmap = 'jet',
s=0.3)
plt.colorbar()
plt.clim(color_legend[idx])
plt.axis('scaled')
plt.xlim((0-L/2, L-L/2))
plt.ylim((0-D/2, D-D/2))
plt.ylabel(titles[idx])
plt.subplot(4, 3, 3*idx+2)
plt.title('PINN', fontsize=13)
plt.scatter(cor_fluid[:, 0],
cor_fluid[:, 1],
c = result[:, idx],
cmap = 'jet',
s = 0.3)
plt.colorbar()
plt.clim(color_legend[idx])
plt.axis('scaled')
plt.xlim((0-L/2, L-L/2))
plt.ylim((0-D/2, D-D/2))
plt.subplot(4, 3, 3*idx+3)
plt.title('Error', fontsize=13)
plt.scatter(cor_fluid[:, 0],
cor_fluid[:, 1],
c = np.abs(result[:, idx] - CFD_results[idx]),
cmap = 'jet',
s = 0.3)
plt.colorbar()
plt.clim(0, 0.2 * color_legend[idx][1])
plt.axis('scaled')
plt.xlim((0-L/2, L-L/2))
plt.ylim((0-D/2, D-D/2))
plt.tight_layout()
plt.show()
Reference¶
- Krishnayatra, Gaurav, Sulekh Tokas, and Rajesh Kumar. "Numerical heat transfer analysis & predicting thermal performance of fins for a novel heat exchanger using machine learning." Case Studies in Thermal Engineering 21 (2020): 100706.
- Adam, Andre, Huazhen Fang, and Xianglin Li. "Effective thermal conductivity estimation using a convolutional neural network and its application in topology optimization." Energy and AI 15 (2024): 100310.
- Edalatifar, Mohammad, et al. "Using deep learning to learn physics of conduction heat transfer." Journal of Thermal Analysis and Calorimetry 146 (2021): 1435-1452.