Time-Series Analysis
Table of Contents
Regression, classification, and dimension reduction are based on snapshot-type data.
We will focus on linear difference equations (LDE), a surprisingly rich topic both theoretically and practically.
For example, a sequence can be defined by listing its terms,
or by a closed-form expression,
or with a difference equation and an initial condition.
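As a small sketch (the coefficient and initial value below are illustrative, not from the notes), the same sequence can be produced either by iterating the difference equation or by evaluating the closed-form expression:

```python
# First-order homogeneous LDE: x_{k+1} = a * x_k, with initial condition x_0.
# Closed form: x_k = x_0 * a**k.
a, x0, n = 0.5, 4.0, 10

# By the difference equation (recursion)
x = x0
recursive = [x]
for _ in range(n):
    x = a * x
    recursive.append(x)

# By the closed-form expression
closed = [x0 * a**k for k in range(n + 1)]

print(recursive[:4])  # [4.0, 2.0, 1.0, 0.5]
```

Both constructions agree term by term, which is exactly what the closed form asserts.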
High-order homogeneous LDE
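A high-order homogeneous LDE can be rewritten as a first-order vector LDE using a companion matrix. As a sketch (the example equation is ours, not from the notes), the second-order LDE $x_{k+2} = x_{k+1} + x_k$ with $x_0 = 0$, $x_1 = 1$ generates the Fibonacci numbers:

```python
import numpy as np

# Rewrite x_{k+2} = x_{k+1} + x_k as a first-order system:
# [x_{k+2}, x_{k+1}] = A @ [x_{k+1}, x_k]
A = np.array([[1, 1],
              [1, 0]])   # companion matrix
v = np.array([1, 0])     # state vector [x_1, x_0]

seq = [0, 1]
for _ in range(8):
    v = A @ v            # advance the system one step
    seq.append(int(v[0]))

print(seq)  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

The same companion-matrix trick turns any order-$n$ homogeneous LDE into a single matrix recursion.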
Linear trends
Non-linear trends
Seasonal trends
Some series may exhibit seasonal trends,
for example weather patterns, employment, and inflation.
Combining Linear, Quadratic, and Seasonal Trends
One solution is to apply repeated differencing to the series.
For example, first remove the seasonal trend, then remove the linear trend.
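The repeated-differencing idea can be sketched on a synthetic series (our construction, assuming a period-4 seasonal pattern on top of a linear trend): a lag-4 seasonal difference removes the seasonality, and one further first difference removes the remaining linear trend.

```python
import numpy as np

t = np.arange(40)
season = np.tile([0.0, 2.0, -1.0, -1.0], 10)  # period-4 seasonal pattern
y = 0.5 * t + season                          # linear trend + seasonality

# Seasonal differencing (lag 4) cancels the seasonal component,
# leaving the constant slope * period = 0.5 * 4 = 2 ...
y_seasonal_diff = y[4:] - y[:-4]

# ... so one more first difference leaves only zeros.
y_final = np.diff(y_seasonal_diff)

print(y_seasonal_diff[:3])        # [2. 2. 2.]
print(np.allclose(y_final, 0.0))  # True
```

On real data the differenced series is of course noisy rather than exactly zero, but the same two steps remove both trend components.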
Inspect the model fit by examining a Q-Q plot of the residuals.
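A Q-Q check can also be done numerically, without plotting: if the residuals are roughly normal, their sorted values are nearly linear in the theoretical normal quantiles, so the correlation between the two should be close to 1. A minimal sketch, using stand-in residuals rather than a real fitted model:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
residuals = rng.normal(0, 1, 200)   # stand-in residuals from a fitted model

# Sample quantiles vs. theoretical normal quantiles (the Q-Q pairing)
sample_q = np.sort(residuals)
probs = (np.arange(1, 201) - 0.5) / 200           # plotting positions
theoretical_q = [NormalDist().inv_cdf(p) for p in probs]

r = np.corrcoef(sample_q, theoretical_q)[0, 1]
print(round(r, 3))  # close to 1 for approximately normal residuals
```

Heavy tails or skew in the residuals show up as systematic curvature in the Q-Q pairs and a visibly lower correlation.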
(Almost) all data coming from a manufacturing environment are time-series data.
A manufacturing application typically involves one of the following:
Example: material measurements with $n=3$
For supervised learning, we define two time series
Supervised time-series learning:
Unsupervised time-series anomaly detection
By the chain rule, any joint distribution over a sequence factorizes as
$$p(q_0,q_1,\cdots,q_T ) = p(q_0) \; p(q_1 \mid q_0) \; p( q_2 \mid q_1, q_0 ) \; p( q_3 \mid q_2, q_1, q_0 ) \cdots$$
Under the Markov property, each state depends only on its immediate predecessor, so this simplifies to
$$p(q_0,q_1,\cdots,q_T ) = p(q_0) \; p(q_1 \mid q_0) \; p( q_2 \mid q_1 ) \; p( q_3 \mid q_2 ) \cdots$$
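The Markov factorization makes path probabilities easy to compute: the probability of a path is the initial probability times one transition probability per step. A sketch using the 3-state transition matrix from the code example below (the path itself is illustrative):

```python
import numpy as np

P = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.5, 0.0],
              [1/3, 2/3, 0.0]])
p0 = np.array([1.0, 0.0, 0.0])   # start in state 0 with probability 1

path = [0, 2, 1, 1, 0]
prob = p0[path[0]]
for s, s_next in zip(path, path[1:]):
    prob *= P[s, s_next]         # one factor per transition

print(prob)  # = 1 * P[0,2] * P[2,1] * P[1,1] * P[1,0] = 1/6
```

Without the Markov property, each factor would instead condition on the entire history, which quickly becomes intractable.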
For a Markov state $s$ and successor state $s'$, the state transition probability is defined by
$$P_{ss'} = p(S_{t+1} = s' \mid S_t = s)$$
The state transition matrix $P$ defines the transition probabilities from all states $s$ to all successor states $s'$.
Example: MC episodes
import numpy as np

# Transition matrix of a 3-state Markov chain (each row sums to 1)
P = np.array([[0, 0, 1],
              [1/2, 1/2, 0],
              [1/3, 2/3, 0]])

print(P[1])                         # transition probabilities out of state 1
a = np.random.choice(3, 1, p=P[1])  # sample one successor of state 1
print(a)
# sequence generated by the Markov chain
# states: S1 = 0, S2 = 1, S3 = 2; starting from state 0
x = 0
S = [x]
for i in range(50):
    x = np.random.choice(3, p=P[x])  # sample the next state
    S.append(x)
print(S)
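As a sanity check on the simulation above (a sketch with our own seed and run length), the empirical transition frequencies of a long simulated sequence should approach the rows of $P$:

```python
import numpy as np

P = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.5, 0.0],
              [1/3, 2/3, 0.0]])

# Simulate a long episode of the chain
rng = np.random.default_rng(1)
x, seq = 0, [0]
for _ in range(20000):
    x = rng.choice(3, p=P[x])
    seq.append(x)

# Count observed transitions and normalize each row
counts = np.zeros((3, 3))
for s, s_next in zip(seq, seq[1:]):
    counts[s, s_next] += 1
P_hat = counts / counts.sum(axis=1, keepdims=True)

print(np.round(P_hat, 2))  # should be close to P
```

This is also the maximum-likelihood estimate of the transition matrix from observed data, which is how one would fit a Markov chain to a real state sequence.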