enotori-in-japan-blog.hashnode.dev

[srm] Principal Component Analysis (Apr 11, 2025 · 1 min read)
Principal components: n samples (the i-th sample), p explanatory variables (the j-th variable); the data decompose into p principal components (the m-th component), with the dimension reduced to M.
$$z_{i,m}=\phi_{1,m}x_{i,1}+\phi_{2,m}x_{i,2}+\dots+\phi_{p,m}x_{i,p}=\sum_{j=1}^{p}\phi_{j,m}x_{i,j}$$
$$\begin{aligned} x_{i,j}&=z_{i,1}\phi_{1,j}+z_{i,2}\phi_{2,j}+\dots+z_{i,p}\phi_{p,j} \\ &=\sum_{m=1}^{p}z_{i,m}\phi_{m,j} \end{aligned}$$
…
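The score formula \(z_{i,m}=\sum_j \phi_{j,m}x_{i,j}\) can be sketched numerically; this is a minimal illustration using eigenvectors of the sample covariance as loadings, with made-up data (the names `X`, `phi`, `Z` are not from the post):

```python
import numpy as np

# Minimal PCA sketch: loadings phi are eigenvectors of the sample covariance;
# scores z_{i,m} = sum_j phi_{j,m} x_{i,j} are projections of centered data.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))       # n = 100 samples, p = 4 variables
Xc = X - X.mean(axis=0)             # center each column

cov = np.cov(Xc, rowvar=False)
eigvals, phi = np.linalg.eigh(cov)  # columns of phi are unit loading vectors
order = np.argsort(eigvals)[::-1]   # sort by explained variance, descending
phi = phi[:, order]

M = 2                               # keep M < p components (dimension reduction)
Z = Xc @ phi[:, :M]                 # scores: z_{i,m} = sum_j phi_{j,m} x_{i,j}

# Reconstruction x_{i,j} ≈ sum_m z_{i,m} phi_{m,j}; exact when M = p.
X_hat = Z @ phi[:, :M].T
print(Z.shape)  # (100, 2)
```

With all p components retained the reconstruction is exact, because the loading matrix is orthogonal.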
[srm] Decision Trees (Apr 11, 2025 · 1 min read)
Single decision tree: splitting M − 1 times yields M terminal nodes \(R_m\); the response variable may be categorical with K levels or continuous.
Node impurity:
Classification error rate
$$\begin{aligned} E_m &= \frac{\text{\# misclassified}}{\text{\# samples in } R_m} \\ &= 1 - \max_k(\hat{p}_{mk}) \\ &\text{(since the } k \text{ with the largest } \hat{p}_{mk} \text{ is the predicted class } \hat{Y}_m\text{)} \end{aligned}$$
…
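The error rate \(E_m = 1-\max_k \hat{p}_{mk}\) for a single node can be sketched as follows; the class counts are made-up example data, and the Gini and entropy lines are the other standard impurity measures, included for comparison:

```python
import numpy as np

# Impurity of one terminal node R_m, given counts of each of K classes.
counts = np.array([10, 30, 60])            # illustrative class counts in R_m
p_hat = counts / counts.sum()              # \hat{p}_{mk} = class proportions

error_rate = 1 - p_hat.max()               # E_m = 1 - max_k \hat{p}_{mk}
gini = 1 - np.sum(p_hat ** 2)              # Gini index
entropy = -np.sum(p_hat * np.log2(p_hat))  # cross-entropy impurity

print(round(error_rate, 2))  # 0.4  (predicted class is the one with p = 0.6)
```

All three measures are zero for a pure node and largest when the classes are equally mixed.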
[srm] Time Series (Apr 4, 2025 · 2 min read)
Weak stationarity: the mean is constant and the covariance depends only on the lag.
$$\begin{aligned} \operatorname{E}(y_t)&=C \\ \operatorname{Cov}(y_t,y_s)&=R(t-s) \\ \operatorname{Var}(y_t)&=\operatorname{Cov}(y_t,y_t)=\sigma_y^2 \\\\ \rho_k&=\operatorname{Corr}(y_t,y_{t-k})=\frac{\operatorname{Cov}(y_t,y_{t-k})}{\sigma_y^2} \end{aligned}$$
…
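The autocorrelation \(\rho_k=\operatorname{Cov}(y_t,y_{t-k})/\sigma_y^2\) can be estimated from a sample; this sketch simulates an AR(1) process (an assumption for illustration, not from the post) whose theoretical lag-1 autocorrelation is the AR coefficient:

```python
import numpy as np

# Sample autocorrelation rho_k of a simulated weakly stationary AR(1) series.
rng = np.random.default_rng(1)
n, ar_coef = 5000, 0.7
y = np.zeros(n)
for t in range(1, n):
    y[t] = ar_coef * y[t - 1] + rng.normal()

def acf(series, k):
    """Sample rho_k: lag-k autocovariance over the overall sample variance."""
    d = series - series.mean()
    return np.sum(d[k:] * d[:-k]) / np.sum(d ** 2)

print(round(acf(y, 1), 2))  # close to ar_coef = 0.7 for an AR(1)
```

For an AR(1) the theoretical ACF decays geometrically, \(\rho_k = 0.7^k\), which the sample estimates approximate.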
[srm] Generalized Linear Models (Mar 29, 2025 · 3 min read)
Model assumptions — linear exponential family:
$$f(y;\mu,\theta)=\exp\!\left(\frac{y\theta-b(\theta)}{\phi}-S(y,\phi)\right)$$
$$\begin{array}{c|c|c|c|c} & \text{Bin}(n,\pi) \text{ (known } n\text{)} & N(\mu,\sigma^2) & \text{Poisson}(\lambda) & \text{Exp}(\lambda) \end{array}$$
…
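As a worked check of the family form \(f(y)=\exp\!\big(\frac{y\theta-b(\theta)}{\phi}-S(y,\phi)\big)\), the Poisson(\(\lambda\)) pmf fits it with \(\theta=\log\lambda\), \(b(\theta)=e^\theta\), \(\phi=1\), and \(S(y,\phi)=\log y!\) (the specific values of `lam` and `y` below are just an example; the sign convention follows the post's \(-S(y,\phi)\)):

```python
from math import exp, factorial, log

# Verify numerically that the Poisson pmf equals its exponential-family form.
lam, y = 2.5, 3

direct = exp(-lam) * lam ** y / factorial(y)   # pmf: e^{-lam} lam^y / y!

theta = log(lam)                               # canonical parameter
b = exp(theta)                                 # b(theta) = e^theta = lam
phi_disp = 1.0                                 # dispersion phi = 1 for Poisson
S = log(factorial(y))                          # S(y, phi) = log(y!)
family_form = exp((y * theta - b) / phi_disp - S)

print(abs(direct - family_form) < 1e-12)  # True
```

The same exercise works for the other columns of the table (binomial with known n, normal, exponential), each with its own \(\theta\), \(b(\theta)\), and \(\phi\).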
[srm] Linear Regression (Mar 15, 2025 · 3 min read)
Notation:
$$\begin{aligned} S_{xx} &:=\sum(x_i-\bar{x})^2 \\ S_{xy} &:=\sum(x_i-\bar{x})(y_i-\bar{y}) \\ \operatorname{SST} &=\operatorname{TSS}=\sum\limits_{i=1}^n (y_i-\bar{y})^2 \\ \operatorname{SSR} &=\operatorname{RegSS}=\sum\limits_{i=1}^n (\hat{y}_i-\bar{y})^2 \end{aligned}$$
…
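These summation formulas can be exercised directly: the least-squares slope is \(S_{xy}/S_{xx}\), and the sums of squares satisfy \(\operatorname{SST}=\operatorname{SSR}+\operatorname{SSE}\). A minimal sketch with made-up data:

```python
import numpy as np

# Simple linear regression from the summation formulas (illustrative data).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

Sxx = np.sum((x - x.mean()) ** 2)              # S_xx
Sxy = np.sum((x - x.mean()) * (y - y.mean()))  # S_xy

b1 = Sxy / Sxx                  # least-squares slope
b0 = y.mean() - b1 * x.mean()   # intercept
y_hat = b0 + b1 * x             # fitted values

SST = np.sum((y - y.mean()) ** 2)     # total sum of squares
SSR = np.sum((y_hat - y.mean()) ** 2) # regression sum of squares
SSE = np.sum((y - y_hat) ** 2)        # residual sum of squares

print(np.isclose(SST, SSR + SSE))  # True: ANOVA decomposition holds
```

The ratio SSR/SST is then the coefficient of determination \(R^2\).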