update some chapter 2 pictures

Author: hwade
Date: 2018-11-16 19:32:05 +08:00
parent 701d764f2a
commit ce58ba039e
16 changed files with 9 additions and 7 deletions

.gitignore

@@ -1,2 +1,4 @@
.ipynb_checkpoints
.DS_Store
+.gitignore
+*.ipynb

14 binary image files changed (filenames and contents not shown):

- Modified: 16 KiB → 388 KiB
- Modified: 15 KiB → 138 KiB
- Modified: 13 KiB → 259 KiB
- Added: 182 KiB
- Deleted: 27 KiB
- Modified: 17 KiB → 122 KiB
- Added: 143 KiB
- Deleted: 30 KiB
- Added: 148 KiB
- Deleted: 130 KiB
- Added: 262 KiB
- Deleted: 250 KiB
- Added: 79 KiB
- Deleted: 41 KiB


@@ -8,7 +8,7 @@
|Regression algorithms|Instance-based algorithms|Regularization methods|
|:-:|:-:|:-:|
-|![](./img/ch2/2.1/1.jpg)|![](./img/ch2/2.1/2.jpg)|![](./img/ch2/2.1/3.png)|
+|![](./img/ch2/2.1/1.jpg)|![](./img/ch2/2.1/2.jpg)|![](./img/ch2/2.1/3.jpg)|
|Decision tree learning|Bayesian methods|Kernel-based algorithms|
|:-:|:-:|:-:|
@@ -16,7 +16,7 @@
|Clustering algorithms|Association rule learning|Artificial neural networks|
|:-:|:-:|:-:|
-|![](./img/ch2/2.1/7.png)|![](./img/ch2/2.1/8.jpg)|![](./img/ch2/2.1/9.png)|
+|![](./img/ch2/2.1/7.jpg)|![](./img/ch2/2.1/8.jpg)|![](./img/ch2/2.1/9.png)|
|Deep learning|Dimensionality reduction algorithms|Ensemble algorithms|
|:-:|:-:|:-:|
@@ -239,7 +239,7 @@ $$
Suppose the hypothesis function has two parameters, $A$ and $B$; as the parameters change, the state of the hypothesis function changes with them,
as shown in the figure below:
-![](./img/ch2/2.16/1.png)
+![](./img/ch2/2.16/1.jpg)
To fit the discrete points in the figure, we need to find the optimal $A$ and $B$ so that the resulting line best represents all of the data. How do we find this optimal solution? We use a cost function. Take the squared-error cost function as an example, with hypothesis $h(x)=\theta_0x$.
The main idea of the squared-error cost function:
@@ -253,7 +253,7 @@ $$
**The optimal solution is the minimum of the cost function**, $\min J(\theta_0, \theta_1)$. With one parameter, the cost function can usually be read directly off a 2D curve; with two parameters, the effect can be seen from a 3D plot; the more parameters there are, the harder it is to visualize.
When there are two parameters, the cost function is a 3D surface.
-![](./img/ch2/2.16/2.png)
+![](./img/ch2/2.16/2.jpg)
### 2.10.3 Why must the cost function be non-negative?
The objective function has a lower bound, so if the optimization algorithm keeps decreasing the objective, then by the monotone convergence criterion the algorithm can be shown to converge.
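The squared-error cost and its two properties discussed above (non-negativity, and a minimum at the best fit) can be sketched in Python; the function and variable names here are my own, not from the chapter:

```python
import numpy as np

def squared_error_cost(theta0, theta1, x, y):
    """J(theta0, theta1) = 1/(2m) * sum((h(x_i) - y_i)^2), with h(x) = theta0 + theta1 * x."""
    m = len(x)
    predictions = theta0 + theta1 * x
    return np.sum((predictions - y) ** 2) / (2 * m)

# Points lying exactly on y = 2x: the optimal parameters drive the cost to 0,
# and any other parameters give a strictly positive (never negative) cost.
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])
print(squared_error_cost(0.0, 2.0, x, y))  # 0.0 at the optimum
print(squared_error_cost(0.0, 1.5, x, y))  # positive away from the optimum
```

Because every term is a square, $J$ is bounded below by zero, which is exactly the lower bound the convergence argument relies on.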
@@ -278,7 +278,7 @@ $$\frac{\delta J}{\delta w}=(a-y)\delta'(z)x$$ $$\frac{\delta J}{\delta b}=(a-y)\delta'(z)$$
*Note*: a commonly used activation function in neural networks is the sigmoid; its curve is shown below:
-![](./img/ch2/2.18/1.png)
+![](./img/ch2/2.18/1.jpg)
Suppose the target is to converge to 1.0. An output of 0.82 is far from the target: the gradient is large and the weight adjustment is large. An output of 0.98 is close to the target: the gradient is small and the adjustment is small. This adjustment scheme is reasonable.
Now suppose the target is to converge to 0. An output of 0.82 is close to the target, yet the gradient is large and the weight adjustment is large. An output of 0.98 is far from the target, yet the gradient is small and the adjustment is small. This adjustment scheme is unreasonable.
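The two scenarios above can be checked numerically. A minimal sketch of the quadratic-cost weight gradient $(a-y)\,\sigma'(z)\,x$, using the identity $\sigma'(z)=a(1-a)$ (function names are my own):

```python
def sigmoid_derivative_from_output(a):
    """For a = sigmoid(z), the derivative satisfies sigma'(z) = a * (1 - a)."""
    return a * (1 - a)

def weight_gradient(a, y, x=1.0):
    """dJ/dw = (a - y) * sigma'(z) * x for the quadratic cost."""
    return (a - y) * sigmoid_derivative_from_output(a) * x

# Target 1.0: the farther output (0.82) gets the larger update -- reasonable.
print(abs(weight_gradient(0.82, 1.0)))  # ~0.0266
print(abs(weight_gradient(0.98, 1.0)))  # ~0.0004

# Target 0: the closer output (0.82) still gets the larger update -- unreasonable.
print(abs(weight_gradient(0.82, 0.0)))  # ~0.1210
print(abs(weight_gradient(0.98, 0.0)))  # ~0.0192
```

The asymmetry comes from $\sigma'(z)=a(1-a)$, which shrinks toward zero near both saturation ends, regardless of how far the output is from the target.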