Random Forest Algorithm

Principle

Random forest is an ensemble algorithm whose basic unit is the decision tree. Informally, when we need to classify a new object described by an input vector, a decision tree algorithm relies on the result of a single tree, whereas a random forest runs the input vector down every tree in the forest. Each tree produces its own classification, and the random forest outputs the class that receives the most votes across all trees in the forest.

Random Forest Algorithm Workflow

  1. Randomly select k of the dataset's m features (k <= m), and build a decision tree from those k features;
  2. Repeat this n times, so that n decision trees are built from different random combinations of k features;
  3. Pass the input through each decision tree to obtain a prediction, and store all of them, giving n results from the n trees;
  4. Take the prediction with the most votes as the final output of the random forest (scikit-learn's implementation averages the predicted class probabilities of the individual classifiers rather than having each classifier vote for a class).

Like the CART algorithm, random forests can be used for regression as well as classification; a small regression sketch follows.
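As a quick illustration of the regression variant, here is a minimal sketch. It is not part of the original wine example; the use of sklearn's bundled diabetes dataset is an assumption made purely for demonstration.

from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# minimal sketch: random forest regression on sklearn's bundled diabetes dataset
X, y = load_diabetes(return_X_y=True)
Xtrain, Xtest, Ytrain, Ytest = train_test_split(X, y, test_size=0.3, random_state=0)

reg = RandomForestRegressor(n_estimators=100, random_state=0)
reg.fit(Xtrain, Ytrain)
print(reg.score(Xtest, Ytest))  # R^2 on the held-out data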

Note that in the random forest workflow, the training set for each tree is drawn by random sampling with replacement. Without random sampling, every tree would see the same training set and would therefore learn exactly the same classifier, making the forest pointless. And if the sampling were done without replacement, the trees' training sets would differ in size and each tree's result would be overly one-sided.
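To make the workflow and the role of bootstrap sampling concrete, the following is a minimal toy sketch of bagging plus majority voting built on top of DecisionTreeClassifier. The helper toy_random_forest is hypothetical, written only for illustration, and is not how scikit-learn implements RandomForestClassifier internally (which, as noted above, averages class probabilities instead of counting votes).

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def toy_random_forest(X, y, X_new, n_trees=20, max_features="sqrt", seed=0):
    """Toy sketch: bootstrap sampling + per-split feature subsets + majority vote.
    Assumes integer class labels (e.g. 0, 1, 2)."""
    rng = np.random.RandomState(seed)
    n_samples = X.shape[0]
    predictions = []
    for _ in range(n_trees):
        # 1. draw a bootstrap sample (random sampling with replacement)
        idx = rng.randint(0, n_samples, size=n_samples)
        # 2. build a tree; max_features limits the features considered at each split
        tree = DecisionTreeClassifier(max_features=max_features,
                                      random_state=rng.randint(2**31 - 1))
        tree.fit(X[idx], y[idx])
        # 3. store this tree's predictions for the new samples
        predictions.append(tree.predict(X_new))
    # 4. majority vote across the n trees
    votes = np.stack(predictions)  # shape: (n_trees, n_new_samples)
    return np.array([np.bincount(col).argmax() for col in votes.T])

For the wine data used below, toy_random_forest(Xtrain, Ytrain, Xtest) would return one predicted class per test sample.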

Example

Using the wine dataset bundled with sklearn, we train a decision tree and a random forest for classification and compare their results.

from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_wine

# load the wine dataset bundled with sklearn and print its description
wine = load_wine()
print(wine.DESCR)
.. _wine_dataset:

Wine recognition dataset
------------------------

**Data Set Characteristics:**

:Number of Instances: 178 (50 in each of three classes)
:Number of Attributes: 13 numeric, predictive attributes and the class
:Attribute Information:
    - Alcohol
    - Malic acid
    - Ash
    - Alcalinity of ash
    - Magnesium
    - Total phenols
    - Flavanoids
    - Nonflavanoid phenols
    - Proanthocyanins
    - Color intensity
    - Hue
    - OD280/OD315 of diluted wines
    - Proline
    - class:
        - class_0
        - class_1
        - class_2

:Summary Statistics:

============================= ==== ===== ======= =====
                                Min   Max   Mean     SD
============================= ==== ===== ======= =====
Alcohol:                      11.0  14.8    13.0   0.8
Malic Acid:                   0.74  5.80    2.34  1.12
Ash:                          1.36  3.23    2.36  0.27
Alcalinity of Ash:            10.6  30.0    19.5   3.3
Magnesium:                    70.0 162.0    99.7  14.3
Total Phenols:                0.98  3.88    2.29  0.63
Flavanoids:                   0.34  5.08    2.03  1.00
Nonflavanoid Phenols:         0.13  0.66    0.36  0.12
Proanthocyanins:              0.41  3.58    1.59  0.57
Colour Intensity:              1.3  13.0     5.1   2.3
Hue:                          0.48  1.71    0.96  0.23
OD280/OD315 of diluted wines: 1.27  4.00    2.61  0.71
Proline:                       278  1680     746   315
============================= ==== ===== ======= =====

:Missing Attribute Values: None
:Class Distribution: class_0 (59), class_1 (71), class_2 (48)
:Creator: R.A. Fisher
:Donor: Michael Marshall (MARSHALL%PLU@io.arc.nasa.gov)
:Date: July, 1988

This is a copy of UCI ML Wine recognition datasets.
https://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data

The data is the results of a chemical analysis of wines grown in the same
region in Italy by three different cultivators. There are thirteen different
measurements taken for different constituents found in the three types of
wine.

Original Owners:

Forina, M. et al, PARVUS -
An Extendible Package for Data Exploration, Classification and Correlation.
Institute of Pharmaceutical and Food Analysis and Technologies,
Via Brigata Salerno, 16147 Genoa, Italy.

Citation:

Lichman, M. (2013). UCI Machine Learning Repository
[https://archive.ics.uci.edu/ml]. Irvine, CA: University of California,
School of Information and Computer Science.

.. topic:: References

(1) S. Aeberhard, D. Coomans and O. de Vel,
Comparison of Classifiers in High Dimensional Settings,
Tech. Rep. no. 92-02, (1992), Dept. of Computer Science and Dept. of
Mathematics and Statistics, James Cook University of North Queensland.
(Also submitted to Technometrics).

The data was used with many others for comparing various
classifiers. The classes are separable, though only RDA
has achieved 100% correct classification.
(RDA : 100%, QDA 99.4%, LDA 98.9%, 1NN 96.1% (z-transformed data))
(All results using the leave-one-out technique)

(2) S. Aeberhard, D. Coomans and O. de Vel,
"THE CLASSIFICATION PERFORMANCE OF RDA"
Tech. Rep. no. 92-01, (1992), Dept. of Computer Science and Dept. of
Mathematics and Statistics, James Cook University of North Queensland.
(Also submitted to Journal of Chemometrics).
# inspect the feature matrix and the class labels
print(wine.data)
print(wine.target)
[[1.423e+01 1.710e+00 2.430e+00 ... 1.040e+00 3.920e+00 1.065e+03]
[1.320e+01 1.780e+00 2.140e+00 ... 1.050e+00 3.400e+00 1.050e+03]
[1.316e+01 2.360e+00 2.670e+00 ... 1.030e+00 3.170e+00 1.185e+03]
...
[1.327e+01 4.280e+00 2.260e+00 ... 5.900e-01 1.560e+00 8.350e+02]
[1.317e+01 2.590e+00 2.370e+00 ... 6.000e-01 1.620e+00 8.400e+02]
[1.413e+01 4.100e+00 2.740e+00 ... 6.100e-01 1.600e+00 5.600e+02]]
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2]
from sklearn.model_selection import train_test_split

# hold out 30% of the data for testing
Xtrain, Xtest, Ytrain, Ytest = train_test_split(wine.data, wine.target, test_size=0.3)

# a single decision tree versus a forest of 20 trees
clf = DecisionTreeClassifier(random_state=0)
rfc = RandomForestClassifier(random_state=0, n_estimators=20)

clf = clf.fit(Xtrain, Ytrain)
rfc = rfc.fit(Xtrain, Ytrain)

# accuracy on the held-out test set
score_c = clf.score(Xtest, Ytest)
score_r = rfc.score(Xtest, Ytest)

print("Single Tree:{}".format(score_c), "Random Forest:{}".format(score_r))
Single Tree:0.8703703703703703 Random Forest:0.9259259259259259
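
The exact scores above depend on the random train/test split, so a quick sanity check, sketched below, is to repeat the comparison with cross-validation (the choice of 10 folds is arbitrary):

from sklearn.model_selection import cross_val_score

# sketch: 10-fold cross-validation averages out the luck of a single split
tree_scores = cross_val_score(DecisionTreeClassifier(random_state=0),
                              wine.data, wine.target, cv=10)
forest_scores = cross_val_score(RandomForestClassifier(random_state=0, n_estimators=20),
                                wine.data, wine.target, cv=10)
print("Single Tree:", tree_scores.mean(), "Random Forest:", forest_scores.mean())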

Parameter Details

  • n_estimators: the number of trees in the forest, i.e., the number of base estimators. Generally, more trees give better results but also longer training time. Beyond a certain point, adding trees no longer improves performance noticeably;
  • max_features: the size of the random subset of features considered when splitting a node. The lower this value, the more the variance is reduced, but the more the bias increases. Empirically, max_features = n_features for regression problems and max_features = sqrt(n_features) for classification (where n_features is the number of features) are good default values. Combining max_depth = None with min_samples_split = 2 (i.e., growing the trees fully) usually works well. These default values are usually not optimal and may also consume a lot of memory; the best parameter values should be found by cross-validation (a sketch follows after this list);

  • random_state: controls the pattern used to generate the forest. When random_state is fixed, the forest generates a fixed set of trees, but each tree is still different from the others. It can be shown that the greater this randomness, the better bagging generally performs. This approach has a strong limitation, however: when we need thousands of trees, the data may not provide enough features to build trees that are both numerous and sufficiently different. Therefore, besides random_state, we also need other sources of randomness;

  • bootstrap: defaults to True, meaning sampling with replacement is used. To make the base classifiers as different as possible, an intuitive approach is to train them on different training sets, and bagging forms those different training sets precisely through random sampling with replacement; bootstrap is the parameter that controls this sampling technique.
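
Since the notes above recommend choosing parameter values by cross-validation, here is a minimal sketch of a grid search over n_estimators and max_features on the wine data; the parameter grid itself is an arbitrary example, not a recommended setting.

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# sketch: search n_estimators and max_features with 5-fold cross-validation
param_grid = {
    "n_estimators": [10, 50, 100, 200],      # arbitrary example values
    "max_features": ["sqrt", "log2", None],  # None means "consider all features"
}
search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=5)
search.fit(wine.data, wine.target)
print(search.best_params_, search.best_score_)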