Informational entropy functions. The definitions used are the same as those in Tom Mitchell's book "Machine Learning".
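For reference, the relevant definitions from Mitchell are: for a set S whose members fall into c possible results with fractions p_1, ..., p_c,

Entropy(S) = -Σ_i p_i · log2(p_i)

and, for a variable (attribute) A,

Gain(S, A) = Entropy(S) - Σ_{v ∈ Values(A)} (|S_v| / |S|) · Entropy(S_v)

where S_v is the subset of S for which A takes the value v.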
Variables:
  hascEntropy = 1
  _log2 = math.log(2)

Imports: numpy, math, xrange, cEntropy
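The module-level constant _log2 is presumably a change-of-base factor, since math.log and numpy.log return natural logarithms; a quick illustration of the conversion (not the module's own code):

```python
import math

_log2 = math.log(2)  # ln(2), the natural-log-to-base-2 conversion factor

# change of base: log2(x) = ln(x) / ln(2)
assert abs(math.log(8) / _log2 - 3.0) < 1e-12
```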
Calculates the informational entropy of a set of results.
**Arguments**
results is a 1D NumPy array containing the number of times a
given set hits each possible result.
For example, if a function has 3 possible results, and the
variable in question hits them 5, 6 and 1 times each,
results would be [5,6,1]
**Returns**
the informational entropy
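A minimal sketch of the calculation described above (counts are normalized to probabilities, then the entropy is computed in bits); the function name info_entropy is illustrative, not the module's actual symbol, which may delegate to the cEntropy extension:

```python
import math
import numpy as np

_log2 = math.log(2)

def info_entropy(results):
    """Illustrative sketch: entropy (in bits) of a vector of result counts."""
    counts = np.asarray(results, dtype=float)
    total = counts.sum()
    if total <= 0:
        return 0.0
    probs = counts / total            # fraction of hits for each possible result
    nz = probs[probs > 0]             # 0 * log(0) is treated as 0
    return float(-(nz * np.log(nz)).sum() / _log2)

# the example from the docstring: hits of 5, 6 and 1
print(info_entropy([5, 6, 1]))        # ~1.325 bits
```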
Calculates the information gain for a variable.
**Arguments**
varMat is a NumPy array with the number of occurrences
of each result for each possible value of the given variable.
So, for a variable which adopts 4 possible values and a result which
has 3 possible values, varMat would be 4x3.
**Returns**
The expected information gain
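And a sketch of the expected-information-gain calculation for a 4x3 varMat like the one described above; again this is an illustrative reimplementation with hypothetical names and made-up example counts, not the module's own code:

```python
import math
import numpy as np

_log2 = math.log(2)

def _entropy(counts):
    counts = np.asarray(counts, dtype=float)
    probs = counts / counts.sum()
    nz = probs[probs > 0]
    return float(-(nz * np.log(nz)).sum() / _log2)

def info_gain(varMat):
    """Illustrative sketch: expected information gain for a variable.

    varMat[i, j] = number of occurrences of result j when the variable
    takes its i-th possible value.
    """
    mat = np.asarray(varMat, dtype=float)
    n_total = mat.sum()
    base = _entropy(mat.sum(axis=0))         # entropy of the overall result counts
    weights = mat.sum(axis=1) / n_total      # how often each variable value occurs
    remainder = sum(w * _entropy(row) for w, row in zip(weights, mat) if w > 0)
    return base - remainder                  # Mitchell's Gain(S, A)

# hypothetical 4x3 example: 4 variable values, 3 possible results
varMat = [[6, 1, 0],
          [2, 5, 1],
          [0, 2, 4],
          [1, 1, 3]]
print(info_gain(varMat))
```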