When training a Decision Tree using these metrics, the best split is chosen by maximizing Information Gain. Information Gain for a split is calculated by subtracting the weighted entropies of each branch from the original entropy. The formula for Information Entropy is: E = -\sum_{i}^{C} p_i \log_2 p_i, where C is the number of classes and p_i is the proportion of samples belonging to class i.
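As a minimal sketch (the function names `entropy` and `information_gain` are my own, not from the original), the calculation above can be written as:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy: E = -sum_i p_i * log2(p_i) over class proportions."""
    n = len(labels)
    return -sum((count / n) * math.log2(count / n)
                for count in Counter(labels).values())

def information_gain(parent, branches):
    """Parent entropy minus the weighted entropies of the child branches."""
    n = len(parent)
    weighted = sum(len(branch) / n * entropy(branch) for branch in branches)
    return entropy(parent) - weighted

# A perfectly balanced parent has entropy 1.0; splitting it into two
# pure branches removes all uncertainty, so the gain is the full 1.0.
parent = [0, 0, 1, 1]
print(entropy(parent))                                  # 1.0
print(information_gain(parent, [[0, 0], [1, 1]]))       # 1.0
```

A split that leaves both branches as mixed as the parent would score a gain of 0, which is why maximizing Information Gain favors splits that produce purer branches.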