Decision Tree Classification (ML_CN_6)
This function builds a decision tree classifier, which predicts the class of an input example by learning a series of feature-based decision rules from the training data.
This algorithm works by creating a tree-like model of decisions and their possible consequences. The model starts with a root node that represents the entire dataset and branches out into different nodes that represent possible decisions or features that can be used to split the data into smaller groups.
At each node, the algorithm chooses the feature that yields the greatest information gain, meaning the feature that provides the most information about the class labels of the data points. Splitting continues recursively until a leaf node is reached, which assigns a final class label to the data points that fall into it.
Decision tree classification is a popular algorithm because it is easy to understand and interpret, and it can work well with both categorical and numerical data.
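As a conceptual illustration only (independent of the Autogon API), the sketch below trains a decision tree classifier with scikit-learn using the entropy criterion, which corresponds to splitting on information gain; the dataset and parameter choices are just examples.

```python
# Conceptual sketch: a decision tree classifier whose splits are chosen
# by information gain (the "entropy" criterion in scikit-learn).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each internal node splits on the feature with the greatest information gain;
# leaves assign the majority class of the training points that reach them.
clf = DecisionTreeClassifier(criterion="entropy", max_depth=3, random_state=0)
clf.fit(X_train, y_train)

print(clf.predict(X_test[:5]))    # class predictions for unseen data
print(clf.score(X_test, y_test))  # accuracy on the held-out split
```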
Sample Request
Build a Decision Tree Classification model named "ClassicModel".
Building a Decision Tree Classification model
Decision Tree Classification
POST
https://autogon.ai/api/v1/engine/start
Request Body
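A minimal sketch of what the build request might look like, using Python's requests library. Only the endpoint URL and the function code ML_CN_6 come from this page; the authentication header and the field names (project_id, parent_id, block_id, args, model_name) are assumptions based on the general pattern of the engine endpoint, so consult the full API reference for the authoritative schema.

```python
# Hypothetical sketch: fields other than "function_code" and the URL are
# assumptions and may differ from the actual Autogon request schema.
import requests

url = "https://autogon.ai/api/v1/engine/start"
headers = {
    "Content-Type": "application/json",
    "X-AUG-KEY": "<YOUR_API_KEY>",   # placeholder; actual auth header is an assumption
}

payload = {
    "project_id": 1,                 # assumed field: ID of the project
    "parent_id": 2,                  # assumed field: ID of the previous block
    "block_id": 3,                   # assumed field: ID of this block
    "function_code": "ML_CN_6",      # Decision Tree Classification (from this page)
    "args": {
        "model_name": "ClassicModel" # assumed field: model name from the sample request
    },
}

response = requests.post(url, json=payload, headers=headers)
print(response.status_code, response.json())
```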
Sample Request
Make predictions with the pre-built model, optionally passing test data.
Predicting with Decision Tree Classification
Decision Tree Classification Predict
POST
https://autogon.ai/api/v1/engine/start
Request Body
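A similar hedged sketch for the prediction step. The page only confirms the endpoint URL and that test data is optional; the predict function code (shown here as "ML_CN_6_P"), the field names, and the test-data format are assumptions.

```python
# Hypothetical sketch: "ML_CN_6_P" and all fields besides the URL are
# assumptions; check the API reference for the exact predict function code.
import requests

url = "https://autogon.ai/api/v1/engine/start"
headers = {"Content-Type": "application/json", "X-AUG-KEY": "<YOUR_API_KEY>"}

payload = {
    "project_id": 1,
    "parent_id": 3,
    "block_id": 4,
    "function_code": "ML_CN_6_P",              # assumed code for the predict step
    "args": {
        "model_name": "ClassicModel",
        "test_data": [[5.1, 3.5, 1.4, 0.2]],   # optional test data (assumed format)
    },
}

print(requests.post(url, json=payload, headers=headers).json())
```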
Sample Request
Evaluate model metrics
Decision Tree Classification Metrics
POST
https://autogon.ai/api/v1/engine/start
Request Body
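And a hedged sketch for retrieving model metrics. The metrics function code (shown here as "ML_CN_6_E") and all field names are assumptions; only the endpoint URL is confirmed by this page.

```python
# Hypothetical sketch: "ML_CN_6_E" and all fields besides the URL are
# assumptions; check the API reference for the exact metrics function code.
import requests

url = "https://autogon.ai/api/v1/engine/start"
headers = {"Content-Type": "application/json", "X-AUG-KEY": "<YOUR_API_KEY>"}

payload = {
    "project_id": 1,
    "parent_id": 4,
    "block_id": 5,
    "function_code": "ML_CN_6_E",    # assumed code for the metrics step
    "args": {"model_name": "ClassicModel"},
}

print(requests.post(url, json=payload, headers=headers).json())
```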