pyccea.evaluation package
Submodules
pyccea.evaluation.wrapper module
- class pyccea.evaluation.wrapper.WrapperEvaluation(task: str, model_type: str, eval_function: str, eval_mode: str, n_classes: int = None)[source]
Bases: object
Evaluate selected features based on the predictive performance of a machine learning model.
- Attributes:
- model_evaluator : object of one of the metrics classes
Responsible for computing performance metrics to evaluate models.
- base_model : sklearn model object
Unfitted model that serves as a template, avoiding repeated model initialization. Because each individual encodes a subset of features, the base model is copied and fitted once per individual.
- model : sklearn model object
Model that has been fitted to evaluate the current individual.
- estimators : list of sklearn model objects
Estimators fitted in the current evaluation. The list holds one estimator when 'eval_mode' is set to "hold_out" and k estimators when 'eval_mode' is set to "k_fold" or "leave_one_out".
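A minimal construction sketch follows. The task, model_type, and eval_function values below are illustrative assumptions, not an exhaustive list of supported identifiers; consult the package configuration for valid options:

```python
from pyccea.evaluation.wrapper import WrapperEvaluation

# Illustrative arguments: "classification" selects the classification metric
# and model helpers; the model_type and eval_function strings are assumptions.
evaluator = WrapperEvaluation(
    task="classification",
    model_type="logistic_regression",   # assumed model identifier
    eval_function="balanced_accuracy",  # assumed metric name
    eval_mode="k_fold",                 # one of 'hold_out', 'k_fold', 'leave_one_out'
    n_classes=2,
)
```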
Methods
evaluate(solution, data)
Evaluate an individual represented by a complete solution through the predictive performance of a machine learning model.
- eval_modes = ['hold_out', 'k_fold', 'leave_one_out']
- evaluate(solution: ndarray, data: DataLoader) → dict[source]
Evaluate an individual represented by a complete solution through the predictive performance of a machine learning model.
- Parameters:
- solution : np.ndarray
Solution represented by a binary n-dimensional array, where n is the number of features.
- data : DataLoader
Container with the processed data, including the training and test sets.
- Returns:
- dict
Evaluation metrics.
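A hedged usage sketch of evaluate. Here, loader stands in for an already-built DataLoader holding training and test sets, and the feature count is arbitrary; both are assumptions for illustration:

```python
import numpy as np

# Binary mask over n = 5 features: 1 keeps a feature, 0 drops it.
solution = np.array([1, 0, 1, 1, 0])

# `loader` is a hypothetical DataLoader instance prepared elsewhere.
metrics = evaluator.evaluate(solution=solution, data=loader)
print(metrics)  # dict of evaluation metrics for the fitted model
```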
- metrics = {'classification': <class 'pyccea.utils.metrics.ClassificationMetrics'>, 'regression': <class 'pyccea.utils.metrics.RegressionMetrics'>}
- models = {'classification': <class 'pyccea.utils.models.ClassificationModel'>, 'regression': <class 'pyccea.utils.models.RegressionModel'>}
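These class-level mappings show how the task argument resolves the metric and model helper classes. A sketch of the lookup (how the class consumes these mappings internally is an assumption):

```python
# Looking up the helpers registered for the "regression" task.
metrics_cls = WrapperEvaluation.metrics["regression"]  # pyccea.utils.metrics.RegressionMetrics
model_cls = WrapperEvaluation.models["regression"]     # pyccea.utils.models.RegressionModel
```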