Evaluate the given program against testcases and score it with metrics

Method __init__ Create a new Evaluation object
Instance Variable program The program being evaluated
Instance Variable testcases The list of testcases to execute the program on
Instance Variable metrics The list of metrics to evaluate the program on
Method evaluate Evaluate the program against testcases and return testcases with scores
Method to_json_object Convert into JSON object
Static Method from_json_object Generate an Evaluation object from a JSON object
Method __eq__ Compare two Evaluation objects for equality
Method get_scores Run the configured metrics on a testcase's outputs
def __init__(self, program, testcases, metrics=None):

Create a new Evaluation object

program = The program being evaluated (type: Program)

testcases = The list of testcases to execute the program on (type: List[TestCase])

metrics = The list of metrics to evaluate the program on (type: List[BaseMetrics])
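A minimal construction sketch follows. The import paths and the constructors of Program, TestCase and the ExactMatch metric are assumptions made for illustration; only the Evaluation signature above comes from this documentation.

from executioner import Evaluation, Program, TestCase   # import path assumed
from executioner.metrics import ExactMatch               # hypothetical BaseMetrics subclass

program = Program("solution.py")                                  # constructor assumed
testcases = [TestCase(input="1 2", expected_output="3")]          # fields assumed
evaluation = Evaluation(program, testcases, metrics=[ExactMatch()])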
def evaluate(self):

Evaluate the program against testcases and return testcases with scores

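Continuing the construction sketch above, a hedged usage example; the attribute that holds the per-testcase scores is assumed here, not documented:

scored_testcases = evaluation.evaluate()
for tc in scored_testcases:
    print(tc.scores)   # hypothetical attribute carrying the metric scores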
def to_json_object(self):

Convert into JSON object

@staticmethod
def from_json_object(data):

Generate an Evaluation object from a JSON object

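Together, to_json_object and from_json_object support a serialisation round trip. A sketch, assuming from_json_object returns an Evaluation and the JSON object is compatible with the standard json module:

import json

obj = evaluation.to_json_object()                  # serialise the evaluation
with open("evaluation.json", "w") as f:
    json.dump(obj, f, indent=2)

with open("evaluation.json") as f:
    restored = Evaluation.from_json_object(json.load(f))

assert restored == evaluation                      # __eq__ makes this check possible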
def __eq__(self, o):

Compare two Evaluation objects for equality
def get_scores(self, testcase):

Run the configured metrics on the given testcase's outputs and return the scores

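An illustrative sketch of what get_scores conceptually does; the per-metric interface (a score method on each BaseMetrics instance) is an assumption, not part of this documentation:

def get_scores_sketch(metrics, testcase):
    # Run every configured metric on the testcase and collect one result per metric.
    return {type(metric).__name__: metric.score(testcase) for metric in metrics}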