EVALUATION
Once you are done with the product definition, it is time to start evaluating its performance.
This is done in a few simple steps:
Build the DavinSy agents
The build is handled completely transparently by the Maestro backend.
Define the initial dataset
Simply hand-pick and label the records that will constitute your initial training dataset.
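As a rough illustration of this step, the sketch below builds an initial dataset by attaching labels to hand-picked records. The record IDs, labels, and the record layout are all made up for illustration; the actual format Maestro expects may differ.

```python
# Hypothetical raw records, keyed by an illustrative record ID.
raw_records = {
    "rec_001": [0.1, 0.2, 0.3],
    "rec_002": [0.9, 0.8, 0.7],
    "rec_003": [0.4, 0.5, 0.6],
}

# Hand-picked records with the labels we assign to them.
picked = {"rec_001": "idle", "rec_002": "walk"}

# Constitute the initial (training) dataset from the picked records.
initial_dataset = [
    {"id": rid, "data": raw_records[rid], "label": label}
    for rid, label in picked.items()
]
```

Records left out of `picked` (here `rec_003`) remain available for the test dataset defined in the next step.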
Define your test dataset
In the same way, you can select some records for testing. These records will be used to report the performance of your model as a confusion matrix. With a small Python development, it is even possible to plug a live sensor into Maestro.
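A live-sensor feed could be sketched as below. Maestro's actual Python interface is not shown in this document, so everything here is hypothetical: the sensor is simulated with random samples, and the record layout is an assumption.

```python
import random

def read_sensor_samples(n_samples, n_axes=3):
    """Simulate a live sensor stream (stand-in for real hardware).

    Each sample is a tuple of axis readings, e.g. from a 3-axis
    accelerometer. A real integration would read from the device instead.
    """
    for _ in range(n_samples):
        yield tuple(random.uniform(-1.0, 1.0) for _ in range(n_axes))

def make_test_records(samples, label):
    """Wrap raw samples into labeled test records.

    The record layout here is illustrative; the format actually expected
    by Maestro may differ.
    """
    return [{"data": list(sample), "label": label} for sample in samples]

# Collect a short burst of samples and label them for the test dataset.
records = make_test_records(read_sensor_samples(5), label="idle")
```

The same wrapping step would apply whether the samples come from a simulated stream, a file, or a real sensor.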
Run the agent and measure its performance
Maestro will launch, in its backend, a dedicated instance of your agent and feed it with the datasets you defined in the previous steps. At the end, you will get KPIs on the resources consumed by your agent and the timings needed for training and inference. You will also get a summary of the predictions in the form of a confusion matrix.
Maestro offers a convenient way to get the list of mispredicted records, allowing you to improve your training dataset.
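To make the confusion matrix and the mispredicted-records list concrete, here is a minimal self-contained sketch of both, independent of Maestro itself; the labels and record names are invented for the example.

```python
from collections import defaultdict

def confusion_matrix(true_labels, predicted_labels):
    """Count (true label, predicted label) pairs into a nested dict."""
    matrix = defaultdict(lambda: defaultdict(int))
    for t, p in zip(true_labels, predicted_labels):
        matrix[t][p] += 1
    return {t: dict(row) for t, row in matrix.items()}

def mispredicted(records, true_labels, predicted_labels):
    """Return the records whose prediction disagrees with their label."""
    return [
        r
        for r, t, p in zip(records, true_labels, predicted_labels)
        if t != p
    ]

y_true = ["idle", "idle", "walk", "walk", "run"]
y_pred = ["idle", "walk", "walk", "walk", "run"]

cm = confusion_matrix(y_true, y_pred)
# cm["idle"] == {"idle": 1, "walk": 1}  (one "idle" was confused with "walk")

bad = mispredicted(["r1", "r2", "r3", "r4", "r5"], y_true, y_pred)
# bad == ["r2"]  (the record whose "idle" label was predicted as "walk")
```

The mispredicted records (here `r2`) are exactly the ones worth relabeling or adding to the training dataset in the next iteration.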
This whole cycle, including compilation, deployment, training, and testing, can be completed in less than 5 minutes.