Model Inference

In machine learning, there are numerous outstanding libraries available – PyTorch, TensorFlow, Scikit-Learn, etc. As new frameworks emerge and workflows become more advanced, the need for portability is more critical than ever. For this reason, Microsoft designed ONNX to enable framework interoperability. ONNX is a robust, open standard that helps you avoid framework lock-in and keeps the models you create usable in the long term.
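To make the idea concrete, here is a rough sketch of what a manual export to the ONNX format looks like using PyTorch's built-in exporter. The model and input shape below are made up purely for illustration; as described next, Modelify performs this conversion step for you.

import torch
import torchvision

# Illustrative only: export a PyTorch model to ONNX by tracing it
# with a dummy input of the expected shape.
model = torchvision.models.resnet18(weights=None)
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)  # a batch of one 224x224 RGB image
torch.onnx.export(model, dummy_input, "resnet18.onnx")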

Modelify uses ONNX Runtime to optimize and accelerate machine learning inference. To do this, Modelify converts the model you develop with any supported framework to the ONNX format for you. However, you must first create a Model Inference object.

There are three required parameters:

  • model: your model object
  • framework: the framework name (check out which frameworks are supported here)
  • inputs: the input structure (check out here)

Here is an example:

from modelify import ModelInference
from modelify.inputs import Image

# `model` is your trained model object (here, a Keras model)
my_input = Image(width=28, height=28, channel=3)
inference = ModelInference(model=model, framework="KERAS", inputs=my_input)
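In the snippet above, model is assumed to be a model you have already built and trained. Purely as a hypothetical sketch, a Keras model whose input matches the Image(width=28, height=28, channel=3) definition could look like this:

from tensorflow import keras

# Hypothetical minimal model matching the 28x28 RGB input defined above
model = keras.Sequential([
    keras.layers.Input(shape=(28, 28, 3)),
    keras.layers.Flatten(),
    keras.layers.Dense(10, activation="softmax"),
])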

Test your Model Inference

To verify that your Model Inference object works correctly, you need to add sample data to your input object.

Here is an example:

from modelify import ModelInference
from modelify.inputs import Image

# Define the input structure and attach a sample image for testing
my_input = Image(width=28, height=28, channel=3)
my_input.add_sample("test_image.jpg")
inference = ModelInference(model=model, framework="KERAS", inputs=my_input)

# Run the sample through the converted model to verify everything works
inference.test()
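Note that add_sample appears to take the path to an image file on disk, so that file must exist. If you only want to smoke-test the pipeline, you can generate a placeholder image yourself, for example with Pillow (this helper is not part of Modelify):

import numpy as np
from PIL import Image as PILImage

# Create a random 28x28 RGB image and save it as the sample file used above
pixels = (np.random.rand(28, 28, 3) * 255).astype("uint8")
PILImage.fromarray(pixels).save("test_image.jpg")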