Mantium provides visibility into the performance of all of your organization’s deployed prompts through dashboards, logs, and alerts. Being able to check usage and alerts quickly is key to running your AI safely and avoiding unintended consequences for your organization. Whether you want a high-level view from our dashboard or a deep dive into the raw JSON, we’ve got you covered.
The dashboard makes it easy to see how heavily your prompts are being used, whether they have triggered any human-in-the-loop notifications, and how many model executions you have run.
The logs capture all of your prompts’ outputs and the inputs used to generate them. This information can help you adjust your prompts, and the resulting audit trail can also help you compile compliance reports if they’re ever needed.
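To picture what such an audit trail captures, here is a minimal sketch of one execution record as input, output, and metadata. This is purely illustrative — the field names and schema are assumptions, not Mantium’s actual log format:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class PromptExecutionLog:
    """One entry per model execution: the input, the output, and context."""
    prompt_id: str
    prompt_input: str
    model_output: str
    timestamp: str

def log_execution(prompt_id: str, prompt_input: str, model_output: str) -> str:
    """Serialize an execution record to JSON for the audit trail."""
    entry = PromptExecutionLog(
        prompt_id=prompt_id,
        prompt_input=prompt_input,
        model_output=model_output,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(entry))

# Hypothetical prompt id and content, for illustration only.
record = json.loads(log_execution("summarizer-v2", "Summarize the report", "A short summary."))
print(sorted(record.keys()))
```

Because each record pairs the generating input with its output, a reviewer can reconstruct exactly why a given result was produced.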
One of the keys to AI governance is transparency. Mantium’s collaboration-centered platform provides the transparency needed to ensure that every member of the project knows how prompts were designed, what data was used, and how the AI is being used.
The data used in machine learning models has biases and inconsistencies that can lead to inaccurate or harmful results. While careful prompt design and fine-tuning can minimize these issues, undesirable results can still arise. Mantium helps you show that your AI is reliable, explainable, and responsible by offering visibility into both design and usage.
Our Human in the Loop feature lets you configure a trigger event that pauses processing and requires a human to intervene when the model generates undesirable output. This lets you refine your models for consistently better performance. It also helps you identify potentially offensive output your AI may have generated so that you can correct it immediately and protect your business.
Mantium is committed to helping our users build AI safely and ethically. At the center of this commitment is effective governance of the AI lifecycle: adequate logging, collaboration with domain experts, and human involvement during both inference and training. Mantium’s processes allow collaborators to operate inside our default guardrails or build guardrails unique to their needs.
Collaborate with domain experts to ensure that your AI is being trained on the right data. Use corrections to fine-tune your model for your specific requirements and use cases.
Use our default policies to monitor your AI for offensive speech or excessive verbosity, or customize a policy to suit your needs.
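A policy can be thought of as a check over model output. The two defaults mentioned above might be sketched like this — the thresholds, blocklist, and function names are hypothetical, chosen only to illustrate how default and custom policies compose:

```python
from typing import Callable

# A policy takes model output and returns a list of violation messages
# (empty list = the output passed this policy).
Policy = Callable[[str], list]

def verbosity_policy(max_words: int = 100) -> Policy:
    """Flag output that exceeds a word-count budget."""
    def check(output: str) -> list:
        n = len(output.split())
        return [f"too verbose: {n} words > {max_words}"] if n > max_words else []
    return check

def offensive_speech_policy(blocked: set) -> Policy:
    """Flag output containing any term from a blocklist."""
    def check(output: str) -> list:
        lowered = output.lower()
        return [f"blocked term: {t}" for t in sorted(blocked) if t in lowered]
    return check

def evaluate(output: str, policies: list) -> list:
    """Run every policy; an empty result means the output passed them all."""
    return [violation for policy in policies for violation in policy(output)]

policies = [verbosity_policy(max_words=5), offensive_speech_policy({"heck"})]
print(evaluate("Short and clean.", policies))  # passes: []
print(evaluate("What the heck is going on here now?", policies))
```

Because each policy is just a function, a custom policy slots into the same list as the defaults — no special machinery is needed to extend the set of checks.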