Last updated: May 4, 2026
Hansa Cequity builds AI systems that augment human contact-centre teams. This Responsible AI Policy explains how we design, deploy, and govern our AI to keep customers, end users, and agents safe, informed, and in control.
Our platform uses a combination of in-house, open-source, and trusted third-party models. We do not use Customer Data to train foundation models unless the customer provides explicit, written consent. Customer Data processed by third-party model providers is governed by enterprise agreements that prohibit those providers from training on that data.
Where required by law, and as a matter of good practice, our voice and chat agents identify themselves as AI at the start of an interaction. Default templates include AI disclosure language; however, customers deploying our Services are responsible for configuring disclosures appropriate to their jurisdiction and use case, and compliance with applicable disclosure laws and regulations remains their responsibility.
Consistent with our Acceptable Use Policy, customers may not use our AI to: