A cornerstone of good AI is the quality of the data used to train the model.
Vima’s algorithms, which assess traits, soft skills, and emotions, meet high standards of data quality. Specifically, they are built on human annotation of videos following a scientifically validated protocol, and on the careful selection of a large pool of expert annotators (psychology background, gender-balanced, verified annotation reliability).
Vima reduces biased assessment by measuring multimodal expressions of traits, states, and soft skills through behaviors. In short, we focus on what you do, not on what you say you do.
In addition, Vima works relentlessly to train its algorithms on quality data, in terms of both video quality and human labeling, upholding the principle of “quality in, quality out”. For example, individual results are compared against a norm group with a similar language and geographical background.
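Vima’s actual norming procedure is not public; the following is a minimal, hypothetical sketch of the general idea of norm-group comparison: an individual’s raw score is standardized against the distribution of scores from peers with a comparable background, yielding a z-score and an approximate percentile rank.

```python
# Illustrative sketch only: the data and function names here are
# hypothetical, not Vima's actual implementation.
import statistics
from math import erf, sqrt

def z_score(score: float, norm_scores: list[float]) -> float:
    """Standardize a score against a norm group's mean and std dev."""
    mean = statistics.mean(norm_scores)
    stdev = statistics.stdev(norm_scores)
    return (score - mean) / stdev

def percentile(z: float) -> float:
    """Approximate percentile rank, assuming normally distributed norms."""
    return 50.0 * (1.0 + erf(z / sqrt(2.0)))

# Hypothetical norm group: trait scores from peers sharing the same
# language and geographical background.
norm_group = [3.1, 3.4, 2.9, 3.6, 3.2, 3.0, 3.5, 3.3]
z = z_score(3.6, norm_group)
rank = percentile(z)
```

The key design point is that a score is meaningful only relative to the right reference population; comparing against a mismatched norm group (different language or region) would reintroduce exactly the bias the norming step is meant to remove.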
Designing a cross-disciplinary system, such as Vima’s behavioral analysis engine, is crucial for assessing the validity of both its training data (annotations and ratings of behavioral traits and skills) and the predictive performance of the trained machine-learning system.
Vima’s solutions are developed and validated through scientific research carried out by social scientists and artificial intelligence (AI) experts. To safeguard people’s rights and protect the data it is entrusted with, the company works with an ethical and human rights committee and a data security and compliance officer. Vima also educates its customers and partners and shares its research with them, and it ensures that the risk-benefit ratio for data subjects and customers is adequate, fully respects their dignity, and safeguards their decision-making power. These competencies and procedures underlie Vima’s solutions and form the first pillar upon which the company builds trust.
Transparency is the second pillar upon which Vima builds trust. The company uses clear usage terms in communications to all persons concerned, ensures that data owners control their data, and focuses on developing explainable models. When collecting data for training and testing models, Vima obtains informed consent from research participants in accordance with the Swiss Human Research Act, the Declaration of Helsinki for research involving human subjects, and the EU’s GDPR. To ensure transparency, every research protocol or business application is submitted to and approved by an Ethics Committee (anchored in Vima’s organizational rules) and, if necessary, presented for feedback to the Scientific Advisory Board or other external specialists.
Vima creates accountability by actively seeking feedback from stakeholders, the Ethics Committee, and the Scientific Advisory Board. Our AI customer solutions are designed to be accurate and robust, and to support, rather than replace, human decision-making.
Vima strives for a non-discriminatory, fair representation of individuals belonging to different groups in its models and customer solutions. In building its AI tools, the company aims to respect the values of all stakeholders, not just those of the developers. Therefore, Vima mitigates the human tendency toward systematic error or prejudice (bias) across the different stages of model development (selection of training data, quality labeling, feature analysis), as well as in application within an appropriate business environment (correct use of the algorithm and interpretation of its output).
Vima’s products are geared towards creating benefits and added value for their users and customers. Vima aims to make its solutions accessible to society at large by promoting growth and higher-quality services and products that minimize biased decision-making (organizational development, recruitment, health and safety, personalized consumables, etc.).