Our commitment to ethical AI

We believe in the potential of AI to improve everyone’s lives. At Intenseye, we are committed to using AI responsibly and to embedding ethical principles into AI innovation.

Intenseye Ethics Principles

01 Harm-Benefit

Accuracy & Reliability

We provide you with accurate data to help improve the safety of your operations.


Data privacy is important to us. Your data is encrypted and deleted as per your custom retention policy.


Keeping workers healthy and safe is the core of our business.


We know a happy and healthy workforce is what drives you to success every day.


Make your cameras work for you, impacting your most important asset, your people.


Our data helps you dedicate your time to where your team needs you most.

Human Control & Oversight

Intenseye allows human oversight to ensure that the AI system does not undermine human autonomy or cause other adverse effects.


Humans have the right to be informed when they are interacting with an AI system. Our service clearly informs users whether the information provided comes from an AI algorithm or from a human.


Our AI models can be understood and traced by human beings. Customers can request a suitable explanation of the AI system’s decision process.


By automating the repetitive and mundane aspects of safety and health inspections, Intenseye enables safety inspectors and customers to apply their skills to problem-solving, while providing them with the relevant information about the problems.


Intenseye does not directly engage with workers, but it does provide them with information regarding their own and their workplace’s safety violations. Intenseye’s task management workflow integrates workers’ agency into the Intenseye system and enhances the system’s overall impact on information flow and individual agency.


Intenseye requires its users to deploy the technology responsibly, clearly informing workers and limiting its use to its intended and justified goals.


In AI systems, privacy is a fundamental right that can be affected. To prevent harm to privacy, we have put countermeasures in place.

Distribution of Burdens & Benefits

Intenseye does not directly affect the distribution of resources within society. However, by making the workplace safer, it helps protect vulnerable groups, who are often the ones taking risky jobs.

Equality & Non-Discrimination

To avoid unfair bias, we constantly evaluate the health of our datasets. We do not rely on data that can lead to unintended direct or indirect prejudice and discrimination against certain groups of people. We remove information that can lead to identifiable and discriminatory bias in our data collection processes.

Protecting the Vulnerable

Workers who accept higher risks in their workplace often do so out of desperation. Implementing safety measures for workers helps reduce disparities within society, and Intenseye serves this function.


In our research and development, ethical and legal considerations are the first things we evaluate. At Intenseye, we make it clear that designing and developing AI algorithms responsibly is everyone’s responsibility.


Intenseye’s AI algorithms support human autonomy and decision-making. Users can make autonomous decisions regarding the AI system.

From Principles to Practice - by AI Ethics Lab

AI Ethics Lab used its tool, the Box, to evaluate Intenseye’s system against 3 core values and 18 instrumental principles and to offer guidance.

AI Ethics Report
Cansu Canca
CEO - AI Ethics Lab

Cansu is a philosopher and the founder and director of the AI Ethics Lab, where she leads teams of computer scientists, philosophers, and legal scholars to provide ethics analysis and guidance to researchers and practitioners. She is also an AI ethics and governance expert consultant for the United Nations, working with the UNICRI Centre for AI & Robotics and Interpol in building a “Toolkit for Responsible AI Innovation in Law Enforcement”. Cansu has a Ph.D. in philosophy specializing in applied ethics. She primarily works on the ethics of technology, having previously worked on ethics and health. She serves as an ethics expert on various ethics, advisory, and editorial boards.

Prior to the AI Ethics Lab, she was on the full-time faculty at the University of Hong Kong, and an ethics researcher at the Harvard Law School, Harvard School of Public Health, Harvard Medical School, National University of Singapore, Osaka University, and the World Health Organization. She was listed among the “30 Influential Women Advancing AI in Boston” and the “100 Brilliant Women in AI Ethics”. She is also the first technology and AI ethicist in Turkey. She tweets @ccansu.
