Core principles of responsible AI use
We design, develop, and deploy AI solutions according to industry best practices. Clients should follow these principles when using AI tools in practice:
Safety and security
Protect employees, customers, and end users from harmful or malicious use of AI. We implement AI systems so they do not endanger users or IT infrastructure. Users must not use AI to compromise the security of any service or system. This includes preventing cyberattacks using AI, following security measures, and not circumventing safeguards.
Privacy and data protection
Safeguard personal data and privacy. Because AI tools may process sensitive information, they must be protected against data leaks and misuse. Do not input personal or other sensitive data into AI systems unless necessary—and always in accordance with applicable law (e.g., GDPR) and internal policies. We design AI projects to meet confidentiality and data-minimization requirements, and we expect the same from users.
Human accountability and oversight
AI is powerful but must remain under human control. A human—developer or user—bears final responsibility for the operation of AI and decisions made with its assistance. We design our AI solutions to be as transparent as possible and to enable human oversight. For important processes (e.g., those affecting customers or employees), human review is required to verify and approve AI outputs. Users should understand AI’s limits (e.g., potential inaccuracies or “hallucinations”) and evaluate outputs for accuracy and appropriateness before use.
Fairness and non-discrimination
AI should benefit all users without bias. We work to ensure our AI systems and models do not contain embedded biases or discriminate against anyone. AI must not advantage or disadvantage people based on protected characteristics (e.g., race, gender, age, nationality). When developing models and automated decision rules, we monitor data and algorithms for fairness and inclusivity and remediate any identified bias (e.g., by adjusting data, models, or rules). We expect the same commitment from users.
Transparency
People affected by AI should know when and how it is used. Easy8.ai promotes transparency: it should be clear what tasks AI performs, what data it uses, and how results are produced. If AI-generated content is presented to a customer, they should be informed to avoid confusion with human-created work. We explain the capabilities and limitations of deployed AI tools in understandable terms because openness strengthens trust and supports the verification of AI outputs.
Safe innovation and utility
We encourage responsible experimentation with AI to improve processes and services through intelligent automation. We consider AI a useful tool that—when used ethically—can inspire new solutions and growth. Every deployment is assessed for risk versus benefit; we introduce new AI features only when expected benefits clearly outweigh potential risks. We keep up with advances in AI to put modern technologies to work for clients—always aligned with the principles above (safety, privacy, fairness, etc.).
Acceptable and unacceptable uses of AI
✅ Acceptable uses
AI may be used in the following ways, provided that human oversight and this policy’s principles are respected:
Automation of routine tasks
Moving data between systems, generating reports, or other repetitive activities to save employee time.
Data analysis and forecasting
Supporting decision-making through AI-driven insights, always verified by human review.
Drafting and ideation
Proposing text, code, or other outputs for internal use, subject to human approval before publication or implementation.
Customer service personalization
Using AI (e.g., chatbots) to assist customers, provided it is transparent that they are interacting with a virtual assistant and all other policy rules are followed.
Note: For sensitive activities (e.g., employee evaluation, legal or financial advice), AI should be used with great caution or not at all; final responsibility always lies with a qualified human.
❌ Prohibited uses
The following uses of AI are strictly forbidden:
Breaking laws or enabling illegal activity
No use of AI for fraud, cybercrime, spreading malware, producing prohibited substances, or exploiting/abusing children.
All AI usage must comply with applicable laws.
Disinformation and manipulation
Do not generate or spread false or misleading content.
Do not impersonate others, create deceptive deepfakes, or manipulate public opinion.
Bypassing safeguards
Do not attempt to override safety filters, limits, or protective mechanisms.
Do not “coax” models into producing restricted content or share harmful outputs.
Harassment, hate, or inappropriate content
No generation of content that bullies, threatens, incites violence, or spreads hate (especially based on protected characteristics).
Never sexualize or depict minors.
Violations of privacy and personal rights
Do not surveil, profile, or re-identify individuals without lawful authority.
AI processing of personal data must be lawful, with consent where required.
Do not generate or infer sensitive personal data (e.g., health, politics) without legal grounds.
Violent or harmful applications
AI must not be used to incite violence, cause physical harm, or be deployed in weapons systems.
Easy8.ai does not support autonomous weapons or other “unacceptable risk” applications under emerging regulations.
Violating any of these prohibitions may constitute a serious breach of service terms.
Easy8.ai reserves the right to take appropriate action, including restricting access to AI services or terminating cooperation.
Data protection and privacy in AI projects
Working with AI often involves data, including very sensitive data. Easy8.ai therefore emphasizes data protection, confidentiality, and privacy in all AI activities:
Minimize sensitive data sharing: Employees and clients should input only the minimum necessary information into AI systems. Where possible, anonymize or aggregate sensitive data (personal data, trade secrets, etc.) before use. When using external AI tools (e.g., public APIs), carefully consider what information is provided and follow their data-protection terms.
Security standards: All AI tools and platforms we use or integrate (including n8n) undergo security review before deployment. We assess their security posture, data-protection mechanisms, and regulatory alignment. Our vendors must also maintain appropriate safeguards (encryption, access controls, etc.).
Leak and misuse prevention: Staff are trained not to share sensitive internal or customer information with AI tools without proper authorization. It is forbidden to input any sensitive company or client data into publicly accessible AI chatbots. Access to AI tools and generated data is role-based (need-to-know). AI outputs containing sensitive information are handled confidentially.
Compliance with data-protection law: We comply with applicable privacy laws, especially GDPR. Where AI processes personal data, it does so on a lawful basis (consent, contract, legitimate interest, etc.) and only as necessary. We maintain retention and deletion rules for data obtained or created by AI systems in line with data-minimization and storage-limitation principles.
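The "minimize sensitive data sharing" rule above can be partly automated. The following is a minimal sketch, not a production anonymizer: it strips obvious identifiers (email addresses, phone numbers) from text before it is sent to any external AI tool. The patterns and placeholder labels are assumptions for illustration; real deployments need broader PII coverage and review.

```python
import re

# Illustrative redaction pass applied to text before it leaves the
# organization. Patterns and placeholders are example assumptions only.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[PHONE]": re.compile(r"(?<!\w)\+?\d[\d \-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace obvious personal identifiers with neutral placeholders."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

# Example: redact a prompt before sending it to a public AI API.
prompt = "Summarize the complaint from jane.doe@example.com, tel. +420 601 234 567."
safe_prompt = redact(prompt)
```

Redaction of this kind reduces, but does not eliminate, leak risk; it complements, rather than replaces, the prohibition on entering sensitive data into public chatbots.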
Human oversight and accountability
Human supervision: For every AI solution, we define the role of human oversight. In processes with significant impact (e.g., customer experience, finance, HR decisions), AI acts as a recommending tool; a human has the final say. Clients should follow the same rule—e.g., an AI agent in HR or healthcare must not make irreversible decisions without human confirmation. Where AI performs “material decisions” (e.g., candidate assessment, credit decisions, medical recommendations), a human review mechanism must be in place.
Verification of AI outputs: Users must not accept every AI output as true or optimal. They should critically evaluate results—checking correctness, relevance, and potential biases or errors. Generative AI can sometimes produce incorrect or fabricated information. Users are responsible for ongoing oversight and timely intervention. Important outputs must be reviewed by a human before use in decisions or publication.
Training and awareness: Easy8.ai ensures our employees understand how AI tools work and how to interpret their outputs. We provide training on risks (e.g., bias, model limitations) and safe handling. We also recommend clients educate their teams on responsible AI use.
Remediation mechanisms: We maintain procedures for reporting incidents or undesirable AI outputs. If an employee or customer notices behavior contrary to this policy (e.g., inappropriate content, clearly erroneous data), they should promptly report it. We will correct issues—by adjusting models, inputs, or strengthening human oversight. Continuous monitoring and the ability to intervene are essential.
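A human review mechanism for "material decisions" can be enforced in code. The sketch below, under assumed names (AIRecommendation, requires_review, finalize) that are not an Easy8.ai API, shows one simple gate: material decision types and low-confidence outputs are held as pending until a human explicitly approves them.

```python
from dataclasses import dataclass

# Illustrative human-in-the-loop gate; decision labels and the
# confidence threshold are example assumptions, not policy values.
@dataclass
class AIRecommendation:
    subject: str        # e.g. a candidate or application ID
    decision: str       # what the model proposes
    confidence: float   # model's own confidence score (0..1)

def requires_review(rec: AIRecommendation, threshold: float = 0.9) -> bool:
    """Material decisions always go to a human; low-confidence ones too."""
    material = rec.decision in {"reject_candidate", "deny_credit"}
    return material or rec.confidence < threshold

def finalize(rec: AIRecommendation, human_approved: bool = False) -> str:
    """An AI recommendation only becomes a decision with human sign-off."""
    if requires_review(rec) and not human_approved:
        return "pending_human_review"
    return rec.decision
```

The key design choice is that the default path is "pending": the system cannot act on a gated recommendation unless a person has confirmed it, which matches the rule that AI must not make irreversible decisions alone.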
Preventing bias and discrimination
We operationalize our fairness commitment through concrete practices:
Data quality and diversity: Training data and automation rules must be representative and impartial. We mitigate systematic data issues that could lead to discriminatory outcomes (e.g., ensuring CV-screening models do not draw inappropriate conclusions from gender or age).
Bias testing: We test AI solutions to detect bias, examining outputs across user groups and scenarios for consistency and fairness. If we identify skew (e.g., lower accuracy for a particular group), we adjust the model or logic to prevent discrimination.
Openness and correction: While eliminating all bias may be difficult, we actively work to reduce it. We encourage users to report unfair behavior; we analyze such reports and take corrective steps. Our goal is for AI to help everyone equally without conferring or imposing undue advantage or disadvantage.
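Bias testing as described above often starts with a simple per-group comparison. The sketch below computes a demographic-parity gap (the largest difference in positive-outcome rate between groups); the group labels, sample data, and the 0.2 tolerance are assumptions for illustration, and real fairness audits use multiple metrics.

```python
# Illustrative group-fairness check: compare a model's positive-outcome
# rate across groups. Data and threshold are example assumptions.
def positive_rate(outcomes):
    """Fraction of positive (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def parity_gap(outcomes_by_group):
    """Largest difference in positive-outcome rate between any two groups."""
    rates = [positive_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Example: screening outcomes (1 = shortlisted) recorded per group.
outcomes = {
    "group_a": [1, 1, 0, 1, 0, 1],
    "group_b": [1, 0, 0, 0, 1, 0],
}
gap = parity_gap(outcomes)
if gap > 0.2:  # tolerance chosen purely for illustration
    print(f"Fairness review needed: parity gap {gap:.2f}")
```

A large gap does not prove discrimination by itself, but it is exactly the kind of skew that should trigger the adjustment and correction steps named above.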
Transparency in AI use
AI content labeling: When content is generated by AI (e.g., automated emails, reports, chatbot replies), we recommend appropriate labeling or disclosure. Recipients have the right to know whether they are interacting with a human or an AI assistant. We can help clients configure clear labeling and tone for virtual assistants.
Explainability: Although some models operate as “black boxes,” we aim to make decision logic understandable. For critical applications, we prefer approaches that support explainability (Explainable AI). Where feasible, we provide insight into inputs and parameters that drive AI outputs.
Proactive communication: We inform customers about significant AI features. If changes (e.g., model upgrades) could affect system behavior, we notify clients in advance. We welcome questions about how our AI is designed, including data sources, algorithms, and controls.
Regulatory transparency: Under the EU AI Act and related regulations, certain AI systems will be subject to transparency duties (e.g., labeling AI-generated content, maintaining dataset and logging records for audits). Easy8.ai aligns with these principles and will support clients in meeting applicable obligations.
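AI content labeling can be implemented as a simple post-processing step on every generated reply. This is a minimal sketch; the label text and function name are assumptions, and the wording should be adapted to the channel and audience.

```python
# Illustrative disclosure labeling for AI-generated chatbot replies.
# The label text is an example assumption; adapt it to your channel.
AI_DISCLOSURE = "[AI assistant] "

def label_reply(generated_text: str) -> str:
    """Attach the disclosure unless it is already present (idempotent)."""
    if generated_text.startswith(AI_DISCLOSURE):
        return generated_text
    return AI_DISCLOSURE + generated_text
```

Making the labeling step idempotent and applying it at the last point before delivery helps ensure no AI-generated reply reaches a recipient unlabeled.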
Compliance with laws and standards
Easy8.ai designs and deploys AI solutions in compliance with applicable laws and regulations. Because AI-related rules are evolving, we will keep this policy aligned with current requirements. We focus in particular on:
EU AI Act: The EU’s AI Act establishes a framework for safe, transparent, and ethical AI. Easy8.ai already aligns with its core principles—for example, we do not deliver AI systems considered “unacceptable risk,” we uphold transparency, and we enable human oversight. Once the AI Act fully enters into force and applies, we will ensure full compliance and help clients understand impacts and prepare.
Data protection (GDPR): As noted above, we strictly follow GDPR, including conducting Data Protection Impact Assessments (DPIAs) where required (e.g., large-scale processing of sensitive data or systematic monitoring). We assist with necessary documentation and appropriate measures.
Other relevant rules: We also follow laws and standards on information security, cybersecurity, intellectual property, non-discrimination, and software liability. For international projects, we consider applicable foreign AI regulations and recognized standards (e.g., ISO/IEC for AI and security). Each user is responsible for ensuring their specific AI use complies with the laws applicable to them; when in doubt, seek legal advice. Our internal teams (Legal, Security, Management) continuously evaluate compliance of AI projects and respond to regulatory developments. We expect the same diligence from customers when using AI in their organizations.
User responsibilities and enforcement
Following this policy: Reviewing and agreeing to this AI Usage Policy is mandatory. If a user violates the policy—e.g., uses AI for illegal activity or repeatedly ignores safety instructions—we may limit or terminate access to our services. We may also take preventive action where we detect potential misuse, as permitted by our service terms.
Incident cooperation: In case of a security incident or policy breach related to AI (e.g., a data leak via an AI tool, discovery of problematic AI-generated content), users must promptly inform Easy8.ai and provide necessary details for investigation. We will remediate together and implement lessons learned.
Respecting third-party terms: Some third-party tools and models we integrate have their own terms of use. Users must respect those terms in addition to this policy.
Maintaining confidentiality: If we provide access to internal AI configurations, scripts, or models we have developed, users must keep such information confidential and not disclose or misuse it.
Policy updates and review
AI technologies and related risks evolve. Easy8.ai will review and update this policy regularly. We will conduct at least an annual review and make interim updates when significant changes occur (e.g., new legislation, new internal AI use cases, or material industry incidents). We will publish updates on our website and notify customers and partners of material changes. We encourage all users to consult the current version regularly so the policy remains clear, current, and effective.
Closing
Easy8.ai is committed to responsible AI for the long term. Thank you for following these principles with us. Together we can use AI to deliver value and innovation without compromising what matters most—trust, safety, and fairness in the digital world.
Effective Date: 1 June 2025
Last Updated: 20 August 2025
This policy will be reviewed on a regular basis to ensure it reflects current legal, ethical, and technological standards. Updates may be issued at any time, and employees will be notified of significant changes.