AI Directive
Use of Artificial Intelligence (AI) at interfaceforce eK
1. Objective and scope
This policy sets out the requirements for the responsible, safe, and ethical use of artificial intelligence (AI) within our organization. It applies to all employees, partners, and external service providers who use or develop AI systems as part of their work.
2. Principles of AI use
- Transparency: The use of AI must be transparent to all parties involved. Decisions supported or made by AI must be documented and, where possible, explained.
- Responsibility: For each AI application, a responsible person must be appointed who oversees its use, monitoring, and further development.
- Data protection and security: Personal data may be processed by AI systems only in accordance with applicable data protection laws (e.g., the GDPR). Appropriate technical and organizational measures must be taken to protect this data.
- Fairness and non-discrimination: AI systems must not make discriminatory decisions or reinforce existing biases. Training data and models must be regularly checked for bias.
- Sustainability: The use of AI should be resource-efficient. Energy consumption and computing effort must be considered when selecting and using AI solutions.
3. Areas of application
AI may be used in the following areas:
- Automation of repetitive tasks (e.g. data analysis, text creation)
- Support in decision-making processes (e.g. forecasts, recommendations)
- Interactive systems (e.g. chatbots, virtual assistants)
The use of AI is not permitted for:
- Surveillance without legal basis
- Manipulation of information or opinions
- Creation or distribution of deepfakes without labeling
4. Requirements for AI systems
- A risk analysis must be carried out before use.
- Systems must be regularly reviewed for functionality, security and ethical implications.
- For critical applications, human oversight must be ensured (“human-in-the-loop”).
5. Training and awareness-raising
All employees who work with AI systems must receive regular training. The goal is to provide a basic understanding of the opportunities, risks, and limitations of AI.
6. Further development and evaluation
This policy is regularly reviewed and adapted to technological and legal developments. Feedback from practice is incorporated into further development.
7. Contact and reporting point
If you have questions about the use of AI or wish to report an incident, please contact the internal AI coordination office:
rs@interfaceforce.de