Responsible AI Usage Charter for IT Training Centres

January 11, 2026
AI Ethics · Change Management · AI Adoption · Training · Responsible AI


For Certified Information Technology Training Centres

Scope: Cybersecurity, Cloud, DevOps, Software Development

Authors: Mohamed Ben Lakhoua & Manus AI
Date: January 2026
Version: 1.1


Preamble: Co-Pilot, Not Autopilot

Artificial intelligence is profoundly transforming the IT profession. In this context, certified training centres have a responsibility to prepare students to work with AI, while developing the foundational skills that make them autonomous and accountable professionals.

This charter rests on a single guiding principle: AI as co-pilot, never as autopilot. Generative AI tools (ChatGPT, GitHub Copilot, Claude, Gemini, etc.) are powerful assistants that must augment human capability, not replace it. A competent IT professional must understand what they produce, be able to explain it, and take full responsibility for it.

This charter draws on the UNESCO Recommendation on the Ethics of AI[1] and is addressed specifically to vocational training centres and universities training students in technical IT disciplines.


1. Core Principles

1.1 Transparency and Honesty

Any use of generative AI tools within a training context must be explicitly declared. Students must indicate:

  • Which tools were used (name, version if applicable)
  • For which specific tasks
  • To what extent AI contributed to the final output

This transparency applies to practical work, projects, reports, and presentations. It does not apply to formal assessments (exams, certifications) where AI use is generally prohibited unless explicitly stated otherwise.

1.2 Accountability and Human Oversight

The student remains fully responsible for the content they produce, even when using AI. This means:

  • Verifying the technical validity of solutions proposed by AI
  • Understanding the code, architectures, or configurations generated
  • Being able to explain and justify their choices
  • Correcting errors, hallucinations, or approximations produced by AI

A professional cannot invoke "the AI said so" as an excuse for a technical error, a security vulnerability, or a poor architectural decision.

1.3 Development of Foundational Skills

AI use must never short-circuit the learning of fundamentals. Students must first master core concepts before using AI to accelerate their work. For example:

  • Understand algorithms before asking AI to code them
  • Master security principles before using AI to detect vulnerabilities
  • Know cloud architectures before generating Terraform templates

AI is a skills multiplier, not a substitute for skill.

1.4 Ethics and Compliance

AI use must comply with:

  • GDPR: never transmit personal or sensitive data to public AI tools
  • Software licences: verify that AI-generated code does not violate copyright
  • Security rules: do not expose secrets (API keys, passwords, sensitive configurations) in prompts
  • Institutional policies: respect the specific rules of the training centre

2. Permitted AI Uses by Discipline

This section details pedagogically relevant AI use cases within each technical discipline.

2.1 Cybersecurity

| Use Case | Permitted | Conditions |
| --- | --- | --- |
| Log analysis and anomaly detection | ✅ Yes | Student must understand the attack patterns identified by AI |
| Vulnerability report generation | ✅ Yes | Student must technically validate each vulnerability before reporting it |
| Security policy drafting | ✅ Yes | Student must adapt content to the specific organisational context |
| Exploitation script generation (pentesting) | ⚠️ Conditional | Only in a controlled lab environment, with supervision |
| Malware analysis | ✅ Yes | AI may assist with decompilation or code explanation, but student must validate the analysis |
| Automated incident response | ❌ No | Student must develop decision-making capability in crisis situations |

Guiding principle: AI can accelerate analysis, but the student must always understand the nature of threats and be capable of conducting a manual investigation.

2.2 Cloud & Infrastructure

| Use Case | Permitted | Conditions |
| --- | --- | --- |
| IaC template generation (Terraform, CloudFormation) | ✅ Yes | Student must understand every resource and parameter in the template |
| Cloud cost optimisation (FinOps) | ✅ Yes | Student must validate recommendations and understand their impact |
| Cloud architecture design | ⚠️ Conditional | AI may propose patterns, but student must justify architectural choices |
| Configuration debugging (Kubernetes, Docker) | ✅ Yes | Student must understand the root cause, not just apply the fix |
| Automation script generation | ✅ Yes | Student must be able to read, modify, and maintain the script |
| Resource sizing | ❌ No | Student must learn to calculate resource requirements (CPU, RAM, storage) |

Guiding principle: AI can generate infrastructure code, but the student must master architectural principles (high availability, scalability, security).
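The "resource sizing" row above expects students to run the numbers themselves. A minimal sketch of that arithmetic, where every workload figure (peak load, per-vCPU throughput, memory per worker, headroom) is an illustrative assumption rather than a recommendation for any real system:

```python
# Illustrative capacity sizing: all workload figures below are assumptions
# a student would replace with measured values from load testing.
peak_rps = 500              # expected peak requests per second (assumed)
rps_per_vcpu = 100          # measured throughput of one vCPU (assumed)
mem_per_worker_mb = 256     # resident memory of one worker process (assumed)
workers_per_vcpu = 2        # worker processes scheduled per vCPU (assumed)
headroom = 1.3              # 30% safety margin for traffic bursts

vcpus = peak_rps / rps_per_vcpu * headroom
workers = vcpus * workers_per_vcpu
ram_mb = workers * mem_per_worker_mb

print(f"vCPUs needed: {vcpus:.1f}")
print(f"RAM needed:   {ram_mb:.0f} MB")
```

The point is not the specific numbers but the habit: every term in the calculation comes from a measurement or a stated assumption the student can defend.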

2.3 DevOps & SRE

| Use Case | Permitted | Conditions |
| --- | --- | --- |
| CI/CD pipeline generation | ✅ Yes | Student must understand every pipeline stage and be able to debug it |
| Runbook and documentation writing | ✅ Yes | Student must validate technical relevance and adapt to context |
| Metrics analysis and alerting | ✅ Yes | Student must understand SLIs/SLOs and be able to define meaningful thresholds |
| Test generation (unit, integration, e2e) | ✅ Yes | Student must understand what the tests cover and their scope |
| Post-mortems and incident analysis | ⚠️ Conditional | AI may help structure the report, but root cause analysis must be human-led |
| Incident management decisions | ❌ No | Student must develop judgement in production situations |

Guiding principle: AI can automate repetitive tasks, but the student must understand SRE principles (observability, resilience, toil reduction).
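One way to make the "meaningful thresholds" condition concrete is the error-budget arithmetic behind an availability SLO. The 99.9% target and 30-day window below are illustrative assumptions, not prescribed values:

```python
# Error-budget arithmetic for an availability SLO.
# The 99.9% target and 30-day window are illustrative assumptions.
slo = 0.999                      # availability target (assumed)
window_minutes = 30 * 24 * 60    # 30-day rolling window, in minutes

error_budget_minutes = window_minutes * (1 - slo)
print(f"Allowed downtime per 30 days: {error_budget_minutes:.1f} minutes")
```

A student who can derive this figure by hand can then judge whether an AI-suggested alerting threshold is meaningful, rather than accepting it on faith.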

2.4 Software Development

| Use Case | Permitted | Conditions |
| --- | --- | --- |
| Code autocompletion (GitHub Copilot, Tabnine) | ✅ Yes | Student must read and understand every suggested line of code |
| Simple function generation | ✅ Yes | Student must be able to rewrite the function without AI |
| Refactoring and optimisation | ✅ Yes | Student must understand why the refactored code is better |
| Unit test generation | ✅ Yes | Student must validate coverage and relevance of tests |
| Debugging and bug fixing | ⚠️ Conditional | AI may suggest leads, but student must understand the root cause |
| Complex algorithm design | ❌ No (learning phase) | Student must first master data structures and fundamental algorithms |
| Software architecture | ⚠️ Conditional | AI may propose patterns, but student must justify choices (SOLID, DDD, etc.) |

Guiding principle: AI can accelerate code writing, but the student must master programming paradigms, data structures, and clean code principles.
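As a concrete instance of the conditions above (reading every suggested line, validating test coverage), a student reviewing an AI-suggested helper can write their own edge-case tests rather than relying on the generated ones. The `slugify` function below is a hypothetical example, not output from any particular tool:

```python
import re

def slugify(title: str) -> str:
    """Turn a title into a URL slug: lowercase, hyphen-separated."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# Student-written checks, including edge cases an AI draft might miss.
assert slugify("Hello World") == "hello-world"
assert slugify("  DevOps & SRE!  ") == "devops-sre"
assert slugify("---") == ""             # punctuation-only input
assert slugify("Déjà vu") == "d-j-vu"   # non-ASCII is stripped, not transliterated
```

Whether stripping accented characters is acceptable is exactly the kind of design decision the student, not the AI, must make and justify.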


3. Prohibited Uses

The following uses are strictly prohibited in all training contexts:

3.1 During Formal Assessments

  • Using AI during an exam, certification, or knowledge test (unless explicitly authorised by the instructor)
  • Submitting work entirely generated by AI as one's own
  • Using AI to circumvent the learning objectives of an exercise

3.2 Privacy and Security Violations

  • Transmitting personal data (GDPR) to public AI tools
  • Exposing secrets (API keys, passwords, tokens) in prompts
  • Sharing proprietary or confidential code with public LLMs
  • Using AI to generate malicious tools outside a controlled pedagogical framework

3.3 Plagiarism and Intellectual Property Violations

  • Presenting AI-generated code as entirely one's own creation (without declaration)
  • Using AI-generated code that violates open-source licences
  • Copy-pasting code without understanding how it works

4. Instructor Responsibilities

Trainers and teachers play a key role in governing AI use. They must:

4.1 Define Clear Rules

For each training module, the instructor must specify:

  • Which AI tools are permitted or prohibited
  • Which assignments may use AI (and to what extent)
  • How AI use must be declared
  • Evaluation criteria (e.g. "you will be assessed on your ability to explain the code, not just produce it")

4.2 Teach Responsible Use

Instructors must integrate into their courses:

  • Awareness sessions on AI ethics
  • Demonstrations of AI limitations (hallucinations, biases, technical errors)
  • Exercises where AI is explicitly used as a pedagogical tool
  • Practical cases of detecting AI-generated errors

4.3 Adapt Assessments

Assessment methods must evolve to measure understanding rather than mere production:

  • Prioritise oral exams where students explain their code
  • Include real-time debugging questions
  • Assess the ability to critique and improve AI-generated code
  • Use projects where AI is one tool among many, not a magic solution

5. Student Responsibilities

Learners commit to:

5.1 Developing a Reflective Practice

  • Question AI responses systematically: "Is this correct? Why? What are the alternatives?"
  • Verify the technical validity of any code, configuration, or recommendation generated
  • Document their working process: "What did I ask the AI? What did I modify? Why?"

5.2 Respecting Academic Integrity

  • Declare AI use in accordance with institutional rules
  • Never submit AI-generated work without having understood and validated it
  • Cite sources when AI has provided factual information or references

5.3 Protecting Data and Security

  • Never transmit personal or sensitive data to public AI tools
  • Use anonymised or fictitious data for exercises
  • Respect the security policies of the institution and partner organisations

5.4 Preparing for the Professional World

  • Understand that companies have their own AI usage policies
  • Develop a professional posture: AI is a tool, responsibility remains human
  • Be able to work without AI (in case of outage, access restrictions, or company policy)

6. Implementation Framework

6.1 Institutional Adoption

This charter must be:

  • Validated by the academic leadership and the institution's governing body
  • Integrated into internal regulations and course syllabi
  • Reviewed annually to adapt to technological developments

6.2 Instructor Training

Instructors must receive training on:

  • The capabilities and limitations of generative AI tools
  • Pedagogical methods adapted to the AI era
  • Detection of undeclared AI use
  • Best practices for integrating AI into teaching

6.3 Tools and Resources

The institution must provide:

  • Supervised access to AI tools (educational accounts, controlled environments)
  • Technical guidelines specific to each discipline
  • Pedagogical support for students who wish to use AI responsibly
  • A reporting channel for ethical questions or charter violations

6.4 Sanctions for Non-Compliance

Violations of this charter may result in:

  • A warning for a first minor infraction
  • Non-validation of a piece of work or a module in cases of plagiarism or undeclared use
  • Disciplinary sanctions for serious violations (transmission of sensitive data, exam fraud)
  • Exclusion in cases of repeat offences or serious ethical breaches

7. Guiding Principles for Edge Cases

When facing a situation not explicitly covered by this charter, apply the following questions:

| Question | Principle |
| --- | --- |
| Am I learning something by using AI here? | If not, the use is probably counterproductive |
| Can I redo this work without AI? | If not, I must first master the fundamentals |
| Can I explain and justify what AI produced? | If not, I should not use it |
| Am I exposing sensitive data? | If yes, the use is prohibited |
| Am I respecting the pedagogical intent of the exercise? | If not, I am bypassing the learning |

8. Conclusion: Preparing Tomorrow's Professionals

The goal of this charter is not to restrict AI use, but to frame it — to ensure that students develop the skills that will make them autonomous, accountable, and sought-after professionals in the labour market.

A competent IT engineer in 2026 and beyond will need to:

  • Master the technical fundamentals of their domain
  • Know how to use AI as a productivity multiplier
  • Understand the limitations and biases of AI
  • Take responsibility for their technical decisions
  • Respect professional ethics and security standards

This charter is a living document, intended to evolve with technologies and pedagogical practices. It rests on a simple principle: AI as co-pilot, never as autopilot.


References

[1] UNESCO (2021). Recommendation on the Ethics of Artificial Intelligence. Available at: https://www.unesco.org/en/artificial-intelligence/recommendation-ethics


Appendix: Examples of AI Usage Declarations

Example 1: DevOps Project

AI Usage Declaration

In this project, I used ChatGPT (GPT-4) to:

  • Generate an initial Terraform template to deploy an application on AWS ECS (prompt: "Create a Terraform module to deploy a Docker container on ECS with an ALB")
  • Debug a Kubernetes configuration error (prompt: "Why is my pod in CrashLoopBackOff?")

I then:

  • Adapted the Terraform template to the project's specific architecture (added VPC, security, monitoring)
  • Validated the Kubernetes solution by consulting the official documentation and testing several configurations

I am able to explain every Terraform resource and every Kubernetes parameter used.

Example 2: Cybersecurity Project

AI Usage Declaration

For security log analysis, I used Claude (Anthropic) to identify attack patterns in a 10,000-line Apache log file.

The AI detected:

  • 3 SQL injection attempts
  • 12 port scans
  • 1 directory traversal attempt

I then:

  • Manually verified each alert by consulting the raw logs
  • Confirmed 2 genuine SQL injection attempts (1 false positive)
  • Wrote an incident report explaining the nature of the attacks and mitigation recommendations

I am able to detect these attacks without AI using grep, awk, and manual analysis.
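The manual grep/awk workflow this declaration refers to can also be sketched in a few lines of Python. The detection patterns below are deliberately simplified illustrations, not production signatures, and the log lines are fabricated for the example:

```python
import re

# Simplified attack signatures for illustration only; a real investigation
# needs far broader pattern sets and manual review of every hit.
patterns = {
    "sql_injection": re.compile(r"(union\s+select|'\s*or\s+1=1)", re.I),
    "dir_traversal": re.compile(r"\.\./"),
}

# Fictitious Apache-style log lines standing in for the real file.
log_lines = [
    '10.0.0.5 - - "GET /index.php?id=1 UNION SELECT password FROM users"',
    '10.0.0.9 - - "GET /static/../../etc/passwd"',
    '10.0.0.7 - - "GET /about.html"',
]

hits = []
for line in log_lines:
    for name, pat in patterns.items():
        if pat.search(line):
            hits.append((name, line))
            print(f"{name}: {line}")
```

Each flagged line still has to be confirmed against the raw logs, which is how the false positive in the declaration above would be caught.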


Version: 1.1
Publication date: January 2026
Author: Mohamed Ben Lakhoua, AI Adoption Architect & Transformation Leader
Contact: [email protected] | www.metafive.one
Licence: Creative Commons BY-SA 4.0 (free to reuse with attribution)
