TABLE OF CONTENTS
- Introduction
- Human control and accountability
- Significant human contribution
- Transparency and use of sources
- Privacy and Confidential Data Protection (GDPR)
- Risk assessment and management
- Addressing the misuse of AI in education
- Concluding remarks
Introduction
These guidelines further elaborate on the principles and values of the University of Akureyri's (UNAK) policy on the use of artificial intelligence. Their goal is to provide practical guidance on the responsible and ethical use of generative artificial intelligence (generative AI), including large language models (LLMs) such as ChatGPT, Gemini, Grok, and Claude, in education, teaching, research, and support services.
The guidelines are in line with UNAK's values and are based on international criteria (e.g. Mann et al., 2024). The guiding principle is that use is generally permitted unless otherwise specified in the course description or project instructions.
Human control and accountability
Although AI tools can be powerful, the ultimate responsibility for all content submitted or published by UNAK always lies with the human user (student, teacher, researcher, or employee).
Artificial intelligence:
- Has no sense of morality
- May perpetuate or amplify biases found in training data (e.g., gender, ethnicity, etc.)
- May give false or misleading information ("hallucinations")
- May base results on outdated, incomplete, or unreviewed data, yielding unreliable information about recent developments or facts
Users shall therefore:
- Be critical and verify: Check all information, statistical findings, and references against trusted, up-to-date sources
- Ensure quality and coherence: Ensure that outputs are logical, coherent, and meet the professional and competency criteria of the project
- Correct errors and biases: Be aware of errors, systemic biases or possibly outdated knowledge and correct them
- Take responsibility: Take full responsibility for the content and ethical aspects of the material, as if it were your own work
Submitting a project based on AI output without independent review and accountability is considered a violation of academic integrity.
Significant human contribution
Artificial intelligence should support human creativity and thinking, not replace it. To be considered the author of a work created with the help of artificial intelligence, a user must make a significant intellectual contribution.
Such contributions include:
- Initial conceptual work and definition of a research question or objective.
- Development of one's own reasoning, interpretation, or analysis.
- Organization and structure of the project.
- Design of targeted and thoughtful prompts.
- Critical selection, processing, and integration of AI output.
- Interpretation of data or results, even if AI has assisted in data processing.
Not considered a significant contribution:
- Using only generic or unelaborated prompts for AI without supporting them with independent analysis, processing, or thoughtful changes to the model's results.
- Copying and submitting raw, minimally altered, or unreviewed AI outputs and presenting them as one's own work.
Such use is considered plagiarism and a serious violation of UNAK's rules.
If there is any doubt as to whether one's own contribution qualifies as significant, guidance should be sought from a teacher or supervisor.
Transparency and use of sources
All use of generative AI must be transparent and clear in projects, research, and administration.
Standard disclaimer (as a model, e.g. in the introduction, methodology chapters or acknowledgements):
"All use of generative AI in this project follows the ethical standards of the University of Akureyri on the use of artificial intelligence. The author(s) have made a substantial contribution to the work, which has been carefully checked for accuracy, and take full responsibility for the work."
A more detailed description is necessary if:
- AI had a significant impact on results or methodologies.
- It was used to write code, perform data processing, create visuals, or develop core arguments.
- Results are based on AI outputs to the extent that reproducibility may require prompts, implementations or settings to be recorded and delivered as an appendix, in a methodology chapter or in supplementary data, as appropriate. This is especially true in research and projects where transparency, methodology and the possibility of repetition are important.
The goal is to ensure that evaluators can understand, evaluate, and, if necessary, replicate the process and its results.
Note: The level of transparency shall be comparable to that required for the use of specialised tools (e.g. statistical programs) or the assistance of colleagues. Special requirements of courses and project guides take precedence.
Privacy and Confidential Data Protection (GDPR)
Users should exercise extreme caution when handling sensitive data:
- Personally identifiable data, sensitive research data, trade secrets or unpublished intellectual property may not be entered into AI models unless there is clear written authorization.
- The processing shall be recorded and rules on the handling of data shall be followed.
- Further guidance on data security and privacy can be found in Appendix B.
Risk assessment and management
Purpose
UNAK shall carry out regular risk assessments for new or changed artificial intelligence tools used in education, teaching, research, support services or administration. The goal is to promote a responsible implementation process in line with the policy's emphases and international norms, especially requirements according to the EU AI Act (2024) and GDPR.
Description of the risk assessment
UNAK shall use a simple and efficient risk assessment model when new tools are introduced or significant changes are made to existing systems.
Six main categories shall be examined and evaluated on a scale from 1 (low risk) to 5 (very high risk):
| Category | Explanation |
| --- | --- |
| Privacy | The risk of personally identifiable data or confidential information being leaked or misused. |
| Bias and discrimination | The risk that the results or recommendations of AI systems are biased or discriminate against individuals or groups. |
| Data collection and reuse | The risk of inappropriate or unclear collection, storage, or use of data on students, staff, or others without adequate authorization, knowledge, or consent. |
| Security and reliability | The risk of errors, faulty functionality, or incorrect recommendations that may adversely affect users or the school's operations. |
| Impact on assessment | The risk that the use of a tool undermines the objectives of the assessment or distorts its results. |
| Context and interpretation | The risk that the use of a tool is unclear, opaque, or impedes the repeatability of results. |
When assessing risk, the following shall also be considered:
- The nature and extent of use (e.g. in teaching, administration, research).
- Potential risk mitigation (e.g. through human supervision, clear procedures, restriction of access, etc.).
The risk assessment shall be documented and maintained by the AI Project Manager.
Decision and measures
If a risk assessment reveals a high or very high risk (level 4 or 5 in any category), the following measures shall be taken:
- Carry out a more detailed assessment in consultation with UNAK's security and privacy team.
- Ensure clear management approval before implementation.
- Establish clear rules for use and human supervision.
If the risk is considered low to medium (levels 1–3), implementation can proceed according to the general processes and conditions of the policy.
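The scoring and decision rule above can be sketched in code. The following is an illustrative sketch only, not an official UNAK tool: the six categories are each scored from 1 (low risk) to 5 (very high risk), and a score of 4 or 5 in any category routes the tool to the detailed review path; otherwise implementation proceeds under the policy's general processes. The function and variable names are the author's own for illustration.

```python
# Illustrative sketch of the guidelines' risk-assessment decision rule.
# Each category is scored 1 (low risk) to 5 (very high risk).

CATEGORIES = [
    "Privacy",
    "Bias and discrimination",
    "Data collection and reuse",
    "Security and reliability",
    "Impact on assessment",
    "Context and interpretation",
]

def assess(scores: dict) -> str:
    """Return the decision path implied by the guidelines."""
    for category in CATEGORIES:
        score = scores.get(category)
        if score is None or not 1 <= score <= 5:
            raise ValueError(f"Missing or invalid score for {category!r}")
    # Level 4 or 5 in ANY category triggers the detailed assessment,
    # management approval, and rules for use and human supervision.
    if any(scores[c] >= 4 for c in CATEGORIES):
        return "detailed review"
    # Levels 1-3 across the board: general processes apply.
    return "general implementation"

# Example: a single high-risk category is enough to require detailed review.
example = {c: 2 for c in CATEGORIES}
example["Privacy"] = 4
print(assess(example))  # detailed review
```

Note that the rule is deliberately conservative: the overall outcome is driven by the single highest-scoring category, not an average across categories.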
Addressing the misuse of AI in education
Handling suspected misuse
If there is suspicion that a student has used artificial intelligence in an unauthorized manner, the rules that apply to violations of academic integrity at the University of Akureyri shall be followed.
The University of Akureyri uses automatic analysis tools (e.g. Turnitin) to detect possible plagiarism. However, the additional functionality of such systems to detect the origin of text with respect to AI is considered unreliable and should therefore not be used as a basis for decisions on violations related to the use of AI.
Research (Mobin & Islam, 2025) supports this precautionary view and shows that even the most powerful systems can be unreliable, especially when texts come from different sources or models. The results can therefore wrongly flag legitimate work (false positives) or miss actual violations (false negatives).
Emphasis on professional evaluation
Instead of relying on automatic detection technology, the situation should be assessed with professional insight and direct dialogue with the student, e.g. through an oral review or questions about the work process. Trust, professionalism, and consistency shall be the guiding principles in the handling of the case.
Concluding remarks
The University of Akureyri encourages responsible exploration of the possibilities of artificial intelligence to support learning, teaching, research, support services, and administration. At the same time, it is emphasised that all such tools be used responsibly, with critical thinking and respect for the ethical and academic values of the academic community.
The guidelines will be regularly revised in line with technological developments and discussions within the University.