There is now a need to manage artificial intelligence (AI) ethically and responsibly, said Daniela Gallego, Director of Ethical Management at Tec de Monterrey’s Vice Presidency of Integrity and Compliance.
Gallego Salazar stated this in her presentation “How to Manage the Use of AI with Academic Integrity” during the National Teachers’ Meeting.
The Tec specialist explained that it is important to integrate AI into academic training without compromising the knowledge objectives while encouraging the development of students’ skills.
“Tools are neutral, and we humans decide what we’re going to use them for,” she said.
Because of this, Gallego Salazar shared with teachers some of the ethical risks of AI from the perspective of academic integrity, as well as strategies for its responsible use.
- What are the ethical risks of AI?
- Risks from the perspective of academic integrity
- Strategies for managing AI with academic integrity

What are the ethical risks of AI?
In March 2023, Elon Musk, along with other technology leaders, signed an open letter calling for a pause on the development of advanced AI systems, one that warned such development posed “profound risks to society and humanity,” Gallego recalled.
This document, promoted by the Future of Life Institute, urges the need for government regulation to ensure that AI is developed safely and ethically.
The Tec leader also recalled the intervention of Pope Francis during the G7 meeting, in which he said that technology needs to be approached from a perspective of responsibility and ethics.
“Although technology has great creative potential, it can also aggravate inequalities and foster a throwaway culture,” the Catholic leader said.
These are the risks listed by the Director of Ethical Management at the Tec’s Vice-Presidency of Integrity and Compliance.
1. Use of personal and institutional data
AI tools are fed with information to learn and improve. This process includes the collection of personal data such as location, interests, occupation, photos, musical and food preferences, and health information.
2. Propagation of biases, prejudices, and stereotypes
AI, when trained on historical data, can perpetuate biases and prejudices present in that data.
This can result in the propagation of stereotypes and discrimination, affecting marginalized groups and perpetuating social inequalities.
3. Lack of precision
Lack of accuracy in AI systems can have serious consequences.
Facial recognition errors, incorrect medical diagnoses, and flawed automated decisions can cause real harm, highlighting the need for rigorous monitoring and constant updating of these technologies.
Risks from the perspective of academic integrity
The Tec specialist mentioned that these tools make it easier to perform intellectual tasks, such as synthesizing information, detecting main ideas, creating content from a text, and paraphrasing it.
This can diminish students’ interest in doing the work themselves.
“Artificial intelligence eliminates the utility of doing unmonitored activities, which can compromise learning and academic honesty,” she noted.
Strategies for managing AI with academic integrity
Therefore, Gallego Salazar insisted that humans need to continue to play a central role in decision-making and ethical evaluation, something that AI cannot do yet.
Among the strategies are:
1. Personal and institutional data protection
- Raise awareness: Encourage the habit of finding out how collected data will be used before sharing information.
- Data protection: Implement measures to protect personal and institutional data.
- Informed consent: Explain the importance of obtaining written consent when intending to use a person’s image or voice.
“It’s crucial in education to protect truthfulness and encourage critical thinking,” she said.
She mentioned that tools such as TecGPT are designed to ensure responsible information management.
However, Gallego warned about the risks of bias in the data.
The ease with which students can use AI for home-based assignments makes it difficult to verify authorship and can introduce bias into learning.
The Director of Ethical Management stressed the importance of students learning not only how to use technology but also how to develop organizational and teamwork skills.
She recommended that professors establish clear guidelines for the use of AI in research and education. This includes protecting human dignity, ensuring consent, and maintaining the accuracy of information.
“Students need to learn to differentiate between emotions and reactions and to understand the responsibility involved in using AI,” she said.
2. Promotion of truthfulness, fairness, and security
- Reflect on truthful information: Discuss with students the value of truthful information for individual and collective life.
- Contrast ideas: Promote the contrast of ideas and divergent points of view through argumentative dialogs.
- Distinguish between objectivity and subjectivity: Teach students to distinguish between scientific objectivity and subjectivity, as well as between reasons and emotions.
- Contrast information: Promote the habit of contrasting information and reflect on the representativeness of the data generated by AI.
3. Responsibility, explainability, and social-environmental welfare
- Informed decision making: Promote the habit of making informed and reasonable decisions, with awareness of the impact they may have on others.
- Evaluation of AI tools: Recommend which AI tools to use and explain which ones to discourage, while always evaluating their risks and benefits.
- AI Explainability: Explain that AI is not an author since it does not assume responsibility for the content it produces.
4. Promote academic integrity
- Avoid dependence on AI: Encourage students’ interest in doing the work themselves, despite the ease with which these tools make it possible to synthesize information, detect main ideas, create content, and paraphrase texts.
- Value of unmonitored activities: Keep in mind the usefulness of doing unmonitored activities to maintain a commitment to learning and academic honesty.
Gallego Salazar pointed out that it is essential to promote competencies and clear communication on the permitted use of AI.
For example, tools such as GPTZero can help detect AI-generated texts, but it is also important for teachers to know students and their writing styles in order to regulate the use of AI in an ethical manner.
Finally, Daniela Gallego Salazar, Director of Ethical Management at the Tec’s Vice-Presidency of Integrity and Compliance, suggested teachers read and consult the Quick Guide: Guidelines for the Ethical Use of Artificial Intelligence via the following URL:
https://tec.mx/es/integridad-academica/inteligencia-artificial
2024 National Teachers’ Meeting
The National Teachers’ Meeting is the annual meeting of Tec de Monterrey’s higher education academic community with learning spaces, institutional orientation, roundtables, and moments of celebration.
This edition featured virtual talks, with artificial intelligence in the service of human flourishing as its central theme.
From July 2 to 4, teachers were encouraged to explore and learn how artificial intelligence is shaping a cultural change in which human flourishing needs to play a leading role.