Professor Velda Gamez is committed to incorporating technology into her classroom, using Artificial Intelligence (AI) and Augmented Reality (AR) to identify and reduce bias, immerse students in the legal environment through simulations, and foster an understanding of legal language.
This law professor at Tec de Monterrey’s School of Social Sciences and Government aims to teach her students about a discipline in which emerging technologies provide new perspectives that impact people’s lives.
Teaching at the Santa Fe campus, she has incorporated technological and artificial intelligence tools into her classes as part of a process of constant evolution, reaching hundreds of students across the country while also promoting an interdisciplinary perspective.
“For me, technology is an everyday tool. Incorporating it into my classes also enables me to discover how human beings interact with this technology, as well as break down traditional barriers to knowledge.”
CONECTA shares how Gamez has set herself the goal of encouraging her students to shift paradigms with the support of technology, shaping competitive professionals who seek above all to build a fairer world.
“Although technology is a tool, the final decision always has to be made by humans.”

Using AI to identify and reduce bias
The professor challenges young people to combat both external and internal biases by detecting them with generative AI tools.
“We feed the tool with data on certain criminal sentences to identify patterns related to bias, especially with regard to gender and children. This helps us see how these biases seep through and lead to unfair rulings.
“We’ve analyzed crimes such as homicides committed by women against their aggressors. In doing so, we see that removing bias and understanding the context enables us to judge things from another perspective.”
Through this exercise, students also explore how these discriminatory biases arise from processes of socialization and how technology can replicate them when they become part of an algorithm.
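The article does not specify which generative AI tool the class uses or how the sentencing data is fed to it; purely as an illustration, a minimal Python sketch of this kind of bias-screening exercise might look like the following, where the library, model name, prompt, and excerpt are all assumptions rather than the professor’s actual setup.

```python
# Illustrative only: screening a sentencing excerpt for possible gender bias
# with a generative AI model. The library (openai), the model name, and the
# prompt are assumptions made for the sake of the example.
from openai import OpenAI

client = OpenAI()  # expects an OPENAI_API_KEY in the environment

SENTENCING_EXCERPT = """
The defendant, a mother of two, acted against her long-time aggressor
after repeated episodes of domestic violence...
"""  # placeholder text standing in for an anonymized ruling

PROMPT = (
    "Read the sentencing excerpt below and list any wording that may reflect "
    "bias with regard to gender or children, briefly explaining why each "
    "passage could contribute to an unfair ruling.\n\n" + SENTENCING_EXCERPT
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": PROMPT}],
)
print(response.choices[0].message.content)
```

In a classroom setting, the value of an exercise like this lies less in the output itself than in the discussion it opens about which patterns the model flags and why, with the final judgment always left to the humans in the room.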
The professor explains how this practice, carried out under the heading of (bureaucratic and oppressive) minotaur justice, in turn teaches them to think of these AI technologies as helpers, with decision-making always falling to humans.
“Although technology is a tool, the final decision always has to be made by humans. AI can give us a prospective sentence, but it will have to be a qualified and specialized judge who decides.
“If we look to use AI as a shortcut in our discipline, we’re going to make mistakes and have to work twice as hard to fix them. What these technologies do is help us broaden our thinking, through a kind of Socratic dialogue.”

An ally in constructing attractive “real” experiences
For Gamez, another of the main benefits of incorporating this type of tool into her teaching has to do with the possibilities of immersion based on recreating professional experiences from real life.
To this end, the professor has used virtual reality (VR) or augmented reality (AR) tools to transport her students to trial hearings and simulations of oral trials in the virtual space of the metaverse.
“This was inspired by the way they are used in Chile (...) What we do with VR is a kind of role play, where they can take on the role of a judge or an attorney arguing the case.
“As part of the exercise, students also begin to see the impact of certain technological biases on real life, in terms of reliability, trustworthy evidence, and other elements.”
Through this type of immersive experience, the professor also helps her students familiarize themselves with these technologies and determine whether they themselves harbor any kind of cognitive dissonance or even internal bias, which Gamez encourages them to overcome.
“We feed the tool with data to identify patterns related to bias and see how these can lead to unfair rulings.”

Promoting reflection through interdisciplinary learning
According to the researcher, there are currently very few lawyers who understand the range of legal applications for artificial intelligence and can truly exploit their potential.
Mentioning some practical uses of AI in the legal sphere, such as sorting and filtering data or building complex chronologies around a crime, she shares how she applies it on an interdisciplinary basis.
“There is a major issue in our field, which is that it tends toward reading blindly, toward interpreting words and misusing them as synonyms within laws.”
Through a multidisciplinary approach, the professor challenges her students to use AI tools to navigate the many varied uses that a single word can be given in the same law, encouraging dialogue and reflection in the process.
“This exercise enables them to identify and work on analyzing these documents and to reflect on the consequences.
“For instance, we’ve been working on the General Victims Act in class, where they reflected on how a single word, ‘victim’ in this case, is not used in the same way throughout the law and this is something that has repercussions, chiefly on sentencing.”
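The article does not describe the exact workflow behind this exercise, but a preprocessing step for it could be as simple as the following sketch, which pulls every sentence of a statute that mentions a given term so its different uses can be compared; the file name and regular expressions are illustrative assumptions, and the text of the Ley General de Víctimas would have to be obtained separately.

```python
# Rough sketch: extract every sentence of a statute that uses a given term,
# so that students can compare how the term is used across the law.
# "ley_general_de_victimas.txt" is an assumed local file holding the plain text.
import re
from collections import Counter

def occurrences(path: str, term: str) -> list[str]:
    text = open(path, encoding="utf-8").read()
    # Split on sentence-ending punctuation; good enough for a classroom exercise.
    sentences = re.split(r"(?<=[.;:])\s+", text)
    return [s.strip() for s in sentences if re.search(term, s, re.IGNORECASE)]

hits = occurrences("ley_general_de_victimas.txt", r"v[ií]ctimas?")
print(f"{len(hits)} sentences mention the term")
for sentence in hits[:10]:
    print("-", sentence)

# A quick count of the articles in which the term appears can then seed the
# class discussion about whether each use carries the same meaning.
articles = Counter(
    m.group(0)
    for s in hits
    for m in re.finditer(r"Art[ií]culo\s+\d+", s, re.IGNORECASE)
)
print(articles.most_common(5))
```

Students could then take the extracted sentences to a generative AI tool and ask whether the term is being used in the same sense in each one, which is where the dialogue and reflection the professor describes come in.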

The end goal: Graduates with complex and competitive training
Finally, the professor remarked that the use of these types of technology in the legal field is no longer something new, and it is therefore essential that future graduates adapt to them and to the constant changes they bring.
“At the end of the day, technology and artificial intelligence will be used; I believe that this is something else we’ve learned since the pandemic: how fast these tools have evolved and how they have become a part of everything.
“I think it’s a good thing that technology lowers the cost of many processes, because this also includes lowering costs in terms of mobility, formatting, reading time, and the analysis of complex cases like these.”
This being the case, Gamez concludes that, if used correctly as support and not as a replacement, these new technologies will enable graduates to be more efficient and effective in all processes.
What’s more, she explained that she continues to teach different uses of AI, educating her students through challenges such as understanding how media coverage of cases can generate data that shapes the expectation of guilt.
To conclude, the researcher encouraged professors to continue familiarizing themselves and experimenting with these technologies, recommending that whenever they incorporate one into the classroom, they should do so for a specific purpose and, above all, they shouldn’t be afraid to make mistakes.