S5 E7: Learning w/ AI: Benefits & Challenges w/ Dr. Perez
Listen Now!
-
Assistive technology definition from ATIA: “Assistive technology (AT): products, equipment, and systems that enhance learning, working, and daily living for persons with disabilities.”
SITES: Assistive Technology https://sites.ed.gov/idea/idea-files/at-guidance/
AEM Center: https://aem.cast.org/get-started/resources/2010/aim-basics-families
-
Artificial Intelligence (AI) is when computers mimic human behavior. It has been around for a long time; however, generative AI is very new. Generative AI is where computers can interact with humans through natural language prompts: the computer provides responses in human language, drawing on huge data sets and making inferences from that data to produce an answer. That's the basics of AI.
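To make the "inferences from data" idea concrete, here is a deliberately tiny sketch of the core pattern: learn which words tend to follow which in a training corpus, then generate text by sampling from those learned patterns. This is an illustration only; real generative AI uses neural networks trained on vastly larger data sets, but the principle of generating responses from patterns inferred from data is the same.

```python
from collections import defaultdict
import random

def train_bigrams(corpus):
    """Count which word follows which in the training data."""
    model = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=5, seed=0):
    """Generate text by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)  # seeded for reproducibility
    out = [start]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:  # no observed continuation; stop early
            break
        out.append(rng.choice(options))
    return " ".join(out)

corpus = "the cat sat on the mat the cat ran to the door"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Every generated word pair was observed in the training data, which is also why such a toy model can only echo its corpus; the scale of modern systems is what makes their outputs feel novel.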
There are a lot of tools that already have AI embedded in them, like spell checkers or text to speech, which are forms of assistive technology. This is even more true nowadays with our phones, home technology, and email services.
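As one concrete example of "intelligence" embedded in an everyday tool, the core of a basic spell checker can be sketched with nothing more than edit distance between a typed word and a dictionary. Treat this as an illustration only: modern spell checkers also use word frequencies and language models, and the tiny dictionary here is invented for the example.

```python
def edit_distance(a, b):
    """Levenshtein distance between strings a and b (dynamic programming)."""
    dp = list(range(len(b) + 1))  # row for the empty prefix of a
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(
                dp[j] + 1,          # delete ca
                dp[j - 1] + 1,      # insert cb
                prev + (ca != cb),  # substitute (free if characters match)
            )
    return dp[-1]

def suggest(word, dictionary, max_dist=2):
    """Return dictionary words within max_dist edits, closest first."""
    scored = [(edit_distance(word, w), w) for w in dictionary]
    return [w for d, w in sorted(scored) if d <= max_dist]

print(suggest("teh", ["the", "tea", "ten", "dog"]))
```

The single-row dynamic-programming table keeps the memory cost proportional to the shorter word, which is the standard trick when only the final distance is needed.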
Historically, people have had significant fearful reactions to new technology, even including radio.
We can view AI and other technologies as tools that can support learning and teaching practices.
Assistive technology and AI can be considered helpful for not just young learners but also older populations who are navigating their job, their life, and the world.
Assistive technology is essentially any piece of equipment used to maintain or enhance the functional capabilities of a person with a disability. This makes things more accessible to everyone.
When you live in an information-rich society, everyone gets overburdened by information overload. AI can take on some of that cognitive load for us by summarizing information, helping us be more efficient in how we approach information.
There is an emotional reaction a person can have when presented with a new tool to use. At the same time, not having access to tools can create an emotional burden.
Universal Design for Learning (UDL) suggests that there is no "average" person or "average" human performance. Now, with AI, human performance is enhanced. Steve Jobs talked about a computer being like a bicycle for the mind: you can get further than you would if you were walking, allowing you to do things you could not do before. UDL helps to make adjustments to the bicycle to fit the individual and their needs. With AI, the bike metaphor becomes an electric bike that lets you "coast" when you don't want to pedal (e.g., on tasks that could be automated). At the same time, you are still in control as the person riding the bike in the first place. AI is not a substitute for critical thinking, deeper learning, and evaluating the validity of what it provides.
AI provides access, but access does not necessarily equate to learning, especially deep learning: understanding the content, using the content, and building a foundation of memory that can in turn be utilized, contextualized, and maintained over time.
Personalization is not the same as individualization. Individualized learning consists of adaptations based on inputs from the learner. Personalized learning is based on a relationship between the learner and the teacher/mentor/etc. It is deeply important for a learner to have a relationship with their teacher (personalized learning), which can be lost with simple individualized learning. The human element and connection are crucial to learning. Personalization can also help learners to reflect on what they need, which can change over time and across contexts.
How technology is used is as important as giving access to it. For example, interactive whiteboards (very expensive) were often so underutilized that a white sheet hung on the wall, used with a projector, could have served the same purpose.
AI could be a great tool to get started on something that is complicated and dense. This may be especially important with the incredible increase of information available since the creation of the internet.
At the same time, rushing to use AI may be an avoidance reaction when feeling overwhelmed or anxious. It is important to reflect on whether AI is being used to get into deeper learning or whether it's a 'bandage' to get through an uncomfortable moment.
AI information can be biased based upon who is creating the AI in the first place.
AI is based on a lot of data sets from the past, while also updating data in the present; however, the data may be biased toward the past without integrating the present.
People may look to AI to gain certainty as a way to deal with their own anxiety about uncertainty. However, a person may lose sight of the fact that multiple things can be true at the same time, rather than looking for one piece of information to justify one side of an issue.
‘Hallucinations’ in AI are when information sounds good but is inaccurate (e.g., fake papers and authors).
-
Alexis Reid 00:00
Welcome back to season five of the Reid Connect-ED podcast. We want you to take a moment to imagine a world where you can push a button and get your desired meal item or service in seconds, or an option to ask a question and immediately receive an accurate answer. These ideas may have seemed like science fiction or dreams we only saw in Star Trek, The Jetsons, the movie Her, or other TV or movie depictions of the future. However, the future is now. In our modern world, there are daily technological advancements that enter into our culture and daily lives, one of which is artificial intelligence, AI. As educators, we can often feel inundated with the newest approach to supporting learners, and any new technology or approach may elicit a response that brings us into our own moral dilemma, or even fight, flight, or freeze mode. Should we let students use technology to learn? What if they take advantage or use it to do the work for them? What if it takes over and our roles are no longer important? New approaches to teaching and learning in general can feel like too much, and the less we know about them, the more fearful we may become. Situations where students of all ages start using new technologies we're not familiar with or comfortable with ourselves as educators can trigger a chain reaction. However, today, we're going to explore some of the possibilities, because we know that there are learners who benefit greatly from AI and have greater access and opportunities than ever before when they utilize it well. In this episode, we set out to show both sides of the story, because the use of AI in teaching and learning is inevitable and can also serve as an important tool to create access for learners with disabilities. Today, we're joined by our friend and colleague, Dr. Luis Perez. Luis Perez is a disability and digital inclusion lead at CAST, but the views shared on this podcast episode are all his own.
He holds a doctorate in special education and a master's degree in instructional technology from the University of South Florida. Luis was recognized with the International Society for Technology in Education (ISTE) Making IT Happen Award in 2020. He has published three books on accessibility, mobile learning, and UDL: Mobile Learning for All (Corwin Press), Dive into UDL (ISTE), and Learning on the Go (CAST Publishing). He currently serves as the education and learning strand advisor for the Assistive Technology Industry Association (ATIA). And not only does he have so many accolades professionally, he's also a great photographer and bird enthusiast and all-around great guy. We're so glad to have you here. Luis, thank you.
Luis Perez 03:10
I'm really glad to be with you both, and I look forward to our conversation on this important topic.
Alexis Reid 03:16
All right, so let's dive into this. This is going to be kind of a broad topic, and I imagine a very fluid discussion, because all three of us have, you know, great thoughts, interest, expertise and ideas around AI. So Luis, I'm gonna put you on the spot and ask you just a big, broad question that I'm sure will be answered throughout this entire episode. But let's start with like, what is AI? What is artificial intelligence?
Luis Perez 03:45
That's a good question, and there's many different ways you can answer that question, right, depending on your background and your perspective on it. But in general, I think most people would agree that it's basically when machines mimic human behavior, so machines, you know, start to be involved in decision making. They start performing tasks that typically we associate with human performance. And AI has been around for a long time. It's not new. But what is new in the last couple of years is gen AI, or generative AI, and it's our ability to interact with AI through, you know, natural language prompts, where you can have a conversation with the AI, and then you can get some responses back in natural language, and also the use of these very large data sets, including the internet, to be able to look for patterns in that information or in that data, and then be able to perform some tasks based on the inferences that are made from it. And there's a lot more to it. There's a lot of statistics involved, which I try to stay away from, but that's the basics of AI.
Alexis Reid 05:04
That's super helpful. And in the introduction, you know, I've been sitting on this for a while. I have this TED talk in my mind of, you know, the benefit of learning and developing executive function skills, as, you know, that's my expertise. And, you know, part of it starts with this idea of, you know, when Jerry and I were younger, in the 80s, we had the cartoon show The Jetsons, where computers are helping to predict what the family needs, or you could push a button and get what you want. And, you know, these again seemed like science fiction a long time ago, but now, you know, when you say the word Siri or Alexa, which some people confuse me for, these seem like common terms, where we're asking computers and digital technologies to help us figure out, you know, different pieces of information we're curious about, or the weather, or what to do, or how to approach something. And, you know, it seems like it's becoming more and more ubiquitous, but also there's, like, this hesitation around even thinking about it. And I'm so glad you said that this has been around for a while, because it has.
Luis Perez 06:12
Yeah, and I think another factor with AI is that often we're using it without even realizing it, because we're using tools that often have AI embedded in them. And so explicitly, you may not be using an AI tool, but maybe you're using a spell checker or a grammar tool, right? I'm not going to mention names of tools, but there are a number of commercial, you know, spell checking and writing assistance tools. Well, a lot of those are using AI to help you improve your writing. Or in our world of assistive technology, you might be using text to speech, and those voices have become more accurate. They've become more natural sounding, and a lot of that is through the use of AI in the background. So that's important to recognize: you know, often you're using AI, sometimes without even knowing it. And I recently was listening to a podcast where they used the words Boomer or Doomer when it comes to AI, and the idea is that a lot of people are falling into those two camps. Like, the Boomers really think AI can do everything, from, like, you know, toast your bread and put the butter on it, and then the Doomers are like, it's gonna replace us all, and, you know, it's the doom of the planet. The truth is that it's somewhere in between those two extremes. And like anything, you know, we've said the same thing in the past about radio. If you go back to the 1920s, or whenever radio came about, and you read the newspaper reports, you know, we had the same concerns. Like, there were people in either camp, or when TV came out, right, or when the internet came out, which is, you know, closer to our lifetime. But with all of these technologies, people have kind of fallen into those two camps. And the truth is, they're just tools, right? And it really depends on how you use them and how you use them to support specific learning strategies or teaching strategies.
Alexis Reid 08:28
Absolutely. And you and I both are huge advocates for Universal Design for Learning, and I've been really excited to see some educators reflecting on their experience with AI and just different digital technologies in the classroom, where they're thinking about how these tools are really allowing them to establish a framework of universal design in their learning environments, so that they are focusing more on inclusive environments. And Gerald, I wonder if you can talk about the psychological side of it too, real quick.
Gerald Reid 09:00
Yeah, it seems like AI is becoming that thing that kind of has, like, a common ground for everybody. Like, everybody could be using AI, so it doesn't make you necessarily feel like you're different for using some sort of technology like this to help yourself. I'm really curious, Luis, about it. It almost seems like AI is, like, steroids for learning. And maybe, I'm not sure if that's the right metaphor to use. I truly think that it's, and I'm specifically thinking of, like, a ChatGPT, right, something where you're asking it to give you any answer you can possibly imagine. Before this episode, I was gonna actually do an AI search and say, AI, what are the downsides of AI? And then I was scared, because I didn't want to hurt the feelings of AI, and then AI would get mad at me.
09:53
I don't think it will do that. We're not there yet.
Gerald Reid 09:59
Well, yeah. Go ahead.
Luis Perez 10:00
Well, I think you are bringing up an important point. You know, when it comes to Universal Design for Learning, historically, like, a phrase that we've used is that, you know, the average is a myth, right? The average, when it comes to human performance, is a myth. And there are people working in the AI field that say that that's even more true with AI, in the sense that it really calls into question, like, what's average human performance? Because now human performance, to your point, Jerry, is enhanced, right? The metaphor that I've always used in terms of, like, how we use technology, and it's one that I'm proud is actually holding up quite well, is that of an adjustable seat for learning and the bicycle, right? And this is actually a quote from Steve Jobs, right? That a computer is like a bicycle for the mind, in the sense that, you know, with a bike, you can get further than you could just by walking, right? So it's an enhancement that allows you to modify human performance in a way that allows you to do things that you couldn't do before. And so with Universal Design for Learning, really, our goal is to modify that bike, or provide adjustments to that bike, so that you can be in the optimal position to do what you want to do. So on a bike, you know, you will raise the seat, you'll change the handlebars, you change your positioning so that you can get the most out of your, you know, human capabilities, right? Your leg drive and your aerodynamic performance and all of those things, right? And so to your point, it's now like we've gone from a bike that you pedal, and with AI, it's now an electric bike, and you can choose at times to pedal, and at times you can coast. But many people think about AI more like a robot vacuum. You know, a robot vacuum, you program it, and it goes out and it cleans the house, and you don't do anything. It does it by itself. And there may be situations where we want to do that with AI, right?
It's a task where it's really monotonous, really routine. So we might want to program the, you know, robot to go and do its thing and then come back and report back to base. But I would argue that with AI in the context of learning, what we really want is an electric bike where you're in control, right? You have agency over what it's doing. So it is enhancing your capabilities, but it's not replacing them, right? So in the context of learning, what this means is, like, critical thinking, right? You're using AI, but it's not a substitute for you having AI literacy and being a critical thinker and evaluating the validity of what that AI is providing. So I feel like we have to be very careful in that we support learners in incorporating AI in ways that, you know, really enhance what they're doing, but at the same time don't remove the need for, you know, deeper thinking, deeper learning, and so on.
Gerald Reid 13:18
I absolutely love that bike metaphor. I think that's perfect, and it's a good adaptation of my steroid metaphor. I think we're kind of saying similar things, but I really like what you said about the bicycle metaphor. So in terms of learning, because I really want to kind of get into the details about what this looks like in the classroom and how students use it in terms of learning, and, like, deeper learning and memory and holding things in long-term memory and understanding things contextually: how do you feel about the importance of not just doing a Google search or an AI search to get information versus using that information? And I know this is kind of a leading question, because I might know what you're both going to say about this, but, you know, in terms of learning, the use of learning is just as important as accessing the learning. So AI is access, we can say, but access doesn't equate to learning necessarily. Just because you access something, or can see it or be exposed to it, doesn't mean that you truly can use it, understand it, contextualize it, remember it in the long term. And I was talking to my students, my grad students, and I said, you know, I think this semester I actually want to have you take an exam, because I feel like we kind of moved away from exams. And I remember when I took exams, I didn't love them, but some of the stuff I really had to commit to memory long term, that I really studied over the course of the semester and tried to commit to memory and use and understand and think about and talk about, those are the things I remember. And I think there are certain foundations that you really have to remember, because if you don't have a foundation in certain content, it's hard to be creative. It's hard to be able to use it in a critical-thinking way, because you kind of need the foundation to use it.
Luis Perez 15:04
I think, Jerry, you're right on, right? Like, AI can be sort of an entry point to learning. There's a lot more that needs to take place. And to me, I'm going to bring it back, Alexis, this is going to make you happy, I'm going to bring it back to Universal Design for Learning, right? One of the things that really bothers me is when people say that we're going to use AI to personalize learning. And I'll tell you, that sounds good in theory, but I'll tell you what bothers me about it: too often, what they're saying is we're going to individualize learning, and that's different, right? We're providing you with, like, different activities, or we're providing you with different information that's individualized, you know, based on, I don't know, certain inputs that you provided to the system. But personalized learning is based on a relationship between you as a learner and a mentor. Or, you know, when we think about the educators that have been really influential in our lives, rarely, when I ask educators, like, who was the best teacher you ever had, rarely do they say, you know, the teacher that taught me the quadratic equation or
Gerald Reid 16:19
the smartest, right? It's,
Luis Perez 16:21
Yeah, no, it's usually the teacher that I had the biggest connection with, the teacher that, you know, cared and believed in me and pushed me forward and so on. So that, to me, is personalized learning, right? That is the connection that is necessary for that true personalized learning to take place, which is, you know, teachers that bring compassion. They bring, well, passion to begin with. Like, they're passionate about their craft, they're passionate about their topic, and that passion is then contagious, right? And then they're compassionate with their students, right? They give them grace when it's needed, and then they push them when that push is needed. So I can think of many teachers that have performed all of those, you know, functions for me, being compassionate when I needed it, and then being, you know, like, hey, you're not doing what you're supposed to be doing, you need to get back on track. So to me, for learning to be truly personalized, that human element needs to be there. So I just want people to be careful when you hear those words, personalized learning, thrown around. Ask, is it really personalized, right? Is the person there, first of all? Like, where is the person in this? Or is it just individualized? Because, you know, you can individualize things by just having a learner sit in front of a computer all day. Many charter schools have tried that. And do you think the learners loved that, sitting in front of a screen for several hours a day? No, they didn't, because that personal touch was missing from that interaction.
Alexis Reid 18:04
Well, Luis, I'm typically happy and love the things you say, but especially, you know, that piece about connection is so important. And I think the personalization is helping to empower learners to figure out what they need across, you know, the landscape of their academic careers, right? Because what they need one day might be different than the next, and what they need to begin a journey of learning new content or critically thinking about something might shift and change over time. And I think the personalization does, as you say, come from that connection and relationship, because the educator, like Maria Montessori said, becomes the guide, and ideally the guide by their side, to help them figure out, and empower them to have agency around, this is what I need, this is why it's important, this is why it's helpful. And I don't know how you feel, Luis, but in that big surge, when the federal government started giving out funds for schools to have, you know, iPads and Chromebooks and computers in their classrooms, a lot of what I heard from school districts is that a lot of that technology just ended up locked away in a closet, because they were afraid of how students were going to use it, right? And I kind of want us to tap into this. There's so much opportunity, and we'll talk more about how powerful AI and assistive technology can be for learners who absolutely need it, but I also want us to talk about these fears that surround anything new, right? Sometimes educators, and just adults in general, fear when young people have too much power or know more than we do, because they're afraid they might take advantage of it. And to Jerry's point, you know, we don't want to just give something really powerful to somebody. We want to help them to understand why it's helpful. We want to help them to understand how it can empower them, so their learning journey becomes personalized because they're utilizing the tools and the resources and the guides to access what they need and to be able to dive deeply into their learning journey versus, you know, just kind of going along with it.
Gerald Reid 20:20
And, you know, Alexis, in terms of personalization, and you're both saying this, this popped in my head too: if you learn something when you're 10, it could be very different than learning the same exact thing when you're 21, or 35, or 50, or 65, right? And so in terms of personalization, maybe that's also something to consider when we're using AI, you know, having that person who understands you and understands the context of you being you: all the factors involved in terms of how your brain has developed, in terms of the stressors you have, in terms of your life experiences, in terms of what you learned already, your background knowledge or lack of background knowledge, you know, in terms of the life experience that you've had or not had. And so, you know, in terms of just using AI to be a shortcut to learning things, or to just accessing information, maybe that's also a missing piece in terms of personalization: the person matters. You know, it's the same thing with psychotherapy. We can give psychotherapy, like, say, cognitive behavioral therapy, to literally anybody. But, you know, in terms of the research, the research says it's not necessarily always the therapy that's going to make a difference in whether the person gets better or not. There are also factors in terms of the therapist and also the patient, in terms of, you know, what they bring to the table. That's a very important part of, you know, the therapy experience. I think it also applies to learning, I'd say.
Luis Perez 21:42
Oh no, absolutely. And typically, you know, the concept is the human in the middle. That's something that's mentioned in AI circles quite a bit, that we want to make sure that we keep the human in the middle, in terms of, like, you know, it's not just about the technology, it's about our interaction with technology within a learning ecosystem. So to your point, Alexis, about the past, we've just kind of dropped technology in, right? So let's buy a bunch of iPads, let's buy a bunch of Chromebooks or PCs, or, my favorite, interactive whiteboards. I'm dating myself, because I did technology integration back in the heyday of interactive whiteboards. What I found is they were being used like glorified sheets that you could just go to Walmart and buy and put on the wall to project onto, right? So it's like you're paying thousands of dollars when a sheet hung on the wall could do the same thing. So we want to think about how technology is being used in a meaningful way, right? So it's not just to drop information into people or have them ask for information, but to, like, actually do something with it. So in terms of, like, AI, and a big part of AI literacy, right? With UDL, Universal Design for Learning, we always say that the goal is not to make information accessible. The goal is to help you take accessible information and turn it into useful knowledge, which is a different thing, right? You take the information, and then you're able to do something with it. You're able to apply it to a novel situation, or you're able to apply it to a challenge that you have, right, that you need to solve. And so that's part of that agency component that we really are pushing quite a bit with the latest update to the Universal Design for Learning Guidelines. So quick plug: the UDL Guidelines, at udlguidelines.cast.org. Check those out. They've been recently updated with, you know, the latest findings from the literature on learning. And so, yeah, we need to keep the human and agency in the middle, right? That people are not just using AI to ask questions, but they're asking questions, they're getting information back, and then they're doing something with it, whether it's solving a problem in the community or solving a personal challenge that they have that AI can help with. But again, it's not the robot that you send out into the world. It's the bicycle that you're riding and controlling, and having it do things for you, but you're still in control all the way through.
Alexis Reid 24:35
Big shout out to Luis and so many other colleagues who participated in diving into the research for these new UDL Guidelines, UDL Guidelines 3.0. That was an enormous amount of work. Not only did you dive into the research and really make sure that everything stated in the new guidelines was empirically based, but you were so thoughtful about, again, what we're talking about: keeping the learner, the human, at the center and at the core of every single decision you made as you were putting those pieces together. So thank you all for doing all that great work.
Gerald Reid 25:11
Yeah, I can echo that. And the UDL Guidelines are great because they're an integration of different fields, right? And I love the idea of integration, and it's kind of a metaphor for what we're talking about with AI. You want to be able to integrate information into a whole, rather than it just being new information that you're going to hold on to or grab onto, or say, okay, now I know this. You really want to be able to integrate things into a larger whole. That's certainly what psychotherapy is about, just for a plug for psychotherapy, right? We're always trying to integrate different aspects of ourselves and what we're learning into our psyche to help ourselves, you know, whatever therapy you're using.
Alexis Reid 25:49
I want to go back to the metaphor of the bicycle too. Because of the way we're talking about AI, you know, I'll use me as an example: I'm five foot three. You're not going to put me on a bike that somebody who's six five would be using, right? You wouldn't just say, okay, here you go, here's the tool, just use it to get where you need to go. You would say, okay, do you need to adjust your seat? Do you need to adjust the handlebars? Let me show you how you use the electric component to give you an extra boost. You would walk me through that process. And I have these conversations with students all the time, especially high school and college students who are using AI and ChatGPT all on their own, because they see it as a shortcut. And our conversations really revolve around, okay, what's the purpose of using this technology and this tool? You know, they're often hiding it from me, because they don't want me to think they're cheating, and I try to normalize that. I'm like, you know what? You're asked to read a lot of really complex texts right now, and sometimes that's difficult for you. Definitely, I can understand why you would want to find a shortcut to be able to access this information, because it's exhausting even thinking about how much you need to get through tonight. But, you know, the conversation is more around, why is this helpful? What is it actually boosting and amplifying for you in this situation? We really need to help learners, and all people across the lifespan, to better understand how it can become a tool. And I always say, you know, is it a toy or a tool? And now I think we can also consider: is this just a band-aid to kind of help you get through a moment where it feels like you might erupt, or you're confused, or, yeah, you want to give up, or you're feeling overwhelmed?
Is this something that's just getting you through a moment, or something that's helping you to really dive deeply into the information, the content, and the work, to be able to participate in a way that feels meaningful to you?
Luis Perez 27:55
Yeah, and I would say that, in some ways, I saw the development of AI as kind of inevitable. And what I mean by that is that, you know, ever since the internet came about, we've been creating so much information, right? And we know about information overload, right? This is just a fact of life: there's just so much information that we've created since, you know, the birth of the internet, which wasn't that long ago, right? Many, many times more information than we had created from the birth of mankind, or womankind, to that point, right? And so, in some ways, we need AI in order to still be able to, you know, make sense of all this information. So I think about some of the really beneficial aspects of AI, for instance, being able to analyze large amounts of information and do things like detect that you have cancer, right, or predict that you might, you know, come down with cancer, things like that. Being able to take a look at large amounts of data and get some insight from it: we needed tools like this, right, that could go in and look at large amounts of data, make inferences from it, and provide us with those insights in a way that we can better understand. So I see it as a necessity, in the way that, like, you know, you need a tool to kind of sift through all the information that we have at our fingertips these days. But at the same time, it's not without, you know, things that we need to be cautious about.
Alexis Reid 29:48
I kind of want to tap into and talk about both sides of where the information is coming from, right? Because even though there's so much information on the internet, it's being produced by humans. So when AI does its work to navigate and sift through all the information that's accessible through the internet, sometimes it's not pulling from every possible perspective. I just want to raise that as a potential concern too. And these are other points of conversation I have with my students: it might give you a starting point, a place to enter the conversation, but it's not going to give you everything. And, Luis, I wonder if you can talk a little bit to
Gerald Reid 30:32
that too. Yeah. Can I ask a follow-up to that? Could it be biased? But not only can it be biased: can it almost lead us? You know, like in therapy, I can ask a question and realize, oh, that was a leading question; I'm kind of leading them to think a certain way, which I shouldn't do, right? So is it possible that AI is leading us in a direction, and then it influences the directions that we take? To Alexis' point, if it's not considering all the different ways of thinking.
Luis Perez 31:02
Well, it definitely is an issue, bias, and the fact that a lot of the inferences are not made from representative data samples. And that has to do, like you said, Alexis, with who's putting together the AI, right? Who's at the table in terms of developing the AI? That's something I'm really passionate about: making sure that computer science departments are more diverse, that software development teams are more diverse. That's just going to have a whole bunch of different benefits. There's a saying that inclusion promotes innovation. When we include more perspectives, more ways of looking at problems and challenges, we actually end up with better solutions. So I think to create the best AI that we can, we have to have women be part of those development teams. We have to have people with disabilities. We have to have people of color, because
Alexis Reid 32:16
people who aren't digital natives too, perhaps, for that perspective. Absolutely.
Luis Perez 32:23
I mean, you could have anthropologists be part of that conversation. You should have anthropologists. You should have liberal arts people. You should have ethicists, so that you're thinking about the ethical implications, about all these things we're talking about. But in many cases, I think the challenge (I don't know if this is going to make sense) is that when I think about AI, it's sort of like the technology of the present and the future, but it's often using data from the past. (Great point.) And that's where the point of tension is, right? We're making decisions that have an impact in the present, we're thinking about the future potential of these tools, but a lot of the insights are based on data from the past, and that data hasn't always been representative. So what I would say is for learners to always take it as a first draft, and to always assume that there's going to be bias in it, because it's being generated in a biased society. It's not separate from our society. Just like the artifacts we find from history are evidence of what was happening in that society, AI is going to be our artifact in the future.
Alexis Reid 33:45
Yeah, and I always say that bias often comes from that which we don't know. That's why education is so important, and why keeping things in silos is a problem; information sometimes does come from a silo. It's so important to educate learners of all ages, all of us, for humanity's sake, to understand that it's never just one thing, and we need to broaden our perspective. And again, when I think of AI, it's always like a stepping stone, especially from an executive function perspective: it can help us synthesize, prioritize, and organize some of the information that's out there to figure out what we do with it next, rather than it being everything. Yeah.
Luis Perez 34:30
And what I would say is, people have been trying to tackle this issue of bias in AI, but I'm adamant that they're asking the wrong question. The question is not how do we mitigate bias in AI; it's how do we mitigate bias in society. You mentioned the word Band-Aid just a few minutes ago: if you're trying to address bias in a data set being used by AI, that's a Band-Aid. You're still using the AI in a biased environment, so you can't separate it from larger efforts to address bias in society, to address representation. Representation matters. So again, it goes back to who's involved in the development of AI and who's making decisions about it. And we as consumers play a role here: whenever decisions are being made about AI, we can ask questions. Tech companies are already required, in many cases, to release information about who's on their boards, who's part of the leadership team, how diverse that leadership team is. We should be asking the same thing about AI companies: who's on your board, who's part of your leadership team, what's the makeup of your workforce if you're developing AI? Things like that really have an impact on representation and bias beyond the technology itself; they impact the ecosystem in which it's being developed and deployed. Yeah,
Gerald Reid 36:16
and you're saying AI is based on past information. Part of therapy, to make a metaphor here, is to use the past to understand patterns, but also to be present, because every present moment is unfolding; it's new. You have to understand the present, not just base yourself on the past, and be open to the present. In a similar light, in terms of bias, I think sometimes it's hard when we're using AI, because we want certainty. So many of the kids growing up now just want certainty. They just want answers. They just want to know. And I don't blame them for that; part of anxiety is not having certainty, so you look for certainty to feel less anxious. And I'm afraid (I keep saying I'm afraid; I'm looking at the potential downside) of using AI just to get certainty, when sometimes, as we were saying, answers are not so concrete, not so black and white. Sometimes multiple things can be true at the same time, rather than one side arguing their side and the other side arguing theirs. As this conversation elucidates, multiple things can be true within the larger context at the same time, and that's something that could be lost. That type of thinking happens in dialogue, where you can converse with people, understand different perspectives, understand common ground, understand differences, and be able to acknowledge, okay, this is true, and that's also true at the same time. That type of dialogue may not happen when you're just running to AI to get answers. I'm not sure; maybe AI will adapt to be able to help with that, but that's one of my concerns.
Luis Perez 38:07
Yeah, and I don't want to oversimplify: AI is also generating data in the present, so it's not as simple as I made it seem. But in general, a good portion of these large data sets is historical data. And the current data leads to another issue we need to be aware of, which is something called hallucinations. A hallucination in AI is when the output sounds good but is completely inaccurate. (Interesting.) I'll give you an example, because this is one way that I use AI, and from experience I've learned some of its limitations. I do a lot of research, and often with research I have to put together a bibliography. If you try to do that with AI, there will be authors who don't exist. There will be papers that have never been written. (Wow.) There will be journals that were never created. (Wow.) But here's one thing I found it to be really good at. I have a love-hate relationship with APA style, with getting the periods in the right place and the formatting right. If I have an existing bibliography and I ask the AI, "Please format this in APA style," it does an amazing job, because with APA you're applying a set of rules. I found that to be really effective and efficient; it saves me a ton of time. But just saying "create a bibliography for me," that's when the issues with hallucinations show up. It sounds good, the Journal of So-and-So, but then you do a search and find it doesn't actually exist. So I think those are the two big challenges to be aware of. One is bias, and there's a human component to that; it's not just an information challenge, it's a human challenge of addressing bias and representation. And the other one is hallucinations.
And at some point, I think, with technology we usually ask for metrics. We want metrics, right? Like with captioning: you wouldn't say, hey, it's 80% accurate, that's good enough. I think you were part of a concert recently that had a sign language interpreter, and that brought this idea to mind. You wouldn't say, hey, it's 80% accurate, that should be good enough; the other 20% of the words, just make them up. Is that a
Gerald Reid 41:13
love song or a heartbreak song? You'd never know that 20% was missing.
Luis Perez 41:19
So it's the same with AI. For people to really have a trusting relationship with it, these companies need to be more transparent, full stop. They need to be more transparent about accuracy levels. There need to be some metrics, at some point, that we can use to judge one company against another. That's something I'm really in favor of: in addition to transparency about who makes up these companies and what their backgrounds are, because that's important, we need metrics about their performance. Right now we're just assuming that these tools work, and in many cases they don't.
Alexis Reid 42:08
And I'm thinking about the accuracy and inaccuracy, and I'm thinking of some of the students I've worked with who have either a learning disability or ADHD and are trying to just get things done, to hang with what they're asked to do, and sometimes they're making up articles to have enough references and citations in their papers. So with or without AI, we want to help educate learners, especially those who are figuring out how to learn best for themselves, how to navigate situations, and how to gain accurate information that they can use in the things they're doing.
Luis Perez 42:45
Absolutely. And again, we are trying to give you both sides. I'm kind of a "bloomer," not a boomer (I'm not quite in that generation, age-wise), but a bloomer in the sense that I do see potential in the assistive space for AI. I'll give you an example: audio description. Audio description, for those who are not familiar, is when you have a video and somebody who's blind isn't able to see what's in it, so you have to describe what's happening so that they get the same or an equivalent experience and the same meaning. That's been really difficult in the past, because you have to hire voice talent that then goes into a studio like the one you're recording in right now, and that takes time and so on. But there are companies out there trying to scale this up using AI: using synthesized speech, and using AI to generate transcripts on the fly, analyze the video, and insert the descriptions at specific points. That's a great use of AI that's really going to have a huge impact, because there are so many videos out there that are not accessible, which means individuals who are blind are not able to get that information. So there's so much potential there. But at the same time, I have to be a doomer sometimes: there are issues related to bias, and issues related to accuracy and transparency, that we still need to address if we're going to get the most benefit. So going back to the bike metaphor, it's like we're putting the bike together while we're riding it. At times it may be a little wobbly; at times we may hit some bumps in the road. But that doesn't mean it's not worthwhile to try to use it and get the most out of it. So I hope we provided sort of a balanced perspective on it. Don't turn away from it; I don't know that that's a choice at this point.
It is part of our lives, right? It's basically infused into our lives. But at the same time, approach it critically and think about the limitations as well, as well as the benefits.
Alexis Reid 45:20
Most definitely. Jerry always talks about seeing both sides, and I'm really glad we engaged in this conversation; so many great discussion points came up along the way. By no means is this exhaustive in thinking about all of the potential cautions and benefits of using AI, but it really is a great starting point to think a little bit differently about how AI can be beneficial. We talked broadly about how it's used in life, because I see pretty much every environment you walk into as a potential learning environment. I'm here in the studio right now, and just looking around, using my senses, I'm noticing new things that I'm not familiar with and learning just by interacting with them. So we talked about AI from this broad perspective of how it can help pretty much everyone, but I really want us to be more mindful about how it creates greater learning opportunities and access for individuals who have a specific disability, so that we can take a more inclusive, holistic, humanistic perspective on how this can be beneficial, and have open conversations about our fears and cautions along the way. Because, like motorized bikes that sometimes ride along with cars, they're not quite cars; there might be different rules, different signals, and different cautions to be mindful of as they go along. And, as you say, Luis, this is a learning process.
We need to figure out how to better integrate all of this into our educational landscape, but hopefully this conversation defused some of the fears of the educators, parents, and students out there listening. We're so grateful for the conversation, and I'm sure you'll be back on the show at some point and we'll continue. But again, thank you always for your friendship and collegial input on everything. You are like my tech guru; I learn so much from you just by having you be a part of my life. And thank you for all the expertise you share with both of us, in our worlds and on the show.
Luis Perez 47:39
Well, I appreciate the opportunity to be part of the show again, and I'm always happy to contribute to these conversations. But I always like to leave people with something practical they can try out. Recently I discovered this tool called Google's NotebookLM (LM for language model), and it is amazing. I think it would be really helpful for some of the people you work with. What happens is you can upload a document, let's say a PDF (you could take a Word document and export it to PDF if you need to), and then it gives you all kinds of things about that document. For instance, it will give you a summary, it will give you an outline, it can generate an FAQ from it, and it can generate questions so that you can test yourself on how well you understand the content. (Wow.) But the most amazing thing about it is that it generates a podcast for you.
Alexis Reid 48:49
Oh, my goodness gracious. Wait, what? So?
Luis Perez 48:53
You should absolutely try this out, because it will blow your mind. So recently I uploaded a paper that I co-published with a colleague of mine, Sam Johnston. We published a paper on Universal Design for Learning and intelligence, focusing on collective intelligence. It's a pretty meaty topic and a pretty meaty document in terms of content, but we uploaded it to NotebookLM and it created a nine-minute podcast, and it sounds like an NPR interview. They call it a Deep Dive, and it's basically two people having a conversation. Of course, these two people don't exist (wow), but they have a conversation about your paper, and you end up with a seven-to-ten-minute podcast that you can listen to on the way to work or on the way to school. I was very impressed by how accurate and concise it was, because, again, I'm feeding it the information; I'm limiting it to that document and the ideas in that document. It's free for now. It's kind of a project, right? This is the Google way of putting something out and getting you to be the beta tester. So it's free, but I would encourage people to give it a shot, because, again, in the spirit of UDL, Universal Design for Learning, it's something you can use to take dense information and convert it into other formats, or at least get different representations of that information, like an outline, an executive summary, and so on.
Gerald Reid 50:48
And this will be the last Reid Connect-ED podcast. I
Alexis Reid 50:54
was just thinking we should run our podcast through it to synthesize it into short, little chunks of information.
Luis Perez 51:01
Wow. No, I wouldn't go that far, Jerry, because they don't have your sense of humor. They don't have your jokes, good or bad. I don't think they can play the guitar as well as you can. So they may have the voice quality, but not the guitar, the sense of humor, and the personality that you have, and Alexis as well.
Alexis Reid 51:28
You're very kind. That's amazing. And in fact, a lot of my students use AI primarily to synthesize information and prioritize where to start, because sometimes, as I mentioned before, long texts just become so overwhelming that they don't know how to begin. Then they end up either not reading and not engaging with the text, or spending so much time reading and re-reading, listening and re-listening, that it's really difficult for them to synthesize the information. So that's an amazing tool. See what I mean, listeners? Luis has all of the great tips, and he and I are actually working on a piece together that will share a few more really practical ideas of how AI can help: from an educator's standpoint, being a sounding board for more creative lesson planning and for considering accessibility as you plan; and from a learner's perspective, helping you organize and really bolster executive function skills by using AI to get started.
Gerald Reid 52:35
Luis, this is a real quick, random tangent, but it just popped into my head. You're emphasizing the importance of synthesizing a lot of complex information. Your example was, there's so much complex information, how are we ever going to grasp it? Or, as Alexis said, how are we going to even get started? It reminds me of 2008, when the housing crisis happened. If you watch the movie The Big Short, as far as I understand it, the whole issue was that nobody actually understood the instruments being used; only a small percentage of people actually understood them, and I guess they were making money off of it because nobody else understood what was happening and how it was about to crash. So maybe one of the benefits of AI could be that it helps more people understand very complex things. Of course, as long as it's not biased or intentionally leading us in a certain direction; obviously, that would be the bad part of it. But certainly I'm glad
Luis Perez 53:30
you brought that up, because some listeners may not know this (a lot of people don't know this), but I actually have a background in political science, so I really think about things like that economic crisis in context. And it relates to UDL. I'm going to make a stretch here, but it does relate, because the issue with that crisis was not intelligence; it was ethics. Like you said, Jerry, they came up with really complex instruments. The people who created those instruments came from some of the best schools in the land, and they're really intelligent people, and they made those instruments complex on purpose, so that, again, there was no transparency. Are you seeing some parallels? Complex, opaque, not transparent. But the thing is, they did this without thinking about the impact on other people and the implications. Or maybe they knew the implications, but greed took over; again, it comes back to ethics. Those things were not there, and so we ended up with a situation where lots of people lost their homes and our economy almost tanked. Similarly, with AI, ethics has to be part of the conversation. Responsible use, social justice, all of those things need to be part of the conversation, or else we can go down roads that, looking back, we may wish we hadn't gone down. Anyway, I'll get down from my soapbox. You've got me really going on this, because now you're tapping into my political science background.
Gerald Reid 55:40
Yeah. Thank you. Thank you for sharing that, and thank you for bouncing off my random thought. Now AI is going to figure out why I had that random thought in the context of our conversation, and it's going to be able to predict when I'll have that thought again in the future.
Luis Perez 55:56
Thank you. Thank you both for allowing me to be part of this conversation and for sharing it with your audience. Keep up the great work, and I really appreciate you both.
Gerald Reid 56:09
Yeah, we appreciate you so much, Luis. Thank you. Thank you, Luis; take care, talk soon. Thanks, Luis, thanks.
Gerald Reid
Thanks for tuning in to the Reid Connect-ED podcast. Please remember that this is a podcast intended to educate and share ideas, but it is not a substitute for professional care that may be beneficial to you at different points of your life. If you are in need of support, please contact your primary care physician, local hospital, educational institution, or support staff at your place of employment to seek out referrals for what may be most helpful for you. Ideas shared here have been shaped by many years of training, incredible mentors, research, theory, evidence-based practices, and our work with individuals over the years, but they are not intended to represent the opinions of those we work with or are affiliated with. The Reid Connect-ED podcast is hosted by siblings Alexis Reid and Dr. Gerald Reid. Original music is written and recorded by Gerald Reid (www.Jerapy.com), and recording was done by Cyber Sound Studios. If you want to follow along on this journey with us, the Reid Connect-ED podcast will be releasing new episodes every two weeks each season, so please subscribe for updates and notifications. Feel free to also follow us on Instagram @ReidConnectEdPodcast, that's @ReidConnectEdPodcast, and Twitter @ReidConnectEd. We are grateful for you joining us, and we look forward to future episodes. In the meanwhile, be curious, be open, and be well.
In this episode, we interview Dr. Luis Perez to discuss the benefits and challenges of utilizing Artificial Intelligence (AI) in learning environments and classrooms. What used to seem like science fiction is becoming more of our everyday reality, as AI has become integrated into many areas of our lives. Dr. Perez explains what AI is and how it can enhance learning, individualize learning, create access to information, and be an entry point to learning, while also acknowledging its potential limitations relative to deep learning, human connection, bias, and foundational knowledge.
Be curious. Be Open. Be well.
The Reid Connect-ED Podcast is hosted by siblings Alexis Reid and Dr. Gerald Reid, with original music written and recorded by Gerald Reid (www.Jerapy.com).
*Please note that different practitioners may have different opinions; this is our perspective and is intended to educate you on what may be possible.

