AI at TXST: From Research to Reality
Eddie Sanchez (00:07):
This is Eddie Sanchez.
Rodney Crouther (00:08):
This is Rodney Crouther.
Eddie Sanchez (00:09):
And you are now listening to Enlighten Me.
Rodney Crouther (00:12):
Hey Eddie. We're getting close to the end of the year, man.
Eddie Sanchez (00:15):
Yeah, I can't believe that. We're almost in 2025.
Rodney Crouther (00:18):
And it's stupid, but one of my favorite things at the end of the year is getting my Wrapped list from Spotify, where it compiles the music you've been listening to, and the algorithm gets a little bit better every year.
Eddie Sanchez (00:31):
And I actually use Apple Music. I know I'm the weirdo that uses that, but they provide you an end-of-year wrap-up also. It's like, you listened to so-and-so many hours of this artist, and you might like this artist because they're related to this artist. So yeah, I definitely see how the algorithm connects these different musicians and feeds 'em back to you.
Rodney Crouther (00:53):
And you can see the algorithms, the artificial intelligence of it, getting better, because the recommendations it gives me throughout the year are getting better. I've noticed that over the last five years.
Eddie Sanchez (01:03):
I'm definitely seeing the algorithm affect not just my Apple Music, but also my YouTube profile, even my Reddit, right, because I'm on Reddit a ton. It's just interesting how it kind of feeds me back that content, it's this revolving door, right? It feeds me the content that I'm already interested in, and it's like, here's this other little side content that you might like as well.
Rodney Crouther (01:25):
Yeah, that's a good point. A couple of years ago, AI was this new thing we didn't understand and now it's just kind of creeping into a lot of different aspects of our lives.
Eddie Sanchez (01:33):
Even the streaming services. Netflix does the same thing whenever you finish watching a movie: you might like this movie, or we recommend this film for you. So the AI that helps run these algorithms, it's kind of scary, because I feel like it definitely knows me better than I know myself. It really knows what I like.
Rodney Crouther (01:50):
It feels like it's kind of getting in your head. It's not just artificial intelligence, it's become an intelligence intelligence. But we both use ChatGPT, so it's beyond our entertainment and these kinds of fun applications. It's a tool that we're seeing all over the place now.
Eddie Sanchez (02:05):
Yeah, it's definitely not just for entertainment purposes, right? ChatGPT is a tool that faculty members are using and students are using. I know that AI is starting to infiltrate all these different industries across the world, and so I wanted to wrap my head around exactly what AI is, how it's growing, and the direction it's going in the upcoming years. I reached out to a couple of our faculty members who are experts in the area, and they talked to me about how AI is being used in industries all across the world.
Rodney Crouther (02:35):
Yeah, I mean we talk about it on YouTube and Spotify, but it's kind of everywhere already.
Eddie Sanchez (02:41):
Yeah, it really, really is. It's not just faculty members, but staff as well who are implementing AI across the university and across the community. So it's definitely something that's going to be making a big difference in 2025 and in the years to come.
Rodney Crouther (02:57):
Cool, so who did you talk to first?
Tahir Ekin (03:01):
My name is Tahir Ekin. I'm a professor in the Department of Information Systems and Analytics.
Eddie Sanchez (03:09):
So what is your definition of AI?
Tahir Ekin (03:12):
Obviously this has been a big topic of conversation, and we have adjusted the definition over time, but to me AI is a system that can make decisions in an automated fashion, in a way that resembles human decision making.
Eddie Sanchez (03:33):
How did you first get involved in AI? How long have you been working in the field?
Tahir Ekin (03:38):
I got my Ph.D. in decision sciences, and back then AI was more, I guess, categorized in Turing terms, in which we were suggesting that an artificial intelligence system was more of an artificial general intelligence, where we can't distinguish the system from a human. Decision sciences and applied statistics in general, although they are foundational blocks of AI, weren't necessarily classified under AI. So I started my training in more applied statistics and mathematical modeling, a field we call operations research, which is based on decision making, also automated decision making, and my focus was more on decision making under uncertainty and statistical modeling. I also got some industry experience working on some government contracts focusing on healthcare fraud detection. It's been almost 15 years now that I have been using statistical methods, data mining methods, or, as we call them now, artificial intelligence methods, to uncover potential cases of fraudulent transactions. These problems have been difficult to solve, and this was 15 years ago; we were still solving relatively small problems. But with the advances in computing over the last decade, we have really been trying to solve larger problems in an efficient manner.
Eddie Sanchez (05:14):
Can you give me an example of one of these larger problems that might be affected by uncertainty?
Tahir Ekin (05:18):
So actually, these problems are all around us. One of the examples I can give could be from disaster preparedness. Just think about the fact that before a disaster, and right now obviously we have been having more hurricanes, we just had Beryl, we have so much that we don't know before a hurricane, but proactively we still try to be ready and ensure that the aftermath will be as manageable as it gets. Unfortunately, most of the time we fail. So especially in the last 10 years, we have put so much effort into planning for these disasters, and there are so many different decisions involved, and so many different decision alternatives as well. And when it comes to the randomness, even 24 hours in advance we really don't necessarily know where the hurricane is going to hit, for instance. So how do we make those decisions in advance that will, on average, ensure that the least number of people are impacted? From the computational perspective, what we try to do is simulate what could happen. Depending on that, we try to understand what could happen in each scenario for each decision alternative, and on average we try to ensure that we have enough personnel in particular locations so that we are as safe as we can be.
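To make that scenario-simulation idea concrete, here is a minimal sketch in Python. The landfall probabilities, staffing alternatives, and impact numbers are all invented for illustration and are not from Dr. Ekin's work; the point is simply to simulate which scenario occurs many times and pick the pre-positioning plan with the lowest average impact.

```python
import random

# Invented landfall scenarios and rough probabilities (illustrative only).
scenarios = {"hits_city_A": 0.5, "hits_city_B": 0.3, "misses_coast": 0.2}

# Decision alternatives: where to pre-position response teams, with an assumed
# number of people impacted under each scenario.
alternatives = {
    "all_teams_in_A": {"hits_city_A": 10, "hits_city_B": 95, "misses_coast": 5},
    "all_teams_in_B": {"hits_city_A": 90, "hits_city_B": 15, "misses_coast": 5},
    "split_teams":    {"hits_city_A": 30, "hits_city_B": 35, "misses_coast": 5},
}

def expected_impact(impacts, probs, n_sims=10_000):
    """Monte Carlo estimate of the average impact for one decision alternative."""
    draws = random.choices(list(probs), weights=list(probs.values()), k=n_sims)
    return sum(impacts[scenario] for scenario in draws) / n_sims

best = min(alternatives, key=lambda a: expected_impact(alternatives[a], scenarios))
print("Lowest expected impact:", best)   # with these made-up numbers, splitting the teams wins
```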
Rodney Crouther (06:55):
So is Dr. Ekin working on any new AI research?
Tahir Ekin (06:58):
So right now, actually, I'm working on an Air Force funded project, which focuses on adversarial machine learning, adversarial artificial intelligence, and the objective is to explore the security of artificial intelligence methods. Just think about it. We are talking about, let's say, automated tools, but what if a smart agent, an adversary, impacts the data that is input to the algorithm you are interacting with? Then that may change the responses you get, even in, let's say, communicating with the Copilots and ChatGPTs of the world. We are, for instance, almost 95% there when it comes to autonomous or assisted driving. Just think that these vehicles have all these sensors, radars, lidars, which process the data around them almost in real time. But what if a small perturbation in one of the traffic signs, which may even be invisible to the human eye, could change the input they are getting, and instead of making a stop, they accelerate? These are the things that have been keeping people up at night, especially in the last decade. So that's one of the challenges with autonomous systems, and that's why I guess we are a little bit hesitant to give the full reins to AI in critical domains such as healthcare or criminal justice, where automated decisions impact human lives directly.
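As a toy illustration of the perturbation attack he describes (not the Air Force project itself), here is a sketch with a made-up linear "traffic sign" classifier. A pixel shift of two intensity levels, chosen in the direction that lowers the model's score in the spirit of the fast gradient sign method, flips the prediction even though a person would see the same sign.

```python
import numpy as np

# Toy linear "traffic sign" classifier over 1,000 pixel features:
# score > 0 means the model reads "STOP", otherwise "SPEED LIMIT".
rng = np.random.default_rng(0)
w = rng.normal(size=1000)
w -= w.mean()                      # so a uniform gray image scores exactly zero

def predict(pixels):
    return "STOP" if w @ pixels > 0 else "SPEED LIMIT"

# A clean image the model (by construction) reads as STOP, pixels in [0, 1].
x = 0.5 + 0.005 * np.sign(w)

# Fast-gradient-sign-style perturbation: shift each pixel by 2/255, in the
# direction that lowers the STOP score -- far too small for a human to notice.
epsilon = 2.0 / 255
x_adv = np.clip(x - epsilon * np.sign(w), 0.0, 1.0)

print(predict(x))      # STOP
print(predict(x_adv))  # SPEED LIMIT -- nearly identical input, opposite decision
```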
Eddie Sanchez (08:38):
What excites you most about the potential of AI in higher education?
Tahir Ekin (08:42):
Personalization excites me the most, because when you think about higher education, although we have silos and we try to serve different sets of students in different departments and different academic programs, we still do some sort of one-size-fits-all. Because of the structure of academic programs, we may not have enough flexibility in providing you the exact curriculum that would help you achieve what you want to achieve. So you may still get exposed to three weeks of material irrelevant to your aspirations here and there. In terms of curriculum building, in terms of that flexible education, I think we have lots of potential.
Rodney Crouther (09:34):
OK. Yeah. I know we joke in the office a lot sometimes about AI accidentally creating a Terminator or something, but I think there are a lot of people with legit fears about what AI is going to do.
Eddie Sanchez (09:46):
I definitely had my concerns, but after talking to these professionals, individuals who are working with AI, I'm assuming on a daily basis, they all were very positive. And obviously we have to be considerate about how we go about utilizing or developing it, but nobody was afraid that we were going to create, what, a T-2000—
Rodney Crouther (10:11):
T-1000, T-1000. Skynet. And it sounds like there are a lot of opportunities that we can be excited about going forward.
Tahir Ekin (10:19):
I'm more of a positive person, I would say, in terms of the potential of AI, and I think we should be careful. But in reality, we are still far away from the Terminator aspect and full automation, and we still have full control. And obviously I understand the concern about changing the place of humans in our society. That's a little bit scary. That's scary for governments, it's scary for people, because, let's say, this is what I was paid to do, and I was being paid for 40 hours a week, and now I can do it in two hours a week. So what does it mean? Am I going to lose my job, or am I going to do other things with that increased efficiency thanks to automation? Or what if they lay me off and most of the jobs are automated? So that's part of the scare, which is understandable. I'm a little bit more center, maybe toward the positive side. Overall, it's not necessarily an option to be outside the movement. It's almost like smartphones, right? Think about how our lives were 20 years ago, even 10 years ago, and how they are right now. It'll have that immense impact. I think some of the things we have been dreaming about through the movies are bound to happen. We'll see how fast, but overall, I think most of us are trying to figure this out on the fly.
Eddie Sanchez (11:52):
If there's one thing that you would like for people to know about AI, what would it be?
Tahir Ekin (11:56):
It's dynamic. It keeps changing, right? And everyone should be open to lifelong learning.
Rodney Crouther (12:07):
It's good to hear faculty talking about the potential benefits of AI and it being a positive for job creation and new skills for young employees coming up. But kind of bringing it home, did you talk to anybody about how we're actually using AI on campus?
Eddie Sanchez (12:24):
Yeah, I did actually. I had a chance to talk to the VP of IT here at Texas State University, and he talked a ton about how he uses AI in his own day-to-day responsibilities and how he's encouraging faculty, staff, and students to learn how to properly and consciously use AI in a way that'll help them be much more efficient in the tasks that they're completing.
Rodney Crouther (12:53):
And Texas State is very focused on putting new technology into actual day-to-day use and, at the risk of sounding cliché, making a better tomorrow.
Eddie Sanchez (13:03):
Yeah, exactly.
Matt Hall (13:05):
My name's Matt Hall, and I'm the vice president for Information Technology and Chief Information Officer here at Texas State.
Eddie Sanchez (13:12):
So Mr. Hall, can you tell me, as the VP of IT, what you do?
Matt Hall (13:17):
Well, I just say we're the electronic plumbers. We keep the electrons flowing. That's the network; we've got about 3,300 wireless access points. We have the whole network, all of our payroll systems, student systems, Banner systems. We have responsibility for over 1,156 applications. There are 200 people in central IT and another hundred people in IT across campus. We try to build communities of practice for security, availability, data, and analytics. There are about 52 products and services, but we're trying to be the silent service. So you shouldn't know about us unless something happens, like a Microsoft outage, for example. Other than that, we just want people to be enabled with the technology tools. What we try to do is provide the fuel for your car to take you where you want to go.
Eddie Sanchez (13:59):
Could you define AI for us from your perspective?
Matt Hall (14:01):
I try to put a general definition around it. I say anything that approaches human cognition, or can pair with it, or can do automation similar to what a human can do. I can take a picture of this table, your podcast table, and it's going to understand context. So it can take an image, ingest it, and then run it through, and that requires human-like thinking. But one of the things I'll caution people about is anthropomorphizing the technology. I don't know who to attribute this to, but one of the definitions I've heard for large language models, for generative AI, is a stochastic parrot. So you put the information in, it has a lot of context, it has these vector analyses that are going on, and it can really figure out what you want to do, but it's not alive. It's not a thinking model. It is a stochastic model. And I'll just leave you with this: approaching human cognition is your definition, but even on one of our basic open source websites, there are 227 different large language models, and that number is growing every day. So there's no singular AI tool or singular AI thing, and it can vary based upon context.
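A minimal sketch of what "stochastic" means here, using an invented next-word probability table rather than a real model: given the same prompt, the continuation is sampled from a learned distribution, so the output is plausible without the system understanding anything.

```python
import random

# An invented next-word distribution for one prompt; a real model derives
# such probabilities from vast training data and the surrounding context.
next_word_probs = {"muddy": 0.45, "crowded": 0.25, "closed": 0.20, "purple": 0.10}

def stochastic_parrot(probs):
    """Sample a continuation: fluent-looking output, no understanding behind it."""
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights, k=1)[0]

for _ in range(3):
    # Same prompt, possibly different answers -- the output is drawn, not reasoned.
    print("The river bank was", stochastic_parrot(next_word_probs))
```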
Eddie Sanchez (15:10):
In one of your initial interviews that you had with Texas State University, you mentioned wanting to drive initiatives in AI fluency. What does that mean?
Matt Hall (15:19):
If you're fluent in Spanish, nosotros podemos hablar español, we can speak Spanish. So just as we know how to speak a language, we need to know how to speak AI. If I'm speaking with someone from Latin America or from Spain, I need to speak Spanish. Well, we need to speak prompt engineering. One of the basic things we do, since we were one of the first schools in the United States to turn on Microsoft Copilot for the entire institution, is you go to copilot.microsoft.com, log in with your Texas State ID, and you are attached to what we call the Microsoft Graph. So one part of fluency is we want you to be able to interact with the things you generate in Microsoft collaboration tools. We also want you to be able to use good prompts. So let's say you're a professor and you're developing a new course on artificial intelligence. And I just did this.
(16:03):
I just taught an AI honors course at my previous institution. I developed the syllabus in a co-creation way with ChatGPT; we co-created a syllabus. I would say, hey, generate a syllabus on artificial intelligence. It would come back and say, this is a 14-week course, and it would generate one. I'd say, no, I don't like the third week, let's change that. And within an hour of co-creating back and forth, I created a reasonably good syllabus. So understand how to use AI, but understand the consequences and understand the veracity check, because first of all, don't trust what it puts out, but it's a good starting point. We've talked to department chairs, I've talked to all the deans about it, very supportive faculty and academic administrators. So we want to be able to maximize the use of AI tools in administration and in teaching and learning. And then, what are the ethical components involved? When I did my syllabus, I said, look, I co-created this with Midjourney, with ChatGPT, with Google Bard, et cetera. I give attribution.
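To picture that co-creation loop in code rather than in a chat window, here is a rough sketch using the OpenAI Python client. The model name and prompts are placeholders, an API key is assumed, and this illustrates the back-and-forth workflow rather than the exact tool Mr. Hall used.

```python
# A sketch of iterative co-creation with a chat model, assuming the OpenAI
# Python SDK is installed and an OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You help a professor draft a course syllabus."}]

def co_create(request: str) -> str:
    """One round of the back-and-forth, keeping all prior turns as context."""
    history.append({"role": "user", "content": request})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name
        messages=history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

draft = co_create("Generate a 14-week honors syllabus on artificial intelligence.")
draft = co_create("I don't like week 3. Replace it with AI ethics and attribution.")
print(draft)   # still needs a human veracity check before it reaches students
```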
Rodney Crouther (17:04):
So as a working tech professional, what advice did he have for our students? How should they approach engaging with AI?
Matt Hall (17:09):
Wear it out; use it every chance you get. Again, attribution, old-guy memory problem here, the attribution on this is not clear, but the sentiment stands. I told my students in the AI course, you're not going to lose your job to AI, but you're going to lose your job to someone who has mastered AI better than you. So we have to have our students and our faculty really understand how to use these tools and maximize their use. So I encourage continuous use, with attribution. It's unethical to have anything create something for you and then claim authorship, so give attribution. I wear these tools out constantly, and you've got to practice, because it's going to be a highly competitive world. Now, we're all hyped about it right now. We're having podcasts about it right now. You don't hear podcasts about just HTML, what is HTML, anymore.
(17:59):
That was 1992, but within five years it was embedded in everything. And one of the things that's going to happen is there's going to be the AI sidecar: small language models, smaller, purpose-built models in devices, in your PCs. It's going to be everywhere, because right now it's shoot the request to the cloud, it processes up in Azure or AWS, and then the results come back to you. There's going to be a lot of localization, because huge amounts of power are being consumed for every single transaction and every single training run. So wear it out, and practice is my advice.
Eddie Sanchez (18:33):
So what sort of AI is currently being used for the university's IT infrastructure?
Matt Hall (18:38):
Microsoft Copilot and everything associated with it. There are two types. One is the generic Copilot, which is copilot.microsoft.com. Anybody at Texas State, if you have a Texas State ID, you've got it. You've got it right now; go play with it. You can generate images, you can generate text, you can generate code. You have to play with it. Then there's Microsoft GitHub Copilot, which is for software developers, to aid them in their software development. And then we have a smattering of ChatGPT 4.0 that we're experimenting with. And further, I've got about 300 licenses of Microsoft Copilot 365. I'll give you an example of a premium feature in Microsoft Teams. Let's say you and I are having this meeting right now. What I could have done is turn on Teams, and this would've been really good for your editing or your show notes: it would've given a transcript automatically, and then it could have provided a summary. If Billy joins the meeting halfway through, Billy can type "catch me up," and it'll give you everything that happened. Oh, Matt was chitchatting about the weather, and Sally talked about this, and Billy talked about that. It's a game changer.
Eddie Sanchez (19:43):
So what are some of your goals for integrating AI more deeply into the university, and how do you plan on addressing some of those potential challenges that might arise along the way?
Matt Hall (19:53):
Well, you start with what's important to the university. The most important thing for Texas State is student success. So what drives student success? We want people to graduate in a four-year timeframe. We want 'em to graduate with no debt. We want 'em to get out of the class what they put into the class. We want 'em to have really valuable job opportunities, and that takes skills. So if we can skill our students so that when they leave Texas State they have some form of credential, in addition to their degree, and experience they can point to and say, look, I have experience with Microsoft Copilot, I have experience with these large language models, then our students have a competitive advantage, or at least competitive parity, with students from other institutions. So we make sure that our students and faculty are armed with the same tools as everybody else; we don't want to fall out of parity.
(20:39):
That's probably the most important thing for me. And I think President Damphousse hired me because of my AI background, what I was doing at the University of Central Florida and what I'd done up to that point with AI. So really, automation is key for efficiency, but it's also key to our workforce and our student success. And imagine the amount of data we collect. Just so you get a sense, for our student systems we have a system called Banner. There are 10 billion, that's like Dr. Evil billion, 10 billion records in our data sets. Imagine what you could do if you could ask it questions and have a conversation with that data. What could you learn? What insights could you gain that would help our students meet their graduation and financial and job goals? That's what I want to use these tools toward, and it's a collective journey. So having this conversation and bringing more people into the conversation across campus can generate ideas, I hope.
Rodney Crouther (21:34):
OK. Let's address the elephant in the room, especially when you're talking about AI in an academic environment. And Eddie, I know you teach a class. How do you guarantee that your students are using AI ethically and, just say it, not using it to cheat?
Eddie Sanchez (21:48):
I think that talking about it is probably the most effective way to go about making sure that our students are using it in an ethical manner. It's something that's here, it's something that students are aware of. There's no sense in trying to ignore it, right? Or to just say, don't use it, because that's not going to happen. That's not the reality of it.
Rodney Crouther (22:07):
Yeah, I mean, I don't think that's realistic for anybody to say, just don't use it.
Eddie Sanchez (22:12):
And with my own students, there is a lesson plan where we introduce AI, how to use it ethically, and things to consider. And Mr. Hall let me know the importance of learning the skills of how to use AI so that we're not left behind as professionals and as students. So I think he has some very good insight into how I, as a professor, as a faculty member rather, can make sure that our students are ahead of that curve.
Matt Hall (22:42):
So here's the key thing for the use of any tool, whether it's a car or an airplane or a hammer or an AI tool: how do we enforce the responsible use of a screwdriver?
Rodney Crouther (22:53):
It's a good point. I remember a time before computers in the classroom, people cheated before AI. Whatever tools you have, you still have to have ethics.
Matt Hall (23:02):
We teach people, we make things available, we evangelize and bring people into the AI conversation. I think the best way to have responsible AI is to have constant conversations about it, feature examples, engage in public discussions, and then, for people who have particular cases, like student conduct, have a session with our team that explores all of these issues in your unique context. Because ethics, or responsible use, can differ from place to place if you're using AI in policing versus AI in medicine. And one of the questions in AI in medicine is, how can you not use AI? That would be the inverse of it: if you don't use AI, you're not ethically treating your patients, because having a good outcome, giving people good doses, providing non-biased data sets for their particular circumstances, for evidence-based and personalized medicine, these tools really help inform that. So it's really context driven as well.
Eddie Sanchez (24:01):
What excites you most about the potential of AI in higher education?
Matt Hall (24:05):
It's really going to be an efficiency bringer, right? I mean, it's going to be embedded in everything, and it's going to accelerate human cognition. So really, it's your brain on steroids. And it's the same way you think about how the Dewey Decimal system organized a body of knowledge so you could easily retrieve it. Google's vision was to make the world searchable, to make the knowledge graph of the entire world searchable, and think about what a pre-Google world looked like. We lived in a world without phones. I watched my kids grow up in a world with a mouse in their hand, and it was a Google world. This is an AI world. When I was at Microsoft, and I worked at Microsoft for many years, we were developing something called new user interfaces, NUI, the ability to have human-computer interaction that isn't intermediated by a keyboard.
(24:51):
It's really about talking to something and then getting an audio response. The future of AI is that AI is going to be ambient in everything, in the way we interface with these devices, and it's going to be a different experience, a higher-fidelity experience. Think about what's happened in two years. What's going to happen in two more? Siri is going to get better. Apple's going to get better. All these companies are going to get better, and they're going to be natively engaged. So the future is, it's everywhere, but the future is also that you're not going to see it as a separate entity. It's going to be integrated into everything.
Eddie Sanchez (25:24):
Is there anything else you would like to add to this conversation, Mr. Hall? Any words of advice? Anything to —
Matt Hall (25:29):
If you do anything, do this: go to futuretools.io. Type in something you're passionate about, whether it's teaching, whether it's learning, whether it's skiing, whether you enjoy tacos, whatever, and just play with the tools. Get some of the tools out there. Use copilot.microsoft.com with your Texas State address if you're a Texas State citizen listening to this. But use these tools. Try it. Try it, try it. And my final advice would be to try it. Use them and play with them. They'll change your life. It's life changing.
Eddie Sanchez (25:58):
We'll be right back after this.
Rodney Crouther (26:11):
Yes, it's really exciting that our students have all these opportunities to engage with artificial intelligence while they're studying. But all of these conversations, Eddie, are giving me the impression that we really should stop talking about AI as what's coming. It feels like it's already here. Even if you're not in tech, it's already in the marketplace, in the job market.
Eddie Sanchez (26:31):
Yeah, exactly. And it's a tool that is readily available to all of us, including professionals in different industries. Because of that, I was interested in learning how AI is being used in the marketplace, and I had a chance to speak with a faculty member from the McCoy College of Business. He spoke to me about AI, how it's being used in the marketplace, how it's expanding, and some of the positive things that we can look forward to in the years to come.
Rodney Crouther (26:57):
Oh, who'd you talk to from McCoy?
Sam Lee (27:00):
My name is Sam Lee, and my department is Information Systems and Analytics.
Eddie Sanchez (27:11):
Can you explain to me what Information Systems and Analytics is?
Sam Lee (27:14):
OK. So Information Systems and Analytics is basically two areas, but they are closely related. Information systems is about the use of information to solve business problems. It is closely related to computer science, but we focus more on applications in business. For analytics, we use analytics methods and also technology, again to solve business problems. Traditionally it is related to decision science and operations research.
Eddie Sanchez (28:01):
Could you give us a little breakdown of how you would describe AI? Artificial intelligence?
Sam Lee (28:09):
AI has been around for many, many years. When I was in school, in the 1990s, we studied AI already. AI basically means we use a machine to learn, and also to reason over logic and knowledge, and then we use that to solve problems. But why are people so excited about AI recently? Because ChatGPT is a giant breakthrough: AI is able to perform human tasks at a very high cognitive level. Right now, people are excited about this because of AGI. AGI is artificial general intelligence. ChatGPT is able to perform tasks and produce the kind of conversation and results that human beings do, with creativity and imagination. So it's a big step toward AGI.
Rodney Crouther (29:29):
Hey, I thought I heard him mention something called AGI there. I'm not really familiar with that, Eddie.
Eddie Sanchez (29:35):
Yeah, I hadn't heard of that term either, but he described it as artificial general intelligence and he talks a little bit about the difference between that and regular AI.
Sam Lee (29:47):
Yeah. So for the difference between those two, I can give you some examples. Right now we use some form of AI in a limited domain, with limited human intelligence. We have used that intelligence to recommend movies to watch on Netflix, and on Google and other websites we see pop-up advertisements, which very likely are generated by AI behind the scenes. This is what we call weak AI, because when you use it, you don't feel that you are interacting with a human being. That is weak AI, not general AI. For general AI, we use the machine and we feel like we are interacting with a human being, and that is the big difference. Right now, people call ChatGPT generative AI. It's not AGI yet, but it's one big step closer to that.
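As a concrete picture of that kind of weak AI working behind the scenes, here is a minimal sketch of item-to-item recommendation over an invented watch-history table: it suggests titles you might like next without any pretense of conversation or general intelligence.

```python
# Invented viewing histories; a streaming service would have millions of these.
watched = {
    "ana":   {"Alien", "Arrival", "Dune"},
    "bo":    {"Arrival", "Dune", "Interstellar"},
    "carol": {"Dune", "Interstellar", "The Martian"},
}

def recommend(user: str) -> list[str]:
    """Rank unseen titles by how many overlapping viewers also watched them."""
    mine = watched[user]
    scores: dict[str, int] = {}
    for other, theirs in watched.items():
        if other == user or not (mine & theirs):   # ignore users with no overlap
            continue
        for title in theirs - mine:
            scores[title] = scores.get(title, 0) + 1
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("ana"))   # ['Interstellar', 'The Martian']
```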
Rodney Crouther (31:46):
OK. So it sounds like AGI will be artificial intelligence that sounds a little less artificial and more like you're talking to an actual person.
Eddie Sanchez (31:55):
And I think that's ultimately the goal of a lot of these AI scientists to make an intelligence that is as close to human intelligence as possible.
Sam Lee (32:05):
You are exactly right.
Eddie Sanchez (32:08):
What are some big advancements that you're looking forward to seeing within the next five to 10 years when it comes to AI, if we're using it ethically? And what are some of the points that kind of scare you, that you have some concerns about?
Sam Lee (32:21):
Because I'm in the business school, I would rather answer from the business perspective. For now, AI still needs to be designed. We are trying to design and develop AI tools to be a complement to human beings' work, not a replacement for human beings. We want to design these AI tools to be productive, so that at the end of the day they are a benefit to our business and to our society. Of course, there will be some bad actors, so we also need governance, and we need the government to control that. It's not going to be zero percent; there will be some people like that. But if we can control that, then this technology will benefit people, and we won't end up in a situation we are not able to handle.
Eddie Sanchez (33:33):
So Dr. Lee, do you feel positive about the future of AI?
Sam Lee (33:36):
I think this is very important for us to remember: AI needs to be controlled. We need to have policies, we need to have regulations, and we need to have best practices to control AI for our future.
Rodney Crouther (34:17):
Eddie, you've found some really good experts, and I, for one, feel a lot more positive that we'll get Data from Star Trek instead of a T-1000.
Eddie Sanchez (34:27):
Yeah, that is the goal. That's the plan, with individuals like our Texas State faculty and staff members leading the way. Ideally we will get Data instead of a T-1000, but we are still a little bit off from AGI. I think that's part of the beauty of the journey, though: we're in the process of reaching that end zone.
Rodney Crouther (34:54):
And yeah, it's nice to hear that we've got Texas State grads that are already out there being a part of that change.
Eddie Sanchez (35:00):
Yeah, definitely. Our students are getting an opportunity to learn how to use AI and to be involved in its growth and development. So Texas State is definitely doing big things when it comes to AI.
Rodney Crouther (35:12):
Thank you for joining us for another season of Enlighten Me. We're going to be taking a little bit of a break through the holidays and we'll be back in February 2025.
Eddie Sanchez (35:22):
We'll see you guys next year.
Rodney Crouther (35:25):
This podcast is a production of the Division of Marketing and Communications at Texas State University. Podcasts appearing on the Texas State Podcast Network represent the views of the hosts and guests, not of Texas State University.