LIFE Podcast

AI & Your Wellness: Finding Truth in a Digital World

Dr. C Season 1 Episode 13

In a world filled with digital noise, how do we protect our peace and find the truth? The rise of artificial intelligence brings incredible tools, but it also amplifies misinformation, impacting our mental and emotional wellness. In this episode, Dr. C welcomes back the brilliant Dr. Jerry Washington to continue their vital conversation from "Lies, Scams, and Bad Advice." They explore the new frontier of AI-driven disinformation and provide a calming guide to navigating it. This isn't about fear; it's about empowerment. Learn to build your digital resilience, cultivate mental clarity, and transform your relationship with information from a source of stress into a tool for growth. It’s time to move from being overwhelmed to being aware, and this conversation is your first step toward digital well-being.

In this episode, you will learn:

  • How AI tools like deepfakes and biased chatbots are shaping your reality.
  • Actionable daily habits to build your "information immune system."
  • Turning a disinformation crisis into a win for digital literacy.
  • How to stay hopeful and resilient in the age of AI-fueled falsehoods.

Have a question? Ask Dr. C.

Thank you for listening! We'd love to hear from you!

Dr C:

And welcome back, Dr. Washington, to the LIFE Podcast. We're so happy to have you back. Your episode was one of the ones I received the most feedback and questions from, and so you're back today. And thank you, because today we're gonna be talking about AI and well-being. Last time we talked about misinformation, disinformation, and malinformation; today we get to talk about AI and well-being. So let's jump right into it. Let's start with our first pillar of learning. The term AI is everywhere, but many of us feel overwhelmed. So, from a wellness perspective, how is AI actively creating and amplifying misinformation through things like deepfakes, chatbots, or algorithmic biases? Can you tell us a little bit about that?

Dr Washington:

Yeah, so I'd like to try to widen the aperture just a little bit before we get that deep. You know, what we're calling artificial intelligence is really a large language model. It's machine learning and things like that, and we've termed it artificial intelligence. In my research, I try to understand that definition of what artificial intelligence is, because when we say that, it implies a meaning onto artificial intelligence that it may not have. So the first thing is that when you're looking at things like deepfakes, chatbots, and algorithmic biases, those can all be mitigated by the human. We have the agency to read, to validate, and to look across different platforms to make sure that how we're interacting with the artificial intelligence is meeting our instinct. I call that epistemic responsibility. The responsibility for what kind of information we get from artificial intelligence, and for how we interact with it, always stays with us.

Dr C:

So that sense of distrust is so corrosive to our well-being, and it makes us feel isolated. That brings us perfectly into the pillar of Inspire. In the face of this, could you share a community story showing how people turned an AI disinformation situation, a crisis, into an opportunity for digital literacy?

Dr Washington:

Yeah, absolutely. There are so many examples out there, Wanda. If you just watch the news right now, you can see all of those different examples in play, especially if you're practicing some form of media literacy or critical media literacy and trying to understand why certain images, certain videos, or certain things like that are being perpetuated out there. A great, specific example would be around public speaking and politics. There have been several videos that have been created and pushed out as being real. You have the one where the Pope delivers a message; you've had ones where even Trump and Obama have said things, and they're not correct. But what I like in media right now, and I know we give media a hard time, is that a lot of the outlets are bringing those things to light and saying, this is incorrect, this is how you know it's incorrect, and this is what you should do about it. That is important because media literacy has two sides to it, and digital literacy has two sides to it. You want a protective side so that you can discern when something is fake, but you also want a creative side where you can display that and make sure that what you're putting out in the world is factual, or that you're debunking.

Dr C:

Yeah, yeah. So turning a moment of fear into a moment of empowerment by really gaining or enhancing your skills and your knowledge when it comes to artificial intelligence.

Dr Washington:

Exactly.

Dr C:

I love that. I love that. And it shows that we have agency, right? So let's give our listeners that agency and move to our third pillar, flourish. What are some actionable tools or daily habits that can build our AI-aware information immune system?

Dr Washington:

The first piece would be to experiment with artificial intelligence, but that activity has to be very self-reflective. If you're using artificial intelligence to produce something or to put something out in the world, you need to understand what you're using it for and what you're trying to do with it, or what you're trying to influence. And then you've got to, of course, ask yourself: is what I'm putting out there factual? If not, then why am I putting it out there? And if I'm putting it out there as a joke, then I need to make sure I'm letting people know it's a joke. Or if I'm using an image: just recently I shared an image, and I knew certain pieces of the image were incorrect, so I corrected that in the body of my post just to make sure everyone knew. The biggest piece, though, Wanda, is playing with the technology. You have Perplexity, you have Claude, you have the OpenAI versions, you have Google Gemini. It's playing with the technology, learning the technology, but also understanding how powerful the technology could be and how to use it ethically. I wrote a paper with my brother that talks about artificial intelligence stewardship, and I'll share that with you so you can share it with your readers. We wanted to reframe our understanding of digital technology usage around a stewardship model instead of a disruption model.

Dr C:

Wow. That's interesting when you say play with the models, right? Get to know them, get to know what they can do. Just recently, I believe last night, I received an AI prompt that we've been using through our sorority. I don't know if you saw the pictures that I put up this morning, but I said, it's so scary. I literally uploaded a couple pictures of myself and told it I wanted to be in this nice blue flowing dress. I knew what was going to happen because I saw my other sorority sisters do it, and I was like, oh, it's pretty cool. It's kind of all over social media right now. But when I saw the picture pop up, I was like, uh oh. I felt scared, then excited, then scared again, right? Because I'm like, wow. If you zoom in, you can see certain features aren't right, but you have to zoom in and take time to really look at the picture. And it made me think. Then everybody on Facebook was saying, oh my god, it's so gorgeous. I'm like, I told you all in the comment that it's an artificial picture. And everybody's like, oh, you're gorgeous. But it's artificial intelligence, right? So I'm like, I don't know how to take this. Are you telling me I look beautiful because AI stepped in, or am I beautiful without AI? That's the funny part, but I definitely felt a sense of hesitation. Like, I need to start looking at pictures a little more closely, not just scrolling and thinking that everything I'm seeing is real. And it brought me back to when I was a kid, hearing, believe none of what you hear and only half of what you see. So I feel like that lesson is something that I am now continuously telling myself again.

Dr Washington:

Yeah, you have to do that. You have to continue to learn; you have to continue to wrestle with these ideas. This probably could be a separate podcast, but just to mention: in 2023, I published my book, Simulated Realities, and my goal with that book was to start talking about what you're talking about right now, creating these realities of ourselves that aren't based in anything real, what that does, and how it influences our behavior. There's this symbiotic, reactionary thing happening in our culture. I pull on Jean Baudrillard's ideas of simulation and simulacra in the book. And I rewrote the book, but I haven't republished it. The reason I haven't republished it is that I keep adding stuff to it, because things are moving at such a fast rate. They just released Sora 2, which I touch on in my book in 2023, before the video technology was in full swing. And I couldn't have imagined in 2023 the realness of these videos. It's getting gravity right, it's getting water right, it's getting human movements right. You can still see some places where the technology doesn't quite understand how reality works, but it's getting closer, and pretty soon it really will be indistinguishable from reality. And that implies simulacra: making an imitation of something that doesn't exist. That's a huge thing for our society to wrestle with. And it's just funny that Baudrillard was talking about this in the '80s.

Dr C:

That's insane. So much, so much. And it's funny, because I was listening to a speaker, an AI guru, and he mentioned, while I'm here educating you all on the latest in AI, I'm falling behind on the latest in AI. It took me a minute, like, wait, what? But then he explained that it's moving so fast that literally what you knew yesterday is almost obsolete today. It's like you're retraining every day because it's moving so quickly. So thank you for talking about that, because one of the things we don't realize is that we need to keep tweaking our strategies as the technology grows, right? There'll be new techniques to figure out what's real and what's not. So we'll have to bring you back every six months so we can get the latest in AI. Yes, it is moving that fast. Well, we're gonna take a very short break. When we come back, we'll talk about how to stay resilient and hopeful, so stay with us. Welcome back. I'm with Dr. Jerry Washington on the LIFE Podcast. We've learned about the problem and discussed actionable habits; now for our final pillar, Evolve. Dr. Washington, with this constant flood of AI falsehoods, how can we stay resilient and hopeful as individuals and in our communities?

Dr Washington:

This is gonna sound cliche, but: believe in humanity. There are some really great authors who have tried to frame what's going on and to give people a way of thinking about it, and those thinking tools can help. We know from history, you have the printing press, you have all of these new technologies, and when they came about, people said it was going to be the end of society. Think about the story of the Luddites, right? The Luddites did not want this new technology that was going to take the jobs. But we adapt; we continuously adapt. And adaptation comes with time. So adaptation plus time, plus learning. And you and I both know learning is about change, right? Change in some type of behavior. So we just need to continue to learn and understand that this technology isn't deterministic. It's not doing it on its own; we are creating it, we are guiding it. And just like any other social tool, because that's what it is, a social tool, something we're using to mediate between each other, that means we can shape it socially. That's why I write about it, that's why I continuously try to learn about it: so that I can shape how this tool moves and what it's becoming. You just have to be involved with that. Now, not everyone can do that, right? There's typically that either-or framing, and I don't know why we do that. My brother says we do it because we have two arms, two legs, two eyes; we only see things in binaries. But we have to rely on some people, right? The experts out there who are pulling this apart and trying to understand it in different ways. Support those folks, go to their workshops, read their books, do all of those things so that we can all stay abreast, because knowledge is a social construction. And as long as we continue building knowledge, we will continue to thrive as humanity.

Dr C:

Well, I appreciate that. I definitely wasn't expecting "don't lose hope in humanity," right? But I love that, because if we're not relying on each other, we will have bigger problems, as we sometimes see when we turn on each other instead of working together. So before we close, I just want to thank you again for coming back. And like I said, in a couple months we're gonna have to have you back so we can get our AI update from the pro. But I just wanted to recap the key takeaways for our listeners. Under the Learn pillar, we discovered that AI acts as an amplifier of information, whether mis-, dis-, or malinformation, through tools like deepfakes and algorithms, directly impacting our mental well-being. Then we got inspired by hearing stories of transforming disinformation into a learning opportunity, giving communities a chance to turn a crisis into something empowering and to really think about digital literacy and connection. Under Flourish, you gave us some great practical habits to keep up. I know even in our first session you talked about the three-second pause, checking our sources, and diversifying our information diet. Now, with artificial intelligence, we need to stay focused on continuing to debunk and not just taking things at face value, building that muscle. I kind of compare it to working out, right? Every day you're lifting more and more, working that muscle, developing that skill. That's the same way I see this. And finally, in Evolve, you said, do not lose hope in humanity.
So, how can we be active agents of clarity? By making sure we're doing our own work, but also finding hope and resilience in our community and in our own mindful actions. So, did I get that right?

Dr Washington:

You got it right. You said it better than I did. Yes.

Dr C:

Is there anything that you wanted to add that I may have missed?

Dr Washington:

You know, again, I think it's worth driving home that this technology is not deterministic; it's not on its own trajectory. We have agency, just like we see in other social movements. We can shape where this technology goes, and if we don't want it to do certain things, then we need to hold people accountable.

Dr C:

Safeguards.

Dr Washington:

Yep.

Dr C:

Yeah, yeah. Well, thank you so much, Dr. Washington. I appreciate your time, your knowledge, and really spending time with us to inform and educate us. I wasn't joking; we'll definitely have you back, because I'm sure three months from now there will be more AI things to talk about, right? Thank you. All right, have a great one, and we look forward to having you back.

Dr Washington:

Thank you.

Dr C:

So, to everyone listening: the digital world does not have to be a source of stress. With intention and awareness, you can navigate it with confidence and peace, stress-free. Until next time, keep on learning, stay inspired, continue to flourish, and never ever stop evolving. I'm your host, Dr. C, and this is the LIFE Podcast.