CAN-BIND Podcast

Episode 13: Artificial Intelligence and Mental Health: The Future of Care? (Part 2)


Released: August 7, 2025

Podcast Transcript

Dr. Kuhathasan: Hello, and welcome back to the CAN-BIND Podcast. I’m Nirushi Kuhathasan, and you’re listening to part 2 of our special two-part series on artificial intelligence and mental health. If you haven’t already, I recommend listening to part 1, where we explored the fundamentals of AI and its use in research and clinical care.

In this episode, we continue the conversation, diving into how AI is being used within the CAN-BIND program, and some of the ethical questions that come with it. Happy listening!  


Dr. Parikh: Hello everyone, and welcome to the CAN-BIND podcast. We’re fortunate today to have Dr. Frank Rudzicz, an associate professor at Dalhousie University who holds the distinction of two separate chairs, both in the Faculty of Computer Science at Dalhousie. He’s a member of the CAN-BIND research network and really our leader in understanding how AI can be used both for medical research and, increasingly, for clinical care. Frank, welcome. 

Dr. Rudzicz: Thank you. It’s a pleasure to talk to you.

Dr. Parikh: So you’ve given us a really good background of, you know, what AI is, and some of the potential applications in healthcare. Tell us a little bit about your involvement with CAN-BIND, how that came about, and what sort of AI questions you’ve been engaged in through CAN-BIND.

Dr. Rudzicz: Yeah, thanks. I got involved with CAN-BIND once I moved to Dalhousie. I was previously in another university in Ontario, and I moved here. I started trying to make connections with people who were working in data science or AI in healthcare. I met Dr. Rudolf Uher, a psychiatrist at Dalhousie and Nova Scotia Health, and he was one of the leads of this data science team within CAN-BIND. So the data science team focuses to a large extent on producing algorithmic tools that people in the network can use to analyze their data. That includes machine learning and AI, but also other statistical methods of data analysis. So when I joined the team, there was already a lot of data collected of clinicians talking to their patients. And I mean, it’s actually a really unique and valuable data set that is very rich in content and relatively large. Data of this type is very hard to come by so it’s really fantastic that CAN-BIND exists and has already collected a lot of this data. Given this data, there’s a lot of different tasks we can perform on it. The main one, of course, was just trying to follow the same technique I mentioned earlier with regards to using AI to look for symptoms of outcomes that you care about. So in this case, looking for symptoms in the voice that might indicate worsening issues of depression. That’s the main task. There’s subtasks that are more technical that we still have to solve. So when you have multiple people talking at the same time, kind of segmenting or breaking up the conversation into regions where person A is talking versus person B is talking… that remains a bit of a challenging task. So some of my students are working on that, and that can be used across a lot of different use cases. And then finally one of the students in CAN-BIND is working on using large language models to automatically summarize these conversations and extract information that will be clinically useful. So what that means is, you know, you might have a conversation that can be very long but only a small kind of constellation of utterances within the conversation will be useful to the clinician to come up with an assessment or a plan. So one student is working on summarizing and trying to figure out how we can use large language models to sort of integrate into the workflow in a way that’s maximally efficient.
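What follows is a minimal, purely illustrative sketch of the kind of extraction step Dr. Rudzicz describes: scoring each utterance in a clinician–patient transcript for clinically relevant terms and keeping only the highest-scoring ones. The keyword list, scoring rule, and example transcript are assumptions invented for this sketch; they are not the methods or data used by the CAN-BIND team.

```python
# Toy illustration of pulling clinically relevant utterances out of a long
# transcript. The keyword list and scoring rule are illustrative assumptions,
# not the approach actually used by the CAN-BIND data science team.

CLINICAL_KEYWORDS = {"sleep", "appetite", "mood", "energy", "medication",
                     "suicidal", "anxiety", "concentration"}

def score_utterance(utterance: str) -> int:
    """Count how many clinically relevant keywords an utterance contains."""
    words = {w.strip(".,?!").lower() for w in utterance.split()}
    return len(words & CLINICAL_KEYWORDS)

def extract_relevant(transcript: list[tuple[str, str]], top_k: int = 3) -> list[tuple[str, str]]:
    """Return the top_k (speaker, utterance) pairs by keyword score."""
    ranked = sorted(transcript, key=lambda turn: score_utterance(turn[1]), reverse=True)
    return ranked[:top_k]

if __name__ == "__main__":
    transcript = [
        ("Clinician", "How have you been sleeping lately?"),
        ("Patient", "My sleep is poor and my appetite has dropped."),
        ("Patient", "The weather has been nice, though."),
        ("Clinician", "Has the new medication affected your energy or mood?"),
    ]
    for speaker, utterance in extract_relevant(transcript, top_k=2):
        print(f"{speaker}: {utterance}")
```

In practice a large language model would do the summarization itself; a simple filter like this one just illustrates the idea of surfacing the small constellation of utterances that matters clinically.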

Dr. Parikh: So with your work with CAN-BIND, would you say it’s mostly still at the level of research or would you say that it’s trying to develop clinical applications?

Dr. Rudzicz: Yeah, I mean, we’re still very much at the level of research. And there’s like a range here also. So some of us are working on some of the more fundamental just tool building exercises. So it’s very kind of low level. But I think the desire is to produce these tools that are going to be useful soon and safely for clinical practice. We don’t want to rush tools out, you know, into practice before they’re ready, but the goal is always to translate this technology into something that will be helpful for society. So we’re all very much motivated with that end goal in mind. And I think a network like CAN-BIND is really great because it can combine people who are working on the tools, the technological people like myself and my students, and then the clinical people who have the domain expertise and the real world need for these tools. That’s a really great combination.

Dr. Parikh: And are there connections to industry or other entities which might transform a validated tool, but might transform it into something that is, you know, used across the country or across the world?

Dr. Rudzicz: That’s the hope. So I’ve been involved in a lot of entrepreneurship endeavours in the past. One of my former students and one of my former postdocs founded a company that developed AI scribes. There’s a company called Mutuo, which was recently acquired, that produced an AI scribe called Autoscribe. So we have a lot of experience in this area, and I think entrepreneurship is one of the best ways to kind of take validated research out of the lab and try to convert it into real-world applications. And yeah, indeed, we’re looking forward to trying to do that again.

Dr. Parikh: You know, we’ve really been talking about all the advantages of AI. But as with anything that humans do, there’s a good side and there’s a bad side. One of the simplest things I wonder about is the environmental implications of AI in terms of power consumption. You know, we’re told to turn off light bulbs and things like that, but I understand that AI involves a lot of computing, and there are massive server farms where hundreds of computers are linked together; these are the data processing centres, so that when we ask an AI a question, presumably it goes to one of these places. I mean, if we ask a question of AI, how much power are we using?

Dr. Rudzicz: An awful lot. So I think it’s important to differentiate between the training phase of these AIs and then what we call inference, or the live use of these tools. The training is by far the more power-hungry phase of the machine learning process; that’s where you’re constructing these models by reading data over and over and over again. A single query to ChatGPT, I mean it’s hard to measure these things exactly, but it’s been said that it consumes about as much electricity as leaving one traditional light bulb on for 20 minutes, and it produces on the order of, I think, one gram of CO2 per call. And this is in contrast with a single Google call, which is about 10 times less. A single query to Google is like leaving a lightbulb on for two minutes or something like that. So it doesn’t sound like too much to have a lightbulb on for 20 minutes, but if you’re continuously chatting with ChatGPT and trying to get an answer in a form you actually want, those 20-minute chunks start adding up, and then you multiply that by the vast number of people who are using these tools… it’s a big concern. I think the big, kind of, motivation in AI research is scaling. So everyone wants to do more and scale things exponentially. So add more data and more compute… and unfortunately, that’s just not going to be feasible any longer. So I think going forward, a lot of people in the AI community are going to have to try to pare things down and build much smaller models. We see some examples where this might be possible. Listeners might have heard of DeepSeek, this kind of Chinese alternative to ChatGPT, which has been said to consume, you know, orders of magnitude less energy to train and to run. It’s hard to tell if that’s actually the case or not, but I think the way forward for healthcare AI in particular is really fine-tuned, specific models for specific tasks. If we’re trying to decide whether or not a patient should be admitted to the ICU when they present at the ER, you don’t need a model that can also write poetry or screenplays or compose pleasant response emails or anything like that. You just need it to do one thing. Models that are very performant on very specific tasks can, you know, be efficient enough to run on your watch. We don’t need everything to use these massive models. So I think going forward, we need to kind of have an ecosystem of a variety of models as opposed to these massive monolithic models that are not very sustainable.
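As a rough illustration of the arithmetic behind these figures, here is a small back-of-the-envelope calculation using the numbers quoted in the conversation: one chat query roughly equal to a traditional bulb on for 20 minutes, and a search engine query about ten times less. The 60-watt bulb rating and the daily query volume are assumptions chosen only to show the scale, not measured values.

```python
# Back-of-the-envelope arithmetic using the rough figures quoted above.
# The 60 W bulb and the query volume below are illustrative assumptions.

BULB_WATTS = 60                                   # a "traditional" incandescent bulb (assumption)
CHAT_QUERY_KWH = BULB_WATTS * (20 / 60) / 1000    # 20 bulb-minutes expressed in kWh
SEARCH_QUERY_KWH = CHAT_QUERY_KWH / 10            # roughly 10x less, per the figure above

def daily_energy_kwh(queries_per_day: float, kwh_per_query: float) -> float:
    """Total energy for a given daily query volume, in kilowatt-hours."""
    return queries_per_day * kwh_per_query

if __name__ == "__main__":
    print(f"One chat query   : {CHAT_QUERY_KWH:.3f} kWh")    # about 0.020 kWh
    print(f"One search query : {SEARCH_QUERY_KWH:.4f} kWh")  # about 0.0020 kWh
    # A hypothetical 100 million chat queries per day:
    print(f"100M chat queries/day: {daily_energy_kwh(1e8, CHAT_QUERY_KWH):,.0f} kWh")
```

Per query the number is small, but at a hypothetical hundred million queries per day it works out to millions of kilowatt-hours, which is the scaling concern Dr. Rudzicz describes.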

Dr. Parikh: So power consumption is one concern. What about the more direct concerns, you know, the ethics of AI, the potential for manipulation or misuse of AI? And even a very simple thing, because we’ve talked about AI and healthcare: I mean, it’s one thing to use people as scribes or AI as scribes, but will it de-skill our clinicians so they won’t know what to do because they’re relying on AI to figure it out for them?

Dr. Rudzicz: Mhm. I think there’s two parts to that question. So the first one has to do with like ethical concerns and that’s a big question. That would take multiple podcast episodes to kind of even scratch the surface of.

I just want to mention quickly that there is a large community of researchers within AI who are aware that these models can cause harm even when they’re being used for aims which we consider to be good… like we want to give patients the best possible outcomes. If you try to optimize that kind of metric on average, you can end up disadvantaging certain groups, and there’s evidence, for example, that AI can exacerbate already existing biases in healthcare against people of colour, against women, against young people, and so on. But there’s a community within AI who, realizing this, are developing techniques to identify such biases in the first place, and then remove those biases from these models or reduce the harms they cause.
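To make the idea of identifying bias a little more concrete, here is a minimal sketch of one common check: comparing the rate at which a model recommends a favourable outcome across demographic groups. The data, group labels, and metric below are invented for illustration; real fairness audits in healthcare use richer methods, and this is not a description of the specific techniques used within CAN-BIND.

```python
# Minimal sketch of one common bias-identification idea: compare a model's
# rate of favourable predictions across demographic groups. The data below
# are made up; real audits also use equalized odds, within-group calibration, etc.

from collections import defaultdict

def selection_rates(predictions, groups):
    """Fraction of favourable (1) predictions within each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(predictions, groups):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values()), rates

if __name__ == "__main__":
    # 1 = model recommends the favourable outcome (e.g. referral to care); made-up data
    preds  = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
    gap, rates = demographic_parity_gap(preds, groups)
    print(rates)                                   # {'A': 0.8, 'B': 0.4}
    print(f"Demographic parity gap: {gap:.2f}")    # 0.40, large enough to warrant auditing
```

A large gap like this doesn’t prove the model is unfair on its own, but it flags where a deeper look, and possibly a mitigation step, is needed.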

So that’s a big concern, and we need to continuously work on it and make people aware that these techniques exist, and demand that we apply methods of fairness to deployed models, in particular in healthcare or any other domain where personal information or personal data is potentially at risk. So I think that’s very important. It’s important to know that this is a challenging area, but there are people working on it and there are solutions. Can you remind me what the second question was again?

Dr. Parikh: Will this end up de-skilling our healthcare professionals because they’re not used to thinking, basically, because AI is thinking for them?

Dr. Rudzicz: So the second part of that question had to do with the loss of skills, and this is also a big concern. I think it might be instructive to think about the introduction of calculators into the teaching of math. So when calculators started to show up in elementary schools and high schools, there was also this concern that kids wouldn’t learn how to do math anymore because they’d just enter everything into these calculators and wouldn’t learn how to multiply. They’d end up de-skilled or lose their abilities. What’s happened is not exactly that.

So what has happened is that even though students may be less efficient at performing large multiplications in their heads, they tend to tackle more challenging problems in math, more fundamental theoretical issues related to more advanced math. And the same might be true with the use of AI. Automating away the more mundane aspects of work is an area where AI excels. You know, it might be the case that using AI scribes, for example, means that doctors are less likely to learn how to wrestle with EMRs effectively on their own, but that frees up their time and their attention to talk to the patient, spend more time with the patient, and think more deeply about the underlying issues the patient might have, as opposed to just struggling with a computer interface.

I think it’s not a matter of like turning a big switch in terms of overall skill. It’s just that doctors will focus on different things and hopefully they’ll be able to focus on things that matter more to them and to their patients.

Dr. Parikh: You mentioned earlier about biases that can be introduced through AI where certain biases that already exist can get amplified as the systems learn about, you know, the real world and how it discriminates and so on. But there’s a related situation we often hear about and that’s hallucinations in AI. So can you just tell us what is a hallucination in AI and what are the risks or what’s being done to address that?

Dr. Rudzicz: Hallucinations are another example where there’s no kind of clear-cut definition of what they actually are, but in general they refer to instances in generative AI, so AI that’s producing responses like ChatGPT, of claims or references to things that just don’t really exist. So I’ve experienced this in terms of trying to use ChatGPT as an alternative to looking for recent academic papers, and ChatGPT will give you a response that looks on its surface very much like a nice academic response with actual links to things that look like papers. There’s author lists, and there’s years, and there’s journal names, and all that kind of stuff, but then when you click on the link, it just doesn’t exist, right? So that’s a hallucination. The model thinks that there’s a link that exists, but when you go and check it out, it just doesn’t exist. Hallucinations can refer to facts that aren’t actually facts, or claims that don’t have any rationale behind them. That usually occurs when you’re looking for information for which there’s not as much precedent. The model thinks that there should be an answer for something, it doesn’t find one in its own internal representations, so it’ll just kind of make things up without realizing it’s making things up.

Again, there are mitigations against that kind of behaviour by these models as well. People might’ve heard of retrieval-augmented generation, or RAG. This is a special add-on on top of modern generative AI, where you kind of force the model to pay attention to real sources of information, like real documents or real knowledge in a database. So when it’s producing its responses, it has to pull from real sources and give them to you, as opposed to sort of making them up word by word or character by character. And that hasn’t entirely solved the problem of hallucination, but it’s definitely helped.
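As an illustration of the retrieval-augmented generation idea, here is a toy sketch: documents are ranked by simple word overlap with the question, and the best matches are placed into the prompt so the model is asked to answer from real sources rather than from memory alone. The word-overlap retriever, example documents, and prompt format are assumptions made for this sketch; production RAG systems typically use learned embeddings, a vector database, and a real model API for the final generation step.

```python
# Toy sketch of retrieval-augmented generation (RAG). Documents are ranked by
# word overlap with the query and the best matches are prepended to the prompt,
# so the model is asked to answer from real sources. Everything here is a
# placeholder for illustration, not a production retriever.

def tokens(text: str) -> set[str]:
    """Lowercase words with surrounding punctuation stripped."""
    return {w.strip(".,?!").lower() for w in text.split()}

def overlap_score(query: str, doc: str) -> int:
    """Count words shared between the query and a document (toy retriever)."""
    return len(tokens(query) & tokens(doc))

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Return the top_k documents most similar to the query."""
    return sorted(documents, key=lambda d: overlap_score(query, d), reverse=True)[:top_k]

def build_rag_prompt(query: str, documents: list[str]) -> str:
    """Assemble a prompt that asks the model to answer only from retrieved sources."""
    sources = retrieve(query, documents)
    context = "\n".join(f"[{i + 1}] {doc}" for i, doc in enumerate(sources))
    return ("Answer using only the sources below and cite them by number.\n"
            f"{context}\n\nQuestion: {query}\nAnswer:")

if __name__ == "__main__":
    docs = [
        "Sleep disturbance is a common early symptom of depression relapse.",
        "The clinic cafeteria is open from 8 am to 4 pm on weekdays.",
        "Changes in speech rate may correlate with depressive symptoms.",
    ]
    prompt = build_rag_prompt("What speech changes relate to depression?", docs)
    print(prompt)  # this prompt would then be sent to a generative model of your choice
```

The point of the design is the constraint in the prompt: by handing the model specific passages and asking it to cite them, you reduce the room it has to invent sources word by word.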

Dr. Parikh: So as we’re coming to a close, you know, I’d like you to reflect on maybe two things: the good and the worrisome. So what is the greatest good that you see emerging from AI over the next few years?

Dr. Rudzicz: I think there are many goods, and the main thing is to try to improve systems that are either broken or at risk of breaking. So across lots of countries, healthcare systems are at a breaking point, and a lot of the people who work in healthcare are overworked, they’re burning out, and there are lots of challenges with regards to getting patients the care that they need. So across a variety of these cases in clinical care, I think AI can provide support, like I mentioned earlier with the AI scribes, by saving time and making these processes a lot smoother. And, I mean, there’s no silver bullet. There’s no single AI tool that’s going to solve everything at once, but if we can carefully and judiciously apply AI in the proper places, overall we’ll be able to kind of support and maintain and improve healthcare in particular.

There are risks. I think… I mean, we talked a little bit about risks of bias and I’m very concerned about that. But again, people are working on it, and I think being aware of it and being able to identify bias will go a long way to help. We’ve touched on issues like loss of skills, kind of dehumanizing some of these processes, I guess. And I’m not too concerned about that either. I think AI will mostly automate things that we want to have automated and free up humans to do what humans still do best. So I think that’s also an area where I’m very positive or bullish about AI.

One area that I’m a bit more concerned about is issues related to access and equity and kind of who is providing these AI tools in general. This is a bit more of a personal opinion than one that is entirely validated through experiment, but I’m concerned that, you know, if we end up having monopolies on AI, it will drastically change who has access and who can pay for access to some of these tools. If there are only one or two companies providing AI tools for hospitals, then they can set the price, and they can decide what we’re allowed to do with the data and the systems themselves. This won’t result in a system that I think is maximally fair or accessible. I think what we need is an ecosystem of smaller companies that are each focused on different small tasks, and for local hospital networks and clinics to kind of maintain access to their own data and to protect their own data without having to ship it off to these massive data farms you mentioned earlier. So I’m concerned about monopolies taking over everything, and I think what we need is a more vibrant ecosystem for AI.

Dr. Parikh: Well, you’ve told us the good and the bad about AI. I want to just close with a final reflection. You know, the average citizen, how should they be using AI? Should they be using it to plan their summer vacation? Should they be getting AI to do their tax returns? Or should the average academic be using AI to write papers so they get tenure? What should the average person be using AI for? 

Dr. Rudzicz: I can give you my personal approach to it. I use AI usually for low risk, like low impact, sort of activities. You mentioned vacations… I literally used ChatGPT once to help me figure out what I was going to do for a long weekend. It identified a provincial park not far from me and said, ‘You should go here. It’s beautiful this time of year’. I said, ‘fine, I’ll check it out’ and it was a fantastic vacation. It was a wonderful hike. So I use AI for those kinds of fun sort of exercises. I do not use AI for any kind of academic work. And I think that is where we should kind of draw the line. If you’re trying to do something that involves skills that you kind of want to maintain, then yeah indeed, touching on what we were talking about earlier, it’s important to maintain that boundary, right? I don’t mind asking AI to help me find papers. A lot of my students for whom English is their second language might use it to help fix up some grammar. I think that’s an appropriate use also. But like the deep stuff, like the use of our own thoughts and our own outputs that we want to claim as our own, we have to make sure that those are still ours and not the outputs of ChatGPT.

Dr. Parikh: Professor Rudzicz, thank you so much for really enlightening us on the good, the bad, maybe even the ugly about AI. Thank you very much. 

Dr. Rudzicz: Thank you.