- cross-posted to:
- technology
- [email protected]
I haven’t read through the entire thing yet, but so far it looks very thorough and well-cited. It’s aimed at healthcare decision makers in Canada, but it’s pretty accessible, and there are sections that might be helpful to others too.
For those unfamiliar:
Canada Health Infoway is an independent, federally funded, not-for-profit organization tasked with accelerating the adoption of digital health solutions, such as electronic health records, across Canada. (Wikipedia)
My personal opinion is that they’re pretty good at it.
Here is the description from the site:
The goals of the Toolkit for Implementers of Artificial Intelligence in Health Care are to assist health care organizations across Canada in understanding what is required to embark on an AI implementation journey, to provide specific tools to begin this journey and to ensure that organizations are prepared to adopt AI in a safe way that minimizes risk for all stakeholders.
The toolkit is divided into six modules that provide:
- Checklists to ensure that organizations can more effectively plan their activities
- Best practices, tips and recommendations related to responsible innovation
- Case studies demonstrating real-world Canadian examples of successfully implemented AI solutions
- Comprehensive footnotes and bibliographic links for further reading about AI and health care
I don’t want any of my healthcare information handled by AI. I don’t think we understand the technology well enough yet to consider it safe, private, and secure.
I agree, and I think that’s part of it. From what I can tell, this toolkit covers all “AI” used in healthcare, including some “AI” that we have been using for quite a while. The three it lists near the beginning are robotics (assisted surgery), computer vision (diagnosis and screening), and natural language processing. The existing “AI” uses, which I’d prefer to just call algorithms, make sense for the most part.
My worry is that some institutions will rush to set up NLP chatbots in client-facing spaces. There should already be guidelines against HCPs entering personal medical information into NLP tools, but I think it’s still a concern if patients need to input it themselves when being triaged. Until hospitals and health departments are self-hosting the LLMs that make that NLP possible, we should not be using them.
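To make that concrete, here’s a minimal sketch of what I mean by self-hosting: the triage front end talks only to a model served inside the hospital’s own network, so patient text never reaches a third-party AI vendor. The endpoint URL, model name, and response format here are all hypothetical, just for illustration:

```python
import requests

# Hypothetical endpoint for a model hosted entirely inside the
# hospital's own network. Because this hostname resolves to an
# on-premises server, patient input never leaves the premises.
LOCAL_LLM_URL = "http://llm.internal.hospital.example:8080/api/generate"


def summarize_intake(patient_text: str) -> str:
    """Send triage intake text to the self-hosted model and return its reply."""
    response = requests.post(
        LOCAL_LLM_URL,
        json={
            "model": "triage-assistant",  # hypothetical local model name
            "prompt": patient_text,
        },
        timeout=30,
    )
    response.raise_for_status()
    # Response shape is assumed; a real deployment would match
    # whatever API the local inference server actually exposes.
    return response.json()["response"]
```

The point isn’t the specific code; it’s that the only network hop is to infrastructure the institution controls, which is a very different privacy posture from shipping intake text to an external chatbot service.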