Valley High's administrator told members Oct. 3 that she has been using generative AI tools to produce "informed risk" documents for new admissions and for residents who experience significant changes, and said the documents can lay out a resident's combination of diagnoses and the associated risks in a form that can be shared with families.
"I took his diagnosis off the face sheet, dropped it into AI, asked the right questions ... and I have now a 5 page informed risk document that I could share with my family," the administrator said, describing a pilot process built from diagnosis codes rather than names or other direct identifiers.
The administrator said the tools included ChatGPT and other vendor products under evaluation. She and members raised data-security and privacy questions, and Tom, a staff member, and others reiterated that the organization's AI policy prohibits entering personally identifiable information. Alicia Shuler was quoted in the discussion citing the policy: "Our AI policy ... specifically says no PII in the AI," according to the transcript.
Administrators described a privacy workaround: copying only diagnosis codes from a resident's record and asking the AI for a risk summary, rather than uploading full records. The administrator said the approach is still experimental and that the facility is exploring Microsoft government licensing, siloing options and other vendor controls for HIPAA-sensitive workflows.

She also cautioned that commercial devices such as cleaning robots can capture building maps and upload images to cloud services; staff recommended verifying where third-party vendors store data before adopting new devices.
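As a rough illustration of the codes-only workaround administrators described, here is a minimal Python sketch of the data-minimization step. The facility's actual tooling was not specified; the function names, the validation rule and the example codes below are hypothetical, and the step that sends the finished prompt to an AI service is omitted.

```python
import re

# Hypothetical sketch of the "codes only" pattern described at the meeting:
# only bare diagnosis codes leave the record system, never names, dates of
# birth or other direct identifiers.

# Rough shape of an ICD-10 code (letter, two characters, optional decimal
# extension). Anything that doesn't match is refused.
CODE_PATTERN = re.compile(r"^[A-Z][0-9][0-9A-Z](\.[0-9A-Z]{1,4})?$")

def build_risk_prompt(diagnosis_codes):
    """Assemble an AI prompt from diagnosis codes alone, rejecting any
    input that does not look like a bare diagnosis code."""
    for code in diagnosis_codes:
        if not CODE_PATTERN.match(code):
            raise ValueError(f"Rejected {code!r}: not a bare diagnosis code")
    return (
        "A resident has the following diagnosis combination: "
        + ", ".join(diagnosis_codes)
        + ". Summarize, for a family audience, the risks commonly "
        "associated with this combination and how they may interact."
    )

if __name__ == "__main__":
    # Example codes for a hypothetical resident: unspecified dementia,
    # type 2 diabetes, heart failure.
    print(build_risk_prompt(["F03.90", "E11.9", "I50.9"]))
```

The point of the pattern is that the check runs before anything is sent anywhere: a name, a birth date or a pasted face-sheet line fails the format test and never reaches the AI, which is the behavior the organization's no-PII policy appears to require.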
No formal action on AI policy or new procurement was taken at the meeting; administrators said they would continue to follow the existing AI policy and coordinate with IT and security staff when piloting tools.