Earlier this year, Futurism reported that OpenAI had launched a health-focused version of ChatGPT that can ingest full medical records. When accessed, it issues an explicit warning that it shouldn't be used for diagnosis or treatment.
This raises some interesting questions for futures facilitators. For example, what assumptions about trust, authority, and responsibility are being normalised here? And who is expected to navigate the ambiguity when things go wrong?