No, ChatGPT hasn’t added a ban on giving legal and health advice


OpenAI says ChatGPT’s behaviour “remains unchanged” after reports circulating on social media falsely claimed that new updates to its usage policy prevent the chatbot from offering legal and medical advice. Karan Singhal, OpenAI’s head of health AI, writes on X that the claims are “not true.”

“ChatGPT has never been a substitute for professional advice, but it will continue to be a great resource to help people understand legal and health information,” Singhal says, replying to a now-deleted post from the betting platform Kalshi that had claimed “JUST IN: ChatGPT will no longer provide health or legal advice.”

According to Singhal, the inclusion of policies surrounding legal and medical advice “is not a new change to our terms.”

The new policy update on October 29th has a list of things you can’t use ChatGPT for, and one of them is “provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”

That remains similar to OpenAI’s previous ChatGPT usage policy, which said users shouldn’t perform activities that “may significantly impair the safety, wellbeing, or rights of others,” including “providing tailored legal, medical/health, or financial advice without review by a qualified professional and disclosure of the use of AI assistance and its potential limitations.”

OpenAI previously had three separate policies, including a “universal” one, as well as ones for ChatGPT and API usage. With the new update, the company has one unified list of rules that its changelog says “reflect a universal set of policies across OpenAI products and services,” but the rules are still the same.
