Last September, all eyes were on Senate Bill 1047 as it made its way to California Governor Gavin Newsom's desk - and died there as he vetoed the buzzy piece of legislation.
SB 1047 would have required makers of all large AI models, particularly those that cost $100 million or more to train, to test them for specific dangers. AI industry whistleblowers weren't happy about the veto, and most large tech companies were. But the story didn't end there. Newsom, who had felt the legislation was too stringent and one-size-fits-all, tasked a group of leading AI researchers with helping to propose an alternative plan - one that would support both the development and the governance of generative AI in California, along with guardrails for its risks.
On Tuesday, that report was published.
The authors of the 52-page "California Report on Frontier Policy" said that AI capabilities - including models' chain-of-thought "reasoning" abilities - have "rapidly improved" since Newsom's decision to veto SB 1047. Drawing on historical case studies, empirical research, modeling, and simulations, they proposed a new framework that would require more transparency and independent scrutiny of AI models. Their report …