Finally some news about the first human trial.
The part about them not issuing regular progress reports since day 1 (a month or so ago) is, as these doctors put it, concerning.
Apart from that, jumping from monkey trials to human experiments while the success rate is still low feels like either rushed work or someone high up deciding to go all-or-nothing.
While the proposed bill’s goals are great, I am not so sure about how it would be tested and enforced.
It’s telling that a current LLM can generate a ‘no’ response – like those clips where people ask whether the LLM has access to their location – but then promptly recommend the closest restaurant as soon as location is no longer in the spotlight.
There’s also the problem of getting an ‘AI’ to follow constraints once it has ingested a lot of training data. Even Google doesn’t know how to curb a model once initial training is done.
I am all for the bill. It sets a good precedent, but a more clearly defined and enforceable one would be even better.