They side with the laws enacted by the people, not the people. And all federal judges are appointed.
This doesn’t seem to be working as intended. We have “originalists” who turn that concept on its head and are explicitly a political project.
It’s a great narrative that happens to justify a power grab by the judicial branch, probably the least democratic of the three branches.
Can’t really answer the expense trade-off until you look at concrete use cases, something general AI is allergic to…
Certain types of content. But YouTube’s own existence started because people made content without licensing rights.
Yeah, warning labels just make people dumber and less safe somehow.
Detecting an LLM is a skill.
Ever heard of the Turing test? Ever since AIs could pass it, it stopped being a thing.
In place of the Turing test, we have a new test that tells us whether an individual can properly identify a stochastic parrot.
But it’s also very gay, so it’s probably worth it.
Automation suites exist, and they are very much tuned to the individual apps. It seems giving ML an OCR readout of a page is not enough for it to know what it should do (accurately). We have had a training set for “booking flights on a browser” for about six years now, and no one has figured out how to use it to disrupt automated testing: https://miniwob.farama.org/
I’m providing explicit examples of compilers doing “the stuff we want it to do”. LLMs do what we want 50% of the time, and the output still needs modifications afterwards. Imagine having to correct a compiler’s output and calling that compiler “useful”.
That’s a distinction without a difference. The code is useful because we can reason about how it was made, and we can then make deterministic changes. Try using a compiler that gives you a qualitatively different result each time it runs, even though the inputs are the same.
technology develops exponentially, while humans are … static
I have yet to see a self-improving technology that does not require adaptive human intelligence as an input.
Compilers are deterministic and you can reason about how they came to their results, and because of that they are useful.
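The determinism claim is easy to check directly. A minimal sketch using Python’s built-in `compile()` (standing in here for any compiler, an assumption for illustration): the same source yields byte-for-byte identical bytecode on every run, which is exactly what lets you reason about the output.

```python
# Compile the same source twice and compare the results.
src = "def add(a, b):\n    return a + b\n"

code1 = compile(src, "<example>", "exec")
code2 = compile(src, "<example>", "exec")

# Same input, same output: the bytecode is identical every time,
# so a change in the output can only come from a change in the input.
assert code1.co_code == code2.co_code
assert code1.co_consts == code2.co_consts
```

An LLM run twice on the same prompt (at nonzero temperature) offers no such guarantee, which is the contrast being drawn above.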
I don’t use corporate social media anymore, is Facebook actually taking on misinformation now?
Big brain PDF tells the judge it is okay because the person in the picture is now an adult.
First day of job training is to keep the one machine running that keeps the place from exploding.
Happy? I’ll settle for dignity.
Speaking of books, have you ever read the Earthsea series? The way you talk about things reminds me of how, in Earthsea, things have a true name by which you can call them and bend them to your will. The trick is that some people have more than one true name, and that, I think, is a metaphor for how we can escape being manipulated when someone (or an AI) learns our nature.
Is there a phrase to describe when a programmer thinks what is needed is something “easy” when the problem requires something “simple” instead?
I must’ve missed that announcement at the last antifa meetup.