like, a video of Tao giving a demonstration?
LLMs are basically just good pattern matchers. But just as A* search can find a better path than a human by breaking the problem down into simple steps, an LLM can make progress on an unsolved problem if it's used properly and combined with a formal reasoning engine.
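The A* analogy can be made concrete: the whole algorithm is just "repeatedly expand the most promising node." A minimal sketch (the grid size, wall, and goal coordinates here are invented purely for illustration):

```python
import heapq

def astar(start, goal, neighbors, h):
    """Generic A*: expand the node with lowest f = g + h, one step at a time."""
    open_heap = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        for nxt, cost in neighbors(node):
            ng = g + cost
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(open_heap, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None

def grid_neighbors(p):
    """4-connected 4x4 grid with a wall the path must route around."""
    x, y = p
    blocked = {(1, 0), (1, 1), (1, 2)}
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        q = (x + dx, y + dy)
        if 0 <= q[0] <= 3 and 0 <= q[1] <= 3 and q not in blocked:
            yield q, 1

manhattan = lambda p: abs(p[0] - 2) + abs(p[1] - 0)  # heuristic toward goal (2, 0)
path = astar((0, 0), (2, 0), grid_neighbors, manhattan)
```

Each individual step is trivial, but the loop still finds a route around the wall that a human eyeballing the grid might miss on a larger instance; that's the sense in which "simple steps, applied systematically" beats raw intuition.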
I’m going to be real with you: almost every new mathematical idea builds on the math that came before. Nothing is as truly original as AI detractors seem to believe.
By “does some reasoning steps,” OpenAI presumably just means invoking the LLM iteratively so that it can review its own output before providing a final answer. It’s not a new idea.
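That draft-critique-revise loop is simple to sketch. Everything below is hypothetical: `llm` stands in for whatever model call you'd actually use, and the stub exists only to demonstrate the control flow, not real model behavior:

```python
def iterative_answer(llm, prompt, rounds=2):
    """Ask once, then feed the model its own draft back for review
    before returning a final answer. `llm` is any str -> str callable."""
    draft = llm(prompt)
    for _ in range(rounds):
        critique = llm(f"Review this answer for mistakes:\n{draft}")
        draft = llm(
            f"Question: {prompt}\nDraft: {draft}\n"
            f"Critique: {critique}\nRevised answer:"
        )
    return draft

def stub(p):
    """Toy stand-in for a model call, just to show the loop runs."""
    return "revised" if "Revised" in p else "ok"

final = iterative_answer(stub, "What is 2+2?", rounds=1)
```

The point is that nothing here is architecturally new; it's the same model called three times with its own output spliced into the prompt.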
I do agree that grad students don’t exactly live in luxury, and frequently suffer mental health crises. But their contributions and insight are what power their labs. Profs often have to spend so much time teaching and chasing grants that they can’t do much real research. Academia overall is in a sad state.
But Tao is a superstar, and a charismatic blogger. I’d be disappointed to learn he mistreats his grad students. (I don’t know if he even has any tbh)
U.S. vehicles currently emit 2 billion tons. So that’s very bad news.
However – I think 2030 is waaay too far in the future to predict anything about AI.
YouTube has a network effect monopoly as well. Who would use a competing service?
The most effective thing to do as consumers is to encourage other people not to use google products. The best way to do that is to foment outrage at Google.
Does anyone have a (link to a) good summary of the ruling and rationale?
I find the idea that “Google is the only real choice” kind of odd. There are other perfectly functional and user-friendly search engines. It’s not like other monopolies, say, Youtube, where there’s no realistic alternative. (I’m not denying that search is a monopoly too.)
“The platform formerly known as Twitter”?
It wouldn’t be so bad if it weren’t for the fact that he’s trying to lay claim to 1/26th of the English alphabet.
The full title doesn’t make sense in an Xorg context on close parsing. But “X kills its <unix os> app” initially had my brain trying to figure out why people were running X on a Mac.
Please call it Twitter in the title unless there’s a good reason not to. I thought this was Xorg.
There are many practical uses, and more to be discovered, but I think most of them won’t be user-facing.
the high seas have a comment section?
Ok, then it’s an easier-to-use GIMP.
Yeah, I mean, it’s basically just an easier-to-use Photoshop.
I agree people need to better understand the privacy risks of social media.
When you put out photos of yourself on the internet you should expect anyone to find them and do whatever they want to them.
Expect, yeah, I guess. Doesn’t mean we should tolerate it. I expect murder to happen on a daily basis. People editing images of me on their own devices and keeping that to themselves, that’s their business. But if they edit photos of me and spread them around, I think it becomes my business. Fortunately, there are no photos of me on the internet.
Edit: I basically agree with you regarding text content. I’m not sure why I feel different about images of me. Maybe because it’s a fingerprint. I don’t mind so much people editing pictures I post that don’t include my face. Hmm.
This sounds like a cool idea because it is a novel approach, and it appeals to my general heuristic of the inevitability of technology and freedom. However, I don’t think it’s actually a good idea. People are entitled to privacy, on this I hope we agree – and I believe this follows from something more fundamental: people are entitled to dignity. If you think we’ll reach a point in this lifetime where it will be too commonplace to be a threat to someone’s dignity, I just don’t agree.
Not saying the solution is to ban the technology though.
Ehh, I mean, it’s not really surprising it knows how to lie and will do so when asked to lie to someone as in this example (it was prompted not to reveal that it is a robot). It can see lies in its training data, after all. This is no more surprising than “GPT can write code.”
I don’t think GPT4 is skynet material. But maybe GPT7 will be, with the right direction. Slim possibility but it’s a real concern.
Sometimes a bullshitter is what you need. Ever looked at a multiple choice exam in a subject you know nothing about but feel like you could pass anyway just based on vibes? That’s a kind of bullshitting, too. There are a lot of problems like that in my daily work between the interesting bits, and I’m happy that a bullshit engine is good enough to do most of that for me with my oversight. Saves a lot of time on the boring work.
It ain’t a panacea. I wouldn’t give a gun to a monkey and I wouldn’t give chatgpt to a novice. But for me it’s awesome.
This is an absolutely wonderful graph. Thank you for teaching me about the trough of disillusionment.
I’ve seen it for years
This I can believe tbh. It’s a very useful tool in the hands of an expert. Otherwise it’s like giving a chimp a gun.
Maybe this is why I am surprised at people’s hatred of ChatGPT. It’s borne of misuse of a tool for experts, like newcomers struggling with a C++ compiler error.