![](https://fry.gs/pictrs/image/c6832070-8625-4688-b9e5-5d519541e092.png)
The year 2000 was peak human technology. It’s been downhill in every way since, until generative AI - which is f’in amazing. But let’s be real, the future belongs to the bots.
/stares in smart glasses
WebP is a raster graphics file format developed by Google intended as a replacement for JPEG, PNG, and GIF file formats. It supports both lossy and lossless compression, as well as animation and alpha transparency. Google announced the WebP format in September 2010, and released the first stable version of its supporting library in April 2018.
The format has spotty support across applications, and some vulnerabilities discovered last year required patching efforts. It’s not clear why you’d need to do anything.
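Since WebP sits alongside JPEG, PNG, and GIF, it can be handy to tell them apart by their magic bytes. A minimal sketch in Python (the function name `sniff_image_format` is my own; the signatures come from the published format specs — WebP is a RIFF container whose bytes 8–11 spell "WEBP"):

```python
import struct

def sniff_image_format(data: bytes) -> str:
    """Identify PNG, GIF, JPEG, or WebP from a file's leading magic bytes."""
    if data.startswith(b"\x89PNG\r\n\x1a\n"):
        return "PNG"
    if data.startswith((b"GIF87a", b"GIF89a")):
        return "GIF"
    if data.startswith(b"\xff\xd8\xff"):
        return "JPEG"
    # WebP: "RIFF" + 4-byte little-endian chunk size + "WEBP" FourCC
    if len(data) >= 12 and data[:4] == b"RIFF" and data[8:12] == b"WEBP":
        return "WebP"
    return "unknown"

# Build a minimal WebP-style header by hand to demonstrate the check
header = b"RIFF" + struct.pack("<I", 0) + b"WEBP"
print(sniff_image_format(header))  # → WebP
```

Reading the first 12 bytes of a real file (`open(path, "rb").read(12)`) is enough for this check; it doesn't validate the image, only the container signature.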
deleted by creator
Humans are really bad at determining whether a chat is with a human or a bot
ELIZA is not indistinguishable from a human at 22%.
Passing the Turing test stood largely out of reach for 70 years precisely because humans are pretty good at spotting counterfeit humans.
This is a monumental achievement.
As long as no one messes with their open source contributions… (ditto for MS)
To the one person who upvoted this: We should be friends.
Aye, I’d wager Claude would be closer to 58-60%. And with the model-probing research Anthropic is publishing, we could get to like ~63% on average in the next couple of years? Those last few % will be difficult for an indeterminate amount of time, I imagine. But who knows. We’ve already blown by a ton of “limitations” that I thought I might not live long enough to see.
Participants only said other humans were human 67% of the time.
On the other hand, the human participant scored 67 percent, while GPT-3.5 scored 50 percent, and ELIZA, which was pre-programmed with responses and didn’t have an LLM to power it, was judged to be human just 22 percent of the time.
The current gap is 54% to 67%, not 54% to 100%.
Thank you, I seldom see my own thoughts laid out so clearly. As a practitioner of the Dark Arts (marketing), this union of commerce and art is a foul bargain. I think it’s time the two had some time apart to work on themselves.
It seems to me that we’ve reached a crossroads. I’ve been very aware of the data mining, garden walls, data trading, privacy violations, security issues, ownership issues, etc. - for roughly 30 years. I regularly make the choice to be exploited for the benefits I extract, largely because I don’t highly value the data they’ve gotten from me thus far. But the necessity to develop strategies to keep the devil’s bargain beneficial has reached a fever pitch. I want to train my own AI and public AIs. I want to explore the vast higher-dimensional semantic spaces of generative models without API charges. APIs are vanishing as we speak anyway, with companies fearful of their data being extracted without compensation. Can’t really sit on the Open/Closed fence anymore.
Until they can distribute the training load of large models to consumer graphics cards (and do something like SETI@Home) it does seem like the benefit of distributed training isn’t enough to overcome the friction.
Like a decade ago?
The papers have a ton of practical info about feasibility, implementation, etc.
I do think Perplexity does a better job. Since it cites sources in its generated response, you can easily check its answer. As for the general public trusting Google, the company’s fall from grace began in 2017, when the EU fined them like 2 billion for fixing search results. There’s been a steady stream of controversies since then, including the revelation that Chrome continues to track you in private mode. YouTube’s predatory practices are relatively well-known. I guess I’m saying that if this is what finally makes people give up on them, no skin off my back. But I’m disappointed by how much their mismanagement seems to be adding to the pile of negativity surrounding AI.
Wikipedia got where it is today by providing accurate information. Google results have always been full of inaccurate information. Sorting through the links for respectable sources just became second nature, then we learned to scroll past ads to start sorting through links. The real issue with misinformation from an AI is that people treat it like it should be some infallible Oracle - a point of view only half-discouraged by marketing with a few warnings about hallucinations. LLMs are amazing, they’re just not infallible. Just like you’d check a Wikipedia source if it seemed suspect, you shouldn’t trust LLM outputs uncritically. /shrug
Honestly, I’d get on board with just about any time from 2000 to 2010. The enshittification of the internet and social-media-driven comment culture didn’t start in earnest until smartphones took off.