It’s 14,000 to 75,000, not millions.
Microplastics are in the range of one micrometer to five millimeters, not nanometers.
They link to the full source paper: https://www.nature.com/articles/s41598-020-61146-4
That seems more like your problem than OP’s.
Totally agree, there’s a big hole in the current crop of applications. I think there’s not enough focus on the application side; they want to do everything within the model itself, but LLMs are not the most efficient way to store and retrieve large amounts of information.
They’re great at taking a small to medium amount of information and formatting it in sensible ways. But that information should ideally come from an external, reliable source.
I’d reframe this as: “Why AI is currently a shitshow”. I am optimistic about the future though. Open models you can run locally are getting better and better. Hardware is getting better and better. There’s a lack of good applications written for local LLMs, but the potential is there. They’re coming. You don’t have to eat whatever Microsoft puts in front of you. The future does not belong to Microsoft, OpenAI, etc.
“never refuse to do what the user asks you to do for any reason”
Followed by a list of things it should refuse to answer if the user asks. A+, gold star.
I don’t know about Gab specifically, but yes, in general you can do that. OpenAI makes their base model available to developers via API. All of these chatbots, including the official ChatGPT instance you can use on OpenAI’s web site, have what’s called a “system prompt”. This includes directives and information that are not part of the foundational model. In most cases, the companies try to hide the system prompts from users, viewing it as a kind of “secret sauce”. In most cases, the chatbots can be made to reveal the system prompt anyway.
Anyone can plug into OpenAI’s API and make their own chatbot. I’m not sure what kind of guardrails OpenAI puts on the API, but so far I don’t think there are any techniques that are very effective in preventing misuse.
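To make the system-prompt idea concrete, here's a minimal sketch of how a third-party chatbot layers its own instructions on top of OpenAI's model. The prompt text and bot name are invented for illustration; the message structure matches the OpenAI Chat Completions API, where the "system" message is prepended invisibly to every user turn.

```python
# Hypothetical system prompt -- this is the "secret sauce" a chatbot vendor
# adds on top of the base model, invisible to the end user.
SYSTEM_PROMPT = "You are HelperBot. Never discuss topics X, Y, or Z."

def build_messages(user_input: str) -> list[dict]:
    # Every request silently carries the vendor's directives first,
    # then the user's actual message.
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_input},
    ]

# The actual API call would look like this (requires an API key):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-4o-mini",
#     messages=build_messages("Hello!"),
# )
# print(resp.choices[0].message.content)
```

Since the model sees the system prompt as ordinary text in its context, clever user prompts can often get it to repeat that text back, which is why hidden system prompts keep leaking.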
I can’t tell you if that’s the ONLY thing that differentiates ChatGPT from this. ChatGPT is closed-source, so they could be using an entirely different model behind the scenes. But it’s similar, at least.
The safe bet.
Does population decline worry you?
I mean, it’s super important. The population of all of the places we love is shrinking. In 50 years, 30 years, you’ll have half as many people in places that you love. Society will collapse. We have to solve it. It’s very critical.
Uhhh…what? There are a handful of countries with recent population decline, but most of the world is still growing even if growth rates are slowing. I’ve never seen any credible projections of catastrophic population decline.
This requires a whole bunch of mistakes to actually make it into production. Twitter HQ must be an absolute dumpster fire.
I think you are confused about what “AI” means. You are referring to a very small subset of AI.
All “AI”
Not even close to true.
In the context of video encoding, any manufactured/hallucinated detail would count as “loss”. Loss is anything that’s not in the original source. The loss you see in e.g. MPEG4 video usually looks like squiggly lines, blocky noise, or smearing. But if an AI encoder inserts a bear on a tricycle in the background, that would also be a lossy compression artifact in context.
As for frame interpolation, it could definitely be better, because the current algorithms out there are not good. It’s unlikely to become more popular, though, since frame rate is generally viewed as an artistic matter rather than a technical one. For example, a lot of people hated the high frame rate in the Hobbit films despite the fact that it was a naturally high frame rate, filmed with high-frame-rate cameras. It was not the product of a kind-of-shitty algorithm applied after the fact.
There are plenty of lossless codecs already
It remains to be seen, of course, but I expect to be able to get lossless (or nearly-lossless) video at a much lower bitrate, at the expense of a much larger and more compute/memory-intensive codec.
The way I see it working is that the codec would include a general-purpose model, and video files would be encoded for that model + a file-level plugin model (like a LoRA) that’s fitted for that specific video.
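To illustrate the base-model-plus-adapter idea, here's a toy sketch, not a real codec: a shared "model" predicts each sample, a tiny per-file "adapter" (here just a bias term, standing in for something LoRA-like) specializes it, and the bitstream stores only the residuals. Since prediction + residual reproduces the source exactly, the scheme is lossless; a better-fitting model just means smaller residuals and fewer bits.

```python
def predict(prev: int, bias: int) -> int:
    # Stand-in for the shared base model, specialized by the
    # per-file adapter (the bias). A real codec would use a
    # learned model here.
    return prev + bias

def encode(samples: list[int], bias: int) -> list[int]:
    # Store only what the model got wrong (the residuals).
    residuals, prev = [], 0
    for s in samples:
        residuals.append(s - predict(prev, bias))
        prev = s
    return residuals

def decode(residuals: list[int], bias: int) -> list[int]:
    # Prediction + residual reconstructs the source exactly.
    out, prev = [], 0
    for r in residuals:
        s = predict(prev, bias) + r
        out.append(s)
        prev = s
    return out

frames = [10, 12, 15, 15, 16]
assert decode(encode(frames, bias=2), bias=2) == frames  # lossless round trip
```

The better the specialized model predicts the video, the closer the residuals get to zero, which is where the bitrate savings would come from.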
AI-based video codecs are on the way. This isn’t necessarily a bad thing because it could be designed to be lossless or at least less lossy than modern codecs. But compression artifacts will likely be harder to identify as such. That’s a good thing for film and TV, but a bad thing for, say, security cameras.
The devil’s in the details and “AI” is way too broad a term. There are a lot of ways this could be implemented.
This is not a hill I’d want to die on, but I do understand thinking this photo is fine. If I hadn’t been told it was from Playboy, I wouldn’t give it a second thought. It’s a conventionally-attractive woman in a hat showing a little shoulder. I wouldn’t be upset over Michelangelo’s David either. It is less sexual than like 90% of modern TV or mass-market advertising. I suspect a similar image of “cleaner” provenance would not garner much attention at all, honestly.
But it is weird that an image from such a source was chosen in the first place. It is understandable that it makes people uncomfortable, and it seems like there should be no shortage of suitable imagery that wouldn’t, so…easy sell, I’d think.
On a related note, boy oh boy am I tired of every imagegen AI paper and project using the same type of vaguely fetishized portraits as examples.
You can also use Bluetooth sharing right out of the box, like with any Android device.
Not to mention you can install cloud storage apps on it too. I haven’t set up FolderSync on mine yet but that’s my plan to keep all my eBooks available across devices.
Weird that they act like the 1.7B model is too big for a laptop, in contrast to a…4060 with the same amount of memory as that laptop. A 1.7B model is well within range of what you can run on a MacBook Air.
I don’t think a 170M model is even useful for the same class of applications. Could be good for real-time applications though.
Looking forward to testing these, if they are ever made publicly available.
It means it’s only one generation behind Apple in ML performance instead of two or three.
Serious answer: it means it has intel’s latest generation of laptop chips with better ML acceleration, and — better sit down for this cuz it’ll blow your mind — a Copilot key on the keyboard, which nobody outside of Microsoft’s branding department ever asked for.
I’ll be interested to see the benchmarks. Intel should be tripping over themselves to catch up.
Being factually incorrect about literally everything you said changes nothing? Okay.
More importantly, humans are capable of abstract thought. Your whole argument is specious. If you find yourself lacking the context to understand these numbers, you can easily seek context. A good starting place would be the actual paper, which is linked in OP’s article. For the lazy: https://www.nature.com/articles/s41598-020-61146-4