Yeah, until the cops pull you over and take your cash under civil asset forfeiture because it’s “suspicious that you have so much cash on hand”.
The features you’d miss out on are mobile check deposit and app notifications (usually there are a few you want enabled that are only available through the app).
Good luck when banking apps start doing this.
I just want to be able to set alarms with their calendar app (where it currently only sends notifications).
Ok, but the most important part of that research paper is published in the GitHub repository, which explains how to provide audio and text data to recreate any STT model the same way they did.
See the “Approach” section of the GitHub repository: https://github.com/openai/whisper?tab=readme-ov-file#approach
And the Training Data section of their GitHub: https://github.com/openai/whisper/blob/main/model-card.md#training-data
With this you don’t really need the paper hosted on arXiv; you have enough information on how to train/modify the model.
There are also guides on how to fine-tune the model yourself: https://huggingface.co/blog/fine-tune-whisper
Which, from what I understand of the OSAID link, is exactly what they are asking for. The ability to retrain/fine-tune a model fits this definition very well:
The preferred form of making modifications to a machine-learning system is:
- Data information […]
- Code […]
- Weights […]
All 3 of those have been provided.
I don’t understand. What’s missing from the code, model, and weights provided to make this “open source” by the definition of your first link? It seems to meet all of those requirements.
As for the OSAID, the exact training dataset is not required, per your quote, they just need to provide enough information that someone else could train the model using a “similar dataset”.
I did a quick check on the license for Whisper:
Whisper’s code and model weights are released under the MIT License. See LICENSE for further details.
So that definitely meets the Open Source Definition on your first link.
And it looks like it also meets the definition of open source as per your second link.
Additional WER/CER metrics corresponding to the other models and datasets can be found in Appendix D.1, D.2, and D.4 of the paper, as well as the BLEU (Bilingual Evaluation Understudy) scores for translation in Appendix D.3.
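For anyone unfamiliar with those metrics: WER is just the word-level edit distance between the reference transcript and the model’s output, divided by the number of reference words, and CER is the same thing at character granularity. A minimal sketch in Python (the function names here are mine, not from the Whisper repo, which uses its own evaluation tooling):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences
    (substitutions, insertions, and deletions each cost 1)."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(
                dp[j] + 1,           # delete a reference token
                dp[j - 1] + 1,       # insert a hypothesis token
                prev + (r != h),     # substitute (free if tokens match)
            )
    return dp[-1]

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edits / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    return edit_distance(ref, hyp) / len(ref)

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: same idea per character."""
    return edit_distance(list(reference), list(hypothesis)) / len(reference)
```

So a WER of 0.1 roughly means one word in ten was wrong relative to the reference transcript; note it can exceed 1.0 if the model hallucinates lots of extra words.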
The STT (speech-to-text) model they created, Whisper, is open source, as are a few others:
I don’t think this is specifically an “AI” problem as much as it’s a privacy issue with the way companies are buying and selling our info for targeted advertising. These models are definitely enabling them to do more with the data that they have as well as to collect more information from us in new ways.
Yeah, the other thing I could see happening is a tactic similar to what scammers already use: mules who pick up mail from various Airbnbs throughout a country. This would definitely limit most bot operations… unless some organization specializes in it and offers an account-creation service to anyone willing to pay.
Also, how many accounts would you allow per address, and how long would you lock an address before it could be used again (given that people do move from time to time)?
edit: typo.
That’s a good point. I didn’t know about USPS Form 1583 for virtual mailboxes… Although that is a U.S.-specific thing, so finding a similar service in a country that doesn’t care so much might be the way around it.
Yep, exactly this. It might deter some small time bot creators, but it won’t stop larger operations and may even help them to seem more legitimate.
If anything, my favorite idea comes from this xkcd:
Easy way to get around that with “virtual” addresses: https://ipostal1.com/virtual-address.php
Just pay $10 for every account that you want to create… you may as well just go with the solution of charging everyone $10 to create an account. At least that way the instance owner is getting supported and it would have the same effect.
Yeah, a decision to modify copyright so that it affects training data as well would devastate open source models and set us back a bit.
There are many who want to push back against LLMs, especially journalists, so articles like this are to be expected.
edit: a word.
That’s a big misconception about what quantum internet is (and what quantum entanglement actually allows for), as explained by this physicist: https://www.youtube.com/watch?v=u-j8nGvYMA8
Quantum Internet doesn’t mean that you can transmit data faster than the speed of light.
Quantum Internet just means you get an ultra-secure connection, but it’s super susceptible to noise (in other words, you can’t send a lot of data reliably, and it would be terrible for that).
At best this would be useful for being absolutely sure that some encryption keys were sent successfully without being intercepted by anyone else.
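That last use case is basically quantum key distribution (BB84 and its relatives): the quantum channel only carries random bits, and eavesdropping shows up as errors when the two sides compare a sample. Here’s a toy sketch of just the basis-sifting step, with classical randomness standing in for the qubits (no eavesdropper modeled, so the sifted keys agree exactly):

```python
import random

def bb84_sift(n_bits: int, seed: int = 0):
    """Toy BB84 sifting: Alice sends random bits in random bases,
    Bob measures in random bases, and they keep only the positions
    where their bases happened to match."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [rng.choice("+x") for _ in range(n_bits)]
    bob_bases   = [rng.choice("+x") for _ in range(n_bits)]

    # When Bob's basis matches Alice's, he reads her bit reliably;
    # otherwise his measurement outcome is an independent coin flip.
    bob_bits = [
        bit if a_basis == b_basis else rng.randint(0, 1)
        for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases)
    ]

    # Publicly compare bases (not bits!) and keep the matching positions.
    keep = [i for i in range(n_bits) if alice_bases[i] == bob_bases[i]]
    return [alice_bits[i] for i in keep], [bob_bits[i] for i in keep]
```

If an eavesdropper measured in random bases along the way, roughly a quarter of the sifted bits would disagree, which is exactly how the interception gets detected before any real data is encrypted with the key.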
Did you open the same websites (same number of tabs) at the same time across all of the browsers?
From the screenshot it looks like you have a different number of tabs open in each one.
Wow, a mouse with an extra button that can only be used to launch some process which opens up a window to ChatGPT. What a great use of AI!
/s
It looks like this is an invite only feature through their “Creator Bonus Program”.
https://web.archive.org/web/20240324054627/https://creators.facebook.com/programs/bonuses
You can take that line even further,
What if only the intro fanfare was created with AI?
What if only one creature in the film was created with AI and everything else was done using pre-AI tech?
What if certain body parts of that creature were generated with AI, but the rest was then created/merged normally?
What if an overall shape was initially created with AI, but then the artist just used this as a base to start from and spent more time reworking it to their intended vision and no part was left untouched?
What if a green screen was used, and then AI was used to touch up the edges around the subject and the background?
What if AI was used to help brainstorm some ideas by the writer (as in they didn’t just copy/paste the output from the LLM)?
The oldest tweets I could find that actually started reporting this are from ~16 days ago.
https://x.com/Piotrdotcom/status/1829126494574067992
They reference a page here that was posted on Aug 29th.
https://niebezpiecznik.pl/post/uwazajcie-na-takie-captcha/