• 0 Posts
  • 18 Comments
Joined 2 years ago
Cake day: June 1st, 2023


  • The website uses Google services while promoting a phone that advertises avoiding Google services… and it seems to be completely broken for me on any Android browser I use. Also, his main website seems to use the Microsoft Store logo for his store logo lol

    A 720p display? It doesn’t say whether it has a removable battery, and they somehow promise years of software updates while relying on third-party custom ROMs for their software…

    Also, I don’t know of any Android phones that block replacing the battery?

    The website says 6 GB of RAM and the Indiegogo page says 8 GB of RAM.

    The hardware seems meh, and the software is based on FOSS ROMs that support a wide variety of devices that actually exist. Ubuntu Touch is advertised as Coming Soon™

    The BraX3 phone offers the most privacy-friendly location service through the Lunar Network

    If anyone can find the Lunar Network website, please link it, because right now it seems to me that they’re using a now-defunct Minecraft server for location services.

    I’ve gotten scammy vibes from this guy for years; even when I was new to privacy I never felt like he was reliable. Did he dub himself “the internet privacy guy”? lol

    Overall, I think the phone will probably come out at some point. It’ll be kinda sucky from a hardware perspective for anyone used to mid-range phones, and the software could be fine or it could be buggy AF, but I don’t think it’ll be anything special or offer anything unique enough from a privacy perspective to bother with.

    I don’t think he can guarantee software updates, since he has no control over the ROMs he relies on, and I wouldn’t expect any sort of support for bugs etc. I don’t think he is knowledgeable enough to trust not to have privacy/security issues with the phone, especially when his whole shtick is “privacy”.

    It sucks that headphone jacks and SD cards are so hard to find in phones today, but I’d prefer to use a dongle and external storage (or self-host) and have a better refurbished phone running LineageOS or any other degoogled ROM than trust this guy.







  • Lol, I spent a week going back and forth with Revolut support in August. I could sign into the app, but it would always ask me for a “selfie” verification, and every time support would say it’s a super dark selfie.

    Eventually I decided to try a stock ROM and it just worked; that’s when I realised what was happening, so I transferred all of my money out and deleted my account.

    Most local banks here are terrible at making apps; some even require a separate device that looks like a calculator to use online banking, so hopefully they won’t follow suit anytime soon.



  • Two accounts consistently reporting the same IP, location, user habits, etc. getting linked is more absurd than nobody ever noticing excessive uploaded data from their phones? It is very easy to monitor the amount of uploaded and downloaded data on a device, so lots of people would have noticed by now. The amount of storage, bandwidth and processing power required to monitor audio from hundreds of millions of Android users globally, 24/7, would make this the dumbest business decision ever when there are so many easier and more efficient ways to track users.
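    For anyone who wants to check for themselves: Android’s standard TrafficStats API exposes device-wide byte counters since boot. A minimal Kotlin sketch, assuming nothing beyond that stock API (availability and accuracy vary by device):

```kotlin
import android.net.TrafficStats

// Device-wide traffic counters since boot. Some devices may report
// TrafficStats.UNSUPPORTED (-1) instead of real values.
fun logDataUsage() {
    val rx = TrafficStats.getTotalRxBytes()  // bytes received since boot
    val tx = TrafficStats.getTotalTxBytes()  // bytes transmitted since boot
    println("Downloaded: ${rx / (1024 * 1024)} MiB, uploaded: ${tx / (1024 * 1024)} MiB")
}
```

    Comparing that upload counter against what continuous audio uploads would require makes the mismatch obvious pretty quickly.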


  • https://www.bleepingcomputer.com/news/security/genetics-firm-23andme-says-user-data-stolen-in-credential-stuffing-attack/

    The information that has been exposed from this incident includes full names, usernames, profile photos, sex, date of birth, genetic ancestry results, and geographical location.

    The threat actor accessed a small number of 23andMe accounts and then scraped the data of their DNA Relative matches, which shows how opting into a feature can have unexpected privacy consequences.

    • Usernames, profile photos, DoB

    These can be linked to other online accounts. That allows for phishing, potentially scamming them, or gathering additional information on them, which can lead to more sophisticated/personalised scams. Older, less tech-savvy users are better targets for scammers.

    • Username, sex, DoB, genetic ancestry, location data

    Data aggregators can sell this info to health insurance companies or any other system, which can then discriminate based on genes, sex, age or location.

    • All of this information

    Can be used to commit fraud in someone’s name if enough information is collected from different sources.

    • DNA relatives

    Having enough information about a user makes it possible to target their now-known relatives with personalised scams.

    The people that did this probably didn’t know what information they were going to get; maybe they were hoping for payment info and settled for trying to just sell what they got.

    Any information, no matter how useless it might seem, is better than no information and enough useless information in the wrong hands can be very valuable.

    There are countless data breaches every year, and people will collect it all and link different accounts from different breaches until they have enough information. Most people use the same email address for every website, and a lot of people reuse the same passwords, which is how this data leak occurred. Knowing that these users reused the same email/password combination here means there’s a very good chance they’ve reused it elsewhere.

    You can check which data breaches have occurred, and whether your email or passwords have shown up in any of these dumps, at https://haveibeenpwned.com/
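    Have I Been Pwned also has a public Pwned Passwords range API that uses k-anonymity, so only the first five characters of a password’s SHA-1 hash ever leave your machine. A minimal Kotlin sketch of that check, assuming the documented https://api.pwnedpasswords.com/range/ endpoint and its SUFFIX:COUNT response format:

```kotlin
import java.net.URL
import java.security.MessageDigest

// Look up how many times a password appears in known breach dumps.
// Only the first 5 hex chars of the SHA-1 hash are sent to the API.
fun pwnedCount(password: String): Int {
    val sha1 = MessageDigest.getInstance("SHA-1")
        .digest(password.toByteArray())
        .joinToString("") { "%02X".format(it) }
    val prefix = sha1.take(5)
    val suffix = sha1.drop(5)

    // Response is one "HASH_SUFFIX:COUNT" line per leaked hash with this prefix.
    return URL("https://api.pwnedpasswords.com/range/$prefix").readText()
        .lineSequence()
        .map { it.split(":") }
        .firstOrNull { it[0].equals(suffix, ignoreCase = true) }
        ?.get(1)?.trim()?.toInt() ?: 0
}

fun main() {
    val hits = pwnedCount("password123")
    println(if (hits > 0) "Seen $hits times in breaches - don't reuse it" else "Not in any known dump")
}
```

    If the count comes back non-zero, assume every account still using that password is exposed to exactly this kind of credential stuffing.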

    Once the information is out there, it’s out there for good, and what might seem trivial to you now could be valuable to someone else tomorrow.








  • AI regulation is definitely needed; self-regulation never works. Look at how Google and Meta have been operating: even now, with the GDPR in place, they’re still getting away with abusing users’ data with no consequences.

    “OpenAI did not tell us what good regulation should look like,” the person said.

    “What they’re saying is basically: trust us to self-regulate,” says Daniel Leufer, a senior policy analyst focused on AI at Access Now’s Brussels office.

    I should hope OpenAI didn’t tell them how to regulate OpenAI, and I really hope this isn’t the only regulation we see. Since technology is constantly advancing, we’re going to need to keep updating regulation to stop companies like OpenAI from getting out of control like Google.

    OpenAI argued that, for example, the ability of an AI system to draft job descriptions should not be considered a “high risk” use case, nor the use of an AI in an educational setting to draft exam questions for human curation. After OpenAI shared these concerns last September, an exemption was added to the Act

    This bothers me. Job descriptions are already ridiculous, with over-the-top requirements for jobs that don’t require them, and feeding these prompts into AI is only going to make that worse.

    With regard to drafting exams: doesn’t it make the exams somewhat redundant if the experts on the material being examined can’t even come up with the questions and problems? Why should students bother engaging with the material when they could just use AI too, thanks to this loose regulation?

    Researchers have demonstrated that ChatGPT can, with the right coaxing, be vulnerable to a type of exploit known as a jailbreak, where specific prompts can cause it to bypass its safety filters and comply with instructions to, for example, write phishing emails or return recipes for dangerous substances.

    Unfortunately, since this regulation isn’t global and there are so many open-source models that can run on consumer hardware, there is no real way to regulate jailbreaking prompts, and this is always going to be an issue. On the other hand, these open-source, low-power models are needed to give users more options and privacy; this is where we went wrong with search engines and operating systems.