• 0 Posts
  • 114 Comments
Joined 1 year ago
Cake day: July 14th, 2023




  • hedgehog@ttrpg.network to Technology@lemmy.world · *Permanently Deleted* · 17 days ago

    Tons of laptops with top notch specs for 1/2 the price of a M1/2 out there

    The 13” M1 MacBook Air is $700 new from Best Buy. Better specced versions are available for $600 used (buy-it-now) on eBay, but the base specced version is available for $500 or less.

    What $300 or less used / $350 new laptops are you recommending here?


  • reasonable expectations and uses for LLMs.

    LLMs are only ever going to be a single component of an AI system. We’ve only had LLMs with their current capabilities for a very short time period, so the research and experimentation to find optimal system patterns, given the capabilities of LLMs, has necessarily been limited.

    I personally believe it’s possible, but we need to get vendors and managers to stop trying to sprinkle “AI” in everything like some goddamn Good Idea Fairy.

    That’s a separate problem. Unless it decreases research into improving the systems that leverage LLMs - e.g., by fostering pervasive negative AI sentiment - it won’t have a negative impact on the progress of the research. Rather the opposite, in fact: seeing which uses of AI are successful and which are not (success here being measured by customer acceptance and interest, not by the AI’s efficacy) is information that can help direct and inspire research avenues.

    LLMs are good for providing answers to well defined problems which can be answered with existing documentation.

    Clarification: LLMs are not reliable at this task, but we have patterns for systems that leverage LLMs that are much better at it, thanks to techniques like RAG, supervisor LLMs, etc…
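    To make the retrieval part of that concrete, here is a minimal sketch of RAG’s retrieval step. Everything in it is a stand-in: the bag-of-words “embedding” and the tiny document set are toys for what a real system would do with a neural embedder and a vector store.

    ```python
    import math
    import re
    from collections import Counter

    def embed(text):
        # Toy stand-in for a neural embedding: bag-of-words term counts
        return Counter(re.findall(r"\w+", text.lower()))

    def similarity(a, b):
        # Cosine similarity between two sparse count vectors
        dot = sum(a[w] * b[w] for w in a)
        norm = math.sqrt(sum(v * v for v in a.values())) * \
               math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    def retrieve(question, documents, k=1):
        # RAG's retrieval step: rank documents by relevance to the question,
        # then feed the top k into the LLM's prompt as grounding context
        q = embed(question)
        return sorted(documents, key=lambda d: similarity(q, embed(d)), reverse=True)[:k]

    docs = [
        "The API rate limit is 100 requests per minute.",
        "To reset a password, visit the account settings page.",
    ]
    context = retrieve("how do I reset my password?", docs)[0]
    ```

    The point of the pattern is that the LLM answers from retrieved documentation instead of from its weights, which is exactly why it does better on well-documented problems.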

    When the problem is poorly defined and/or the answer isn’t as well documented or has a lot of nuance, they then do a spectacular job of generating bullshit.

    TBH, so would a random person in such a situation (if they produced anything at all).

    As an example: how often have you heard about a company’s marketing department over-hyping an upcoming product, resulting in unmet consumer expectations, a ton of extra work for the product’s developers and engineers, or both? This happens because the marketers don’t really understand the product - because they weren’t given the information, didn’t read it, got conflicting information, or because the information they had was written for a different audience - i.e., a developer, not a marketer - and the nuance was lost in translation.

    At the company level, you can structure a system that marketers work within that will result in them providing more correct information. That starts with them being given all of the correct information in the first place. However, even then, the marketer won’t be solving problems like a developer. But if you ask them to write some copy to describe the product, or write up a commercial script where the product is used, or something along those lines, they can do that.

    And yet the marketer role here is still more complex than our existing AI systems, but those systems are already incorporating patterns very similar to those that a marketer uses day-to-day. And AI researchers - academic, corporate, and hobbyists - are looking into more ways that this can be done.

    If we want an AI system to be able to solve problems more reliably, we have to, at minimum:

    • break down the problems into more consumable parts
    • ensure that components are asked to solve problems they’re well-suited for, which means that we won’t be using an LLM - or even necessarily an AI solution at all - for every problem type that the system solves
    • have a feedback loop / review process built into the system
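    Those three requirements can be sketched as a decompose → route-to-solver → review loop. All of the functions below are hypothetical stand-ins; in a real system some components would be LLM calls and others conventional code.

    ```python
    # Toy sketch of the structure above: decompose -> solve -> review.

    def decompose(problem):
        # Break the problem into more consumable parts
        return problem["subtasks"]

    def solve(subtask):
        # Placeholder component; a real system routes each subtask type
        # to whatever component is well-suited for it (LLM or not)
        return subtask["input"] * 2

    def review(subtask, result):
        # Feedback loop: independently check each component's output
        return result == subtask["expected"]

    def run(problem, max_retries=3):
        results = []
        for sub in decompose(problem):
            for _ in range(max_retries):
                out = solve(sub)
                if review(sub, out):
                    results.append(out)
                    break
            else:
                raise RuntimeError("component failed review after retries")
        return results
    ```

    The review step is what keeps one unreliable component from silently poisoning the whole pipeline.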

    In terms of what they can accept as input, LLMs have a huge amount of flexibility - much higher than what they appear to be good at and much, much higher than what they’re actually good at. They’re a compelling hammer. System designers need to not just be aware of which problems are nails and which are screws or unpainted wood or something else entirely, but also ensure that the systems can perform that identification on their own.




  • And when you’re comparing two closed source options, there are techniques available to evaluate them. Based on published results from people who have used these techniques, Apple is not as private as they claim. This is most egregious when it comes to first party apps, which is concerning. However, when it comes to using any non-Apple app, they’re much better than Google is when using any non-Google app.

    There’s enough overlap in skillset that pretty much anyone performing those evaluations will likely find it trivial to configure Android to be privacy-respecting - i.e., by using GrapheneOS on a Pixel or some other custom ROM - but most users are not going to do that.

    And if someone is not going to do that, Android is worse for their privacy.

    It doesn’t make sense to say “iPhones are worse at respecting user privacy than Android phones” when by default and in practice for most people, the opposite is true. What we should be saying is “iPhones are better at respecting privacy by default, but if privacy is important to you, the best option is to put in a bit of extra work and install GrapheneOS on a Pixel.”







  • Ah, fair enough. I was just giving people interested in that method a resource to learn more about it.

    The problem is that your method doesn’t consistently generate memorable passwords with anywhere near 77 bits of entropy.

    First, the example you gave ended up being 11 characters long. For a completely random password using alphanumeric characters + punctuation, that’s 66.5 bits of entropy. Your lower bound was 8 characters, which is even worse (48 bits of entropy). And when you consider that the process will result in some letters being much more probable, particularly in certain positions, that results in a more vulnerable process. I’m not sure how much that reduces the entropy, but it would have an impact. And that’s without exploiting the fact that you’re using quotes as part of your process.
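    For reference, those per-character figures fall out of length × log2(alphabet size). The alphabet size of 66 below is my assumption, chosen because it reproduces the figures above; all 95 printable ASCII characters would give about 6.57 bits per character instead.

    ```python
    import math

    def char_entropy_bits(length, alphabet_size=66):
        # Entropy of a fully random password: length * log2(alphabet size).
        # alphabet_size=66 is an assumption consistent with the 66.5-bit
        # figure for an 11-character password quoted above.
        return length * math.log2(alphabet_size)

    print(round(char_entropy_bits(11), 1))  # ~66.5
    print(round(char_entropy_bits(8), 1))   # ~48.4
    ```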

    The quote selection part is the real problem. If someone knows your quote and your process, game over, as the number of remaining possibilities at that point is quite low - maybe a thousand? That’s worse than just adding a word with the dice method. So quote selection is key.

    But how many quotes is a user likely to select from? My guess is that most users would be picking from a set of fewer than 7,776 quotes, but your set and my set would be different. Even so, I doubt that the set an attacker would need to discern from is higher than 470 billion quotes (the equivalent of three dice method words), and it’s certainly not 2.8 quintillion quotes (the equivalent of 5 dice method words).

    If your method were used for a one-off, you could use a poorly known quote and maybe have it not be in that 470 billion quote set, but that won’t remain true at scale. It certainly wouldn’t be feasible to have a set of 2.8 quintillion quotes, which means that even a 20 character password has less than 77.5 bits of entropy.

    Realistically, since the user is choosing a memorable quote, we could probably find a lot of them in a very short list - on the order of thousands at best. Even with 1 million quotes to choose from, that’s at best 30 bits of entropy (about 20 bits from the quote choice plus the ~10 bits from the per-quote variations mentioned above). And again, user choice is a problem, as user choice doesn’t result in fully random selections.

    If you’re randomly selecting from a 60 million quote database, then that’s still only 36 bits of entropy. When the database has 470 billion quotes, that’ll get you to 49 bits of entropy - but good luck ensuring that all 470 billion quotes are memorable.
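    Those totals are reproducible if you model the password as one uniformly chosen quote plus one of roughly 1,000 per-quote variations (the “maybe a thousand” estimate from earlier - an assumption; change it and the totals shift accordingly):

    ```python
    import math

    PER_QUOTE_VARIANTS = 1000  # assumed, per the "maybe a thousand" estimate above

    def quote_entropy_bits(num_quotes, variants=PER_QUOTE_VARIANTS):
        # Uniform choice among quotes, then among per-quote variations
        return math.log2(num_quotes) + math.log2(variants)

    for n in (10**6, 60 * 10**6, 470 * 10**9):
        print(f"{n:,} quotes -> {quote_entropy_bits(n):.0f} bits")
    # 1,000,000 quotes        -> ~30 bits
    # 60,000,000 quotes       -> ~36 bits
    # 470,000,000,000 quotes  -> ~49 bits
    ```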

    There are also things you can do, at an individual level, to make dice method passwords stronger or better suited to a purpose. You can modify the word lists, for one, or use one of the other EFF lists. When it comes to password length restrictions, you can use the EFF short list #2 and truncate words after the third character without losing entropy - meaning your 8 word password only needs to be 32 characters long, or 24 characters if you omit word separators. You can also randomly insert or substitute a symbol and a number, sacrificing memorizability for a bit more entropy (mainly useful when there are short password length limits).
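    The truncation trick looks like this in practice. The word list below is a tiny placeholder; the real EFF short list #2 has 1,296 words, each with a unique three-character prefix, which is exactly why truncating loses no entropy.

    ```python
    import secrets

    # Placeholder words - substitute the real EFF short word list #2
    # (1,296 words, unique three-letter prefixes)
    WORDS = ["aardvark", "banjo", "cashew", "dropper", "evergreen", "fuselage"]

    def passphrase(n_words, truncate=True, sep="-"):
        words = [secrets.choice(WORDS) for _ in range(n_words)]
        if truncate:
            # Unique prefixes mean the full word is recoverable, so
            # truncation preserves the passphrase's entropy
            words = [w[:3] for w in words]
        return sep.join(words)

    print(passphrase(8))  # e.g. "dro-ban-eve-cas-fus-aar-ban-eve"
    ```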

    The dice method also has baked-in flexibility when it comes to the necessary level of entropy. If you need more than 82 bits of entropy, just add more words. If you’re okay with having less entropy, you can generate shorter passwords - 62 bits of entropy is achieved with a 6 short-word password (which can be reduced to 18 characters) and a 4 short-word password - minimum 12 characters - still has 41 bits of entropy.
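    The arithmetic behind those numbers is just words × log2(list size) - 7,776 words for the EFF long list, 1,296 for the short lists:

    ```python
    import math

    def diceware_bits(words, list_size=7776):
        # Each word is an independent, uniform pick from the list
        return words * math.log2(list_size)

    print(round(diceware_bits(6), 1))        # ~77.5 bits: 6 long-list words
    print(round(diceware_bits(6, 1296), 1))  # ~62.0 bits: 6 short-list words
    print(round(diceware_bits(4, 1296), 1))  # ~41.4 bits: 4 short-list words
    ```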

    With your method, you could choose longer quotes for applications you want to be more secure or shorter quotes for ones where that’s less important, but that reduces entropy overall by reducing the set of quotes you can choose from. What you’d want to do is to have a larger set of quotes for your more critical passwords. But as we already showed, unless you have an impossibly huge quote database, you can’t generate high entropy passwords with this method anyway. You could select multiple unrelated quotes, sure - two quotes selected from a list of 10 billion gives you 76.4 bits of entropy - but that’s the starting point for the much easier to memorize, much easier to generate, dice method password. You’ve also ended up with a password that’s just as long - up to 40 characters - and much harder to type.

    This problem is even worse with the method that the EFF proposes, as it’ll output passphrases with an average of 42 characters, all of them alphabetic.

    Yes, but as pass phrases become more common, sites restricting password length become less common. My point wasn’t that this was a problem but that many site operators felt that it was fine to cap their users’ passwords’ max entropy at lower than 77.5 bits, and few applications require more than that much entropy. (Those applications, for what it’s worth, generally use randomly generated keys rather than relying on user-generated ones.)

    And, as I outlined above, you can use the truncated short words #2 list method to generate short but memorable passwords when limited in this way. My general recommendation in this situation is to use a password manager for those passwords and to generate a high entropy, completely random password for them, rather than trying to memorize them. But if you’re opposed to password managers for some reason, the dice method is still a great option.





  • Small correction - iCloud Photos are only end-to-end encrypted if you enable Advanced Data Protection, which was introduced in December 2022, and otherwise Apple has the keys. See https://support.apple.com/en-us/102651 for more details.

    So the uploaded photos in question couldn’t have been e2ee. Even so, it’s reasonable for people to question the legitimacy of e2ee given instances where it’s been shown to be a lie or for the data to also have been transmitted without e2ee, like Anker’s Eufy cameras’ “e2ee” feeds clearly being accessible without keys from the user devices, or WhatsApp exposing tons of messaging metadata to Meta.

    That said, I personally wasn’t using iCloud Photos prior to enabling Advanced Data Protection, and I had a few deleted photos show up from several years ago, so Apple’s explanation makes sense to me. And, like you’ve pointed out, most of the speculation was devoid of any critical thinking.



  • That’s a bit abstract, but saying what others “should” do is both stupid and rude.

    Buddy, if anyone’s being stupid and rude in this exchange, it’s not me.

    And any true statement is the same as all other true statements in an interconnected world.

    It sounds like the interconnected world you’re referring to is entirely in your own head, with logic that you’re not able or willing to share with others.

    Even if I accepted that you were right - and I don’t accept that, to be clear - your statements would still be nonsensical given that you’re making them without any effort to clarify why you think them. That makes me think you don’t understand why you think them - and if you don’t understand why you think something, how can you be so confident that you’re correct?