• 0 Posts
  • 75 Comments
Joined 1 year ago
Cake day: June 2nd, 2023

  • The wording of the article implies an apples-to-apples comparison, i.e. 1 Google search == 1 question successfully answered by an LLM. Remember that a Google search, in lay terms, is not the act of clicking the search button; rather, it’s the act of going to Google to find a website that has the information you want. The equivalent with ChatGPT would be starting a “conversation” and getting the information you want on a particular topic.

    How many search-engine queries or LLM prompts that involves, or how broad the topic is, is a level of technical detail that one assumes the source for the 25x number has already controlled for. (Feel free to ask the author for the source and share it with us, though!)

    Anyone who’s even remotely used any kind of deep learning will know right away that it uses an order of magnitude or two more power (and, granted, offers an order of magnitude or two more capability!) than algorithmic, rules-based software, so a number like 25x for a similar effective outcome would not be at all surprising if the approach used is unnecessarily complex.

    For example, I could write a neural network to compute 2+2, or I could use an arithmetic calculator. One requires a $500 GPU consuming 300 watts; the other is a $2 pocket calculator running on 5 watts, returning the answer before the neural network has even finished booting.
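    The contrast can be sketched in a few lines (a toy illustration, not a real benchmark: the “network” here is a single hand-rolled linear neuron trained with stochastic gradient descent to approximate addition, while the “calculator” is one machine instruction):

    ```python
    import random

    random.seed(0)

    # A single linear neuron: out = w1*a + w2*b.
    # We "train" it with stochastic gradient descent to approximate a + b.
    w = [random.uniform(-1, 1), random.uniform(-1, 1)]
    lr = 0.1
    for _ in range(2000):
        a, b = random.uniform(0, 1), random.uniform(0, 1)
        err = (w[0] * a + w[1] * b) - (a + b)  # prediction minus target
        w[0] -= lr * err * a                   # gradient step per weight
        w[1] -= lr * err * b

    def neuron_add(a, b):
        return w[0] * a + w[1] * b

    def calculator_add(a, b):
        return a + b  # one add instruction, no training required

    print(neuron_add(2, 2))      # approximately 4
    print(calculator_add(2, 2))  # exactly 4, immediately
    ```

    Thousands of multiply-accumulate operations just to approximate what the second function computes exactly in one step; scale that overhead up to billions of parameters and you get the power-draw gap.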



  • knows me extremely well, is able to tirelessly work on my behalf, and has a personality tailored to my needs and interests.

    Those may still be ANI applications.

    Today’s LLMs, marketed as the future of AGI, are more focused on knowing a little bit about everything, including a little bit about how MRIs work and a summary of the memes floating around a parody subreddit. I fail to see how LLMs, as they are trained today, will know you extremely well or give you a personality tailored to your needs. I also think the commercial interests of big tech are pitted against your desire for something “tirelessly work[ing] on my behalf”.


  • The big problem with AI butlers for research, IMO, is that stripping out the source takes away important context that helps you decide whether the information you are getting is relevant and appropriate. Was the information posted on a parody forum, or is it an excerpt from a book by an author with a Ph.D. on the subject? Who knows. The AI is trained to tell you something you want to hear, not something you ought to hear. It’s the same old problem of self-selecting information, but magnified a hundredfold.

    As it turns out, data is just noise without some authority or chain of custody behind it.



  • Smarter Americans in the past recognized that freedom, including the free market, doesn’t just happen of its own accord; it has to be defended and legislated. That is how antitrust laws came to be in arguably the most capitalist nation on earth.

    Apathetic Americans now have lost sight of the importance of protecting their freedoms.

    “Illegal” is not just some hypothetical moral absolute. It is the politics of defending one’s values. Americans clearly no longer value either their freedoms or the free market.



  • Democracy doesn’t work when centralized powers build tools like TikTok or Facebook to influence people’s thoughts with bias and other psychological hacks.

    If it were up to me, I would ban all social media platforms larger than 100,000 users, and create task forces to rein in predatory marketing and social media collusion.

    People just can’t be trusted to see how they are constantly being manipulated by companies with deep pockets and foreign governments. Children and adults alike. It’s not people’s fault either.

    Either that, or we need a widespread social repudiation of these platforms, a wake-up to the fact that our minds are constantly being poisoned, much as tobacco was reined in.