Charlie Fish

Software Engineer (iOS - ForeFlight) 🖥📱, student pilot ✈️, HUGE Colorado Avalanche fan 🥅, entrepreneur (rrainn, Inc.) ⭐️ https://charlie.fish

  • 66 Posts
  • 77 Comments
Joined 2 years ago
Cake day: June 11th, 2023

  • Your instance is the one that federates. However, federation starts with a user subscribing to that content; your instance won't pull anything in on its own without user interaction.

    Normally the solution for the second part is relays, but that isn't something Lemmy supports currently. This issue is very common with smaller instances. It isn't as big of a deal on bigger instances, since their users have likely already subscribed to more communities, which then get federated automatically. You could experiment with creating a user and subscribing it to a bunch of communities so they get federated to your instance (rough sketch of scripting that below).
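
    I haven't scripted this myself, but something along these lines should do it with Lemmy's v3 HTTP API. The instance URL, credentials, and community list are placeholders, and exact endpoints/response shapes may differ between Lemmy versions, so check against your instance:

    ```python
    # Sketch: log in as a "seed" account and follow a list of remote communities
    # so their content starts federating to your instance.
    import requests

    INSTANCE = "https://lemmy.example.com"   # your instance (placeholder)
    COMMUNITIES = [                          # remote communities to seed (placeholders)
        "https://lemmy.world/c/technology",
        "https://lemmy.ml/c/linux",
    ]

    # Log in and grab the JWT for the seed account.
    login = requests.post(f"{INSTANCE}/api/v3/user/login", json={
        "username_or_email": "seed-bot",
        "password": "change-me",
    }).json()
    headers = {"Authorization": f"Bearer {login['jwt']}"}  # 0.19+ style auth header

    for url in COMMUNITIES:
        # Resolving the remote URL makes your instance fetch the community.
        resolved = requests.get(
            f"{INSTANCE}/api/v3/resolve_object",
            params={"q": url},
            headers=headers,
        ).json()
        # Response nesting may vary by version; this matches the v3 shape I've seen.
        community_id = resolved["community"]["community"]["id"]

        # Follow it so new posts/comments keep federating in.
        requests.post(
            f"{INSTANCE}/api/v3/community/follow",
            json={"community_id": community_id, "follow": True},
            headers=headers,
        )
        print(f"Subscribed to {url}")
    ```

    Once the seed account follows a community, new posts and comments in it should start arriving on your instance automatically.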

  • I know I’m not necessarily the target audience for this, but it feels too expensive: 6x the price of Cloudflare R2, and almost 13x the price of Wasabi. Even iCloud storage is $0.99 for 50 GB with a 5 GB free tier. Then again, I have a lot of technical skills that maybe average users don’t have.

    If you ever get around to building an API, and are interested in partnerships, let me know. Maybe there is a possibility for integration into [email protected] 😉.


  • This worked!!! However, it now looks like I have to pass in 32 comments (the batch size) in order to run a prediction in Core ML? Kinda strange when I could pass a single string to TensorFlow to run a prediction on.

    Also, it seems to be much slower than the Create ML model I was playing with: from 0.05 ms on average for the Create ML model to 0.47 ms on average for this TensorFlow model. It also looks like the TensorFlow model is running 100% on the CPU (not taking advantage of the GPU or Neural Engine).

    Obviously there are some major advantages to using TensorFlow (e.g. I can run it in a server environment, and I can better control stopping training early based on that val_accuracy metric). But Create ML seems to really win in other areas: being able to pass in a simple string (without having to worry about tokenization), not having to pass 32 strings into a single prediction, and the performance.

    Maybe I should lower my batch_size? I’ve heard there are pros and cons to lowering or increasing batch_size, and I haven’t played around with it too much yet. (I sketched the conversion tweak I’m thinking about trying below.)

    Am I just missing something in this analysis?

    I really appreciate your help and advice!
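
    For reference, this is roughly the conversion tweak I plan to try: give coremltools an explicit input with a flexible batch dimension instead of the traced batch of 32, and request all compute units. The sequence length, dtype (assuming the model takes int token IDs), and file paths are guesses for my setup:

    ```python
    import numpy as np
    import tensorflow as tf
    import coremltools as ct

    SEQ_LEN = 128  # placeholder: whatever length the tokenizer pads/truncates to

    # Load the trained Keras model (placeholder path).
    keras_model = tf.keras.models.load_model("comment_classifier")

    # Convert with a flexible batch dimension (1..32) instead of the fixed 32
    # that got baked in when the model was traced.
    mlmodel = ct.convert(
        keras_model,
        inputs=[ct.TensorType(
            shape=(ct.RangeDim(1, 32), SEQ_LEN),  # batch can be a single comment at runtime
            dtype=np.int32,
        )],
        compute_units=ct.ComputeUnit.ALL,  # ask Core ML to schedule CPU/GPU/Neural Engine as it sees fit
        convert_to="mlprogram",
    )
    mlmodel.save("CommentClassifier.mlpackage")
    ```

    The training batch_size shouldn’t need to change for this; the fixed 32 at prediction time comes from the input shape traced during conversion, not from how the model was trained.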

  • Charlie Fish to Echo · New update is nice! · 2 points · 2 months ago
    Comment design is on my to-do list for a refresh. I thought the design was going to work, but after using it myself, it doesn’t hit the mark.

    Right now it’s a drawer at the bottom of the post view that you can pull up to comment.

    If you want to reply to a comment, you should be able to swipe the comment from left to right, and that’ll mark it as the one you’re replying to.

    Right now you must be subscribed to Echo+ in order to comment.


  • Charlie Fish to Echo · New update is nice! · 2 points · 2 months ago

    Thanks so much for trying it out! Much much more to come, so stay tuned.

    As for the refresh thing, thanks for the report. It’s on my list to resolve. I’ll add a +1 to that item to bump it up on the priority list. Not quite sure when it’ll be resolved, but hopefully soon.