Every new signal defies explanation for a little while, until it’s explained.
Visa Gives People a New Reason to Not Use Visa
Okay, I misinterpreted your comment.
Having a backup at a cloud provider is fine, as long as there is at least one other backup that isn’t with this provider.
Cloud providers do a good job of protecting against hardware failure, but handle arbitrary account bans poorly and sometimes have mishaps due to configuration problems.
Whereas a DIY backup solution is often more subject to hardware problems (disk failure, fire, flooding, theft, …), but there’s no risk of account problems.
A mix is fine to protect against different kinds of issues.
That would indeed be a good backup strategy, but it’s better to be specific. “Offsite” may be interpreted in different ways.
They had backups at multiple locations, and lost data at multiple (Google Cloud) locations because of the account deletion.
They restored from backups stored at another provider. It could have been far more devastating if they had relied exclusively on Google for backups. So having an “offsite backup” isn’t enough in some cases; that offsite location needs to be at a different provider.
Wrong choices happen when useful historical data is deleted for short-term cost savings.
Wrong choices also happen when unnecessary data is created, such as logging and storing everything at a verbose level, just in case.
Storage can be cheap in some cases, but high-availability, high-performance cloud storage is very expensive. Either way, it’s not infinite.
The way to keep useful data is to be strategic and only store relevant logs. Fine-tune the retention policy, especially for the fastest-growing data. Storing everything on high-cost storage, without a smart retention policy, could lead to deleting git data to make room for a mix of debug logs and random shit.
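To make the idea concrete, a tiered retention policy could look something like this (the categories and retention windows here are made up for illustration, not anyone’s actual policy):

```python
# Hypothetical retention windows per log category, in days.
# Fast-growing debug logs get the shortest window; audit data
# (often compliance-relevant) is kept the longest.
RETENTION_DAYS = {
    "debug": 7,
    "access": 30,
    "audit": 365,
}

DEFAULT_RETENTION = 30  # fallback for uncategorized logs

def should_delete(category: str, age_days: int) -> bool:
    """Return True if a log of this category and age is past its retention window."""
    return age_days > RETENTION_DAYS.get(category, DEFAULT_RETENTION)

# Example: week-old debug logs go, three-month-old audit logs stay.
print(should_delete("debug", 10))   # True
print(should_delete("audit", 100))  # False
```

The point is just that deletion is driven by an explicit per-category policy rather than by whichever data happens to fill the disk first.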
She probably finds it silly, but does these silly thumbnails anyway because that’s what works with the YouTube algo.
Considering it’s been operating without a backup since 1982, we’re incredibly lucky that the system hadn’t failed until now.
The manufacturer deserves some credit too.
This seems like good news for RocketStar since they’re doing a press release, although it’s hard to understand the significance.
A news article would be even more interesting, in order to have more context and a more objective coverage.
Yes it’s hard, which I acknowledged by saying if they have ambitions, go big and recycle materials in space. But you make it sound like it’s nearly impossible, which I doubt.
We know how to keep air in space stations and capsules, without involving force fields or any other sci-fi tech.
For sure, building in space is different from building in Earth gravity, but that doesn’t necessarily make it impossible. There have already been experiments and small-scale demonstrations in space:
Another example is a microgravity extrusion experiment aboard the ISS from 2021 to 2023.
I assume it’s easier to start by building small parts, and progressively build larger parts, until hopefully we’re able to build most ship parts. The assembly can presumably happen in the vacuum of space, without air. There’s potential for ultimately building ships in orbit larger than anything we could lift with a rocket.
Is it worth trying to land such a large rocket/ship when a small capsule does the job? Is it possible at all?
I get that SpaceX aims for re-usability, but if they have ambitions, go big and recycle materials in space to build space parts/ships/stations in-situ.
Rewrite the application to be less greedy in the number of requests it submits to the server, and make (better) use of caching. That’ll probably lower the number of concurrent requests that have to be handled.
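Roughly what I mean by caching, as a sketch (the fetch function and TTL here are placeholders, not a specific library’s API): a small client-side cache with a time-to-live, so repeated requests for the same resource within the window never reach the server.

```python
import time

class TTLCache:
    """Wrap a request function so repeated calls for the same URL
    within `ttl_seconds` are served from memory instead of the network."""

    def __init__(self, fetch, ttl_seconds=60):
        self.fetch = fetch        # function performing the real request
        self.ttl = ttl_seconds
        self.store = {}           # url -> (timestamp, response)

    def get(self, url):
        now = time.time()
        hit = self.store.get(url)
        if hit is not None and now - hit[0] < self.ttl:
            return hit[1]         # fresh cached copy: no request sent
        response = self.fetch(url)
        self.store[url] = (now, response)
        return response

# Example with a fake fetch function that counts real requests.
calls = []
def fake_fetch(url):
    calls.append(url)
    return f"response for {url}"

cache = TTLCache(fake_fetch, ttl_seconds=60)
cache.get("https://example.com/feed")
cache.get("https://example.com/feed")  # served from cache
print(len(calls))  # 1
```

Even a short TTL like this collapses bursts of identical requests into one, which is usually where the concurrency pressure comes from.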
Trying to think what Saudi Arabia will look like in 10-20 years. I’m betting on a slightly hotter desert, but without working AC.
Even a Real Time Operating System cannot guarantee serial/network input will arrive in time.
Is this for an open-source software project, and if so, can you tell us more about the project?
If that’s for a work or university project, you should share salary and/or credit with whoever is going to give you a solution.
It’s much better for public health to destroy this wine than to have people drink a larger amount.
I wish they could produce more grape juice instead and find buyers for it.
Good point, that’s another difference between the two. Although you can probably achieve the same result with both.
Not depending on the cloud processing your data is more important in my opinion.
… and Python that actually gets executed on your machine, not someone else’s machine (i.e. the cloud).
Apparently not yet; astronomers are still waiting for the signal to repeat so they can study it properly.
For now there are just guesses. If such a burst isn’t a fluke and repeats, astronomers will get a chance to study it better and provide a confident explanation.