Usually, when you open a website, that site may be pulling live data from somewhere, but it's from a database on the same server. If you click a Fediverse link, and no one else from your instance has already done so, it seems like your instance has to contact a remote site, pull the data, and render it, all in the same timeframe it would have to do so with local data.
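The fetch-on-first-view behaviour described above can be sketched roughly like this (the function and cache names are made up for illustration, not actual Lemmy internals):

```python
import json
import urllib.request

local_cache = {}  # stands in for the instance's own database


def resolve(object_url):
    """Return a federated object, contacting the remote server only on first access."""
    if object_url in local_cache:       # someone on this instance saw it already:
        return local_cache[object_url]  # served at local-database speed
    # First viewer pays the remote round-trip while the page loads.
    req = urllib.request.Request(
        object_url,
        headers={"Accept": "application/activity+json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        obj = json.load(resp)
    local_cache[object_url] = obj       # later clicks are fast again
    return obj
```

So the slow path is only the first click per instance; after that the object is local.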
To illustrate with some possibly-new-to-you examples:
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
What’s your experience like clicking these? Does it go through first time?
I realize there'll be people for whom these work first time, no problem, and they'll wonder what I'm complaining about. I'm not really complaining about anything; I'm just wondering if my instinctive reaction has any validity.
My main concern is the long-term cost of compute and storage. These instances aren’t going to be free, and hopefully we can build a funding model that works.
Especially since (correct me if I'm wrong) every instance holds all of the data for all of the other instances it's federated with?
This means there is an insane amount of redundancy, no? With hundreds or thousands of servers the cost would eventually become prohibitive, the network would have to rely on only a select few large servers, and thus Lemmy wouldn't 'solve' the issue it tries to in that sense.
Or maybe it's only the bandwidth that becomes an issue and the data storage is actually minimal. If that's the case, I can see more how a smaller server could afford to be part of the ecosystem. Perhaps also, down the line if not already, there could be a cut-off point for historical data to avoid bloat.
Just the text, I think. It's not nothing, but if you upload an image to your instance as part of a post, only the text is copied to my instance, along with just a link to the image, so it could be worse.
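A rough sketch of what gets duplicated, using a simplified ActivityPub-style Note (the field names follow ActivityStreams conventions, but the payload here is purely illustrative):

```python
import json

# Simplified federated post: the text travels in "content" and is
# duplicated on every receiving instance, but the image itself stays
# on the origin server -- only its URL is copied.
note = {
    "type": "Note",
    "content": "<p>Here is my cat photo</p>",  # full text, duplicated
    "attachment": [{
        "type": "Image",
        "url": "https://origin.example/media/cat.jpg",  # link only
    }],
}

# What the receiving instance actually stores for this post:
copied_bytes = len(json.dumps(note).encode())
```

With these illustrative numbers, the receiving instance stores a couple hundred bytes of text and metadata rather than a multi-megabyte image.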
Ahhh, ok, that makes far more sense then. Text alone isn't too bad, especially if there are some optimisations available along the way.
To put this into perspective: Wikipedia, text only, is under 100 GB uncompressed.
Wikipedia isn't a social platform. I suspect that their text growth was log(n) or something of the like: the only new text comes from articles that are literally new, or from updates. Lemmy has no cap there. The amount of new text will grow in some proportion to the user base: the more users and more instances, the more text. To say nothing of duplication from cross-posting when you get wonky cuts in the federation connections.
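A back-of-envelope illustration of that linear growth (every number here is an assumption for the sake of argument, not a measurement):

```python
# Purely illustrative assumptions about a Lemmy-like network.
users = 100_000
comments_per_user_per_day = 5
avg_comment_bytes = 500  # text plus a bit of metadata overhead

daily_bytes = users * comments_per_user_per_day * avg_comment_bytes
yearly_gb = daily_bytes * 365 / 1e9

# With these assumptions, 100k users produce roughly 91 GB of text a year,
# and doubling the user base doubles it: growth is linear in users,
# unlike Wikipedia's roughly logarithmic curve.
```

Even if each individual number is off, the shape of the curve is the point: text storage scales with activity, not with some fixed corpus size.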
None of this is free and it’s going to be a problem if Lemmy grows.
Not all the data, AFAIK, but all the data for subs that its users are subscribed to.