  • Well, I think you’re wrong here: just about any adult can learn how to drive, but only a small subset can learn how to code. Not just learning how to throw a simple script together, but real coding.

    Coding is engineer-level work: engineers build cars, they don’t just drive them. For me, the difference between the developer of a piece of software and the user of that software is the same.

    One is way, way more complicated, and AI is supposed to do that “soon” when it can’t even drive a car.

    Nah, not happening any time soon.










  • Thanks!

    IPFS is static, whereas tenfingers is dynamic when it comes to the links. So you can update the shared data without needing to redistribute the link.
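
    To make the difference concrete, here’s a minimal Python sketch (not the actual tenfingers link format; the field names and the push() call are made up). The point is that the link names nodes rather than a content hash, so the data can change while the link stays the same:

    ```python
    # Hypothetical illustration only, not the real tenfingers format.
    # An IPFS-style link is a content hash, so it changes with the data:
    ipfs_style_link = "QmSomeContentHash"

    # A tenfingers-style link instead names the nodes currently sharing it:
    tenfingers_style_link = {
        "name": "my-site",                        # made-up identifier
        "nodes": ["node-a:1600", "node-b:1600"],  # made-up addresses
    }

    def update(link, new_data, push):
        """Push new_data to every sharing node; the link itself never changes."""
        for node in link["nodes"]:
            push(node, link["name"], new_data)    # nodes replace their copy
    ```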

    That said, it’s also very different tech-wise: there is no need for benevolent nodes (or any crypto or payment).

    Nodes do not need to be trustworthy either, so node discovery is very simple (basically just ask other nodes for known nodes).
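
    A minimal sketch of what that discovery loop could look like (ask() is a stand-in for the real wire call, not the tenfingers API):

    ```python
    import random

    def discover(known, ask, want=100, tries=50):
        """Grow the local node list by asking peers which nodes they know.

        known: a set of node addresses we already have.
        ask(addr): hypothetical call returning that node's known-node list.
        """
        known = set(known)
        for _ in range(tries):
            if len(known) >= want or not known:
                break
            peer = random.choice(tuple(known))
            try:
                known |= set(ask(peer))   # merge in their node list
            except OSError:
                known.discard(peer)       # unreachable: just forget it
        return known
    ```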

    The distribution part, where nodes share your data, is based on reciprocal sharing: you share theirs and they share yours. If they stop sharing (there are checks), you just ditch the deal and strike a new one with another node.

    With over-sharing (by default you share your data with 10 other nodes, and share theirs in return), bad nodes should be a non-issue, and you also get good uptime and takedown resistance.
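
    Roughly, the deal-keeping from the last two paragraphs could look like this (a sketch with made-up names; still_shares() and make_deal() stand in for the real checks and handshake):

    ```python
    REPLICAS = 10  # the default over-sharing factor mentioned above

    def audit_deals(deals, still_shares, candidates, make_deal):
        """Keep roughly REPLICAS live reciprocal deals.

        deals: {node_addr: their_data_we_host}  (hypothetical shape)
        still_shares(addr): spot-check that addr still serves our data
        make_deal(addr): propose "you share mine, I share yours";
                         returns their data, or None if they decline
        """
        for addr in list(deals):
            if not still_shares(addr):    # partner stopped sharing our data
                del deals[addr]           # ditch the deal, free the space
        while len(deals) < REPLICAS and candidates:
            addr = candidates.pop()
            their_data = make_deal(addr)
            if their_data is not None:
                deals[addr] = their_data  # new reciprocal deal struck
        return deals
    ```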

    This also makes the system scale indefinitely node-wise, as no node needs to know all the other nodes, just enough for its needs (for example, thousands out of millions of existing nodes).

    To share lots of data, you need to bring enough storage and bandwidth to the table, because it’s reciprocal; so basically it’s up to your node how much it can share.

    Big data sets are always complicated because of errors and long download times. I have done 300 MB files without problems, but the download process could certainly be improved (with parallel downloading, for example, and better error handling).

    I haven’t worked on sharing much bigger datasets; even a single terabyte is a PITA to download on the regular internet :-) and the use case is more about sharing lots of smaller data, like a website or a chat, for example.

    What do you think, am I missing something important? And of course, if you have other questions, please do ask!

    Also, sorry I’m writing this on my mobile so it’s not very well written.

    Edit: missed one question. Getting the data is straightforward (the handling is a bit complicated under the hood because of the changing nature of things), but when you download, you have the addresses of the nodes sharing the data, so you just connect to one of them and download it (or the next one if the first isn’t up, and so on). So that should not be any kind of bottleneck.
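
    In other words, something like this sketch (download() stands in for the real transfer call, not the actual tenfingers API):

    ```python
    def fetch(addresses, download):
        """Try each node that shares the data until one answers.

        addresses: node addresses taken from the link itself.
        download(addr): hypothetical transfer call; raises OSError when
                        the node is down or misbehaving.
        """
        last_err = None
        for addr in addresses:
            try:
                return download(addr)     # first live node wins
            except OSError as err:
                last_err = err            # node not up: try the next one
        raise ConnectionError("no sharing node reachable") from last_err
    ```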