TIL! Yep, that gives the EU exactly what’s needed to suspend them from Schengen.
Hmm. Could they legally kick Hungary out of Schengen without its approval?
Agreements outside of the EU framework - now that is indeed a clever workaround. I seem to recall similar maneuvers during the Greek financial crisis when the UK wouldn’t agree to things.
What I fear is that he’s technically right - because he’ll use Hungary’s position as an EU member to tear up, or otherwise interfere with, EU attempts to fund Ukraine (something he’s not able to do to the US) and do it well enough that Ukraine’s position in its war of self-defense is seriously compromised.
I guess they back each other up. Like archive.is is able to take archives from archive.org, but the saved page reflects the original URL and the original archiving time from the wayback machine (though it also notes the wayback URL it pulled from, plus the time that copy was archived there).
Ah, that makes sense. So the FediDB info seems to be wrong - I wonder if they got confused by cloudflare as per the other comment in https://feddit.org/post/4529920/2993842 ?
Also, is there a way to let them know to update it? I guess someone could report an issue on github…
That confuses me too. I’ve never really understood that. Likewise, /m/news is for US news while world news goes into /m/world, where US news isn’t allowed.
Maybe that’s another reason why folks think it’s US-based - because the magazines are clearly so US-oriented. But I’m not sure how that happened.
On the brain bin for example it’s PoliticsUSA - https://thebrainbin.org/m/PoliticsUSA
The other thing is that I recall that kbin.social exploded and got a huge chunk of the exodus - but now that it’s been effectively dead for half a year, those users mostly seem to have vanished.
A fraction clearly did migrate to other mbin and lemmy instances. It seems like the rest did not return to spez’s site from what I’m hearing (“all the posts I’m seeing there are complaining that only bots are active here”) but I’m not sure where they went. But for example, one person I was following seems to have dropped off entirely from the fediverse and all social media.
Why did you think lemmy.world was US based? It’s fully European.
But that’s probably it - folks assume the instance that’s for the whole world is the US-based one and don’t feel the need to make another major US-based one.
Came here to say that. I wasn’t covered by GDPR under spez’s site - but luckily their policies treated me like I was anyways.
I moved to kbin.social - which was probably the 2nd largest after lemmy.world. Also, it was Polish.
What I liked about that was - as per my understanding - since these are hosted in the EU, the GDPR applies to my data here even if I’m not in the EU myself and am not an EU citizen.
With a TLD like .world you’d think it’s for the whole world, not just Europe (.eu) or a specific country.
feddit.org itself is a bit of a curiosity since the .org doesn’t make it obvious that it is German - but someone posted the full story of how feddit.de fell apart and feddit.org became the successor.
What’s interesting is that the site is currently broken, but in the footer you can still see the most recently created magazines.
Which means the database is still intact, and even if not a full resurrection, we could at least get our data back (I lost a lot of content when kbin.social went down). Just gotta figure out who to contact - which company is actually maintaining or hosting the servers that kbin.social runs on…
What’s your current kbin instance? Curious to see if it’s running mbin now or if it really is the original kbin on there still.
Also, anyone remember kbin.cafe ?
Yeah, that chart needs to be updated. AFAIK no instance is still on kbin, everything has gone to mbin. It’s also missing pyfedi/piefed
As far as I can tell there’s been no communication from him for several months, not since he posted saying he’d turn kbin.social over to a new admin.
But the domain for kbin.social was recently renewed (I posted full details over at https://fedia.io/m/fediverse/t/1403334/Any-updates-on-kbin-social-recently ) which gives me hope that ernest is still around, just a bit more behind the scenes.
Of course, it could also be that the domain was simply auto-renewed (as described in https://www.godaddy.com/en-ca/help/turn-my-domain-auto-renew-on-or-off-41085 ). I think some registrars or services even offer prepayment options for auto-renewing, meaning that ernest might have set this all up before he disappeared, rather than slowly reappearing now…
While this would almost certainly work, it would be nice if the root cause could be discovered and either fixed or worked around. Having to reinstall every time one needs to free up disk space is … less than ideal.
This issue already exists, regardless of the embed server problem. Right now, images posted by users to an instance get sent to that community’s instance and then copied to all instances of all subscribers.
If anything, the embed server provides a potential solution - rather than federate the image directly, simply link to the copy of the image on the embed server. (I’ve done some customized code changes on top of pyfedi to implement this idea there.)
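Roughly what I mean, as a hypothetical sketch (not my actual pyfedi changes - the embed server URL and field names here are made up):

```python
# Hypothetical sketch of "link to the embed server instead of federating the file".
# EMBED_BASE and the attachment fields are placeholders, not real pyfedi code.
from urllib.parse import quote

EMBED_BASE = "https://embed.example.org"  # imaginary shared embed server

def rewrite_attachment(attachment: dict) -> dict:
    """Point subscribers at the embed server's cached copy instead of our file."""
    original_url = attachment["url"]
    return {
        **attachment,
        "url": f"{EMBED_BASE}/image?src={quote(original_url, safe='')}",
        "originalUrl": original_url,  # keep a trail back to the source instance
    }

# e.g. an outgoing image attachment on a post:
print(rewrite_attachment({
    "type": "Image",
    "mediaType": "image/png",
    "url": "https://my.instance.example/media/cat.png",
}))
```

That way the image bytes only ever live on the embed server, and subscribing instances just pass the link around.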
I imagine instance admins would still want to monitor and delete links to CP, but under this idea only the admins of the embed server and their delegates would have the ability to remove CP from the embed server itself. (Should they delegate this ability to other instance admins? Probably only on a case-by-case basis at most.)
Perhaps they could support a reporting function for mods and instance admins though…
You’re not the first to think about this.
See https://aumetra.xyz/posts/the-fedi-ddos-problem - there an embed server is proposed, to be shared by multiple instances (ideally a great many would use just the one), which can host things like image files and previews.
To the surprise of absolutely no one.
This is actually very easy. You can copy the files from the container, even while it’s not running, onto your host system to edit there, and then copy them back afterwards.
See the top answer on https://stackoverflow.com/questions/22907231/how-to-copy-files-from-host-to-docker-container for step by step instructions on how to do this.
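If it helps, the whole flow is basically just two `docker cp` calls around your edit. A minimal sketch (container name and file path are placeholders, and this assumes the docker CLI is on your PATH):

```python
# Rough sketch of the copy-out / edit / copy-back flow described above.
import subprocess

CONTAINER = "my_container"                          # placeholder container name
IN_CONTAINER = f"{CONTAINER}:/etc/app/config.yml"   # placeholder file inside it
ON_HOST = "./config.yml"

# Copy the file out of the container (works even while it's stopped)...
subprocess.run(["docker", "cp", IN_CONTAINER, ON_HOST], check=True)

# ...edit ./config.yml with whatever editor you like on the host, then put it back:
subprocess.run(["docker", "cp", ON_HOST, IN_CONTAINER], check=True)
```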
So I dug into the source code a bit to see how it’s used. It turns out that IPFS might actually be optional, as per the log line at https://github.com/hyprspace/hyprspace/blob/master/p2p/node.go#L213 (“Getting additional peers from IPFS API”)
The list of required bootstrap peers is hardcoded in the same file, but a few lines above, specifically at https://github.com/hyprspace/hyprspace/blob/master/p2p/node.go#L181
I say might be because - while the required bootstrap peers include a bunch based on bootstrap.libp2p.io - there is also a long list of hardcoded IP addresses and I don’t recognize any of them.
So those might be libp2p.io IP addresses, but they might also be IPFS IP addresses, or even belong to someone else altogether. (Edit: There are WHOIS tools online like https://lookup.icann.org/en that can be used to look these up and figure out who they belong to if you are really curious, but I can’t be bothered to do that right now.)
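If anyone is curious enough, a quick first pass is just reverse DNS on those addresses (I haven’t actually run this - the IPs below are placeholders, not the ones from node.go):

```python
# Quick sketch: reverse DNS lookups on the hardcoded bootstrap addresses to get
# a hint of who runs them. The IPs below are placeholders, not from node.go;
# anything without a PTR record would still need a proper WHOIS lookup.
import socket

bootstrap_ips = ["203.0.113.10", "198.51.100.20"]  # placeholder addresses

for ip in bootstrap_ips:
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)
        print(f"{ip} -> {hostname}")
    except (socket.herror, socket.gaierror):
        print(f"{ip} -> no reverse DNS; try https://lookup.icann.org/en instead")
```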
In any case, it looks like the way this works is that from a peer, libp2p tries to look up additional peers, and so on. So at most IPFS would be used as a way to get a listing, but once the desired peer is found, IPFS is cut out of the picture for that particular connection and NAT hole punching is used to establish a direct connection between peers instead (as per the linked Wikipedia article: https://en.wikipedia.org/wiki/Hole_punching_(networking) ).
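For anyone unfamiliar with hole punching, here’s a bare-bones toy illustration of the idea (not hyprspace’s code - it uses libp2p’s own machinery - just a minimal UDP version with a placeholder peer address):

```python
# Toy illustration of UDP hole punching: each side already knows the other's
# public IP:port (from some lookup/rendezvous step) and just keeps sending;
# the outgoing packets open the NAT mapping that lets the peer's packets in.
# PEER_ADDR is a placeholder - this is not how hyprspace actually does it.
import socket
import time

LOCAL_PORT = 4001
PEER_ADDR = ("203.0.113.7", 4001)  # placeholder: the peer's public endpoint

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", LOCAL_PORT))
sock.settimeout(1.0)

for _ in range(10):
    sock.sendto(b"punch", PEER_ADDR)  # opens/refreshes our NAT mapping
    try:
        data, addr = sock.recvfrom(1024)
        print("got a packet straight from", addr, "- direct path established")
        break
    except socket.timeout:
        time.sleep(0.5)
```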