
  • You say that it is sorted in the order of most significant, so for a date it is more significant whether it happened in 1024, 2024, or 9024?

    Most significant to least significant digit has a strict mathematical definition, one that you don’t seem to be following, and it applies to all numbers, not just numerical representations of dates.

    And most importantly, the YYYY-MM-DD format is extensible into hh:mm:ss too, within the same schema, out to the level of precision appropriate for the context. I can identify a specific year when the month doesn’t matter, a specific month when the day doesn’t matter, a specific day when the hour doesn’t matter, and on down to minutes, seconds, and decimal portions of seconds to whatever precision I’d like.
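    A quick way to see the consequence of that ordering: ISO-8601-style timestamps sort chronologically with a plain lexicographic string sort, no date parsing needed. A minimal sketch in Python (timestamps invented for illustration):

    ```python
    # A plain string sort on YYYY-MM-DD hh:mm:ss values is also a
    # chronological sort, because every digit appears in order of
    # significance (illustrative values only).
    stamps = [
        "2024-07-05 09:30:00",
        "1024-12-31 23:59:59",
        "2024-07-05 09:29:59",
        "9024-01-01 00:00:00",
    ]
    print(sorted(stamps))
    # ['1024-12-31 23:59:59', '2024-07-05 09:29:59',
    #  '2024-07-05 09:30:00', '9024-01-01 00:00:00']
    ```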


  • Sometimes the identity of the messenger is important.

    Twitter was super easy to set up with the API to periodically tweet the output of some automated script: a weather forecast, a public safety alert, an air quality alert, a traffic advisory, a sports score, a news headline, etc.
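    Back when the v1.1 API was open, the whole bot could be a few lines run on a schedule. A rough sketch of how it looked with the tweepy library (all credentials and the message are placeholders, and this era of the API is gone):

    ```python
    import tweepy

    # Historical sketch: post a scripted alert via Twitter's old v1.1 API.
    # Every credential here is a placeholder.
    auth = tweepy.OAuth1UserHandler(
        "CONSUMER_KEY", "CONSUMER_SECRET", "ACCESS_TOKEN", "ACCESS_SECRET"
    )
    api = tweepy.API(auth)
    api.update_status("Air quality alert: AQI 152 (unhealthy for sensitive groups).")
    ```

    Cron handled the “periodically” part; the account’s verified identity did the rest.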

    These are the types of messages that you’d want to subscribe to the actual identity, and maybe even be able to forward to others (aka retweeting) without compromising the identity verification inherent in the system.

    Twitter was an important service, and that’s why there are so many contenders trying to replace at least part of the experience.


  • This isn’t exactly what you asked, but our URI/URL schema is basically a bunch of missed opportunities, and I wish it were better designed.

    Ok so it starts off with the scheme name, which makes sense. http: or ftp: or even tel:

    But then it goes into the domain name system, which suffers from the problem that the hierarchy runs right to left: root, then top-level domain, then domain, then progressively smaller subdomains. www.example.com requires the system to consult the root zone to see who manages the .com TLD, then who owns example.com, then look up the www subdomain. Then, if a port number needs to be specified, it goes after the domain name, right next to the implied root domain. Then the rest of the URL, by default, goes left to right in decreasing order of significance. It’s just a weird mismatch, and it would make a ton more sense if it were all left to right, including the domain name.
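    To make the mismatch concrete, here’s a hypothetical Python sketch (using an invented notation no resolver actually understands) of what a consistently left-to-right URL might look like:

    ```python
    from urllib.parse import urlsplit

    def big_endian_url(url: str) -> str:
        """Rewrite a URL so the host reads most-significant-label first.

        Purely hypothetical notation to illustrate the ordering
        argument; nothing on the real internet resolves this form.
        """
        parts = urlsplit(url)
        host = ".".join(reversed(parts.hostname.split(".")))
        port = f":{parts.port}" if parts.port else ""
        return f"{parts.scheme}://{host}{port}{parts.path}"

    print(big_endian_url("http://www.example.com:8080/docs/page"))
    # http://com.example.www:8080/docs/page
    ```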

    Then don’t get me started on how the www subdomain itself no longer makes sense. I get that the system was designed long before HTTP and the WWW took over the internet as basically the default, but if we had known that in advance it would’ve made sense not to push www in front of all website domains throughout the ’90s and early 2000s.


  • Your day-to-day use isn’t everyone else’s. We use times for a lot more than “I wonder what day it is today.” When it comes to recording events, or planning future events, pretty much everyone needs to include the year. And YYYY-MM-DD presents every digit exactly in order of significance, so the impact of getting a single digit wrong matches that digit’s place.

    And no matter what, the first digit of a two-digit day or two-digit month is still more significant in a mathematical sense, even if you think that you’re more likely to need the day or the month. The 15th of May is only one digit off of the 5th of May, but that first digit in a DD/MM format is more significant in a mathematical sense and less likely to change on a day to day basis.
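    To put numbers on that (a trivial sketch; the dates are arbitrary):

    ```python
    from datetime import date

    # In DD/MM, the first day digit carries ten days; the second carries one.
    print((date(2024, 5, 15) - date(2024, 5, 5)).days)  # 10 -> first digit changed
    print((date(2024, 5, 6) - date(2024, 5, 5)).days)   # 1  -> second digit changed
    ```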


  • Functionally speaking, I don’t see this as a significant issue.

    JPEG quality settings can run a pretty wide gamut, and obviously wouldn’t be immediately apparent without viewing the file and analyzing the metadata. But if we’re looking at metadata, JPEG XL reports that stuff, too.

    Of course, the metadata might only report the most recent conversion, but that’s still a problem with all image formats, where conversion between GIF/PNG/JPG, or even edits to JPGs, would likely create lots of artifacts even if the last step happens to be lossless.

    You’re right that we should ensure that the metadata accurately describes whether an image has ever been encoded in a lossy manner, though. It’s especially important for things like medical scans, where every pixel matters and needs to be trusted as coming from the sensor rather than being an artifact of the encoding process, to eliminate some types of error. That’s why I’m hopeful that a full JXL-based workflow for those images will preserve the details when necessary, and give fewer opportunities for that type of silent/unknown loss of data to occur.
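    On the metadata point: for plain JPEGs you can already get a rough sense of how lossy the last encode was by inspecting the quantization tables, even though the original quality setting isn’t stored anywhere in the file. A sketch using the Pillow library (the file name is a placeholder):

    ```python
    from PIL import Image

    # A JPEG's quantization tables show how coarsely the DCT coefficients
    # were quantized; larger steps mean a lossier encode. The exact
    # "quality" slider value isn't recorded in the file itself.
    with Image.open("scan.jpg") as im:  # placeholder path; must be a JPEG
        for table_id, table in im.quantization.items():
            print(f"table {table_id}: max quantization step {max(table)}")
    ```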


    • Existing JPEG files (which are the vast, vast majority of images currently on the web and in people’s own libraries/catalogs) can be losslessly compressed even further with zero loss of quality. This alone means that there are benefits to adoption, if nothing else for archival and serving old stuff (see the sketch after this list).
    • JPEG XL encoding and decoding is much, much faster than pretty much any other format.
    • The format works for both lossy and lossless compression, depending on the use case and need. Photographs can be encoded in a lossy way much more efficiently than JPEG and things like screenshots can be losslessly encoded more efficiently than PNG.
    • The format anticipates being useful for both screens and print. WebP, HEIF, and AVIF are all optimized for screen resolutions, and fall short at the truly high resolutions appropriate for prints. The JPEG XL format isn’t ready to replace camera RAW files, but there’s room in the spec to accommodate that use case, too.

    It’s great and should be adopted everywhere, to replace every raster format from JPEG photographs to animated GIFs (or the more modern live photos format with full color depth in moving pictures) to PNGs to scanned TIFFs with zero compression/loss.
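    As a sketch of how the lossless-JPEG and lossy/lossless points look in practice with the reference libjxl tooling, invoked here from Python (flags as I understand the cjxl encoder; file names are placeholders, and it’s worth checking cjxl --help on your version):

    ```python
    import subprocess

    # Losslessly transcode an existing JPEG into JPEG XL; cjxl's JPEG
    # transcoding mode lets the original JPEG bytes be reconstructed exactly.
    subprocess.run(["cjxl", "photo.jpg", "photo.jxl", "--lossless_jpeg=1"], check=True)

    # Mathematically lossless encode (distance 0), e.g. for screenshots.
    subprocess.run(["cjxl", "screenshot.png", "screenshot.jxl", "-d", "0"], check=True)

    # Lossy photographic encode; distance ~1.0 targets visually lossless.
    subprocess.run(["cjxl", "photo.png", "photo_lossy.jxl", "-d", "1.0"], check=True)
    ```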



  • On the other extreme, 24/7 operations have redundancy.

    A friend of mine explained that being an Emergency Medicine physician is a great job for work life balance, despite the fact that he often has to work ridiculous shifts, because he never has to take any work home with him. An Emergency Room is a 24/7 operation, so whenever he’s at home, some other doctor is responsible for whatever happens. So he gets to relax and never think about work when he’s not at work and not on call.


  • This is wrong, because you’re talking about disability insurance in a comment thread about disability discrimination.

    Disability is very broadly defined for the purpose of disability discrimination laws, which is the context of this comment chain.

    Disability is defined specific to a person’s work skills for the purpose of long-term disability insurance (like the US’s federally administered Social Security disability insurance). Depending on the program/insurance type, it might require that you be unable to hold down any meaningful job because of a medical condition that lasts longer than a year.

    For things like short-term disability, the disability is defined specific to that person’s preexisting job. Someone who has Achilles surgery that prevents them from operating the pedals of a motor vehicle for a few weeks would be “disabled” for the purpose of short-term disability insurance if they’re a truck driver, but might not be disabled at all if their day job is something like telemarketing, where they sit at a desk.



  • I think that it’s foolish to concentrate people and activity there even further; it defeats the point of a federation.

    It defeats some of the points of federation, but there are still a lot of reasons why federation is still worth doing even if there’s essentially one dominant provider. Not least of which is that sometimes the dominant provider does get displaced over time. We’ve seen it happen with email a few times, where the dominant provider loses market share to upstarts, one of whom becomes the new dominant provider in some specific use case (enterprise vs consumer, mobile vs desktop vs automation/scripting, differences by nation or language), and where the federation between those still allows the systems to communicate with each other.

    Applied to Lemmy/kbin/mbin and other forum-like social link aggregators, I could see LW being dominant in the English-speaking, American side of things, but with robust options outside of English language or communities physically located outside of North America. And we’ll all still be able to interact.


  • For my personal devices:

    • Microsoft products from MS-DOS 6.x or so through Windows Vista
    • Ubuntu 6.06 through maybe 9.04 or so
    • Arch Linux from 2009 through 2015
    • MacOS from 2011 through current
    • Arch Linux from 2022 through current

    I’ve worked with work systems that used Red Hat and Ubuntu back in the late 2000s, plus decades of work computers with Windows. But I’m no longer in a technical career field, so I haven’t kept on top of the latest and greatest.



  • I’m still a skeptic of the Nova system’s 4 categories (1: unprocessed or minimally processed, 2: processed culinary ingredients, 3: processed foods, 4: ultra-processed foods), because it’s simultaneously an oversimplification and a complication. It’s an oversimplification because “processing” covers such a broad range of things one can do to food that the label isn’t itself all that informative, and it’s a complication in that experts struggle to classify actual prepared dishes as they’re eaten (homemade or otherwise).

    So the line-drawing between regular processed food and ultra-processed food is a bit counterintuitive, and a bit inconsistent between studies. Guided by the definitions, experts struggle to place unsweetened yogurt into Nova 1 (minimally processed), 2 (processed culinary ingredients), 3 (processed food), or 4 (ultra-processed food). As it turns out, experts aren’t very consistent in classifying foods, which introduces inconsistency in the studies investigating the differences. Bread, cheese, and pickles in particular are a challenge.

    And if the whole premise is that practical nutrition is more than just a list of ingredients, then you have to handle the fact that merely mixing ingredients in your own kitchen might make a food that’s more than the sum of its parts. Adding salt and oil catapults pretty much any dish to category 3, so does that mean my salad becomes a processed food when I season it? Doesn’t that still make it different from French fries (category 3 if I make them myself, probably, unless you count refined oil as category 4 ultra-processed, at which point my salad should probably be ultra-processed too)? At that point, how useful is the category?

    So even someone like me, who does believe that nutrition is so much more than linear relationships between ingredients and nutrients, and is wary of global food conglomerates, isn’t ready to run into the arms of the Nova system. I see that as a fundamentally flawed solution to what I agree is a problem.





  • I don’t think the First Amendment would ever require the government to host private speech. The rule is basically that if you host private speech, you can’t discriminate by viewpoint (and you’re limited in your ability to discriminate by content). Even so, you can always regulate time, place, and manner in a content-neutral way.

    The easiest way to do it is simply to follow one of the suggestions in the linked article, and only permit government users and government servers to federate inbound, so that the government-hosted servers never have to host anything private, while still fulfilling the general purpose of publishing public government communications, for everyone else to host and republish on their own servers if they so choose.
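    I don’t think any existing fediverse server implements exactly that out of the box, but the policy itself is simple. A hypothetical sketch of the inbound check (all domain and function names invented for illustration):

    ```python
    from urllib.parse import urlsplit

    # Hypothetical policy for a government-run ActivityPub server: accept
    # inbound activities only from allowlisted government domains, so the
    # server never hosts private speech. All names here are invented.
    GOV_DOMAINS = {"agency.example.gov", "city.example.gov"}

    def accept_inbound(actor_uri: str) -> bool:
        """Return True only for actors on an allowlisted government domain."""
        host = urlsplit(actor_uri).hostname or ""
        return host in GOV_DOMAINS

    # Outbound stays open: anyone can still fetch and rehost the public posts.
    assert accept_inbound("https://agency.example.gov/users/weather-bot")
    assert not accept_inbound("https://social.example.com/users/someone")
    ```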