• 3 Posts
  • 30 Comments
Joined 2 years ago
Cake day: July 2nd, 2023




  • Strictly speaking, you cannot make an ISO from an audio CD. Yes, you can make a bin cue file pair as another commenter has suggested. But realistically what you’ve then got is uncompressed wav audio with the metadata in separate files. The only real advantage this gives you is something that theoretically allows you to recreate precisely the original layout of the audio CD, together with the appropriate length of silence in between the tracks, etc.

    When you convert to FLAC there is no loss in audio quality, you use approximately half of the storage space compared to wav, and you can have all of the metadata such as tags and art images embedded in the file itself.

    Bin/cue is not really very useful unless you’re planning to burn it back to a CD and listen to that. For every other use case, it’s better to have files that you can play directly and index directly.
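
    As a sketch of the WAV-to-FLAC step described above (the track filenames and tag values are placeholders; this uses the standard `flac` reference encoder, and echoes each command as a dry run- drop the `echo` to actually encode):

```shell
#!/usr/bin/env bash
# Batch-encode ripped WAV tracks to FLAC with embedded tags.
# Dry run: each command is echoed rather than executed.
set -euo pipefail

for wav in track01.wav track02.wav; do
  out="${wav%.wav}.flac"   # same name, .flac extension
  echo flac --best \
    --tag=ARTIST="Example Artist" \
    --tag=ALBUM="Example Album" \
    --output-name="$out" "$wav"
done
```

    FLAC is lossless, so the encode can be repeated or reversed at any time without quality loss.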






  • Depends what you want to play it on. In my house we have:

    • 3 laptops
    • 2 tablets
    • 2 mobile phones (1 Android, 1 iPhone)
    • TV

    Not all these devices support local storage for music and it’s a pain to sync files between them. With Jellyfin the complete library is in one location with a consistent interface. It can also be made available remotely if I choose.


  • Ok. I missed which sub I was in, sorry. There is a Linux desktop Jellyfin app but I haven’t used it myself. In my own case I am running Jellyfin on Linux. I use various clients, including web browser (laptop), Android and Roku (TV), and find it works really well. In the past I tried the ‘connect directly to the server’ route with XBMC (as Kodi was called then) and it never worked well, with similar issues to those described in other comments.




  • I think you’re missing a key area here. The original Mozilla product was Netscape- a commercial combined web browser and email client. There used to be a number of commercial competitors in the space, e.g. Opera, Eudora, etc. Microsoft killed that market in the 1990s.

    I struggle to see how any organisation could make money out of giving away a product that costs money to produce and promote. You’ve suggested they could have been Proton but that’s a completely different sector. We could just as easily have suggested they could have been Twitter, WhatsApp or Instagram.


  • We’re going to need to know as a minimum:

    • Linux distribution and version
    • Jellyfin install method and version
    • what you have already tried- not sure where all those flags are coming from

    I would also support the comments here recommending that you use Docker. Only a small number of Linux distributions and versions fully support a distribution-package installation of Jellyfin, and even then what you need to do varies across each one. All Linux distributions and versions support Docker, and the process is essentially the same on all of them.
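
    For reference, a minimal docker-compose sketch using the official `jellyfin/jellyfin` image (the host paths are placeholders for your own config/media directories):

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    container_name: jellyfin
    ports:
      - "8096:8096"            # web UI / API
    volumes:
      - ./config:/config       # server configuration
      - ./cache:/cache         # transcode cache
      - /path/to/media:/media:ro
    restart: unless-stopped
```

    This is identical across distributions, which is exactly the point.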


  • Ok, aside from Android, I’ve yet to see any serious usage of SELinux in the real world, and I’ve been working on cloud tech for years. Acknowledged issues such as complexity aside, it’s really just that much less relevant in a modern, single-purpose environment such as Docker/Kubernetes/cloud functions/etc.


  • I feel this and some of the other comments in this thread are missing the point. It’s not about me and my followers. It’s about the news sources and topics that I search for or follow. They simply haven’t moved to Mastodon and where notable individuals that I follow have tried, it simply hasn’t worked out due to lack of interest. I’m not interested in the fediverse as a topic in itself, I’m interested in the topics and events I want to follow. Something happens and I can find and read and watch clips about it on Twitter. Not so Mastodon.



  • SquiffSquiff@lemmy.world to Fediverse@lemmy.world, “Bluesky continues to soar” (34 up / 6 down) · 6 months ago

    I’ve been on Mastodon for over a year and the content simply isn’t there. Several of the people that I follow on Twitter have tried moving or duplicating to Mastodon. They’ve had a fraction of the visibility and engagement from commenters that they would get on Twitter. Invariably after a few months they have essentially given up on it as a primary medium. For me the discoverability is essentially non-existent, which I don’t think is helped by the idea of it being based around instance-local communities, which have no meaning when you’re looking at something like Twitter.


  • GitLab just doesn’t compare in my view:

    To begin with, you have three different major versions to work with:

    • Self-hosted open source
    • SaaS open source
    • Enterprise SaaS

    Each of these has different features and limitations, but all share the same documentation- a recipe for confusion if ever I saw one. Some of what’s documented only applies to the enterprise SaaS as used by GitLab themselves and isn’t available to customers.

    Whilst in theory it should be possible to have a GitLab pipeline equivalent to GitHub Actions, in practice these seem to metastasize in production through the use of includes, making them tens or hundreds of thousands of lines long. Yes, I’m speaking from production experience across multiple organisations. Things that you would think were obvious and straightforward, especially coming from GitHub Actions, seem difficult or impossible. For example:

    I wanted to set up a GitHub Action for a little Golang app: on push to any branch, run tests and make a release build available, retaining artefacts for a week; on merging to main, make a release build available with artefacts retained indefinitely. It took me a couple of hours when I’d never done this before, but it was all more or less as one would expect. I tried to do the equivalent in GitLab free SaaS and gave up after a day and a half- testing and building were okay, but it seems you’re expected to use a third-party artefact store. Yes, you could make the case that this is outside GitLab’s remit, although given that the major competitor supports it, that seems a strange position. In any case you would expect it to be clearly documented; it isn’t, or at least wasn’t six months ago.
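
    A sketch of the workflow described above (app and module names are placeholders; note GitHub caps artefact retention, so ‘indefinite’ in practice means the maximum your plan allows- 90 days here):

```yaml
name: build
on:
  push:

jobs:
  test-and-build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: stable
      - run: go test ./...
      - run: go build -o myapp .
      - uses: actions/upload-artifact@v4
        with:
          name: myapp-${{ github.sha }}
          path: myapp
          # 90 days on main, 7 days on every other branch
          retention-days: ${{ github.ref == 'refs/heads/main' && 90 || 7 }}
```
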




  • Coming from what looks to me like a different perspective to many of the commenters here (disclosure: I am a professional platform engineer):

    If you are already scripting your setups then yes you should absolutely learn/use Ansible. The key reasons are that it is robust, explicit, and repeatable- doesn’t matter whether that’s the same host multiple times or multiple hosts. I have lost count of the number of pet Bash scripts I have encountered in various shops, many of them created by quite talented people. They all had problems. Some typical ones:

    • Most people write bash scripts without dependency checks: ‘Of course everyone will have GNU coreutils installed, it’s part of every Linux distro’- then someone runs the script on a Mac.
    • ‘We need to pass this action out to a command-line tool, that’s obvious’: fails if the command-line tool isn’t available, and errors from the tool aren’t handled unless they’re exactly what’s expected.
    • ‘Of course people will realise that they need to run this from an environment prepared in this exact (undocumented) way’: someone runs the script in a different environment.
    • ‘Of course people will be running this on x86_64/AMD64, all these third-party binaries are available for that’: someone runs it on ARM.
    • ‘Of course people will know what to do if the script fails midway through’: people try to re-run the script after a mid-way failure and it’s a mess.
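
    A minimal sketch of the checks those scripts skip (the `require` helper and tool list are illustrative, not from any particular script):

```shell
#!/usr/bin/env bash
# Fail fast, with a clear message, before doing any real work.
set -euo pipefail

require() {
  command -v "$1" >/dev/null 2>&1 || {
    echo "error: required tool '$1' not found" >&2
    return 1
  }
}

# Check every external dependency up front.
for tool in bash grep sed; do
  require "$tool"
done

# Don't assume GNU userland or x86_64; detect and bail out clearly.
case "$(uname -s)" in
  Linux)  echo "Linux detected" ;;
  Darwin) echo "macOS detected: GNU-specific flags may not work" ;;
  *)      echo "unsupported platform" >&2; exit 1 ;;
esac
```

    Even with all of this, a half-finished run still leaves the host in an unknown state- which is the gap Ansible closes.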

    The thing about Ansible is that it can be modular (if you want) and you can use other people’s code but fundamentally it runs one step at a time. You will know for each step:

    • Are dependencies met?
    • Did that step succeed or fail (in realtime!)?
    • (If it failed) what was the error?
    • (Assuming you have written sane Ansible) you can re-run your playbook at any time to get the ‘same’ result. No worries about being left in an indeterminate state
    • (To an extent) It is self-documenting
    • Host architecture doesn’t really matter
    • Target architecture/OS is specified and clear
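
    A minimal sketch of what that looks like in practice (package names and paths are illustrative; each task reports ok/changed/failed per host, and re-running the playbook is safe):

```yaml
- name: Baseline setup
  hosts: all
  become: true
  tasks:
    - name: Ensure required packages are present
      ansible.builtin.package:
        name:
          - git
          - curl
        state: present

    - name: Deploy a config file (only reports 'changed' if content differs)
      ansible.builtin.copy:
        dest: /etc/example/app.conf
        content: |
          mode=production
        mode: "0644"
```
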