• 0 Posts
  • 9 Comments
Joined 1 year ago
Cake day: June 26th, 2023


  • It seems OP wanted to pass the file name to -k, but this parameter takes the password itself and not a filename:

           -k password
               The password to derive the key from. This is for compatibility with previous versions of OpenSSL. Superseded by the -pass argument.
    

    So, as I understand it, the password would not be the first line of /etc/ssl/private/etcBackup.key, but the string /etc/ssl/private/etcBackup.key itself. It seems that -kfile /etc/ssl/private/etcBackup.key or -pass file:/etc/ssl/private/etcBackup.key is what OP wanted to use.
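
    For illustration, a rough sketch of the difference (the cipher, file names, and remaining options here are hypothetical, not OP’s actual command):

        # -k: the argument itself is the password, so the literal path string becomes the password
        openssl enc -aes-256-cbc -salt -in etc.tar.gz -out etc.tar.gz.enc -k /etc/ssl/private/etcBackup.key

        # -kfile (or -pass file:...): the password is read from the first line of the named file
        openssl enc -aes-256-cbc -salt -in etc.tar.gz -out etc.tar.gz.enc -kfile /etc/ssl/private/etcBackup.key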




  • I really need to try out Mercury one day. When we did a project in Prolog at uni, it felt cool, but also incredibly dynamic in a bad way. There were a few times when we misspelled some clause, which would normally be an error, but in our case it just meant falsehood. We then spent waaay too much time searching for these. I can’t help but think that Mercury would be as fun as Prolog, but less annoying.

    I actually use the Bower email client, which is written in Mercury, from time to time.


  • My understanding is that all issues are patched in the mentioned releases; the config flag is not needed for that.

    The config flag has been added because supporting clients with a different endianness is undertested and most people will never use it. So if it is going to generate vulnerabilities, it makes sense to be able to disable it easily, and to disable it by default in the next major release. Indeed, XWayland already had it disabled by default, so only the fourth issue (ProcRenderAddGlyphs) is relevant there, as long as that default is not changed.



  • Edit: Actually, I thought about it, and I don’t think clang’s behavior is wrong in the examples he cites. Basically, you’re using an uninitialized variable, and choosing to use compiler settings which make that legal, and the compiler is saying “Okay, you didn’t give me a value for this variable, so I’m just going to pick one that’s convenient for me and do my optimizations according to the value I picked.” Is that the best thing for it to do? Maybe not; it certainly violates the principle of least surprise. But, it’s hard for me to say it’s the compiler’s fault that you constructed a program that does something surprising when uninitialized variables you’re using happen to have certain values.

    You got it correct in this edit. But the important part is that gcc will also do this, and both compilers are more or less expected to do so. The article cites some standards committee discussions: somebody suggested guaranteeing that signed integer overflow in C++20 would not be UB, and the committee decided against it. Similarly, somebody proposed, around 13 years ago, disallowing the optimization that removes infinite loops, and the committee decided it should stay allowed. So these optimisations are clearly seen as features.

    And these are not theoretical issues by any means; for instance, there was this vulnerability in the kernel: https://lwn.net/Articles/342330/, which happened because the compiler just removed a null pointer check. A rough sketch of that pattern is below.
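
    To be clear, this is a minimal, made-up sketch of the pattern described in that LWN article, not the actual kernel code (the struct and function names are hypothetical):

        /* Sketch: dereferencing a pointer before checking it lets the
         * compiler assume it is non-NULL and drop the later check. */
        struct conf { int flags; };

        int get_flags(struct conf *c)
        {
            int flags = c->flags;   /* dereference happens first...            */
            if (!c)                 /* ...so the compiler may treat this check */
                return -1;          /*    as dead code and remove it entirely. */
            return flags;
        }

    If page 0 can be mapped (as it could on the affected kernels), the removed check turns a NULL dereference into something exploitable rather than a plain crash.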


  • I’m super conflicted about this article. The portion on disabilities is great! But then, we see this:

    It’s considered an ‘AI-complete’ problem, something that would require computers that are as fully complex as, and functionally equivalent to, human beings. (Which about five minutes ago was precisely what the term ‘artificial intelligence’ meant, but since tech companies managed to dumb down and rebrand ‘AI’ to mean “anything utilizing a machine-learning algorithm”, the resulting terminology vacuum necessitated a new coinage, so now we have to call machine cognition of human-level complexity ‘AGI’, for ‘artificial general intelligence’.)

    This is honestly the first part that’s outright objectively wrong. A quick look at the Wiki will tell us that the term AGI was already used in 1997, for example. You can’t say that it was made up by tech companies about five minutes ago. And the author returns to this “rebranding” later in the article, so you can’t just brush this away as a misguided aside; it’s just clear that the author does not really know anything about AI, yet is willing to write an article about it. Mix this with the snarky tone, and it just gets very sad.

    It’s not that I don’t agree with what they say about AI either, and I definitely agree with the big conclusions; it’s not like there are no people with a similar opinion who know more about AI (Gary Marcus, for instance); the comparison to disabilities is the novel part (to me). But I just couldn’t share this article with anyone. As I am writing this, the top comment on [email protected] is criticizing the same part of the article, except in less nice words. I don’t think that the person who wrote that comment will learn anything helpful about disabilities from this article…