It was merged after they were rightfully ridiculed by the community.

matwojo's awful response to the backlash really takes the cake:

I’ve learned today that you are sensitive to ensuring human readability over any concerns in regard to AI consumption

  • Mikina@programming.dev · 5 days ago

    Having AI not bullshitting will require an entirely different set of algorithms than LLMs, or ML in general. ML by design approximates answers, and you don’t use it for anything that’s deterministic and has a correct answer. So, in that regard, we’re basically at square 0.

    You can keep on slapping a bunch of checks on top of the random text prediction it gives you, but if you have a way of checking whether something is really true for every case imaginable, then you can probably just use that to generate the reply instead, and the checker can’t be something that’s also ML/random.
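    The argument above can be sketched as a toy generate-and-verify loop. Everything here is hypothetical illustration: the "predictor" is just noisy arithmetic standing in for an LLM, and `check_sum` stands in for a deterministic verifier. The point of the sketch is that once a complete verifier like `check_sum` exists, the direct solver makes the predictor redundant.

    ```python
    import random

    def predict_sum(a, b):
        # ML-style approximation: usually right, occasionally off by one.
        return a + b + random.choice([0, 0, 0, 1])

    def check_sum(a, b, answer):
        # Deterministic verifier: true iff the answer is actually correct.
        return answer == a + b

    def predict_with_check(a, b, tries=10):
        # "Slapping checks on top": resample until the verifier accepts.
        for _ in range(tries):
            guess = predict_sum(a, b)
            if check_sum(a, b, guess):
                return guess
        return None  # verifier never accepted within the budget

    def solve_directly(a, b):
        # The commenter's point: a verifier that covers every case
        # already encodes the answer, so just compute it directly.
        return a + b

    print(solve_directly(2, 2))  # 4
    ```

    Whether real-world correctness checks can ever be as total as `check_sum` is exactly what the thread is disputing; the sketch only shows the structure of the argument, not its truth.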

    • FizzyOrange@programming.dev · 4 days ago

      You can’t confidently say that because nobody knows how to solve the bullshitting issue. It might end up being very similar to current LLMs.