• rufus@discuss.tchncs.de · 10 months ago

    Yeah, I don’t want to be negative, but half the article is a bit stupid. I hope they don’t do that. I tried writing a murder mystery story and ChatGPT would lecture me about how killing people is immoral instead of helping. It’s ridiculous, and I’m sure there are lots of similar examples. It’s neither possible to achieve this 100% nor is it useful.

    Thinking it through properly: AI is a tool. It would be like redesigning a knife so nobody can be stabbed anymore. You’d end up not being able to cut pineapples or melons any more.

    And I could still harm people with tools other than a knife. Or, staying with the example: I can give harmful advice or write a pornographic story myself. What’s the benefit of forcing every chatbot maker to implement protections? And who decides which morals are the correct ones?

    I think the correct approach is to study AI safety, be transparent about the ethics involved, and make the models controllable. Let users constrain/restrict or guide the output to align with their use-case. A company that replaces its helpdesk with AI will obviously want the chatbot not to tell its clients lewd stories, but the same output could be a perfectly valid use-case for someone else. And giving advice or helping with scenarios or computer code also involves talking about problems and potential risks. You can’t entirely switch that off without ‘lobotomizing’ the AI and making it unusable for anything beyond casual talk.
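    Rough sketch of what I mean by constraining output per use-case (assuming the OpenAI Python client; the company name and prompt wording are made up for illustration): the operator decides the policy, not the model vendor.

    ```python
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Hypothetical helpdesk deployment: the operator, not the model vendor,
    # restricts what the assistant may talk about via the system prompt.
    system_prompt = (
        "You are the support assistant for ExampleCorp. "
        "Only answer questions about ExampleCorp products and billing. "
        "Politely refuse anything off-topic."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": "Tell me a lewd story."},
        ],
    )
    print(response.choices[0].message.content)
    ```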

    And the article is a bit inconsistent. First they say researchers found an attack that keeps working even after developers patch it, and then the solution they offer is… to patch it?!