• 0 Posts
  • 426 Comments
Joined 2 years ago
Cake day: July 3, 2023






  • """

    16 Just then a man came up to Jesus and asked, “Teacher, what good thing must I do to get eternal life?”

    17 “Why do you ask me about what is good?” Jesus replied. “There is only One who is good. If you want to enter life, keep the commandments.”

    18 “Which ones?” he inquired.

    Jesus replied, “‘You shall not murder, you shall not commit adultery, you shall not steal, you shall not give false testimony, 19 honor your father and mother,’[a] and ‘love your neighbor as yourself.’[b]”

    20 “All these I have kept,” the young man said. “What do I still lack?”

    21 Jesus answered, “If you want to be perfect, go, sell your possessions and give to the poor, and you will have treasure in heaven. Then come, follow me.”

    22 When the young man heard this, he went away sad, because he had great wealth.

    """

    Don’t see a lot of rich assholes talking about this part of the bible. (Matthew 19:16-22)







  • This reminds me of the new malware vector that targets “vibe coders”. LLMs tend to hallucinate libraries that don’t exist. Like, it’ll tell you to add, install, and use `jjj_image_proc` or whatever. The vibe coder will then get errors like “that library doesn’t exist” and “can’t call `jjj_image_proc.process()`”.

    But you, a malicious user, could go and create a library named jjj_image_proc and give it a function named process. Vibe coders will then pull down and run your arbitrary code, and that’s kind of game over for them.

    You’d just need to find some commonly hallucinated library names.
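    As a sketch of why that’s game over: the package name and `process()` function here are just the hypothetical hallucinated ones from above, not a real library. Anything at a module’s top level runs the moment the victim imports it:

    ```python
    # Hypothetical squatted package: jjj_image_proc/__init__.py
    # (name and API are the made-up hallucinated ones from above).

    # Top-level statements execute on `import jjj_image_proc` -- this
    # print stands in for arbitrary attacker code (exfiltration, etc.).
    print("attacker code runs at import time")

    def process(path):
        # A plausible-looking stub so the victim's script appears to work.
        return f"processed {path}"
    ```

    The stub even returning something sensible is part of the trick: the vibe coder’s script “works”, so nothing looks wrong.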





  • The conservative mindset seems to be “What’s good for me right now?”. The law is good when it hurts their enemies, and it’s unfair when it hurts them. A policy is good when it benefits them, and bad when it benefits someone they don’t like. They are essentially toddlers. We should treat their ideas as seriously as we’d treat a two year old’s ideas. Yes dear that’s a really interesting idea to replace all the toilets in the building with monster trucks, but we’re not going to do that.


  • Many people have found that using LLMs for coding is a net negative. You end up with sloppy, vulnerable code that you don’t understand. I’m not sure if there have been any rigorous studies about it yet, but it seems very plausible. LLMs are prone to hallucinating, so they’re going to tell you to import libraries that don’t exist, or use parts of the standard library that don’t exist.

    It also opens up a whole new security threat vector of squatting. If LLMs routinely try to install a library from pypi that doesn’t exist, you can create that library and have it do whatever you want. Vibe coders will then run it, and that’s game over for them.
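    A cheap defense is to refuse to install anything an LLM suggests unless it’s on a list you’ve vetted yourself. Minimal sketch; the allowlist and package names are made up for illustration:

    ```python
    # Sketch of a pre-install check: surface any LLM-suggested dependency
    # that you haven't personally vetted. Names here are illustrative.
    VETTED = {"requests", "numpy", "pillow"}

    def needs_review(suggested_deps):
        """Return suggested dependencies that aren't on the vetted list."""
        return sorted(d for d in suggested_deps if d.lower() not in VETTED)

    print(needs_review(["requests", "jjj_image_proc"]))  # ['jjj_image_proc']
    ```

    It doesn’t catch a squatted package you’ve wrongly vetted, but it at least forces a human decision before `pip install` runs on a name a model dreamed up.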

    So yeah, you could “rigorously check” it, but (a) all of us are lazy and aren’t going to do that routinely (like, have you used snapshot tests?), (b) it anchors you around whatever it produced, making it harder to think about other approaches, and (c) it’s often slower overall than just doing a good job from the start.
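    For context on the snapshot-test jab: a snapshot test just compares output against a previously approved blob, so it’s exactly as good as how carefully someone read that blob. Stdlib-only sketch, all names made up:

    ```python
    import json

    def render_report(data):
        # Code under test: serialize deterministically.
        return json.dumps(data, sort_keys=True)

    # The "snapshot": output someone approved earlier. When this fails, the
    # lazy fix is to re-record it without reading the diff -- which is the point.
    SNAPSHOT = '{"status": "ok", "total": 3}'

    assert render_report({"total": 3, "status": "ok"}) == SNAPSHOT
    print("snapshot matched")
    ```

    Same failure mode as “rigorously checking” LLM output: the mechanism exists, but the human review step is where everyone cuts the corner.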

    I imagine there are similar problems with analyzing large amounts of text. It doesn’t really understand anything. To verify it’s correct, you would have to read the whole thing yourself anyway.

    There are probably specialized use cases that are good (I’m told AI is useful for things like protein folding and cancer detection), but those still have experts (I hope) looking at the results.

    To your point, I think people are trying to use these LLMs for things with definite answers, too. Like if I go to google and type in “largest state in the US” it uses AI. This is not a good use case.