https://github.com/KerfuffleV2 — various random open source projects.

  • 1 Post
  • 59 Comments
Joined 2 years ago
cake
Cake day: June 11th, 2023

help-circle
  • The timing and similarity highly suggests this is a problem with how almost all software has implemented the webp standard in its image processing software.

    Did you read the article or the post? The point was that both places where the vulnerability was found probably used libwepb. So it’s not that there’s something inherently vulnerable in handling webp, just that they both used the same library which had a vulnerability. (Presumably the article was a little vague about the Apple side because the source wasn’t open/available.)

    given that the programs processing images often have escalated privileges.

    What? That sounds like a really strange thing to say. I guess one could argue it’s technically true because browsers can be considered “a program that processes images” and a browser component can end up in stuff with escalated privileges. That’s kind of a special case though and in general there’s no reason for the vast majority of programs that process images to have special privileges.










  • I hope you’re just looking for interesting responses rather than a definite answer!

    I genuinely wonder if saving a negative number of people would be better overall. Humans, especially ones in developed countries like those privileged enough to be posting about stuff like this are responsible for a lot of negative effects we don’t really like to think about. We benefit from exploiting other people, animals, using resources in unsustainable ways.

    I think even if someone takes a lot of individual steps like going vegan, trying to recycle, minimizing transportation and other consumption, not having children, etc that they’re still not even going to break even with the harmful effects just existing causes.

    If it wasn’t for effects like that I’d probably say 2-3 but in reality I’m not really sure if I truly should save anyone. (By the way, you don’t have to worry about me going out and murdering people.)









  • CenturyLink is absolute garbage. I rented a DSL modem from them. It got fried by lightning so they had to replace it. They sent me a modem that wasn’t compatible with my service. A couple years later, I had another one get zapped. I double checked with not one but two customer service reps to make sure they were sending me a modem that worked with my service. They sent me one that wasn’t compatible with my service. Then they took a few weeks to send me one that actually was compatible. When it got here, it either didn’t work or something else in the wiring was messed up (probably more likely).

    That last part might not have been their fault but I could have known about it 3 weeks sooner. At that point I didn’t have much confidence they’d get it fixed while I still have my youth and good looks. Fortunately a smaller fiber company had just started serving the area and I was able to immediate cancel the CenturyLink service. More than 3 times faster and slightly cheaper as well. Also symmetric upload is pretty nice. CenturyLink is in for a rude awakening as competition appears in places where they previously were the only choice.


  • The problem is not really the LLM itself - it’s how some people are trying to use it.

    This I can definitely agree with.

    ChatGPT cannot discern between instructions from the developer and those from the user

    I don’t know about ChatGPT, but this problem probably isn’t really that hard to deal with. You might already know text gets encoded to token ids. It’s also possible to have special token ids like start of text, end of text, etc. Using those special non-text token ids and appropriate training, instructions can be unambiguously separated from something like text to summarize.

    The bad summary gets circulated around to multiple other sites by users and automated scraping, and now there’s a real mess of misinformation out there.

    Ehh, people do that themselves pretty well too. The LLM possibly is more susceptible to being tricked but people are more likely to just do bad faith stuff deliberately.

    Not really because of this specific problem, but I’m definitely not a fan of auto summaries (and bots that wander the internet auto summarizing stuff no one actually asked them to). I’ve seen plenty of examples where the summary is wrong or misleading without any weird stuff like hidden instructions.


  • Yeah the whole article has me wondering wtf they are expecting from it in the first place.

    They’re expecting that approach will drive clicks. There are a lot of articles like that, exploiting how people don’t really understand LLMs but are also kind of afraid of them. Also a decent way to harvest upvotes.

    Just want to be clear, I think it’s silly freaking out about stuff like in the article. I’m not saying people should really trust them. I’m really interested in the technology, but I don’t really use it for anything except messing around personally. It’s basically like asking random people on the internet except 1) it can’t really get updated based on new information and 2) there’s no counterpoint. The second part is really important, because while random people on the internet can say wrong/misleading stuff, in a forum situation there’s a good chance someone will chime in and say “No, that’s wrong because…” while with the LLM you just get its side.