• 0 Posts
  • 26 Comments
Joined 2 years ago
Cake day: May 29th, 2024


  • The thing about this perspective is that I think it’s actually overly positive about LLMs, as it frames them as just the latest in a long line of automations.

    Not all automations are created equal. For example, compare using a typewriter to using a text editor. Besides a few details about the ink ribbon and movement mechanisms, you really haven’t lost much in the transition. This is despite the fact that the text editor can be highly automated with scripts and hotkeys, allowing you to manipulate even thousands of pages of text at once in certain ways. Using a text editor certainly won’t make you forget how to write the way using ChatGPT will.

    I think the difference lies in the relationship between the person and the machine. To paraphrase Cathode Ray Dude, people who are good at using computers deduce the internal state of the machine, mirror (a subset of) that state as a mental model, and use that to plan out their actions to get the desired result. People that aren’t good at using computers generally don’t do this, and might not even know how you would start trying to.

    For years ‘user friendly’ software design has catered to that second group, as they are both the largest contingent of users and the ones that need the most help. To do this, software vendors have generally done two things: try to move the necessary mental processes from the user’s brain into the computer, and hide the computer’s internal state (so that it’s not implied that the user has to understand it, so that a user who doesn’t know what they’re doing won’t do something they’ll regret, etc). Unfortunately this drives that first group of people up the wall. Not only does hiding the internal state of the computer make it harder to deduce, every “smart” feature they add to try to move this mental process into the computer itself only makes the internal state more complex and harder to model.

    Many people assume that if this is the way you think about software you are just an elitist gatekeeper, and you only want your group to be able to use computers. Or you might even be accused of ableism. But the real reason is what I described above, even if it’s not usually articulated in that way.

    Now, I am of the opinion that the ‘mirroring the internal state’ method of thinking is the superior way to interact with machines, and the approach to user friendliness I described has actually done a lot of harm to our relationship with computers at a societal level. (This is an opinion I suspect many people here would agree with.) And yet that does not mean that I think computers should be difficult to use. Quite the opposite, I think that modern computers are too complicated, and that in an ideal world their internal states and abstractions would be much simpler and more elegant, but no less powerful. (Elaborating on that would make this comment even longer though.) Nor do I think that computers shouldn’t be accessible to people with different levels of ability. But just as a random person in a store shouldn’t grab a wheelchair user’s chair handles and start pushing them around, neither should Windows (for example) start changing your settings on updates without asking.

    Anyway, all of this is to say that I think LLMs are basically the ultimate in that approach to ‘user friendliness’. They try to move more of your thought process into the machine than ever before, their internal state is more complex than ever before, and it is also more opaque than ever before. They also reflect certain values endemic to the corporate system that produced them: that the appearance of activity is more important than the correctness or efficacy of that activity. (That is, again, a whole other comment though.) The result is that they are extremely mind-numbing, in the literal sense of the phrase.


  • The absolute epitome of non-AI slop has got to be these creepy videos that were on YouTube back in ~2017:

    https://en.wikipedia.org/wiki/Elsagate

    It’s exactly the kind of thing you’d expect to be the product of AI, but it actually predates generative AI. I think a lot of it was procedurally generated though, using scripts to control 3D software and editing software, so different character models could be used in the same scenes and different scenes could be strung together to make each video.

    I think a similar thing happens with those shovelware Android games. There’s so many that are just the same game with (incredibly poorly done) asset swaps that I think they must just make a game once and then automatically generate a thousand+ variations on it.
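    A sketch of how that kind of combinatorial asset-swap pipeline might work (the asset names and categories here are entirely hypothetical, just to show how a single template fans out into hundreds of “games”):

```python
import itertools

# Hypothetical asset pools; a real shovelware pipeline would point at
# actual model/texture/sound files rather than strings.
characters = ["knight", "zombie", "clown", "dinosaur"]
themes = ["castle", "city", "jungle", "space"]
modes = ["runner", "shooter", "puzzle"]

# One base game template plus a Cartesian product of asset swaps
# yields len(characters) * len(themes) * len(modes) variants.
variants = [
    {"character": c, "theme": t, "mode": m}
    for c, t, m in itertools.product(characters, themes, modes)
]

print(len(variants))  # 4 * 4 * 3 = 48 variants from one template
```

    Scale the pools up to a few dozen entries each and you clear a thousand variants without ever touching the game logic again.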


  • Car engines, for probably the past 100 years, have always been advertised based on their peak power rating, not what they can produce continuously. Cars are not designed to have their accelerator pedals floored for hours on end, nor is this even possible to do, as you’d eventually hit a curve and need to slow down.

    This is especially the case for high performance vehicles, which usually have more demanding maintenance requirements just from normal operation, let alone from being abused like that.


  • This is an idea from the 1960s back when they thought solar panels would be like computer chips and remain super expensive in terms of area but become exponentially better at the amount of sunlight they could convert into electricity.

    It makes absolutely zero sense to spend billions of dollars putting solar panels in space and beaming the power back to earth now that they are so cheap per unit area. The one thing you could argue a space based solar array could do would be to stretch out the day length so you need less storage, but that’s easier to accomplish using long electrical cables.


  • Nah,

    If I walk up to you on the street and tell you to hand over your money or I’ll kill you, that’s enough to land me in jail. It’s maybe even enough for you to be justified in punching me in self defense, if you feared for your life and there was no other way you could ensure your safety.

    But suddenly if I say I want to put a million people in a gas chamber, that’s A-OK? Suddenly no one can punch back or else they’re “just as bad”? Suddenly the lines are super blurry and the slopes are super slippery and it’s absolutely impossible to tell what a threat of violence is.

    It’s a crime to say you’ll kill one person; it’s your right to say you’ll kill a million.


  • drosophila@lemmy.blahaj.zone to Technology@lemmy.world, *Permanently Deleted* (10 months ago)

    Since we strive for transparency, and the LEGAL definition of “sale of data” is extremely broad in some places, we’ve had to step back from making the definitive statements you know and love. We still put a lot of work into making sure that the data that we share with our partners (which we need to do to make Firefox commercially viable)

    So in other words we sell your data and get paid for it, and some countries won’t let us lie about it.


  • So, keep in mind that single-photon sensors have been around for a while, in the form of avalanche photodiodes and photomultiplier tubes. And avalanche photodiodes are pretty commonly used in LiDAR systems already.

    The ones talked about in the article I linked collect about 50 points per square meter at a horizontal resolution of about 23 cm. Obviously that’s way worse than what’s presented in the phys.org article, but it’s also measuring from 3 km away while covering an area of 700 square km per hour (because these systems are used for wide-area terrain scanning from airplanes). With the way LiDAR works, the system in the phys.org article could be scanning with a very narrow beam to get way more datapoints per square meter.
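    As a quick sanity check on those figures, the aggregate point rate of the airborne system works out to roughly ten million points per second:

```python
# Figures from the linked article: ~50 points per square meter,
# covering ~700 km^2 per hour.
points_per_m2 = 50
area_m2_per_hour = 700 * 1_000_000  # 700 km^2 expressed in m^2

points_per_hour = points_per_m2 * area_m2_per_hour  # 3.5e10 points/hour
points_per_second = points_per_hour / 3600

print(f"{points_per_second:.2e}")  # ~9.7e6 points per second
```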

    Now, this doesn’t mean that the system is useless crap or whatever. It could be that the superconducting nanowire sensor they’re using lets them measure the arrival time much more precisely than normal LiDAR systems, which would give them much better depth resolution. Or it could be that the sensor has much less noise (false photon detections) than the commonly used avalanche diodes. I didn’t read the actual paper, and honestly I don’t know enough about LiDAR and photon detectors to really be able to compare those stats.
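    To see why arrival-time precision matters: time-of-flight depth error is c·Δt/2 (the pulse travels out and back). With hypothetical jitter numbers, going from 100 ps to 10 ps of timing uncertainty tightens depth resolution by an order of magnitude:

```python
C = 299_792_458  # speed of light, m/s

def depth_resolution(timing_jitter_s: float) -> float:
    # Round trip: the pulse covers the range twice, so depth error
    # is half the distance light travels in the timing uncertainty.
    return C * timing_jitter_s / 2

print(depth_resolution(100e-12) * 1000)  # ~15 mm at 100 ps of jitter
print(depth_resolution(10e-12) * 1000)   # ~1.5 mm at 10 ps of jitter
```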

    But I do know enough to say that the range and single-photon capability of this system aren’t really the special parts of it, if it’s special at all.



  • Specifically, they are completely incapable of unifying information into a self-consistent model.

    To use an analogy: you see a shadow and know it’s being cast by some object with a definite shape, even if you can’t be sure what that shape is. An LLM sees a shadow, and its idea of what’s casting it is as fuzzy and mutable as the shadow itself.

    Funnily enough, old-school AI from the 70s, like logic engines, possessed a super-human ability for logical self-consistency. A human can hold contradictory beliefs without realizing it; a logic engine is incapable of self-contradiction once all of the facts in its database have been collated. (This is where the sci-fi idea of machines like HAL 9000 and Data from Star Trek comes from.) However, this perfect reasoning ability left logic engines completely unable to deal with contradictory or ambiguous information, as well as logical paradoxes. They were also severely limited by the fact that practically everything they knew had to be explicitly programmed into them. So if you wanted one to be able to hold a conversation in plain English, you would have to enter all kinds of information that we know implicitly, like the fact that water makes things wet or that most, but not all, people have two legs. A basically impossible task.
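    A toy illustration of that property (the fact representation is invented for this sketch, not any particular 70s system): once a fact base is collated, a contradiction is detected mechanically, with no fuzziness about it.

```python
# Facts are tuples; negation is modeled as ("not", fact). Unlike an
# LLM, this check can never "hold" both a fact and its negation:
# consistency is a mechanical property of the collated database.

def check_consistent(facts: set) -> bool:
    return not any(("not", f) in facts for f in facts)

facts = {("has_legs", "alice"), ("not", ("has_legs", "alice"))}
print(check_consistent(facts))  # False: contradiction flagged

facts = {("has_legs", "alice"), ("wet", "water")}
print(check_consistent(facts))  # True: no contradiction
```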

    With the rise of machine learning and large artificial neural networks, we solved the problem of dealing with implicit, ambiguous, and paradoxical information, but in the process completely removed the ability to reason logically.