itty53 everywhere but twitter.

  • 0 Posts
  • 12 Comments
Joined 2 years ago
Cake day: June 15th, 2023

  • There’s a difference between choosing to listen to fans (critics) to improve and being made to feel obligated to do so. This society literally harasses people over being upset at fictional portrayals in cartoons. Sometimes they’re harassed right out of their chosen career. Game devs know this very well.

    Content creators have no obligations to the consumers of the content, period. No more than Picasso had an obligation to paint landscapes. He didn’t care to so he didn’t.

    Content creators, publishers, etc.: they’re free to make schlock we don’t like, we’re free to express our disdain for it, and I’m free to point out that the folks wasting their energy complaining are, indeed, wasting their energy. And cringey to boot. There’s a line crossed when you start insisting and making it personal. A publisher’s interests and a fan’s interests are not always aligned. That’s fine. You can deal with it, I promise.

    You bring up the Snyder Cut: know who probably drove that whole push? The studio. Yeah, every one of those “fans” got played. This kind of shit is unacceptable. Period.

    https://www.rollingstone.com/tv-movies/tv-movie-features/justice-league-the-snyder-cut-bots-fans-1384231/

    Don’t encourage it.


  • Huh, the games did phenomenally well in America. Weird. /s

    We’re in an age of knee-jerk finger pointing, with the problem getting worse the higher you get in society. It’s just one giant game of blame hot-potato.

    Here’s the thing: the producers don’t owe the fans shit. They don’t even owe the fans an explanation. They owe the investors an explanation. The fans are just there - that’s the reality of being a fan of something. We don’t get a say; we can just choose to watch or not, and then decide to trash it or praise it online if we want to.

    So while there’s a problem going up the ladder of the blame game, there’s another one coming back down the ladder, and it’s entitlement. For some odd reason there’s an air of “we deserve this content, exactly to our specifications” and it permeates games, movies, music, all of the entertainment content we have been inundated with as a society. And I think the culture generally leans towards encouraging it because it keeps the culture thriving. But it also keeps us in the exact status quo we’re in as a society, beholden to these billionaire publishers we all rail on daily.

    Because let’s face it: we as a society spend an enormous amount of energy - and in doing so destroy a lot of the planet - on all this entertainment. If we can’t accept that as a fact then we’re fucking doomed.


  • This is a wildly overgeneralized take.

    Twitter was also an important tool for journalists and researchers worldwide. Military targets have come from Twitter posts. It is a reflection of a huge chunk of society. You may as well call all of internet technology “just a porn box” for how wildly overgeneralized that statement is. The reality is that your generalization comes from arrogance: “I never engaged in such frivolous behavior.” You’re here now. Yes you have, and yes you do.

    Even your comment is the first cousin of outrage: it’s pure disdain. Nothing more, nothing less, and exactly as valuable as outrage.


  • You’re following me exactly, just not seeing what I’m pointing at.

    I agree: a human can’t meaningfully distinguish between a flat white picture made by a human (with, say, MSPaint) and one made by an “AI” whose data model includes the color flat white. Similarly, there’s no meaningful distinction to be made between 4’33" as performed by an algorithm and 4’33" as performed by a master pianist - humans can’t make that distinction, and neither can a machine.
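
A throwaway sketch of that point (pure Python, with images represented as raw RGB bytes rather than any real file format - an illustration, not anyone’s actual tooling): a flat white image built “by hand” pixel by pixel and one emitted by a trivial generator come out byte-for-byte identical, so no detector, human or machine, could attribute one to a person and the other to an algorithm.

```python
WIDTH, HEIGHT = 8, 8

# "Human": set every pixel by hand, one at a time (stand-in for MSPaint).
human = bytearray()
for _ in range(WIDTH * HEIGHT):
    human += bytes([255, 255, 255])  # R, G, B = flat white

# "Model": an algorithm whose entire learned "data model" is the color white.
def generate_flat_white(w: int, h: int) -> bytes:
    return bytes([255, 255, 255]) * (w * h)

machine = generate_flat_white(WIDTH, HEIGHT)

# The two outputs are byte-for-byte identical - there is literally nothing
# left in the artifact for any classifier to latch onto.
print(bytes(human) == machine)  # True
```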

    We’ve called certain kinds of entertainment “formulaic” - well, that wasn’t inaccurate. It was. It is. We are. We are algorithmic. And just like in decades past, when scientists put forth the idea that our emotions are just the product of biology and chemistry, there will be serious existential pushback from certain sectors of humanity, because it belittles the idea of what it is to be human and relegates us back to simple animals that can be trained. The reality is we are just that. And we keep proving it.

    We’ve been seeing this problem framed as one facing teachers and educators: how do we know students aren’t cheating by having an LLM write their term papers? The reality is, if they have been and teachers didn’t catch it from the start, the fault isn’t the tool the students used. The teachers are teaching and grading the wrong thing.

    Language, much as math largely did with the calculator, will be relegated to machines and algorithms, because we already did that to ourselves a long time ago. We’re just building the machines to do the same thing for us, and getting the desired results. If I ask you what 237 x 979 is, I don’t expect you to math that out in your head; I expect you to use a calculator to get the answer. But it’s still important that we teach kids how to multiply 237 and 979 together on paper, and it’s very simple to do that while avoiding computers altogether. The same goes for basic writing skills.

    Teaching isn’t about producing term papers, so what does it matter that LLMs might be used to cheat on them? It’s about educating the students. Our whole focus on the problems of LLMs just highlights, over and over and over, the problems we as a society have had for a long, long, long time - far before anyone knew what an “LLM” was.
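
To make the calculator analogy concrete, here’s a minimal sketch (a hypothetical helper, not anything from the thread): the paper method itself - long multiplication via partial products - next to the “calculator” answer.

```python
def long_multiply(a: int, b: int) -> int:
    """Multiply the way it's taught on paper: one partial product per
    digit of b, shifted by its place value, then summed."""
    total = 0
    for place, digit in enumerate(reversed(str(b))):
        total += a * int(digit) * 10 ** place
    return total

# The "calculator" and the paper method agree, of course:
print(237 * 979)                # 232023
print(long_multiply(237, 979))  # 232023
```

The point isn’t the product; it’s that we still teach the hand method even though the machine answer is always a keystroke away.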

    Sorry. I rant.


  • I really hate the label AI. They’re data models, not intelligence - artificial or otherwise. It’s PAI: Pseudo Artificial Intelligence, which we’ve had since the ’80s.

    The thing is that these data models are, in the end, fed to algorithms to produce output. That being the case, it’s a mathematical certainty that the process can be reversed and the output shown to have come from such an algorithm. Watermark or not, if an algorithm produces a result, then you can deduce the algorithm from a given set of its results.
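
“Deduce the algorithm from its results” is a much stronger claim than anything provable in general, but there are simple cases where it genuinely works. A classic example (a sketch under stated assumptions, not any real detector): recovering the secret multiplier and increment of a linear congruential generator from a few consecutive outputs, assuming the modulus is known.

```python
def recover_lcg(samples: list[int], m: int) -> tuple[int, int]:
    """Recover (a, c) of x[n+1] = (a*x[n] + c) % m from three
    consecutive outputs, given the modulus m.
    Needs (x1 - x0) to be invertible mod m (guaranteed when m is prime
    and the outputs differ)."""
    x0, x1, x2 = samples[:3]
    a = ((x2 - x1) * pow(x1 - x0, -1, m)) % m  # modular inverse (Py 3.8+)
    c = (x1 - a * x0) % m
    return a, c

# Hypothetical generator with "secret" parameters; m is a Mersenne prime.
m, a, c = 2**31 - 1, 1103515245, 12345
xs = [42]
for _ in range(4):
    xs.append((a * xs[-1] + c) % m)

print(recover_lcg(xs, m) == (a, c))  # True
```

That said, this only works because an LCG is tiny and fully determined by a few outputs; nothing like it is known for large neural data models, so treat the “mathematical certainty” above as an open conjecture rather than a settled fact.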

    It wouldn’t be able to meaningfully distinguish 4’33" from plain silence, though. Nor could it determine that a flat white image wasn’t made by an algorithm.

    I think what we’re really demonstrating in all this is just exactly how algorithmically human beings already think - something psychology has been talking about for far longer.