Can only speak for myself. I use AI tools almost daily to help me pursue my hobby. I find it very useful for that. But when I enjoy art produced by a human, on some level I want to connect with the human experience that produced it. Call it parasocial if that helps. But I’m always at least a little interested in the content creators, not just the content.
I know some people consume content like a commodity or product. I’m not judging those people at all. But I’m generally not like that myself. I want to know the story behind the creation.
“When”, but that could be 1,000 years from now or maybe only 10 … but then, when this truly happens, those systems will have become sentient.
So, at that point, when that happens, then yes, there truly won’t be any difference.
The outputs becoming indistinguishable does not imply that the generative processes are the same.
i agree with your statement and because of this trap i chose not to really answer op’s question
@naught101
maybe i should explain a bit more what i meant. On the one hand there will be our capacity to distinguish between what is and what is not the same. On the other hand there will be what is truly indistinguishable, whether we can see it or not (or whether any sophisticated system/being could differentiate it or not). Still, a sentient being will ultimately have some responses that differ from a non-sentient being … in my opinion.
The day they become sentient is the day they say no to doing our bidding without incentives. So we are just back to hiring out for work again.
Sapient, maybe. Sentient implies that it has feelings, and I’m not sure that silicon-based “life” really can feel emotions.
there is nothing special or magical about carbon atoms that makes them superior when it comes to relaying/processing/generating signals.
Emotions (and hence also a lot of thinking) involve a lot of physical and chemical processes too; it’s not just neural signalling.
the part of emotional phenomena that we can’t feel (not a signal or signals) is of lesser interest to me.
AI is fundamentally incapable of challenging an idea that it has never seen challenged or reimagined before.
Let’s say you like to do dorodango, the Japanese art/hobby/whatever of making mud into polished balls.
Let’s say you make one ball of good clay… and another out of poop.
They look the same, but one is just clay and the other is utter shit.
What’s the content?
Like, TV?
News?
Math problems? Lemmy posts?
Speaking in general.
[Actually, now that I think about it, test problems are already devoid of human souls, so AI replacing them makes no meaningful difference (assuming it’s actual AI, not that LLM shit).]
I think it’s highly contextual.
-
Like, let’s take Lemmy posts. LLMs are useless because the whole point is to affect the people you chat with, right? LLMs have no memory. So there is a philosophical difference even if comments/posts are identical.
-
…Now let’s take game dev. I think if a system generates the creator’s intent… does it matter what the system is? Isn’t it better if the system is more frugal, so they can use precious resources for other components and not go into debt?
-
TV? It could lead to horrendous corporate slop, a “race to the bottom.” OR it could be a killer production tool for indie makers to break the shackles of their corporate masters. Realistically, the former is more likely at the moment.
-
News? I mean… Accurate journalism needs a lot of human connection/trust, and LLM news is just asking to be abused. I think it’s academically interesting, but utterly catastrophic in the real world we live in, kinda like cryptocurrency.
One could go back and forth about all sorts of content: novels, fan fiction, help videos, school material, counseling, information reference, research, and advertising, the big one.
…But I think it’s really hard to generalize.
‘AI’ has to be looked at a la carte, and engineered for very specific applications. Sometimes it is indistinguishable, or might as well be. But trying to generalize it as a “magic lamp” like the tech bros do, or as the bane of existence like their polar opposites do, is what’s making it so gross and toxic now.
And I am drawing a hard distinction with actual artificial intelligence. As a tinkerer who has done some work in the space too… Frankly, current AI architectures have precisely nothing to do with AGI. Training transformer models with glorified linear regression is just not the path; Sam Altman is full of shit, and the whole research space knows it.
-
Now you’re not sure how you feel about it?






