

Not really - it isn’t prediction, it is early detection. Interpretive AI (finding and interpreting patterns) is way ahead of generative AI.
These are all me:
I control the following bots:
Like I said - there is a small vocal group who feel that Lemmy as a whole should be boycotted due to the developers’ political views.
I think having a 300 year life span would tend to select for darker skin and possibly other traits that would better survive 300 years of exposure - enough to distinguish it from any existing ethnicity.
Some people are excessively sensitive to software developer political views.
Lemmy isn’t Kbin and Kbin isn’t Lemmy. Both are software participants in the fediverse. It is like saying nginx isn’t Apache: of course it isn’t, but that doesn’t make them any less web servers.
It is fairly common among Catholics. I’ve known some fairly progressive Catholics who are Republicans because abortion. Now, that isn’t to say that a good number haven’t bought into the divisive rhetoric and gone full maga, but that’s not where they started.
It is a wedge issue that has locked a portion of the population who are single issue voters into being Republicans despite literally all their other beliefs. That is basically what all the non-financial planks of the Republican platform have in common.
The irony that this story was posted by a bot…
Bots that don’t identify as such count towards active users. There have been a number of bot purges.
Pro-tip: if you are trying to figure out if a website has a feature, try the default web interface first.
This is a pet peeve of mine as well.
Long ago I noticed that on Star Trek, nobody wanted to tell the captain what was going on over the comms, they wanted the captain to stop what they were doing and go to a different part of the ship / station. I always eyerolled at the absurdity of the staff having so little respect for the captain’s time.
Then it started happening to me. I’m not a captain, my time isn’t that important, but have a little respect for what I’m currently engaged in? maybe?
I’ve reported pictures/gifs of accidental nudity that were posted on Reddit without any evidence of consent, and they blew me off. Not just ignored me - they took the time to say the content was fine.
Yeah, it was legal to post stuff like that - no reasonable expectation of privacy in public places and all that. But it isn’t ethical. Don’t do it. It isn’t funny.
Lemmy died. Nothing to see here.
Federation is the future of social media for exactly this reason, especially in the twitter-like realm where who is saying it is as (or more) important than what is being said. These people and organizations need to control their brand outside the scope of commercial pressure from the platform.
That’s pretty much my thinking, though there is an advantage: having a large number of users on an instance amplifies its caching effect. As you say, though, if their interests are spread too widely, that effect is diminished.
Not harmful, but I would agree that the network seems optimized for a small number of user-focused servers, and a large number of community-focused servers.
They can and do. Every sub has a unique name.
The complaint is that there isn’t a single entity who gets to decide which sub is the authoritative sub for a topic. Which is a feature, not a bug.
Communities will coalesce around certain subs that work, and they will rise up over the alternatives. We’re just in ego-land-grab mode right now.
If it could, it couldn’t claim that the content it produced was original. If AI generated content were detectable, that would be a tacit admission that it is entirely plagiarized.
The base assumption of those with that argument is that an AI is incapable of being original, so it is “stealing” anything it is trained on. The problem with that logic is that’s exactly how humans work - everything they say or do is derivative of their experiences. We combine pieces of information from different sources, and connect them in a way that is original - at least from our perspective. And not surprisingly, that’s what we’ve programmed AI to do.
Yes, AI can produce copyright violations. They should be programmed not to. They should cite their sources when appropriate. AI needs to “learn” the same lessons we learned about not copy-pasting Wikipedia into a term paper.
Copyright 100% applies to the output of an AI, and it is subject to all the rules of fair use and attribution that entails.
That is very different than saying that you can’t feed legally acquired content into an AI.
Nature knows how to solve this problem.