

Ditto. API thing was transition as well
Nice. Software developer, gamer, occasionally 3d printing, coffee lover.
Honestly, one could probably make a comparison to a loan shark: you keep paying it off, but the interest is so high and you keep borrowing more, so you just stay in debt and keep sinking deeper.
Sleep debt itself is a weird metric. You can’t “pay it back”, lost sleep is lost sleep, period.
The body doesn’t keep track, the body just complains. You might feel weird if there is a dramatic change to your sleep schedule.
I just use PLA. PLA itself is generally safe, but occasionally the additives aren’t, so I don’t use any for human-related stuff. It’s also worth considering that the layered approach can allow for bacterial growth, so unless you treat it (e.g. epoxy seal it), you’ll need to wash it fairly frequently to curb buildup.
I wouldn’t say I’ve made back my investment on 3D printing in the past half a decade I’ve done it. But in terms of “prints for friends” like this one above I may be close. Plus there’s just something nice about going “I need a measuring cup for dog food” and printing one to the exact serving size.
I used to have a fair bit of imposter syndrome, but now that I’ve been working with a proper team I’ve come to accept I have an aptitude for code and logic in general, alongside a fairly good abstract memory.
I’m not the best by any stretch of the imagination, but I’m a little more competent than the average software engineer, enough that it gets noticed.
I also got lucky and scored a job at 17 in the field (with no nepotism involved), not a great one but enough to look good on my resume, and have been working in the industry for just over a decade with no college.
You might tell yourself in retrospect that you consciously made that decision to deepen it and twist it, but with the adrenaline, panic, fear? You instinctually reacted to survive, and the fact you were able to withdraw from the situation before inflicting greater harm is a testament to the fact it wasn’t cruel.
Surprised Slice & Dice hasn’t been mentioned, it’s amazing. That and Dawncaster.
A useful tool is DarkPattern.games, which lists dark patterns in mobile games (such as microtransactions).
I don’t understand the logic behind this. If it’s your job to analyze and deduce whether certain content is or is not acceptable, why shouldn’t you make assessments on a case by case basis?
The bit about “ignoring it” was more in jest. We do review each report and handle it on a case-by-case basis. My point with this statement is that someone hosting questionable content is going to generate a lot of reports, regardless of whether it is illegal or not, and we won’t take an operating loss and let them keep hosting with us.
Usually we try to determine whether it was intentional or not. If someone is hosting CSAM and is quick and responsive in resolving the issue, we generally won’t immediately terminate them for it. But even if our client is a victim, we are not required to host for them, and after a certain point we will terminate them.
So when we receive a complaint about a user hosting CSAM, and on review we see they are hosting a site that advertises itself as a place for users to distribute AI-generated CP, we aren’t going to let them continue hosting with us.
Even if you remove CSAM from the equation you still have to continuously sift through content and report any and all illegal activities - regardless of its frequency.
This is not an accurate statement, at least in the U.S. where we are based. We are not (yet) required to sift through any and all content uploaded on our servers (not to mention the complexity of such an undertaking making it virtually impossible at our level). There have been a few laws proposed that would have changed that, as we’ve seen in the news from time to time. We are required to handle reports we receive about our clients.
Keep in mind when I say we are a hosting provider, I’m referring to pretty high up the chain - we provide hosting to clients that would say, host a Lemmy instance, or a Discord bot, or a personal NextCloud server, to name a few examples. A common dynamic is how much abuse is your hosting provider willing to put up with, and if you recall with the CSAM attacks on Lemmy instances part of the discussion was risking getting their servers shutdown.
Which is valid, hosting providers will only put up with so much risk to their infrastructure, reputation, and / or staff. Which is why people who run sites like Lemmy or image hosting services do usually want to take an active role in preventing abuse - whether or not they are legally liable won’t matter when we pull the plug because they are causing us an operating loss.
And it’s the right of any … [continued]
I’m just going to reply to the rest of your statement down here, I think I did not make my intent/purpose clear enough. I originally replied to your statement talking about AI being used to make CP in the future by providing a personal anecdote about it already happening. To which you asked a question as to why I defined AI generated CP as CSAM, and I clarified. I wasn’t actually responding to the rest of that message. I was not touching the topic or discussion of what impact it might have on the actual abuse of children, merely providing my opinion as to why, whether legal or not, hosting providers aren’t ever going to host that content.
The content will be hosted either way, but whether it is merely relegated to “offshore” providers, still accessible via normal means and not criminal content, or becomes another part of the dark web, will be determined at some point in the future. It hasn’t become a huge issue yet, but it is rapidly approaching that point.
The report came from a (non-US) government agency. It wasn’t reported as AI generated, that was what we discovered.
But it highlights the reality - while AI generated content may be considered fairly obvious for now, it won’t be forever. Real CSAM could be mixed in at some point, or, hell, the people generating it could be feeding it real CSAM to have it recreate it in a manner that makes it harder to detect.
So what does this mean for hosting providers? We continuously receive reports for a client and each time we have to review it and what, use our best judgement to decide if it’s AI generated? We add the client to a list and ignore CSAM reports for them? We have to tell the government that it’s not “real CSAM” and expect it to end there?
No legitimate hosting provider is going to knowingly host CSAM, AI generated or not. We aren’t going to invest legal resources into defending that, nor are we going to jeopardize the mental well-being of our staff by increasing the frequency of those reports.
It already is being used to make CSAM. I work for a hosting provider and just the other day we closed an account because they were intentionally hosting AI generated CSAM.
Came here to say this. Wasn’t as often, but I’d get specialty coffee for $8-$10 a couple times a week. I bought an off-brand espresso machine for $100 that is running to this day. If I include various accessories, I’ve probably spent around $200. I did wind up getting a work bonus and splurging on a $400 Eureka grinder so I can have freshly ground beans.
Last I did the math, I spend at most around $1.00 a cup, for a savings of around $8. I’ve made at least a couple hundred coffees, so it has definitely paid for itself and then some.
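The break-even point for the numbers above works out quickly; a rough sketch (assuming ~$600 in total gear and a $9 café price as the midpoint of the $8-$10 range):

```python
# Rough break-even math for home espresso (all figures are assumptions from the comment above)
gear_cost = 200 + 400                        # machine + accessories, plus the grinder
cost_per_cup = 1.00                          # estimated cost of a home-made cup
cafe_price = 9.00                            # midpoint of the $8-$10 specialty coffee range
savings_per_cup = cafe_price - cost_per_cup  # $8.00 saved each time

break_even_cups = gear_cost / savings_per_cup
print(round(break_even_cups))  # 75
```

So at a couple hundred cups, the gear has paid for itself roughly three times over.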
My dog is old and going deaf and blind. Anytime I leave him he whines, I had to leave him with a friend for about a week to attend a wedding and apparently he whined the entire time. I don’t know what I did to deserve this kind of love but I know he doesn’t hate me.
You’d think the hottest day record being broken daily for what, a week back to back, would kick our asses into gear. Instead most people barely acknowledged it.
In the development industry, the concept of an API (application programming interface) is to give developers a way to interface with the backend. When we’re referring to an API, like the Reddit API, a majority of the time we’re referring to a public API - which are typically versioned, documented, and have specific rules or conditions unique to them.
When a website or service doesn’t have a public API, or that public API doesn’t suit the needs or is otherwise not applicable, we will often turn to scraping. Scraping can be done a number of ways, and I won’t go into those details. But the short of it is that we’re usually mapping data into a usable format, and acquiring it by interfacing with the website as though we’re a normal user.
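As a minimal sketch of what “mapping data into a usable format” means in practice, here’s a scraper using only Python’s standard library (the HTML snippet and the `h2.title` structure are hypothetical, standing in for whatever markup a real site uses):

```python
# Hedged sketch: scraping turns raw HTML (meant for browsers) into structured data.
from html.parser import HTMLParser

class TitleExtractor(HTMLParser):
    """Collects the text inside <h2 class="title"> tags from an HTML page."""
    def __init__(self):
        super().__init__()
        self.titles = []
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        # Only start capturing when we enter a tag matching the assumed structure
        if tag == "h2" and ("class", "title") in attrs:
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "h2":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.titles.append(data.strip())

# In a real scraper this HTML would come from an HTTP request made
# while pretending to be a normal browser; here it's inlined.
html = '<h2 class="title">First post</h2><p>body</p><h2 class="title">Second post</h2>'
parser = TitleExtractor()
parser.feed(html)
print(parser.titles)  # ['First post', 'Second post']
```

Note how tightly this is coupled to the markup: rename that `class="title"` attribute and the scraper silently returns nothing, which is exactly the fragility described next.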
One of the drawbacks of scraping is that normal changes, even small ones, can break the scraping logic we have for that website. This is also the case with projects like NewPipe, Piped, and LibreTube.
LibreTube, NewPipe, and Piped all use NewPipe’s Extractor under the hood. Invidious appears to have its own logic. The “Piped API” uses NewPipe Extractor, which is a Java library, and essentially converts it into a web API that other projects that aren’t Java can utilize.
Sharing that logic is actually beneficial, if YouTube changes something that breaks all the open source frontends, developers from all 3 projects (and as it is OSS, unaffiliated ones as well) can identify what change is breaking the extractor and fix it, pushing out an update fixing their frontend rapidly. I’m not sure how it works in practice as I don’t follow the project, so I’m just assuming that Invidious gets fixed slightly slower than the other three.
But back to your original question, YouTube can’t “turn rogue” in the same way Reddit did, in a sense YouTube is already rogue. The developers of third party apps on Reddit utilize a public API with permission, and follow a certain set of guidelines and other requirements. Developers making alternate frontends for YouTube do not utilize any APIs with permission, and just the act of using one of those frontends is likely a terms of service violation on YouTube (not that it matters).
The reason we don’t see the open source YouTube frontends breaking all the time is likely due to effort. YouTube could completely change one of the APIs it uses, but then they would have to update all of their software that relies on it: their website, the Android app, the iOS app, probably the YouTube Music apps, and not to mention every smart TV app. And this change would likely be identified and fixed by the OSS developers within days, if not hours. So YouTube can’t really actively combat it on the development end, and would need to take the legal route. Publishing code for scraping isn’t illegal (though they could, and have, used intimidation), so YouTube’s legal recourse would be to get the stuff using the code shut down. Right now the user base for the frontends is probably small enough that they’re tolerating it.
TL;DR: YouTube is already rogue, scraping bypasses that. Changes to YouTube already break open source frontends, but are resolved rapidly by developers.
As for answering the followup question, “can Reddit developers do the same thing?”: Yes. Will they? Maybe, maybe not. Reddit had a great API which encouraged third party growth; now that Reddit has made that API prohibitively expensive, it’s not really an option. Developers in the past could utilize that API and monetize their projects (so long as it fit within guidelines, if any). Turning to scraping / “unsupported” methods of interfacing with the website means they realistically can’t monetize their work, as doing so puts them in legal hot water. They also would have trouble publishing their app (if it’s an app) on platforms like the Play Store and the App Store. And they’d face intimidation and legal threats.
If you don’t want to disable biometric auth, familiarize yourself with your phone and see if it has lockdown mode. Apple phones and most modern Android phones support it, using it will require your password / pin for unlocks. Put it into lockdown mode for the flight.