This does have the vibe of those pro-billionaire articles the Washington Post puts out…
Oh that’s what I was looking for. Nice.
Thing is, any friends and family who click “accept” when an app asks for permission to see their address book will have accidentally released the “good” email.
I wonder whether there shouldn’t just be “an email I can read out to people on the phone” and the rest are all random privacy/hash ones.
Great, so when they abandon the nuclear project in 18 months, who will maintain them?
It took me a moment to notice those weren’t specifically security terms…
Yeah I agree
The absolute lack of any kind of consistency with layout or alignment makes me cringe too.
It just shows how they’re glued onto the page with no care or planning, and especially no consideration for the user or user experience.
Ghostery has an “auto-reject cookies” setting.
It’s not for you as a consumer.
It’s to reduce your usefulness as a worker.
Which would be lovely, if our value wasn’t calculated by our usefulness to the market.
It’s not on mine yet.
So some A/B testing, maybe.
Thank you for letting me know. I will too.
You know it’s a sliding scale, right?
There’s not just three choices.
There’s Far-side, side-wing, side, side-leaning, center-side for both (plus center).
Then you have fiscal vs. moral lean on both sides.
“no right wing talking points” is hardly left wing.
Not left wing. Just left.
None of them are left wing (maybe Green has some left wing stuff?)
Thanks for taking the time to explain that so clearly! It’s really interesting.
If something was written by V3 and then published, that text doesn’t get updated every time a new version of ChatGPT comes out.
The text isn’t dynamic.
It could be a new level added to the peer review of work. Nothing to do with the university. Just “other professionals”.
A thesis isn’t just an exam, it’s a real scientific paper.
And it usually claims its contents as fact, which can be referenced by others as fact.
And absolutely should be open to scrutiny so long as it is relevant.
Yeah, but if you wrote your thesis in 2024 and the detector is run on it in 2026…
You’re probably busted.
It’s not like you’ll re-write your thesis with every major ChatGPT release.
I bet AI detection is going to get a lot better over time.
I wonder if there’s going to be retrospective testing of theses as time goes on.
Could really damage some careers down the line.
Edit: guys, retrospective testing means it was done later (i.e. with a more up-to-date AI detector).
You don’t know any 5yo software engineers?