

It’s so funny too, because in the very next article, they correctly call another proposal “proposed”. Like, Linuxiac, what is this:
“Falsehood flies, and truth comes limping after it, so that when men come to be undeceived, it is too late; the jest is over, and the tale hath had its effect: […] like a physician, who hath found out an infallible medicine, after the patient is dead.” —Jonathan Swift
OP, you linked to the comments instead of the top of the article. 💀
I’m not agreeing with their dumb point, but just pointing out: this satellite works on radar. I’m genuinely concerned by how many people seem to be commenting without reading the article.
I don’t know why you’re assuming their ‘/s’ is alluding to sarcasm around this being surveillance versus sarcasm around needing more surveillance. “We need more surveillance (we actually don’t)” seems to be indicated here, not “This is surveillance (it actually isn’t)”.
Especially when Reddit types are notoriously, chronically unable to read articles before they go spouting uninformed bullshit in the comments.
Did you read the part where this is a radar satellite designed for monitoring the climate? That is, did you read anything besides the headline before you decided: “Yeah, I think I’m able to make informed commentary about this”?
A few additional fun points about this:
I think the meme is funny too, but it seems like it’s becoming so divorced from its original context that some people actually believe that carcinisation is some kind of ideal endpoint of evolution. Just to clarify: this isn’t true, given how few and how localized the actual examples are, and given the tradeoffs involved.
Fucking thank you. Yes, experienced editor here to add to this: that’s called the lead, and that’s exactly what it exists to do. Readers are not even close to starved for summaries:
What’s outrageous here isn’t wanting summaries; it’s that summaries already exist in so many forms, written by the same humans who write the articles themselves. Not only that, but because Wikipedia is a free, editable encyclopedia, these summaries can be changed at any time if editors feel they no longer do their job.
This not only bypasses the hard work real, human editors put in for free in favor of some generic slop that’s impossible to QA, but it also bypasses the spirit of Wikipedia that if you see something wrong, you should be able to fix it.
Betteridge’s law in shambles
I don’t at all understand why the second law of thermodynamics is being invoked. Nonetheless, capillary condensation is already a well-studied phenomenon. As the scientific article itself notes, the innovation here over traditional capillary condensation would be the ability to easily remove the water once it’s condensed.
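For anyone who wants the actual physics, this is the textbook Kelvin relation behind capillary condensation (standard reference material, not something pulled from this particular paper): vapor in a narrow pore condenses at a pressure below the bulk saturation pressure.

```latex
% Kelvin equation for a wetting liquid in a pore of radius r_p (textbook form,
% not taken from the article). The right-hand side is negative when cos(theta) > 0,
% so condensation happens at p < p_sat.
\ln\frac{p}{p_{\mathrm{sat}}} = -\frac{2\gamma V_m \cos\theta}{r_p R T}
% gamma = surface tension, V_m = molar volume of the liquid,
% theta = contact angle, r_p = pore radius, R = gas constant, T = temperature.
```

That’s ordinary equilibrium thermodynamics; no second-law shenanigans required.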
Re: Entropy:
Dude, I’m sorry, I just don’t know how else to tell you “you don’t know what you’re talking about”. I’d refer you to Chapter 20 of Goodfellow et al.'s 2016 book on Deep Learning, but 1) it tragically came out a year before transformer models, and 2) most of it will go over your head without a foundation from many previous chapters. What you’re describing – generative AI training on generative AI ad infinitum – is a death spiral. Literally the entire premise of adversarial training of generative AI is that for the classifier to get better, you need to keep funneling in real material alongside the fake material.
You keep anthropomorphizing with “AI can already understand X”, but that betrays a fundamental misunderstanding of what a deep learning model is: it doesn’t “understand” shit about fuck; it’s an unfathomably complex nonlinear algebraic function that transforms inputs to outputs. To summarize in a word why you’re so wrong: overfitting. It’s one of the first things you’ll learn about in an ML class, and it’s what happens when you let a model train on the same data over and over again forever. It’s especially bad for a classifier to be overfitted when it’s pitted against a generator, because a sufficiently complex generator will learn how to outsmart the overfitted classifier: it will find a cozy little local minimum that in reality works like dogshit but still fools the classifier, which is its only job.
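If you want to see overfitting in the most bare-bones way possible, here’s a throwaway numpy sketch (toy numbers I made up, nothing to do with any real model): a degree-9 polynomial jammed through 10 noisy points gets essentially zero training error and then does far worse on fresh data, because it memorized the noise instead of the pattern.

```python
import numpy as np

# Toy overfitting demo: made-up data, not from any real model or dataset.
rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, x_train.size)

# Degree-9 polynomial through 10 points: it can hit every training point almost exactly.
coeffs = np.polyfit(x_train, y_train, deg=9)

# Fresh points from the same underlying curve the model never saw.
x_test = np.linspace(0.01, 0.99, 200)
y_test = np.sin(2 * np.pi * x_test)

train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
print(f"train MSE: {train_mse:.2e}  test MSE: {test_mse:.2e}")
# Training error is ~0 by construction; test error is orders of magnitude worse.
```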
You really, really, really just fundamentally do not understand how a machine learning model works, and that’s okay – it’s a complex tool being presented to people who have no business knowing what a Hessian matrix or a DCT is – but please understand when you’re talking about it that these are extremely advanced and complex statistical models that work on mathematics, not vibes.
Your analogy simply does not hold here. If you’re having an AI train itself to play chess, then you have adversarial reinforcement learning. The AI plays itself (or another model), and reward metrics tell it how well it’s doing. Chess has the following: fixed, fully known rules and a built-in, objective win/loss signal, so the reward comes from the game itself and never requires outside, human-made material.
Here’s where generative AI is different: when you’re doing adversarial training with a generative deep learning model, you want one model to be a generator and the other to be a classifier. The classifier is given some amount of human-made material and some amount of generator-made material and tries to tell the two apart. The classifier’s goal is to be correct, and the generator’s goal is for the classifier to pick completely randomly (i.e. it just picks on a coin flip). As you train, both gradually get very, very good at their jobs. But you have to have human-made material to train the classifier, and if the classifier doesn’t improve, then the generator never does either.
Imagine teaching a 2nd grader the difference between a horse and a zebra, having never shown them either before, by holding up pictures and asking whether each one contains a horse or a zebra. Except the entire time you just keep holding up pictures of zebras and expecting the child to learn what a horse looks like. That’s what you’re describing for the classifier.
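For the curious, here’s roughly what that generator-vs-classifier loop looks like in code. This is a minimal PyTorch sketch with made-up layer sizes, not anybody’s actual training code; the thing to notice is that the discriminator’s update takes a batch of real, human-made data as input every single step.

```python
import torch
import torch.nn as nn

# Minimal GAN training step (illustrative sizes; nothing here is a real model).
latent_dim, data_dim = 64, 784
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU(), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    n = real_batch.size(0)

    # Discriminator ("classifier") update: it must see REAL, human-made samples
    # labeled 1 alongside generated samples labeled 0, or it has nothing to learn.
    fake = G(torch.randn(n, latent_dim)).detach()
    d_loss = bce(D(real_batch), torch.ones(n, 1)) + bce(D(fake), torch.zeros(n, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: it is rewarded only for making the discriminator call its
    # output "real". If the discriminator stops improving, neither does the generator.
    g_loss = bce(D(G(torch.randn(n, latent_dim))), torch.ones(n, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    return d_loss.item(), g_loss.item()
```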
Notepad and WFE get thrown off Hell in a Cell through an announcer’s table by Kate and Dolphin, respectively, but to say they “don’t work” is intellectually lazy and dishonest.
Who are you trying to convince right now? Linux and macOS users are probably never going back to Windows if they can help it, and Windows users will correctly say “but it’s right there; I’m using it right now”.
This is entirely correct, and it’s deeply troubling seeing the general public use LLMs for confirmation bias because they don’t understand anything about them. It’s not “accidentally confessing” like the other reply to your comment is suggesting. An LLM is just designed to process language, and because it’s trained on the largest datasets in history, there’s practically no way to know where this individual output came from if you can’t directly verify it yourself.
Information you prompt it with is tokenized, run through a transformer model whose hundreds of billions or even trillions of parameters were adjusted according to god only knows how many petabytes of text data (weighted and sanitized however the trainers decided), and then detokenized and printed to the screen. There’s no “thinking” involved here, but if we anthropomorphize it like that, then there could be any number of things: it “thinks” that’s what you want to hear; it “thinks” that based on the mountains of text data it’s been trained on calling Musk racist, etc. You’re talking to a faceless amalgam unslakably feeding on unfathomable quantities of information with minimal scrutiny and literally no possible way to enforce quality beyond bare-bones manual constraints.
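If you want to see how unmagical that pipeline is, here it is in a few lines of Python using the Hugging Face transformers library, with GPT-2 as a stand-in (GPT-2 is obviously tiny and ancient compared to whatever chatbot is being screenshotted, but the tokenize → transform → detokenize shape is the same):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2 is just a small, public stand-in; the pipeline shape is what matters.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Is the sky blue?"
inputs = tokenizer(prompt, return_tensors="pt")            # text -> token IDs
output_ids = model.generate(**inputs, max_new_tokens=40)   # token IDs -> more token IDs
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))  # token IDs -> text

# There is no step in here where anything "checks" whether the output is true.
```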
There are ways to exploit LLMs to reveal sensitive information, yes, but you then have to confirm that sensitive information is true, because you’ve just sent data into a black box and gotten something out. You can get a GPT to solve the sudoku puzzle, but you can’t then parade that around before you’ve checked that the solution is correct. You cannot ever, under literally any circumstance, trust anything a generative AI creates for factual accuracy; at best, you can use it as a shortcut to an answer which you can then attempt to verify.
They should end in the style of the author’s notes from the fanfic My Immortal instead.
>try all the OS out there
>person you’re responding to is suggesting they try the other one of the two top DEs for Linux desktop before leading with “Linux Is Already Broken Before You Even Start”
This is a ridiculous strawman. I empathize with them and want to see accessibility improve (it’s something I do in the project I work on, even though you wouldn’t conventionally expect blind people to be able to use it). If you’re going to talk in such broad terms about the Linux desktop, not just your specific distro/DE, the onus is on you to at minimum try GNOME and KDE. Instead they chose GNOME and MATE, the latter of which is barely maintained and has effectively zero relevance outside of users who abandoned GNOME ages ago during the GTK3 transition or people whose hardware makes the Atari 2600 look like a supercomputer (it looks like the former here). It’s not 2017 anymore; Ubuntu with GNOME isn’t some near-universal Linux desktop experience. I’m not telling them “nooooo just try my specific config for NixOS bro I promise Linux isn’t that bad”.
This isn’t even to say that KDE will be better; I don’t know, which is why I wish they covered it. If KDE is also bad, then this is a stronger argument that Linux desktop contributors need more awareness of and focus on accessibility. If it’s just mediocre, KDE devs can see it and learn how to improve. If it’s good, then GNOME and MATE devs have a lesson in how they can improve.
I don’t expect anyone to exhaust every DE on every distro, but when the userbase is so firmly concentrated around GNOME and KDE, I expect you to at minimum include KDE (especially if you’re going to include MATE). You don’t have to, but I’m free to criticize your essay if it has such a massive hole in it. If you don’t want to try KDE, literally just find+replace “Linux” with “GNOME/MATE” and solve the problem that way.
Just rewrite curl in Rust so you can immediately close any AI slop reports talking about memory safety issues. /s
You’re right. We should support developers.
Here’s Jellyfin’s ‘How to Contribute’ page.
The $90 million in venture capital can nourish the leeches at Plex just fine.
I remember speculating as a (small) kid that the AI soldiers in Battlefront II’s local multiplayer might be real people employed by the developer. Not the brightest child was I.