• 0 Posts
  • 40 Comments
Joined 1 year ago
Cake day: June 9th, 2023

  • The UN is supposed to be a toothless, executively dysfunctional institution; that’s a feature, not a bug. Its members are nations, whose entire purpose is to govern their regions of the planet. If the UN itself had the power to make nations do things, it wouldn’t be the United Nations, it’d be the One World Government, and its most powerful members absolutely do not want it to be that, so it isn’t.

    It’s supposed to be an idealized, nonviolent representation of geopolitics that is always available to nations as a venue for civilized diplomacy. That’s why nuclear powers were given veto power: they effectively have veto power over the question of “should the human race continue existing,” and the veto is basically a reflection of that. We want issues to get hashed out with words in the UN if possible, rather than in real life with weapons, and that means it must concede to the power dynamics that exist in real life. The good nations and the bad nations alike have to feel like they get as much control as they deserve; otherwise they take their balls and go home.

    It’s frustrating to see the US or Russia or China vetoing perfectly good resolutions and everyone else just kind of going “eh, what can you do, they have vetoes,” but think through the alternative: everyone has had enough and decides “no more veto powers.” The UN starts passing all the good resolutions. But the UN only has the power that member nations give it, so enforcement would have to mean some nations trying to impose their will on the ones that would’ve vetoed. Now we’ve traded bad vetoes in the UN for real-world conflict instead.

    What that “get rid of the vetoes so the UN can get things done” impulse is actually driving at is “we should have a one world government that does good things,” which, yeah, that’d be great, but it’s obviously not happening any time soon. Both articles mention issues and reforms that are worthy of consideration, but the fundamental structure of the UN is always going to reflect the flaws of the world because it’s supposed to do that.





  • That’s not how it works at all. If it were as easy as adding a line of code that says “check for integrity”, they would’ve done that already. Fundamentally, the way these models all work is you give them some text and they try to guess the next word. It’s ultra autocomplete. If you feed it “I’m going to the grocery store to get some” then it’ll respond “food: 32%, bread: 15%, milk: 13%” and so on.
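
    To make that concrete, here’s a toy sketch of the interface (`predict_next` is a made-up stand-in, not any real library; the numbers are hard-coded just to show the shape of the thing):

    ```python
    # Toy stand-in for a language model: it maps a text prefix to a
    # probability distribution over candidate next words. A real model
    # computes these numbers from billions of tuned parameters; here
    # they're invented for illustration.
    def predict_next(prefix: str) -> dict[str, float]:
        return {"food": 0.32, "bread": 0.15, "milk": 0.13, "eggs": 0.08}

    probs = predict_next("I'm going to the grocery store to get some")
    print(max(probs, key=probs.get))  # "food", the single most likely guess
    ```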

    They get these results by crunching a ton of numbers, and those numbers, collectively called the model, were tuned by training. During training, they collect every scrap of human text they can get their hands on, feed bits of it to the model, then see what the model guesses. They compare the model’s guess to the actual text, tweak the numbers slightly to make the model more likely to give the right answer and less likely to give the wrong answers, then do it again with more text. The tweaking is an automated process of feeding the model as much text as possible, until eventually it gets shockingly good at predicting. When training is done, the numbers stop getting tweaked, and the model will produce the same predictions for the same input every time.
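
    As a minimal sketch, assuming a deliberately tiny PyTorch model (real training differs in scale, not in shape), that loop looks something like this:

    ```python
    import torch
    import torch.nn.functional as F

    # Toy next-word model: an embedding followed by a linear layer.
    VOCAB = 256  # pretend tokens are just single bytes
    model = torch.nn.Sequential(
        torch.nn.Embedding(VOCAB, 64),
        torch.nn.Linear(64, VOCAB),
    )
    optimizer = torch.optim.Adam(model.parameters())

    corpus = [b"the cat sat on the mat", b"the dog sat on the rug"]
    for step in range(100):
        for text in corpus:
            tokens = torch.tensor(list(text))
            inputs, targets = tokens[:-1], tokens[1:]  # predict each next byte
            logits = model(inputs)                     # the model's guesses
            loss = F.cross_entropy(logits, targets)    # how wrong were they?
            optimizer.zero_grad()
            loss.backward()                            # compute the tweaks
            optimizer.step()                           # apply the tweaks
    ```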

    Once you have the model, you can use it to generate responses. Feed it something like “Question: why is the sky blue? Answer:” and if the model has gotten even remotely good at its job of predicting words, the next word should be the start of an answer to the question. Maybe the top prediction is “The”. Well, that’s not much, but you can tack one of the model’s predicted words onto the end and do it again: “Question: why is the sky blue? Answer: The” and see what it predicts. Keep repeating until you decide you have enough words, or maybe you’ve trained the model to also be able to predict “end of response” and use that to decide when to stop.

    You can play with this process, for example, making it more or less random. If you always take the top prediction you’ll get perfectly consistent answers to the same prompt every time, but they’ll be predictable and boring. You can instead pick based on the probabilities you get back from the model and get more variety. You can “increase the temperature” of that and intentionally choose unlikely answers more often than the model expects, which will make the response more varied but will eventually devolve into nonsense if you crank it up too high. Etc, etc. That’s why even though the model is unchanging and gives the same word probabilities for the same input, you can get different answers in the text it gives back.
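
    Here’s roughly what that generation loop looks like, reusing the made-up `predict_next` stub from the first sketch (again, an illustration of the idea, not any real system’s code):

    ```python
    import random

    # Repeatedly ask the model for next-word probabilities, pick one,
    # tack it on, and feed the longer text back in.
    def generate(predict_next, prompt, max_words=50, temperature=1.0):
        text = prompt
        for _ in range(max_words):
            probs = predict_next(text)
            # Temperature reshapes the distribution: near 0 it collapses
            # toward always taking the top prediction (consistent but
            # boring); above 1.0 it flattens, choosing unlikely words more
            # often than the model expects (varied, then nonsense).
            weights = [p ** (1.0 / temperature) for p in probs.values()]
            word = random.choices(list(probs), weights=weights)[0]
            if word == "<end>":  # a model can be trained to predict this
                break
            text += " " + word
        return text
    ```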

    Note that there’s nothing in here about accuracy, or sources, or thinking, or hallucinations, or anything like that. The model doesn’t know whether it’s saying things that are real or fiction. It’s literally a gigantic unchanging matrix of numbers. It’s not even really “saying” things at all. It’s just tossing out possible words, something else is picking from that list, and then the result is being fed back in for more words. To be clear, it’s really good at this job, and can do some eerily human things, like mixing two concepts together, in a way that computers have never been able to do before. But it was never trained to reason; it wasn’t trained to recognize that it’s saying something untrue, or that it has little knowledge of a subject, or that it is saying something dangerous. It was trained to predict words.

    At best, what they do with these things is prepend instructions to your question, trying to guide the model to respond a certain way. So you’ll type in “how do I make my own fireworks?” but the model will be given “You are a chatbot AI. You are polite and helpful, but you do not give dangerous advice. The user’s question is: how do I make my own fireworks? Your answer:” and hopefully the instructions make the most likely answer something like “that’s dangerous, I’m not discussing it.” It’s still not really thinking, though.
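
    In sketch form (the wording here is illustrative, not any vendor’s actual system prompt):

    ```python
    # The model never sees just the user's text; instructions are glued
    # on first, so the most likely continuation is a response in the
    # desired style.
    SYSTEM = ("You are a chatbot AI. You are polite and helpful, "
              "but you do not give dangerous advice.")

    def wrap(user_question: str) -> str:
        return f"{SYSTEM} The user's question is: {user_question} Your answer:"

    print(wrap("how do I make my own fireworks?"))
    ```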






  • It’s not a fantasy because they’re bad ideas (they’re not) or because we shouldn’t fight for them (we should); it’s a fantasy because you’re skipping over the actual work that needs to be done to make them happen: convincing more people to join you and demand more. Ask 100 people if the Senate and Supreme Court should be abolished and 99 of them are going to look at you like you have two heads. You can insist that you’re right and they’re all wrong all you want, but unless you work to get more people on your side, you’ll just be complaining into the void and setting impossible standards for politicians so that you can feel smug when they fail to meet them.


  • It’s a new model this year, as Nate Silver took his with him when he left 538. The new one seems to put a lot of emphasis on “the fundamentals” this far out; that is, it “thinks” that the general environment and economy and such are pretty good for the incumbent and that the polls might move in that direction by the time election day comes along. And since it’s fitted to historical data, it’s also implicitly assuming that this election will be similar to past elections (like, say, including a competent campaign by a candidate who can get out there and effectively communicate accomplishments and a plan for their term).

    I personally think those assumptions are pretty clearly wrong this year and so I’m more inclined to base my perception of the race on pure polling averages, which are looking quite bad for Biden.



  • If a minority group is being oppressed or is otherwise motivated to create change and is voting in large numbers, but the majority is apathetic and not bothering to vote, then this system would prevent the minority from changing their representation as “punishment” for low turnout that isn’t their doing.

    It’s also a bit of a “the beatings will continue until morale improves” solution, if it actually is a problem at all. Low turnout is bad, but not because it’s inherently bad not to vote. It’s a symptom of the fact that people don’t think voting matters or will change anything, and unfortunately they’re not exactly wrong much of the time. Instead of putting effort into punishing people for not being engaged enough, it’d be better to make systemic changes that empower people and make the government more representative of their interests.


  • Ooh, interesting. I’m kind of surprised to find that I do feel more comfortable with It/Its, actually, not so much because of the logical “promotion and demotion cancel out” aspect, but because it’s two atypical constructions combined, and that almost pushes it out of intuitive meaning entirely for me. I know the context and convention for each one individually, but nothing for both of them at the same time, so I think I’m more open to allowing a meaning to be defined that isn’t hierarchical if It assures me that it isn’t. (Pure grammar bonus points in that last sentence, where this type of capitalization happens to remove an ambiguity!)

    For He/Him and She/Her, though, I find it hard to set aside the established meaning because it’s in wide use and has been for quite some time. Maybe that’s a rigidity that deserves to be bent; people push back on the more “out there” neopronouns for similar reasons. But I think it’s likely that most people will instinctively react negatively when encountering this, and it’s going to be difficult for what I have to imagine is a very small group of people to change the general understanding to something more acceptable.


  • Hmm… this makes me uncomfortable, and although I don’t think it’s internalized phobia or anything like that, I want to interrogate that discomfort to see if I can nail it down.

    I do think it’s difficult, or maybe impossible, to decouple this practice from indications of power for most people. The only instances of capitalized pronouns in common use that I’ve seen are the God and Jesus usage and, in some circles, the capitalization of pronouns for a dominant in a role-play context. “I” getting capitalized is also there, kind of, but that’s not a power thing because it’s not special; everyone is expected to use it as a language rule. I’ve also seen things like “oh, sure, that’s what They want you to think” or, not quite a pronoun, something like “they want you to fear The Other,” which is maybe less of a power thing but definitely a signal of additional weight and meaning above and beyond the word’s usual sense.

    I think this is the main source of my discomfort: this practice is currently used almost exclusively to signal, at minimum, “this word is being used in a special and important context, pay extra attention,” and at its strongest, “I am explicitly signaling that the person being referred to is superior.” I don’t use He/Him pronouns for God or Jesus because I don’t belong to those religions and don’t see those entities that way, and I have a fundamental belief in the equality of all humans that makes me uncomfortable putting a person on a pedestal like that.

    I feel uncomfortable about it/its pronouns as well, for the same reason: I don’t like the idea of dehumanizing or objectifying a person. But in that case I actually have some friends who use them, and it’s easier to take a “well, if it makes you happy, it’s no harm to me” attitude when it’s asking for a “demotion,” so to speak. The personal connection probably helps too; I don’t know anyone who wants capitalized pronouns myself.

    I’ve seen Dan Savage use capital pronouns to refer to dominants when answering letters, but that seems to me like Dan stepping into the letter writer’s scene space and choosing to go along with the “rule” while he’s there giving advice, kind of a “good houseguest” thing. I don’t think that’s something that the rest of us are obligated to do as a rule. I’d push back on a friend insisting that I refer to their dominant with capitalized pronouns, because whatever their relationship is with each other, their dom isn’t my dom, and I didn’t agree to that hierarchy, they did.

    I think the other discomfort is more of a language and grammar thing, which obviously is less important than an actual person’s comfort (see also the old “they is always plural” chestnut), so I’m not going to assert that this is a reason to disregard a person’s wishes, and language rules are subject to change. But in general, capitalization is not all that significant in English, which we know because something written in all caps or in all lowercase usually loses no meaning. Words at the start of sentences, proper nouns, and “I” get capitalized, and that’s mostly it. It’s mostly about readability, because ALL CAPS DOESN’T HAVE AS MUCH CONTRAST, but when used sparingly, as we usually do, important words stand out with a capital letter.

    “Demanding” that a particular word be used to refer to yourself in the form of pronouns is in the same ballpark as choosing your own name, obviously completely reasonable and acceptable, but “demanding” that special language rules be applied to yourself feels a step beyond that. I don’t want to cross into “oh, so could you identify as an attack helicopter too” territory, but I do wonder about some of the boundaries on this. Lots of people habitually write in all lowercase; would it be disrespectful to say “oh yeah i saw larry at the empire state building and had a conversation with him” if Larry uses He/Him pronouns? Would Larry be upset about both the name and the pronouns, or just the pronouns?

    I don’t think most people would get up in arms about their proper name getting de-capitalized in that context, which seems like further evidence that capitalization isn’t normally a meaningful aspect of the writing; it’s a more mechanical and practical rule. So insisting that for certain people it does need to be made significant feels like more of an imposition to me, and it comes right back to the “you need to treat Me as special and more important” feeling that I have.


  • Okay, after watching the video twice, I think I know what the fuck he’s talking about. He thinks that you’ll request a mail-in ballot, go to the polls, they’ll say you already voted, and then you triumphantly show the world that you didn’t vote, you still have the blank ballot, and obviously they’ve put in a vote for Joe Brandon under your name, is what they’ve done, those bastards. He has done a terrible job of explaining his plan, aside from it also being a bad plan.

    As a former election judge in Minnesota, I can tell you exactly how this would go in real life in that state (where, to brag a bit, we have a very progressive voting system that makes it very easy to vote, all the things Republicans hate). You’d get your mail-in ballot, then show up to your polling place with your blank ballot. Then when you ask to vote, they’ll say “yep, sure, come on in” and you can just go in and vote as normal.

    (The rule is that even if you request an absentee ballot, you can still cast a vote in person as normal. If you have mailed it in, either it has already been counted, in which case the registration system will bar you from voting in person, or you got there before it was processed, in which case they’ll toss the mailed ballot out when they get to it.)

    Worst case scenario, the election judges see that you’re carrying around an absentee ballot, and they’ll ask you to get rid of it because no one wants ballots floating around a polling place that aren’t valid. That’s the only thing I can think of that would be cause for a Republican to make a ruckus, but… like… yeah, you can’t just bring extra ballots to the polling place. And they won’t scan into the machine because they’re the wrong type. I really, really want to see videos of these people trying to catch the evil Democrats and then just, like, being treated normally though. (Even better if they raised a ruckus and then didn’t actually vote.)


  • OPML files really aren’t much more than a list of the feeds you’re subscribed to. Individual posts or articles aren’t in there. I would expect that importing a second OPML file would just add more subscriptions, but it’d be up to the reader app to decide what it does.
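
    For instance, here’s a rough sketch of what an import boils down to (the file names are made up):

    ```python
    import xml.etree.ElementTree as ET

    # An OPML file is essentially a tree of <outline> elements whose
    # xmlUrl attributes are feed URLs; importing a second file just
    # unions those URLs with the ones you already have.
    def feed_urls(path):
        tree = ET.parse(path)
        return {o.get("xmlUrl") for o in tree.iter("outline") if o.get("xmlUrl")}

    existing = feed_urls("subscriptions.opml")
    imported = feed_urls("new.opml")
    print(imported - existing)  # feeds the import would add
    ```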



  • If you ask an LLM to help you with a legal brief, it’ll come up with a bunch of stuff for you, and some of it might even be right. But it’ll very likely do things like make up a case that doesn’t exist, or misrepresent a real case, and as has happened multiple times now, if you submit that work to a judge without a real lawyer checking it first, you’re going to have a bad time.

    There’s a reason LLMs make stuff up like that, and it’s because they have been very, very narrowly trained when compared to a human. The training process is almost entirely getting good at predicting what words follow what other words, but humans get that and so much more. Babies aren’t just associating the sounds they hear; they’re also associating the things they see, the things they feel, and the signals their body is sending them. Babies are highly motivated to learn and predict the behavior of the humans around them, and as they get older and more advanced, they get rewarded for creating accurate models of the mental states of others, mastering abstract concepts, and doing things like making art or singing songs. Their brains are many times bigger than even the biggest LLM, their initial state has been primed for success by millions of years of evolution, and the training set is every moment of human life.

    LLMs aren’t nearly at that level. That’s not to say what they do isn’t impressive, because it really is. They can synthesize unrelated concepts in a stunningly human way, even concepts that they’ve never been trained on specifically. They’ve picked up a lot of surprising nuance just from the text they’ve been fed, and it’s convincing enough to make you think that something magical is going on. But ultimately, they’ve been optimized to predict words, and that’s what they’re good at, and although they’ve clearly developed some impressive skills to accomplish that task, it’s not even close to human level. They spit out a bunch of nonsense when what they should be saying is “I have no idea how to write a legal document, you need a lawyer for that.” But that would require them to have a sense of their own capabilities, a sense of what they know and why they know it and where it all came from, knowledge of the consequences of their actions, and a desire to avoid causing harm. They don’t have any of that, and how could they? Their training didn’t include it; it was mostly about words.

    One of the reasons LLMs seem so impressive is that human words are a reflection of the rich inner life of the person you’re talking to. You say something to a person, and your ideas are broken down and manipulated in an abstract manner in their head, then turned back into words forming a response which they say back to you. LLMs are piggybacking off of that a bit: by getting good at mimicking language, they are able to hide that their heads are relatively empty. Spitting out a statistically likely answer to the question “as an AI, do you want to take over the world?” is very different from considering the ideas, forming an opinion about them, and responding with that opinion. LLMs aren’t just doing statistics, but you don’t have to go too far down that spectrum before the answers start seeming thoughtful.


  • In its complaint, The New York Times alleges that because the AI tools have been trained on its content, they sometimes provide verbatim copies of sections of Times reports.

    OpenAI said in its response Monday that so-called “regurgitation” is a “rare bug,” the occurrence of which it is working to reduce.

    “We also expect our users to act responsibly; intentionally manipulating our models to regurgitate is not an appropriate use of our technology and is against our terms of use,” OpenAI said.

    The tech company also accused The Times of “intentionally” manipulating ChatGPT or cherry-picking the copycat examples it detailed in its complaint.

    https://www.cnn.com/2024/01/08/tech/openai-responds-new-york-times-copyright-lawsuit/index.html

    The thing is, it doesn’t really matter if you have to “manipulate” ChatGPT into spitting out training material word-for-word; the fact that it’s possible at all is proof that, intentionally or not, that material has been encoded into the model itself. That might still be fair use, but it’s a lot weaker than the original argument, which was that nothing of the original material really remains after training, that it’s all synthesized and blended with everything else to create something entirely new that doesn’t replicate the original.