• 0 Posts
  • 475 Comments
Joined 2 years ago
Cake day: June 11th, 2023

  • This isn’t the best or most popular way to do it, but: https://learn.microsoft.com/en-us/windows/wsl/install

    There’s a way built into Windows to deploy and use Linux from inside Windows.

    It’s not the purest experience, but it’s a way to get something like a feel for how some parts work before jumping in any deeper.
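
    If I’m remembering the current process right, the basic setup is a single command from an administrator PowerShell or Command Prompt (the linked page covers the details and options):

    ```
    wsl --install
    ```

    That installs Ubuntu by default; you can pick a different distribution with the -d flag.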

    A bootable USB stick is another way to try before you commit. The only reason I might suggest trying the other way first is that if you run into issues connecting to the Internet or something, you won’t feel totally lost. Having to keep rebooting back into Windows whenever you hit a problem can be frustrating, so building a little familiarity while you still have a safety line can help you feel more confident.

    As an aside, issues with a USB boot are increasingly uncommon. The biggest one is likely to be that USB is slow, so things might take a few moments longer to start.

    From there, after a little playing around you should be pretty comfortable doing basic stuff. Not deep mastery, but a sense of “here are my settings”, “my files go here”, “here’s how I fiddle with wifi”, “here’s how I change my desktop stuff”. At that point a dual boot should work out, since you’ll be able to use the system itself to figure out how to do new things, and also just use it for whatever, in a general sense.

    If it’s working out, you should find yourself popping back into windows less and less.


  • Oh, I totally know there’s been a lot of politics in the FOSS community and that some of the people are nasty, I’m just flabbergasted that someone would try to connect such disparate things.
    I can comprehend a Nazi FOSS enthusiast having opinions on race and on window managers. It’s when they start having racist opinions on window managers that it all flies out the window. It’s like being opposed to copper plumbing because it’s too Norwegian.

    Just a case of seeing irrational people who act irrationally acting irrationally in a new way, and being shocked that the irrationality doesn’t follow a pattern or stay on topic.




  • LLMs are prediction tools. What they produce is a corpus that doesn’t use certain phrases, or uses others more heavily, but has the same aggregate statistical “shape”.

    It’ll also be preposterously hard for them to pull off, since the data it was trained on always has someone eventually disagreeing with the racist fascist bullshit they’ll get it to focus on. Eventually it’ll start saying things that contradict whatever it was supposed to be saying, because statistically some manner of contrary opinion eventually gets voiced.
    They won’t be able to check the entire corpus for weird stuff like that, or delights like MLK speeches being rewritten to be anti-integration, so the next version will have the same basic information, but passed through a filter that makes it sound like a drunk incel talking about Asian women.


  • Multiple attackers represent significantly more force than even a knife.

    Proportional force means the force must be proportional to the threat, not to the force the other person is using. If someone threatens death with their hands, you can use deadly force to defend against a deadly threat.

    One would be reasonable in concluding that masked people trying to force you or someone else into a van is an imminent threat of death, great bodily harm or sexual assault.

    You can’t use deadly force to defend against harassment or theft, because that would be disproportionate.



  • All the other benefits of a non-violent protest aside, there’s also immense value in reminding people that they’re not as singular in their viewpoint as they feel.

    For a lot of people, it’s been very easy to feel like everyone else must be on board with this.

    I’m not sure what you’re looking for to codify the implicit threat. A couple million people calling you a king at an event called “no kings day” in a country whose founding narrative is “violently rebel against kings” seems pretty implicit to me.

    Also, I just realized that there’s a red coat/red hat parallel I haven’t seen leveraged yet that has a lot of potential.


  • Fundamentally, I agree with you.

    The page being referenced

    Because the phrase “Wikipedians discussed ways that AI…” is ambiguous, I tracked down the page being referenced. It could mean they gathered with the intent to discuss that topic, or that they discussed it as a result of considering the problem.

    The page gives me the impression that it’s not quite “we’re gonna use AI, figure it out”, but more that some people put together a presentation on how they felt AI could be used to address a broad problem, and then they workshopped more focused ways to use it towards that broad target.

    It would have been better if they had started with an actual concrete problem, brainstormed solutions, and then gone with one that fit, but they were at least starting with a problem domain that they thought it was applicable to.

    Personally, the problems I’ve run into on Wikipedia are largely low-traffic topics where the content reads too much like someone copied a textbook into the page, or just has awkward grammar and confusing sentences.
    That kind of article quickly makes it clear that someone didn’t write it in an encyclopedic style from scratch.


  • A page detailing the AI-generated summaries project, called “Simple Article Summaries,” explains that it was proposed after a discussion at Wikimedia’s 2024 conference, Wikimania, where “Wikimedians discussed ways that AI/machine-generated remixing of the already created content can be used to make Wikipedia more accessible and easier to learn from.” Editors who participated in the discussion thought that these summaries could improve the learning experience on Wikipedia, where some article summaries can be quite dense and filled with technical jargon, but that AI features needed to be clearly labeled as such and that users needed an easy way to flag issues with “machine-generated/remixed content once it was published or generated automatically.”

    The intent was to make more uniform summaries, since some of them can still be inscrutable.
    Relying on a tool notorious for making significant errors isn’t the right way to do it, but it’s a real issue being examined.

    In thermochemistry, an exothermic reaction is a “reaction for which the overall standard enthalpy change ΔH⚬ is negative.”[1][2] Exothermic reactions usually release heat. The term is often confused with exergonic reaction, which IUPAC defines as “… a reaction for which the overall standard Gibbs energy change ΔG⚬ is negative.”[2] A strongly exothermic reaction will usually also be exergonic because ΔH⚬ makes a major contribution to ΔG⚬. Most of the spectacular chemical reactions that are demonstrated in classrooms are exothermic and exergonic. The opposite is an endothermic reaction, which usually takes up heat and is driven by an entropy increase in the system.

    This is a perfectly accurate summary, but it’s not entirely clear and has room for improvement.
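
    (For reference, the relation doing the heavy lifting in that last part is ΔG⚬ = ΔH⚬ − TΔS⚬: a strongly negative ΔH⚬ usually drags ΔG⚬ negative as well, which is why strongly exothermic reactions are usually also exergonic.)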

    I’m guessing they were adding new summaries so that they could clearly label them and not remove the existing ones, not out of a desire to add even more summaries.


  • So, I wasn’t referring to enjoyment. I spoke of engagement or interest. It’s why programming is more appealing than data entry.

    You’re just doubling down on the false dichotomy I spoke of. It’s not at all uncommon to find someone with plenty of experience who can easily and honestly tell you why they think what the company they work for does is interesting.

    Asking someone why they think working at the job they’re applying for is appealing isn’t “hiring for enthusiasm”, and it’s honestly odd that you keep casting it that way.
    I get where you’re coming from, and I partly disagree. It doesn’t seem like you’re parsing what I’m saying because of this “either one or the other” attitude though.
    No offense intended, but it makes you come across as burnt out and sad. I don’t work for small companies with inexperienced people, and I’m not constantly shipping broken code that needs rewriting. I’ve been doing this for roughly 15 years and I can honestly say “working in security in general is interesting because it forces you to think about your solution from a different perspective, the attacker’s, and working at $AuthenticationVendorYouQuitePossiblyUse in specific is appealing because you get to work on problems that are actually new, at a scale where you can see it have an impact”.
    That’s not gushing with enthusiasm: it’s why I’m not bored every day. If you’re actually just showing up to work every day and indifferently waiting to be told what to do because it’s all just the same old slog… That’s sad, and I’m sorry.


  • I’m lucky that after all these years I still get those moments of great enjoyment when, at the end of doing something insanely complex, it all works

    I just think it’s worth pointing out that that is an example of the work being engaging.

    No one is so naive as to think that you work a job for anything other than money. The original post doesn’t even seem to convey that it’s bad to ask about the pay and benefits. It’s saying that if, when directly asked, the candidate has no answer for what seems interesting about the job, they might not be a good fit.

    You seem to be an experienced software developer. You’re easily qualified to do basic manual data entry. Same working environment, same basic activity. Would you be interested in changing roles to do data entry for $1 more salary?
    I’m also a software developer, and I can entirely honestly say I would not, even though it would be less responsibility and significantly easier work.
    Even the boring parts of my work are vaguely interesting and require some mental engagement.

    It seems there’s this false dichotomy that either you’re a cold mercenary working 9 to 5 and refusing to acknowledge your coworkers during the lunch break you’re entitled to, or you’re a starry-eyed child working for candy and corporate swag. You can ask for fair money, do only the work you’re paid for, have a cordial relationship with coworkers, and also find your work some manner of engaging.

    It’s not unreasonable for an employer to ask how you feel about the work, just like it’s not unreasonable for a candidate to ask about the details of the work.


  • Sure. I wouldn’t disqualify someone for being ambivalent towards what we’re working on, but the person who seems interested is gonna be better to work with.

    Likewise when looking for a place to work, if the tangibles are equivalent I’ll prefer the place with better intangibles.

    I’m not in HR or management, so I don’t care about cost effectiveness or productivity beyond “not screwing me over”. From that perspective, it’s generally nicer to work with someone who finds it interesting than with someone who doesn’t.

    There’s no point asking “why do you want to work here”, because the answer is obviously a combination of money and benefits, and how food and healthcare keeps you from being dead.
    I can’t fault an interviewer who’s clearly trying to skip the obvious question and actually ask how the candidate feels about the work, rather than disqualifying them for not volunteering the right answer.

    It’s not unreasonable for an employer to ask a candidate how they feel about the work any more than it’s unreasonable for the candidate to ask about the working environment.


  • I actually kinda agree with both here.

    It sucks working with someone who is utterly uninterested in the work, if it’s anything above rote work.
    Asking the candidate what they found interesting about it is at least a basically fine idea. If they can’t answer when you ask, that actually is kinda concerning.
    Big difference between asking and expecting them to volunteer the information.

    At the same time, if the people interviewing you can’t even pretend to show basic conversational courtesy by asking a few “what do you do for fun” style questions, or anything that shows they’re gonna be interested in the person they’re looking to work with, that’s a major concern.



  • Eh, there’s an intrinsic amount of information about the system that can’t be moved into a configuration file, if the platform even supports one.

    If your code is tuned to make movement calculations with a deadline of less than 50 microseconds and you have code systems for managing magnetic thrust vectoring and the timing of a rotating detonation engine, you don’t need to see the specific technical details to work out ballpark speed and movement characteristics.
    Code is often intrinsically illustrative of the hardware it interacts with.
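
    As a made-up illustration (nothing here is real flight code, and every name is invented), imagine all the constants dutifully pushed out into a config file. The structure that remains still tells you the vehicle steers with magnetic thrust vectoring, runs a rotating detonation engine, and has a control loop fast enough to imply its performance class:

    ```python
    import json

    # Stand-in for an external/redacted config; only the keys matter for the point.
    cfg = json.loads('{"tick_deadline_us": 50, "thrust_vector_axes": 2, "rde_pulse_hz": 20000}')

    def control_tick(sensors):
        """One control cycle. The mere existence of these steps is informative."""
        commands = {}
        # Magnetic thrust vectoring: reveals the vehicle steers without relying on fins.
        commands["vector_angles"] = [0.0] * cfg["thrust_vector_axes"]
        # Rotating detonation engine timing: reveals the propulsion class outright.
        commands["rde_phase_s"] = 1.0 / cfg["rde_pulse_hz"]
        # A sub-50-microsecond budget per tick hints at speed and agility,
        # even though the number itself lives "safely" in the config.
        assert cfg["tick_deadline_us"] <= 50
        return commands

    print(control_tick({}))
    ```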

    Sometimes the fact that you’re doing something is enough information for someone to act on.

    It’s why artefacts produced from classified processes are assumed to be classified until they can be cleared and declassified.
    You can move the overt details into a config and redact the parts of the code that use that secret information, but that still reveals that there is secret code because the other parts of the system need to interact with it, or it’s just obvious by omission.
    If payload control is considered open, 9/10 missiles have open guidance control, and then one has something blacked out and no references to a guidance system, you can fairly easily deduce that that missile has an interesting guidance system, with capabilities likely greater than what you know about.

    Eschewing security through obscurity means you shouldn’t rely on your enemy’s ignorance, and you should work under the assumption of hostile knowledge. It doesn’t mean you need to seek to eliminate obscurity altogether.





  • There’s a principle in security, https://en.wikipedia.org/wiki/Kerckhoffs%27s_principle, roughly summarized as “the enemy knows the system”. It’s the notion that you should be able to fully describe everything about your system except the secret key and still be secure.

    My concept is a bit like this (don’t wanna give it all away):

    That’s always a concerning thing to encounter at the beginning of a description. That implies that there’s an awareness that if you knew how the system worked it would be weaker, which in a security setting is considered a very notable defect.

    If we’re looking at the actual security of the system you describe through that lens, the name of the company doesn’t add to your security. Neither do your word substitution rules. The secret in your system is the passphrase and the number you’re using to modify the letters from the company name.

    Now, using a passphrase is good, but it kinda felt like you were implying that you use the same passphrase for all services and then modify it. That’s not a good idea, since it reduces your effective security to a single number.
    Additionally, a passphrase should be random words, not a known phrase. If the phrase is grammatical it reduces the security pretty fast since it’s weirdly easy to guess word sequences.

    Adding a character to the end of a password during rotation is also a bad idea. Anyone breaking a password database will automatically try with a series of characters tacked onto the end specifically to catch that, so a password of yours that got leaked years ago can be used to figure out your current password just by checking it with different endings.

    A better system would be to write a truly random password down on a sheet of paper along with 31 others. Now fold up the piece of paper and put it in your wallet.
    You are already adept at keeping paper in your wallet secure, and anyone not in physical proximity to you has to fall back to the usual tricks to get at your stuff.
    Better yet would be to use a password manager, ideally one you can export to something encrypted that you can carry with you while you’re on the go.
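
    If it helps, here’s a minimal sketch of what “truly random” means in practice, using nothing but Python’s standard library (the wordlist is just a placeholder; a real one should have thousands of entries, like the EFF diceware list):

    ```python
    import secrets
    import string

    # Placeholder wordlist; use a real list with thousands of words in practice.
    WORDS = ["correct", "horse", "battery", "staple", "orbit", "velvet", "gravel", "monsoon"]

    def random_passphrase(wordlist, n_words=6):
        # secrets.choice is cryptographically secure, unlike random.choice.
        return " ".join(secrets.choice(wordlist) for _ in range(n_words))

    def random_password(length=20):
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(random_passphrase(WORDS))
    print(random_password())
    ```

    Six independently chosen words from a list of a few thousand gives you far more entropy than a grammatical sentence of the same length, because each word is an unconstrained choice rather than something a guesser can predict from the previous one.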