• 1 Post
  • 115 Comments
Joined 1 year ago
Cake day: July 9th, 2023

  • I have both. I do not think the OLED version is twice as nice, though it is noticeably improved.

    If the cost is an issue, but doable, consider getting the LCD deck and putting the extra cash toward a TV dock and Bluetooth controller. The deck is awesome on the go (just took mine on vacation - 10/10) but it’s also a fantastic console in its own right. I play a lot of PC games on my couch, even though I have a decent desktop PC available.

    Whichever one you purchase, though, the Steam Deck is the best gaming device I’ve ever owned. Access to the vast Steam library (even if not all titles are compatible yet), access to install whatever else TF I want - even competing stores and emulators.

    It’s just… 🤯




  • If not vanilla Ubuntu, I’d still suggest trying an Ubuntu derivative like Linux Mint or Pop!_OS. Ubuntu has a huge community, so in the event you run into issues it’ll be easier to find fixes for them.

    What you’ll find is that Linux distros are roughly grouped into “families” (my term for it, anyway). Anyone can (theoretically, at least) start from a given kernel and roll their own distro, but most distros are modified versions of a handful of base distros.

    The major families at the moment are:

    • Debian: A classic all-rounder that prioritizes stability over all else. Ubuntu is descended from Debian.

    • Fedora: Another classic all-rounder. I haven’t used it in a decade, so I won’t say much about it here.

    • Arch: If Linux nerds were car people, Arch is for the hot rodders. You can tune and control pretty much any aspect of your system. … Not a good 1st distro if you want to just get something going.

    There are many others, but these are the major desktop-PC distro families at the moment.

    The importance of these families is that techniques that work in one (say) Debian-based distro will tend to work in other Debian-based distros… But not necessarily in distros from other families.


  • I used to work summers as an apprentice electrician. The amount of crazy wiring I saw in old houses was (heh) shocking. Sometimes it was just that it was old. Real old houses sometimes just had bare wire wrapped in silk. … And a few decades later that silk was frayed and crumbling in the walls and needed replacing.

    My current house was wired at a time when copper was more precious, so it was wired up and down through the house, with circuits arranged by proximity, not necessarily logic. When a certain circuit in my house blows the breaker, my TV, PC and one wall of the master bedroom all lose power. The TV and PC are not in the same room either.



  • Have you ever been in an old house? Not old, like, on the Historic Register, well-preserved, rich bastard “old house”. Just a house that has been around a while. A place that has seen a lot of living.

    You’ll find light switches that don’t connect to anything; artwork hiding holes in the walls; sometimes walls have been added or removed and the floors no longer match.

    Any construction that gets used must change as needs change. Be it a house or a city or a program, these evolutions of need inevitably introduce complexity and flaws that are large enough to annoy, but small enough to ignore. Over time those issues accumulate until they reach a crisis point. Houses get remodeled or torn down, cities build or remove highways, and programs get refactored or replaced.

    You can and should design for change, within reason, because all successful programs will need to change in ways you cannot predict. But the fact that a system eventually becomes complex and flawed is not due to engineering failures - it is inherent in the nature of changing systems.




  • Oh, for sure. I focused on ML in college. My first job was actually coding self-driving vehicles for open-pit copper mining operations! (I taught gigantic earth tillers to execute 3-point turns.)

    I’m not in that space anymore, but I do get how LLMs work. Philosophically, I’m inclined to believe that the statistical model encoded in an LLM does model a sort of intelligence. Certainly not consciousness - LLMs don’t have any mechanism I’d accept as agency or any sort of internal “mind” state. But I also think that the common description of “supercharged autocorrect” is overreductive. Useful as a rhetorical counter to the hype cycle, but just as misleading in its own way.

    I’ve been playing with chatbots of varying complexity since the 1990s. LLMs are frankly a quantum leap forward. Even GPT-2 was pretty much useless compared to modern models.

    All that said… All these models are trained on the best - but mostly worst - data the world has to offer… And if you average a handful of textbooks with an internet-full of self-confident blowhards (like me) - it’s not too surprising that today’s LLMs are all… kinda mid compared to an actual human.

    But if you compare the performance of an LLM to the state of the art in natural language comprehension and response… It’s not even close. Going from a suite of single-focus programs, each using keyword recognition and word stem-based parsing to guess what the user wants (Try asking Alexa to “Play ‘Records’ by Weezer” sometime - it can’t because of the keyword collision), to a single program that can respond intelligibly to pretty much any statement, with a limited - but nonzero - chance of getting things right…

    This tech is raw and not really production ready, but I’m using a few LLMs in different contexts as assistants… And they work great.

    Even though LLMs are not a good replacement for actual human skill - they’re fucking awesome. 😅


  • What I think is amazing about LLMs is that they are smart enough to be tricked. You can’t talk your way around a password prompt. You either know the password or you don’t.

    But LLMs have enough of something intelligence-like that a moderately clever human can talk them into doing pretty much anything.

    That’s a wild advancement in artificial intelligence. Something that a human can trick, with nothing more than natural language!

    Now… Whether you ought to hand control of your platform over to a mathematical average of internet dialog… That’s another question.



  • a quick web search uses much less power/resources compared to AI inference

    Do you have a source for that? Not that I’m doubting you, just curious. I read once that the internet infrastructure required to support a cellphone uses about the same amount of electricity as an average US home.

    Thinking about it, I know that LeGoog has yuge data centers to support its search engine. A simple web search is going to hit their massive distributed DB to return answers in subsecond time. Whereas running an LLM (NOT training one, which is admittedly cuckoo bananas energy intensive) would be executed on a single GPU, albeit a hefty one.

    So on one hand you’ll have a query hitting multiple (comparatively) lightweight machines to look up results - and all the networking gear between. On the other, a beefy single-GPU machine.

    (All of this is from the perspective of handling a single request, of course. I’m not suggesting that Wikipedia would run this service on only one machine.)




  • Thank you for responding! I really liked this bit

    with a (decently designed) UI, you merely have to remember the path you took to get to wherever you want to go, what buttons to press, what mouse movements to execute.

    I think that’s very insightful. I certainly have developed muscle-memory for many of my most-frequent commands in the CLI or editor of choice.

    I agree about Visual Studio as a preference. I’ve used (or at least tried) dozens of IDE setups down the years from vi/emacs to JetBrains/VS to more esoteric things like Code Bubbles. I’ve found my personal happy place but I’d never tell someone else their way of working was wrong.

    (Except for emacs devs. (Excepting again evil-mode emacs devs - who are merely confused and are approaching the light.)) ;)


  • I hope you take this in good humor and at least consider a TUI for your next project.

    Absolutely. I see what you did there… 😉

    But seriously, thank you for your response!

    I think your comment about GUIs being better at displaying the current state and context was very insightful. Most CLI work I do is generally about composing a pipeline and shoving some sort of data through it. As a class of work, that’s a common task, but certainly not the only thing I do with my PC.

    For multistage operations like, say, Bluetooth pairing, I definitely prefer the GUI. I think it is partially because of the state tracking inherent in the process.

    Thanks again!


  • As someone who genuinely loves the command line - I’d like to know more about your perspective. (Genuinely. I solemnly swear not to try to convince you of my perspective.)

    What about GUIs appeals to you over a command line?

    I like the CLI because it feels like a conversation with the computer. I explain what I want, combining commands as necessary, and the machine responds.

    With GUIs I feel like I’m always relearning tools. Even something as straightforward as ‘find and replace’ has different keyboard shortcuts in most of the text-editing apps I use - and regex support is spotty.

    Not to say that I think the terminal is best for all things. I do use an IDE and windowing environments. Just that - when there are CLI tools I tend to prefer them over an equivalent GUI tool.

    Anyway, I’m interested to hear your perspective - what about GUIs works better for you? What about the CLI is failing you?

    Thank you!


  • Lots of little quality of life things. For instance, in Kotlin types can be marked nullable or not. When you are passing a potential null into a non-nullable argument, the compiler raises an error.

    But if you had already checked earlier in scope whether or not the value was null, the compiler remembers that the value is guaranteed not to be null and won’t blow up.
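
    A minimal sketch of what that looks like in Kotlin (the function and variable names are just hypothetical examples for illustration):

    ```kotlin
    fun greet(name: String) {                 // non-nullable parameter
        println("Hello, $name")
    }

    fun main() {
        val maybeName: String? = readLine()   // nullable type

        // greet(maybeName)                   // compile error: String? can't be passed as String

        if (maybeName != null) {
            // Inside this branch the compiler smart-casts maybeName to String,
            // so no explicit cast or !! is needed.
            greet(maybeName)
        }
    }
    ```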

    Same for other typechecks. Once you have asserted that a value is a given type, you don’t need to cast it everywhere else. The compiler will remember.
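
    Same idea sketched with an is-check (again, hypothetical names):

    ```kotlin
    fun describe(value: Any): String {
        if (value is String) {
            // After the is-check, value is smart-cast to String here,
            // so .length is available without an explicit cast.
            return "A string of length ${value.length}"
        }
        return "Something else: $value"
    }
    ```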