• 1 Post
  • 380 Comments
Joined 1 year ago
Cake day: August 15th, 2023


  • And in many ways, that is the way engineers should speak to other engineers when analyzing a problem.

    If two or more people can actually share a common goal of finding the best solution, everyone involved should be making sure that no time is wasted chasing poor solutions. This not only takes the ability to be direct with someone else, but it also requires that you can parse what others are telling you.

    If someone makes something personal or takes something personally, they need a break. Go take a short walk or something. (Linus is a different sort of creature though. I get it.)

    TBH, this is part of the reason I chose my doctor (GP). She is extremely direct when problem solving and has no problems theory-crafting out loud. Sure, we are social to a degree, but we share many of the same professional mannerisms. (We had a short discussion on that topic the other day, actually. I just made her job easier because I give zero fucks about being judged for any of my personal health issues.)




  • (thinks out loud…)

    If you could force different speeds and different voltages, you could make some guesses as to what the cable might support.

    USB packets use CRC checks, so a bad checksum may indicate a speed or physical problem. (Besides stating the obvious, my point is that doing strict checks for each USB mode gives CRC more value.)

    I just looked over the source code for libusb (like I knew what I was looking for, or something) and it seems that some of the driver(?) components hook really deep into the kernel. There might be a way to test specific parts of any type of handshake (for dataflow or voltage negotiation) to isolate bad wires by process of elimination.

    I think my point is that a top-down approach is likely possible, but it’s probabilistic. (Rough sketch of a starting point below.)
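
    One crude way to begin that top-down elimination from user space, as a sketch (this assumes Linux and the standard sysfs attributes under /sys/bus/usb/devices; the “suspect cable” heuristic is my own illustration, not anything from libusb): the kernel exposes both the USB version a device claims and the link speed it actually negotiated, so a USB 3.x device that keeps falling back to 480 Mbps hints that the SuperSpeed pairs in the cable are bad.

    ```python
    #!/usr/bin/env python3
    # Rough cable sanity check (sketch): compare the USB version each device
    # reports against the link speed it actually negotiated. A USB 3.x device
    # stuck at 480 Mbps or less often points at dead SuperSpeed pairs.
    # Assumes Linux and the standard sysfs layout under /sys/bus/usb/devices.
    from pathlib import Path

    SYSFS = Path("/sys/bus/usb/devices")

    def read(attr: Path) -> str:
        try:
            return attr.read_text().strip()
        except OSError:
            return "?"

    for dev in sorted(SYSFS.iterdir()):
        # Skip interface entries like "1-1:1.0"; keep actual devices and hubs.
        if ":" in dev.name or not (dev / "speed").exists():
            continue
        version = read(dev / "version")  # spec version the device claims (e.g. "3.00")
        speed = read(dev / "speed")      # negotiated speed in Mbps ("1.5", "12", "480", "5000", ...)
        product = read(dev / "product")
        flag = ""
        # Very rough heuristic: a 3.x device negotiating 480 Mbps or less is suspicious.
        if version.startswith("3") and speed in {"1.5", "12", "480"}:
            flag = "  <-- downgraded link, suspect cable/port"
        print(f"{dev.name:10} USB {version}  {speed:>6} Mbps  {product}{flag}")
    ```

    This only narrows things down to “something between the host and the device is limiting the link”; isolating individual wires would still need the lower-level handshake poking described above.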



  • I am curious as to why they would offload any AI tasks to another chip? I just did a super quick search for upscaling models on GitHub (https://github.com/marcan/cl-waifu2x/tree/master/models) and they are tiny as far as AI models go.

    It’s the rendering bit that takes all the complex maths, and if that is reduced, that would leave plenty of room for running a baby AI. Granted, the method I linked to was only doing 29k pixels per second, but they said they weren’t GPU optimized. (FSR4 is going to be fully GPU optimized, I am sure of it.)

    If the rendered image is only 85% of a 4K image, the remaining 15% is ~1.2 million pixels that need to be computed (a 4K frame is ~8.3 million pixels), and it still seems plausible to keep everything on the GPU. (Quick arithmetic below.)

    With all of that blurted out, is FSR4 AI going to be offloaded to something else? It seems like there would be significant technical challenges in creating another data bus that would also have to sync with memory and the GPU for offloading AI compute at speeds that didn’t risk creating additional lag. (I am just hypothesizing, btw.)
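
    To put rough numbers on that (a sketch that just reruns the comment’s own figures; the 60 fps target is my assumption, and the 29k pixels/s rate is the unoptimized cl-waifu2x number mentioned above, used for scale only):

    ```python
    # Back-of-the-envelope pixel budget for upscaling to 4K, taking the
    # comment's framing at face value: render 85% of the 4K pixel count
    # and let the upscaler fill in the rest.
    TARGET_W, TARGET_H = 3840, 2160
    RENDER_FRACTION = 0.85                 # share of 4K pixels actually rendered
    FPS = 60                               # assumed frame-rate target

    target_pixels = TARGET_W * TARGET_H                       # ~8.3 million
    missing_pixels = target_pixels * (1 - RENDER_FRACTION)    # ~1.24 million per frame
    needed_per_second = missing_pixels * FPS                  # ~75 million pixels/s

    print(f"4K frame:        {target_pixels:,.0f} pixels")
    print(f"to fill in:      {missing_pixels:,.0f} pixels per frame")
    print(f"at {FPS} fps:       {needed_per_second:,.0f} pixels/s")
    # Compare against the ~29k pixels/s quoted for the unoptimized run above:
    print(f"speed-up needed: ~{needed_per_second / 29_000:,.0f}x")
    ```

    So the ~1.2 million figure is just 15% of a 3840x2160 frame; whether a GPU-optimized model can fill that in at frame rate is the open question, not the pixel count itself.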



  • It’s a markup language(ish) but it’s not a programming language. XML would be closer to programming, IMHO, since you could have simple things like recursion. That example is even pushing what I would consider “programming”, but anyone can feel free to disagree.

    SQL is in the same category for me. It’s a query language that can get super complex and perform some basic logic, but you can’t exactly write “snake” in it. Sure, you could use cmdshell or something else to do something more complex, but that would no longer be SQL.

    My simplistic expectation of an actual programming language would be that you can automate an entire platform at the OS level (or lower) instead of automating functions contained within a service or application. (JVMs and other languages that are “containerized” are weird outliers, by my definition.)

    I am not trying to step on anyone’s toes here. I have just never really thought about what I personally consider a programming language to be.