• 0 Posts
  • 203 Comments
Joined 1 year ago
Cake day: July 10th, 2023


  • This would work, but it assumes the primary use of the machine is Windows and significantly derates your performance under Linux due to USB speeds. Even if you’re storing your data on the Windows HDD, NTFS drivers are dog slow compared to EXT4 and other *nix filesystems.

    Also some BIOSes are a pain to get to boot off removable drives reliably so it really depends on what your machine is.

    I’ve used Linux as a primary dev system for well over a decade now, and with the current state of Windows I’d really recommend just taking the leap: keep your Windows box if you need Windows software, and build a dedicated Linux workstation.



  • That’s a valid point: the dev cycle is compressed now and customer expectations are low.

    So instead of putting in the long term effort to deliver and support a quality product, something that should have been considered a beta is just shipped and called “good enough”.

    A good example I guess would be a long term embedded OSS project like Tasmota, compared to the barely functional firmware that comes stock on the devices that people buy to reflash to Tasmota.

    Still, few things frustrate me like a Bluetooth device that really shouldn’t have been a Bluetooth device, one with non-deterministic behaviour due to missing initialization or some other trivial fault. Why did the tractor work lights turn on as purple today? Nobody knows!


  • My type is a dying breed too: the guys who do their best to write robust code and actually try to consider edge cases, race conditions, properly sized variables and efficient use of cycles, all the things that embedded guys have done as “embedded” evolved from the 6800 to PIC, Atmel and then ESP platforms.

    Now people seem to have embraced “move fast and break things”, but that’s the exact opposite of how embedded is supposed to be done. Don’t get me wrong, there is some great ESP code out there, but there’s also a shitload of buggy and poorly documented libraries and devices that require far too many power cycles to keep functioning.

    In my opinion one power cycle is too many in the embedded world. Your code should not leak memory. We grew up with BYTES of RAM to work with; memory leaks were unthinkable!
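
    To make that discipline concrete, here’s a MicroPython-flavoured sketch (names and sizes are illustrative, not from any real firmware) of the pre-allocate-everything habit: every buffer exists before the main loop starts, so heap use is flat and there is nothing to leak.

```python
# Illustrative buffer pool in the MicroPython style: allocate once at
# boot, reuse forever. Pool size and buffer size are made-up numbers.
POOL_SLOTS = 4
BUF_SIZE = 64

_buffers = [bytearray(BUF_SIZE) for _ in range(POOL_SLOTS)]
_free = list(range(POOL_SLOTS))

def buf_acquire():
    """Hand out the index of a pre-allocated buffer.

    Exhaustion is a loud, bounded failure rather than silent heap growth.
    """
    if not _free:
        raise MemoryError("buffer pool exhausted")
    return _free.pop()

def buf_release(idx):
    """Return a buffer index to the pool for reuse."""
    _free.append(idx)

def buf(idx):
    """Access the underlying bytearray for a held index."""
    return _buffers[idx]
```

    Worst-case RAM use is POOL_SLOTS * BUF_SIZE bytes, known before the code ever runs, which is the property the old byte-counting discipline was built around.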

    And don’t get me started on the appalling mess that modern engineers can make with function blocks inside a PLC, or their seeming lack of knowledge of industrial control standards that have existed since before the PLC.


  • evranch@lemmy.ca to Science Memes@mander.xyz · Sardonic Grin · 2 months ago

    Been using one of these apps to try to identify the many wild plants in my native pastures. Mostly just out of curiosity and conservation. Likewise it helped identify some trees and shrubs the previous owner planted around the yard.

    They are far from perfect, but they’re a good starting point: you get lots of pictures to compare against your mystery tree, and you finish the identification yourself.



  • I don’t see how people like you miss the entire concept of “base load”.

    I live in a region with vast amounts of renewable energy resources. It’s always windy and the sun shines almost every day. I have solar panels on my house that cover most of my domestic hot water (DHW) and a large fraction of my summer cooling load, and keep most of my appliances running.

    But right now, the sun is down and the wind is flat. And I still need power. My battery storage would be depleted by morning, and damaged through overdischarge, if I didn’t buy power from the grid instead.

    And it’s a lovely summer evening with no heating or cooling demand! What about midwinter, -35C and dark and snowy? Where is my power coming from on that day, after a month of days just like it?

    Nuclear.
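
    The arithmetic behind “depleted by morning” is simple enough to sketch (all numbers below are hypothetical, not my actual system):

```python
def hours_until_min_soc(capacity_kwh, soc, min_soc, load_kw):
    """Hours a battery can carry a constant load before hitting the
    state-of-charge floor that protects it from overdischarge."""
    usable_kwh = capacity_kwh * (soc - min_soc)
    return max(0.0, usable_kwh) / load_kw

# Hypothetical example: 10 kWh pack starting full, 20% discharge floor,
# 1 kW overnight load -> 8 hours of autonomy, empty before a winter sunrise.
runtime = hours_until_min_soc(10.0, 1.0, 0.2, 1.0)
```

    And that’s one mild night; a -35C cold snap multiplies the load while solar input stays near zero, which is exactly the gap base load generation fills.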




  • It’s complicated. The main issue is that I live on a remote farm with no cell coverage, except in the tiny zone under my 50’ tower with a booster.

    However I now have Starlink, and wired and wireless APs covering a large area with high speed, low latency data.

    So, port my number to VoIP.ms, which supports SMS, and make all my calls/texts through Wifi using SIP. On the road, use a basic cell plan with unlimited slow data that is still fast enough for voice. Tested, working, so far fairly simple.

    Now the issues. RCS won’t work with my now VoIP-provisioned number, because there’s no SIM for it. The SIM in the phone has a different number, that of the new plan, which will be unreachable at the farm by voice/SMS just as the old number used to be.

    This would all be a non-issue if my provider supported VoWifi on anything other than iPhones, but sadly this is not an option. So I’ve got service everywhere now, but am stuck with voice and SMS, no RCS or MMS.




  • Right, we need to come up with better terms for talking about “AI”. Personally, at the moment I consider any transformer-type ML system to be part of the category; as you stated, none of them is any more “intelligent” than the others. They’re all just a big stack of tensor operations. So if one is AI, they all are.

    Remember long ago when “fuzzy logic” was all the hype and considered to be AI? It was just a very early form of classifier network, but everyone was super excited at the time.


  • I’m just stating that “AI” is a broad field. These lightweight and useful transformer models are a direct product of other AI research.

    I know what you mean, but simply stating “don’t use AI” isn’t really valid anymore, since these ML models will soon be a common component. There are even libraries and hardware-acceleration support for tensor operations on the ESP32-S3.


  • evranch@lemmy.ca to Technology@lemmy.world · The AI bill that has Big Tech panicked · 3 months ago

    It’s possible for local AI models to be very economical on energy, if used for the right tasks.

    For example I’m running RapidOCR which uses a modern transformer architecture, and absolutely blows away traditional OCR at capturing data from character displays.

    Doesn’t even need a GPU and returns results in under a second on a modern CPU. No preprocessing needed, just feed it an image. This little multimodal transformer is just as much “AI” as bloated general purpose GPTs, but it’s cheap, fast and useful.
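
    Usage is minimal. This is a hedged sketch: the package and class names follow the rapidocr_onnxruntime distribution of RapidOCR, and the (box, text, score) result layout is an assumption that may vary between versions.

```python
def join_ocr_lines(result, min_score=0.5):
    """Flatten an OCR result of (box, text, score) entries into plain
    text, dropping low-confidence detections."""
    if not result:
        return ""
    return "\n".join(text for _box, text, score in result if score >= min_score)

def read_display(image_path):
    # Assumed API: a RapidOCR engine is callable on an image path and
    # returns (result, elapse). No GPU required; runs on a plain CPU.
    from rapidocr_onnxruntime import RapidOCR  # pip install rapidocr-onnxruntime
    engine = RapidOCR()
    result, _elapse = engine(image_path)
    return join_ocr_lines(result)
```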


  • Look at Saskatchewan, Canada. We’re the only province with a public telecom, SaskTel.

    Most people in the cities and even larger towns have fiber, and our cell plans are significantly cheaper than anywhere else in Canada despite being a rural province with a large coverage area to population ratio.

    We also have decent electricity rates considering we have no hydro, and the cheapest natural gas in Canada. Thanks to SaskPower and SaskEnergy.

    Public utilities are the only way to do it, I’m always shocked to see people defend privatization in any way.


  • probably the best optical character recognition by far

    I’ve actually just been working with OCR this week, trying to capture data off the screen of a stupid proprietary Schneider device, as that’s the only way to get at it.

    Long story short Tesseract stinks at this task.

    The Chinese-designed PaddleOCR seems significantly superior: it runs a more modern neural net and needs far less preprocessing. I’d class it as more of a “full-service AI” than a simple recognition system like Tesseract; it can correct for skew and do its own normalization and thresholding internally, while Tesseract wants a perfect boolean raster fed to it.
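
    The preprocessing gap is the practical difference. A hedged sketch of both pipelines (pytesseract and paddleocr as published on PyPI; the threshold value and lang code are illustrative), with the thresholding step written out as the pure pixel math Tesseract effectively demands:

```python
def binarize(pixels, threshold=128):
    """The 'perfect boolean raster' Tesseract wants: hard-threshold
    every grayscale pixel to pure black (0) or white (255).

    Takes a 2D list of 0-255 values; with real images you'd do the
    same thing via PIL's Image.point().
    """
    return [[255 if p > threshold else 0 for p in row] for row in pixels]

def ocr_tesseract(path):
    # Tesseract pipeline: open, grayscale, threshold, then recognize.
    from PIL import Image
    import pytesseract
    img = Image.open(path).convert("L").point(lambda p: 255 if p > 128 else 0)
    return pytesseract.image_to_string(img)

def ocr_paddle(path):
    # PaddleOCR handles skew, normalization and thresholding internally,
    # so the raw photo goes straight in.
    from paddleocr import PaddleOCR
    ocr = PaddleOCR(lang="en")
    return ocr.ocr(path)
```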

    Unfortunately, the barrier to entry is a lot higher, between deciphering their text-vomit website and the fact that it seems prone to random segfaults.