• 0 Posts
  • 67 Comments
Joined 1 year ago
Cake day: July 13th, 2023





  • Japan is probably the highest-trust society on the planet. You regularly walk into a Lawson or 7-Eleven in Tokyo with no employees visible and very few cameras, if any. You self-check out and are on your way.

    Meanwhile, in the Bay Area, more and more everyday goods in supermarkets and drugstores are locked in cases that you have to find an employee to unlock.

    Trust in society makes everything easier and more convenient; its absence is an invisible tax multiplied into every daily transaction. The US is close to critical failure, with almost nobody trusting anyone - no experts, no government, just snake charmers like Don who trade trust for blind faith. And make no mistake: people who have to question every interaction, every day, eventually get so worn down that the snake charmers can pick them up by selling the hope that if you only trust them, everything will be OK.

    This is no coincidence. Common narratives in the West, seen through the GDP lens, declare Japan a depression-ridden shithole doomed by demographic decline - but the GDP lens misses an entire variable.

    Quality.

    In the West, we have sold quality for growth. A phone that breaks every year sells a new phone. A dish made with lower-quality ingredients makes more profit. Quality of life and convenience in Japan are incredibly high. Stuff works. Reliably. Your train is never late. Your escalator doesn’t break down. Your power doesn’t go out. Your food is generally high quality. There’s a small convenience store every 50 meters - no trips to giant Walmart-style megastores required.

    Yes, Japan has issues, and this isn’t otaku fawning - it’s about the fact that in the West we sell quality for profit, which does not fly in Japan. If you visit Tokyo in 2023 from SF or Seattle, the contrast is stark. Somehow every car has been replaced with an electric or hybrid version. The trains running on the Yamanote line are new. The connectivity is 5G. Shit just works. The food is highly affordable and its quality hasn’t declined.

    Of course people visiting from low-trust societies, without a social compact to not fuck shit up, are going to behave like barbarians. Especially when that behavior is incentivized by views, engagement, or monetization.

    I’m with Singapore on this one. Someone who uploads a video like that should be caned and their social media accounts force-wiped so they have to start over, to disincentivize any possible gains.





  • Game industry professional here: we know Riccitiello. He presided over EA at critical transition periods and failed it. Under his tenure, Steam won total supremacy because he was busy trying to shift people to pay-per-install / slide-your-credit-card-to-reload-your-gun schemes. Yes, his predecessor jumped the shark by publishing the Orange Box, but Riccitiello’s greed sealed the total failure of the largest game company to deal with digital distribution, by ignoring that gamers loved collecting boxes (something Valve understood and eventually turned into the massive sale business where people buy many more games than they consume).

    He had also presided over EA earlier than that, and failed then too.

    Both times, he ended up getting sacked after the stock hit a record low. But personally he made out like a bandit, selling EA his own investment in VG Holdings (BioWare/Pandemic) after becoming EA’s CEO.

    He’s the kind of CEO a board of directors would appoint to loot a company.

    At Unity, he invested heavily in ads and gambled on being able to become yet another landlord. He also probably paid good money for reputation management (search for Riccitiello, or even his full name, on Google and marvel at the results) after certain accusations were made.



  • I think at this point we are arguing about beliefs.

    I actually work with this stuff daily, and there are a number of 30B models that exceed ChatGPT at specific tasks such as coding or content generation, especially when enhanced with a LoRA.
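
    A rough sketch of what that looks like in practice (the model and adapter names below are placeholders, not the specific models discussed here) - loading a LoRA adapter on top of a frozen base model with Hugging Face transformers and peft:

    ```python
    # Hypothetical model and adapter names; any 30B base plus a matching LoRA works the same way.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained("some-30b-base-model", device_map="auto")
    tok = AutoTokenizer.from_pretrained("some-30b-base-model")

    # Load the LoRA weights on top of the frozen base model.
    model = PeftModel.from_pretrained(base, "path/to/lora-adapter")

    prompt = "Write a short story about a lighthouse keeper."
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=200)
    print(tok.decode(out[0], skip_special_tokens=True))
    ```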

    airoboros-33b-gpt4-1.4-SuperHOT-8k, for example, comfortably outputs >10 tokens/s on a 3090 and beats GPT-3.5 at writing stories, probably because it’s uncensored. It also has an 8k context instead of 4k.

    Several recent Llama 2-based models exceed ChatGPT on coding and classification tasks and are approaching GPT-4 territory. Google Bard has already been clobbered into a pulp.

    The speed of advances is stunning.

    M-series Macs can run large LLMs via llama.cpp thanks to their unified memory - in fact, a recent MacBook with 64GB of unified memory can comfortably run most models just fine. Even notebook AMD GPUs with shared memory have started running generative AI in the last week.
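
    For the llama.cpp route, here is a minimal sketch using llama-cpp-python; the model file and layer count are placeholders, and the right settings depend on your hardware and build:

    ```python
    # Assumes a quantized model file downloaded locally; the path is a placeholder.
    from llama_cpp import Llama

    llm = Llama(
        model_path="models/airoboros-33b.q4_K_M.gguf",  # placeholder filename
        n_ctx=8192,       # extended 8k context, as with the SuperHOT variants
        n_gpu_layers=35,  # offload layers to the GPU (Metal on an M-series Mac, CUDA on a 3090)
    )

    out = llm("Once upon a time,", max_tokens=128, temperature=0.8)
    print(out["choices"][0]["text"])
    ```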

    You can follow along at chat.lmsys.org. Open-source LLMs are only a few months old but have already started encroaching on the proprietary leaders, who have years of head start.




  • You are answering a question with a different question. LLMs don’t make pictures of your mom. And this particular question? It has existed roughly since Photoshop has.

    It just gets easier every year. It was already easy - for years now you could pay someone 15 bucks on Fiverr to do all of that.

    Nothing really new here.

    The technology is also easy. Matrix math. About as easy to ban as MP3 downloads, which never stopped anyone. It’s progress. You are a medieval knight asking to put gunpowder back into the box, but it clearly cannot be put back - it is already illegal to make non-consensual imagery, just as it is illegal to copy books. And yet printers exist and photocopiers exist.

    Let me be very clear - accepting the reality that the technology is out there, that it’s basic, easy to replicate, and on a million computers now, is not disrespectful to victims of non-consensual imagery.

    You may not want to hear it, but just like with encryption, the only other choice society has is full surveillance of every computer to prevent people from doing “bad things”. Everything you complain about is already illegal and has already been possible - it just gets cheaper every year. What you want protection from is technological progress itself, because society sucks at dealing with its consequences.

    To be perfectly blunt, you don’t need to train any generative AI model to make powerful deepfakes. You can use tools like Roop and ControlNet to synthesize any face onto any image from a single photograph. Training is not necessary.

    When you look at it that way, what point is there in trying to legislate training with these arguments? None.