Right, I find LLMs are fundamentally no different from Markov chains. That doesn’t mean they’re not useful; they’re a tool that’s good for certain use cases. Unfortunately, we’re in a hype phase right now where people are trying to apply them to a lot of cases they’re terrible at, and where better tools already exist to boot.
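To make the comparison concrete, here’s a minimal sketch of what a Markov chain text generator looks like: the next word is chosen based only on the current word, from transition counts learned off a corpus. (The corpus, function names, and seed are made up for illustration; this is a toy bigram model, not a claim about how any particular LLM is implemented.)

```python
import random
from collections import defaultdict

# Toy training corpus (made up for illustration).
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Learn bigram transitions: word -> list of words observed to follow it.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def generate(start, length, seed=0):
    """Walk the chain: each next word depends only on the current word."""
    random.seed(seed)
    word, out = start, [start]
    for _ in range(length - 1):
        choices = transitions.get(word)
        if not choices:
            break  # dead end: no observed successor
        word = random.choice(choices)
        out.append(word)
    return " ".join(out)

print(generate("the", 8))
```

The point of the analogy is that both are next-token samplers over learned statistics; the difference is that an LLM conditions on a long context through a neural network rather than a lookup table, which is a matter of degree rather than kind.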
Should the research he’s discussing also be disregarded? https://arxiv.org/pdf/2410.05229
That’s what happens when you start drinking your own kool aid, thinking that the Russian army is on the brink of collapse and fighting with shovels.
I mean how else are they supposed to ensure continued western support?
IDF firing at Irish peacekeepers with no response from the administration would be a fitting capstone humiliation for Biden.
fully automated luxury communism on the way :)
Saying something can be proven in principle isn’t very useful in practical terms. The fact of the matter is that the real world is simply too complex for the human mind to fully understand from first principles. Science is fundamentally about creating models of the world that are useful approximations of reality; it’s not a black and white thing. Asimov explains this well here: https://mvellend.recherche.usherbrooke.ca/Asimov_anglosaboteurs.pdf
The link I used clearly shows what people mean by the western world, the fact that it doesn’t include Japan or Korea isn’t the own that you seem to think it is. Of course, I never expect any sort of intelligent response here. lol indeed
What part of “the government is making information illegal” are you struggling to comprehend? If there were some magical anti-censorship platform that was usable by regular people, then it would just be banned, and the people using it would be arrested. This is not a technology problem.
It’s incredible that it needs to be explained that the issue is with the censorship regardless of what particular service is being censored or what you personally happen to think of it.
Yes, it’s literally occupied by the US military.
No, anything that’s a vassal state of the US is the west. And yes, Japan and Australia are understood to be part of the western bloc. This isn’t even controversial.
Yeah, Samsung is located in occupied Korea, which is part of the empire last I checked.
continuing to miss the point I see
That’s incredibly naive, I’m afraid. This sort of logic works in very simple cases, but it quickly breaks down in any complex scenario. The reality is that a lot of knowledge cannot be easily verified because it’s just too complex. Take a peer-reviewed scientific study as an example: the study might reference a different study as its basis, which references another study, and so on. If one of the studies in the chain wasn’t conducted properly and nobody noticed, then the whole basis could be flawed. This sort of thing happens all the time in practice.
What you really have is an ideology: a set of beliefs that fit together and create a coherent narrative of how the world works. A lot of the knowledge you integrate into your worldview carries various biases and interpretations with it. It’s not an absolute truth about the world, but merely an interpretation of it.
Kind of hilarious how people in the west would always claim that the differentiating factor between them and countries like China was freedom of expression. Turns out what was really happening was that the majority of people in the west were just swallowing their state propaganda uncritically, so there was no need for heavy-handed censorship. Now that the state is losing control of the narrative, heavy-handed measures are promptly introduced.
As always, the west is no different from its adversaries, but people in the west are gullible enough to think they’re special.
how to miss the point entirely
Actually, we do know that there are diminishing returns from scaling already. Furthermore, I would argue that there are inherent limits to simply using correlations in text as the basis for the model. Human reasoning isn’t primarily based on language; we create an internal model of the world that acts as a shared context. Language is rooted in that model, and that’s what allows us to communicate effectively and understand the actual meaning behind words. Skipping that step leads to the problems we’re seeing with LLMs.
That said, I agree they are a tool, and they obviously have uses. I just think they’re going to be part of a bigger tool set going forward. Right now there’s an incredible amount of hype around LLMs. Once the hype settles, we’ll know what use cases are most appropriate for them.