Frequently, yes.
So you’re telling me if I stop breathing I’ll never get older? I’m in!
This is what Ilya saw…
The Report of the Committee Appointed by the Royal Society to Consider of the Best Method of Adjusting the Fixed Points of Thermometers; And of the Precautions Necessary to Be Used in Making Experiments with Those Instruments
Seems fancy and legit, I see no reason to actually read it and confirm the info.
I like this version better than “he had a fever when he measured 100 degrees” so I will choose to believe it without further research.
I hope you are correct.
As long as they don’t fuck it up in a similar fashion to seemingly every other thing they have tried for a couple decades.
Assuming it takes its answer from search results, and the search results are all affiliate marketing sites that just want you to click on a link and buy something, this makes perfect sense.
Imagine having to make up an entire series of math lectures just to cover up your search history.
Is language conscious?
Are atoms?
I don’t know if LLMs of a large enough size can achieve (or sufficiently emulate) consciousness, but I do know that we barely know anything about consciousness, let alone its limits.
When he heard about the incident, King Frederick IV of Denmark asked the admiralty to court-martial Wessel.[3] He stood trial in November 1714, accused of disclosing vital military information about his lack of ammunition to the enemy, as well as endangering the ship of King Frederick IV by fighting a superior enemy force.[5] The spirit with which he defended himself and the contempt he poured on his less courageous comrades took the fancy of Frederick IV.[4] He successfully argued that a section of the Danish naval code mandated attacking fleeing enemy ships no matter their size, and was acquitted on 15 December 1714. He then went to the king asking for a promotion and was raised to the rank of captain on 28 December 1714.[5]
The balls on this man. And this is the part just before the section titled “Greatest Exploits”…
The thing is, LLMs can be used for something like this, but just like if you asked a stranger to write a letter for your loved one and only gave them the vaguest amount of information about them or yourself, you’re going to end up with a really generic letter.
…but to give it the amount of info and detail you would need to provide, you would probably end up writing 3/4 of the letter yourself, which defeats the purpose of being able to completely ignore and write off those you care about!
I would pay for AI-enhanced hardware…but I haven’t yet seen anything that AI is enhancing, just an emerging product being tacked on to everything they can for an added premium.
Well, $1 from you and 100 other people. Those politician ~~bribes~~ campaign donations are cheap compared to the aggregate profit they make off them.
“Are you suggesting coconuts migrate?”
I keep forgetting that that’s an option
Gotta go fast!
-Ancient Middle Eastern Philosopher, probably
No clue? Somewhere between a few years (assuming some unexpected breakthrough) and many decades? The consensus from experts (of which I am not one) seems to be somewhere in the 2030s/40s for AGI. I’m guessing accuracy will probably improve on a topic-by-topic basis; LLMs might never even get there, or only for things they’ve been heavily trained on. If predictive text doesn’t do it, then I would bet on whatever Yann LeCun is working on.
Perhaps there is some line between assuming infinite growth and declaring that this technology that is not quite good enough right now will therefore never be good enough?
Blindly assuming no further technological advancements seems just as foolish to me as assuming perpetual exponential growth. Ironically, our ability to extrapolate from limited information is a huge part of human intelligence that AI hasn’t solved yet.
Quantum is when two things at the same time