Yeah that’s pretty bad. We all know you can bait LLMs to spit out some evil stuff, but that they do it on their own is scary.
I like the X on top of the bird. Nice
I still use RedReader for Reddit, though its swipe gestures make it a lot less convenient than Connect. That automatically pushes me to use Lemmy more than Reddit.
That actually sounds like fun.
For android I can recommend the Lemmy Connect app. Pretty good.
The article fails to say what the issue is besides Mercy’s full team rez. Also, does Classic have the old 6v6 balancing, or just the 5v5 balancing with 6 players?
I only care about Lemmy
Ukraine is currently fighting this war for Europe too. Spending money and lives directly in a confrontation is massively more expensive than sending weapons.
For security updates in critical infrastructure, no. You want those right away, ideally instantly. You can’t risk a zero day being used to kill people.
They’ll walk on US streets before the end of 2025
Actually this one, and I just saw it on Lemmy too. As a lifelong Trek fan, I have this quote in my head quite often.
Yeah, looks like Wall Street is looking forward to deregulation. Haha
Open your neobroker app: if Trump won, the market will crash before it makes the news.
Thanks. I’ll check it out.
I’m really amazed by their consistent win11 patch fuckups. I’ve never seen it on this scale with win10. Luckily I’m still on win10, and I’m pretty sure I’ll get updates past 2025 somehow.
Weren’t they one of those blocking GME early on?
Fuck VW. They slept on providing cheap eCars, and now the workers have to suffer by losing their jobs.
That’s… That’s good. You want people to work as hard as possible to save everyone’s future. We only have one.
1526 communities. Wow that’s a lot. Is there anything left?
Yes, replies degenerate the longer a conversation goes on. Maybe this student kind of hit the jackpot by triggering a fiction-writer reply from somewhere in the dataset. It’s reproducible in a way similar to what the student did: ask many questions, and at a certain point you’ll notice that even simple facts come out wrong. I’ve personally observed this with ChatGPT multiple times. It’s easier to trigger with multiple similar but unrelated questions, as if the AI tries to push the wider context and chat history down the same training “paths”, burns them out, blocks them that way, and then tries to find a different direction, similar to the path electricity from a lightning strike can take.
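For what it’s worth, here’s a minimal sketch of how one might try to reproduce that drift, assuming the OpenAI Python client; the model name and the list of unrelated questions are just placeholders, not anything the student actually used:

```python
# Minimal sketch: keep one growing chat history, interleave unrelated questions,
# then re-ask a simple factual question to see whether the answer drifts.
# Assumes the OpenAI Python client; model name and questions are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

history = []
unrelated_questions = [
    "How do I bake sourdough bread?",
    "Explain quicksort in one sentence.",
    "What rhymes with 'orange'?",
    "Summarize the plot of Hamlet.",
] * 10  # repeat to make the conversation long

def ask(question: str) -> str:
    history.append({"role": "user", "content": question})
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=history,
    )
    answer = resp.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

probe = "What is the capital of France?"  # a simple fact to re-check later
print("early:", ask(probe))

for q in unrelated_questions:
    ask(q)

print("late: ", ask(probe))  # compare against the early answer for drift
```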