If anyone was somehow still thinking RoboTaxi is ever going to be a thing, then no, it’s not, for reasons like this.
Giving Saudi Arabia the 2034 World Cup has been a long time coming. The 2030 World Cup was split across South America, Africa and Europe (yes, really) under the banner of it being the ‘centennial cup’. Of course they could have held it all in South America, Uruguay being the original hosts.
Under World Cup rules, once a continent has hosted the tournament it can’t host again for a certain period. So FIFA decided to hold 2030 across three continents at the same time, reducing the available options for 2034 and pretty much guaranteeing Saudi Arabia as the 2034 host.
A great video on this subject. https://youtu.be/q6h-a4GfYy4?si=npwu5xXxWScuPu1J
Today’s news:
https://www.turkiyetoday.com/turkiye/turkiye-bans-discord-amid-concerns-over-platform-safety-62896/
For now, Discord users in Türkiye face limited access to the platform, though it remains unclear whether a full ban will be implemented in the coming days.
Still working for me on hotel wifi.
Edit: it won’t launch on my laptop now. Stuck trying to update. Still works on my phone.
All hotels reserve the right to inspect your room whenever they need to. The privacy sign just means you don’t want room service; it’s not some magical lock.
They’d still knock, not just burst into your room to catch you in flagrante.
That said, seeing the Black Hat conference in this way is daft.
Was it left outside in the rain?
Just saw this a few hours ago. If it wasn’t for this post I think I’d have forgotten about it already.
I guess it’s fine, just entirely unremarkable.
The show has real Late Late Breakfast Show vibes: an ’80s BBC show where the public took part in increasingly over-the-top stunts. In the end someone died and the show was cancelled three days later.
https://www.everything80spodcast.com/the-late-late-breakfast-show-tragedy-of-1986/
They’ve committed to supporting AM5 (the LGA socket launched in 2022) through at least 2027.
“We envision other types of more complex guardrails should exist in the future, especially for agentic use cases, e.g., the modern Internet is loaded with safeguards that range from web browsers that detect unsafe websites to ML-based spam classifiers for phishing attempts,” the research paper says.
The thing is, folks know how the safeguards for the ‘modern internet’ actually work, and they’re generally straightforward code. Whereas LLMs are kinda the opposite: a mathematical model that spews out answers. Product managers who think they can be corralled into behaving in a specific, incorruptible way will, I suspect, be disappointed.
Yeah, most third-person games I like to play with a controller; first-person, not so much.
I remember the ‘good old days’ of Sun Fire 10k and similar servers. You could replace entire boards of CPU and RAM and the server would keep on trucking.
Because it’s actually really hard to achieve technically. When ads are served outside the stream you can easily serve different ads to different viewers based on their profiles. When the ads are baked into the stream you can either
A) Create a whole bunch of different copies of the video asset with different ads baked in and then rotate these on a regular basis. Which would be expensive to update and store and limit the range of adverts that could be served to a particular user.
B) Dynamically create a stream on the users request, which while possible means standard CDN caching isn’t going to work so there’s a distribution challenge.
Or some other alternative they’ve come up with. I’d be really interested to know what their approach is here.
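For what it’s worth, option B is roughly what server-side ad insertion looks like in practice: the content and ad segments stay static and CDN-cacheable, and only a small per-viewer manifest is generated on request. A minimal, hypothetical sketch (the URLs and ad-decision logic are made up):

```python
# Hypothetical sketch of per-viewer HLS manifest stitching (option B above).
# The content and ad segments themselves are static and CDN-cacheable;
# only this small text manifest is generated per request.

AD_SLOTS = {0}  # splice an ad break before segment 0 (a pre-roll), for illustration

def pick_ad_segments(user_profile: dict) -> list[str]:
    """Stand-in for an ad-decision service keyed on the viewer's profile."""
    if "cars" in user_profile.get("interests", []):
        return ["https://cdn.example.com/ads/car-ad/seg0.ts",
                "https://cdn.example.com/ads/car-ad/seg1.ts"]
    return ["https://cdn.example.com/ads/default/seg0.ts"]

def build_manifest(content_segments: list[str], user_profile: dict) -> str:
    """Return an HLS media playlist with this viewer's ads spliced in."""
    lines = ["#EXTM3U", "#EXT-X-VERSION:3", "#EXT-X-TARGETDURATION:6"]
    for i, seg in enumerate(content_segments):
        if i in AD_SLOTS:
            lines.append("#EXT-X-DISCONTINUITY")  # tell the player an ad is starting
            for ad_seg in pick_ad_segments(user_profile):
                lines += ["#EXTINF:6.0,", ad_seg]
            lines.append("#EXT-X-DISCONTINUITY")  # back to the main content
        lines += ["#EXTINF:6.0,", seg]
    lines.append("#EXT-X-ENDLIST")
    return "\n".join(lines)

print(build_manifest(
    ["https://cdn.example.com/video/seg0.ts", "https://cdn.example.com/video/seg1.ts"],
    {"interests": ["cars"]},
))
```

To the player this just looks like one continuous playlist, which is also why URL-pattern ad blocking gets much harder with this approach.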
There are no M1 devices with less than 8GB of RAM.
The A16 Bionic has a Neural Engine capable of 17 TOPS but only 6GB of RAM.
The M1 had a Neural Engine capable of just 11 TOPS but all M1 chips have at least 8GB of RAM.
So the model could presumably run on an A16 Bionic, which has roughly 54% more TOPS than the M1, if it had 8GB of RAM, but it only has 6GB. Apple have clearly decided that a model small enough to fit just wouldn’t give good enough results.
Maybe as research progresses they’ll find a way to make it work with a model with fewer parameters but I’m not going to hold my breath.
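To make that comparison explicit (figures from the comments above, not Apple’s published requirements):

```python
# Quick arithmetic on the figures above (not Apple's actual gating logic).
a16 = {"tops": 17, "ram_gb": 6}
m1  = {"tops": 11, "ram_gb": 8}

tops_advantage = (a16["tops"] - m1["tops"]) / m1["tops"]
print(f"A16 has {tops_advantage:.1%} more Neural Engine TOPS than the M1")  # 54.5%
print(f"but only {a16['ram_gb']}GB of RAM vs the {m1['ram_gb']}GB minimum on any M1")
```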
Yeah, I thought it was an NPU TOPS issue keeping it off the 17 non-Pro. However, since it runs on an M1, I think it’s more to do with needing 8GB of RAM to fit the model.
He called the software integration between the two companies “an unacceptable security violation,” and said Apple has “no clue what’s actually going on.”
I’d be very surprised if corporates weren’t able to just disable it via MDM on their workers’ phones. Not sure it’s Apple who has ‘no clue’ here.
If they keep burning $100k/week on their Vercel bill they might not be around that long anyway!
The thing with serverless is you’re paying for iowait. On a regular server, like an EC2 or Fargate instance, when one thread is waiting for a reply from a disk or network operation the server can do something else. With serverless each invocation handles a single request and is billed on wall-clock time, so you’re paying for that wait even though it’s not actually using any CPU.
While you’re paying for that time, you can bet the underlying CPU is busy servicing some other customer, who’s being charged for it too.
I like serverless for its general reliability, it’s one less thing to worry about, and it’s cheap when you start out thanks to generous free tiers. At scale it’s a more complicated question whether it’s good value or not.
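A toy example of the iowait point, with made-up numbers just to show the billing arithmetic:

```python
# Made-up figures to illustrate the iowait point; not real pricing or a real workload.
cpu_ms = 20       # time the request actually spends on the CPU
iowait_ms = 180   # time spent waiting on a DB / downstream API reply

# Lambda-style serverless bills the full wall-clock duration of the invocation.
billed_ms = cpu_ms + iowait_ms
iowait_share = iowait_ms / billed_ms

print(f"{iowait_share:.0%} of the billed duration is pure I/O wait")  # 90%
# On an always-on server that wait is overlapped with other requests,
# so it's absorbed into the instance's fixed cost instead of billed per call.
```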
It needs to be way, way better than ‘better than average’ if it’s ever going to be accepted by regulators and the public. Without better sensors I don’t believe it will ever make it. Waymo had the right idea here, if you ask me.