Apple is rumored to be working on two versions of the Vision Pro; however, a new report from Bloomberg’s Mark Gurman alleges the Cupertino tech giant is aiming to beat Meta to the punch with a pair of AR glasses. Citing someone with knowledge of the matter, the report maintains Apple CEO Tim Cook has put development …
This AR obsession is utterly baffling to me. There are so few real applications and the hardware requirements are insane, so it’s not something that will get widely adopted anyway. Sure, in a decade or so it might have matured enough to have shed all these issues, but AR/VR feels like a really out-of-touch thing to pursue, especially if you look at the garbage ideas they have on how to use it - virtual meetings??
I get movies and games on these, possibly even some recording and porn, but those aren’t the B2B wet dreams they’re chasing anyway.
In theory, there are a million awesome business applications for it.
Let’s say you’re in construction and your glasses tell you exactly what to build where and how.
You’re a waiter and the glasses tell you which table ordered what, needs attention, etc.
You’re a network engineer and the glasses show you on every port which device is connected.
And don’t even get me started on the military applications.
Of course we’re not there yet. But that’s why they’re so obsessed with it. They want to be the first.
We were already there 10 years ago with Google Glass. Despite its failure in the consumer market, it found significant success in enterprise settings in the exact scenarios you’ve listed.
Except all of these are blue-collar scenarios. Apple seems hell-bent on making this succeed in white-collar areas with its emphasis on meetings, which is extremely baffling.
How Is Google Glass Doing in Enterprise and Industrial Settings? - Engineering.com - https://www.engineering.com/how-is-google-glass-doing-in-enterprise-and-industrial-settings/
How does the construction app know what needs to be constructed and how?
How does the waiter app know which table ordered what, needs attention, etc?
How does the IT app know on which port every device is connected?
These things are all real hard to know. Having glasses that display the knowledge could be really nice but for all these magic future apps, having a display is only part of the need.
If you have all that info you could probably remove the human from the equation and automate it.
As for the NPC-Waiter 🤢
As somebody who wanted google glass back in the day and thinks AR glasses would be really really cool, this is ultimately where I end up on it, and with a lot of tech in general: the primary usefulness of any of this shit is in accurate and relevant information, and that’s the part of the equation that these big companies are definitely NOT in the business of producing. In fact, they seem to have discovered a while back that inaccurate and irrelevant information being blasted in your face is the real money maker. And now with AI/ML producing so much/filling in gaps, I just can’t imagine that it’s going to get any better.
That being said, I think the tech is so cool. I’d love to travel to a new city and be able to get directions around to different sightseeing spots and real time star ratings above all the restaurants instead of anxiously glancing at my phone the entire time. If we ever get to that level of goodness I’m in, but I have a lot of doubts that it’ll ever be more than another attention-seeking thing attached to your body.
In the current US political climate, giving everyone glasses with always-on cameras run by big tech companies seems particularly dangerous.
I agree. But unfortunately, nobody gives a flying fuck.
I think for the most part society has gotten used to being on someone’s camera when in public at pretty much all times.
It’s something I used to think about, now I just, don’t.
Everyone has been looking for the next big hardware thing. It looked like it might be foldable phones for a little while but I reckon AR Glasses are the ultimate endgame until they start making bio implants.
They’ve gotten used to it in different political circumstances. But as people start to see how an authoritarian and vindictive fascist government works with surveillance tech to invade and endanger people’s lives, attitudes to things like always-on cameras may start to shift.
If it helps, they don’t have the battery life to be constantly recording or sending that much traffic. And that stuff can’t be invisible; us nerds can see it all. That’s one of the things dystopian sci-fi dramas have to gloss over: it all still runs on the laws of physics. When a device sends a wireless message, even if the contents themselves are encrypted, we can still figure out where it is going and how much data it is by reading the wave. No way to block that from being possible.
Plus, there is no reason to be covert or secretive about manipulating people. They have been literally saying it out loud for years now, and it’s still just as effective.
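To make the “reading the wave” point concrete: even without breaking any encryption, anyone with a Wi-Fi adapter in monitor mode can see who is transmitting and roughly how much. A minimal sketch with scapy (the interface name is a placeholder, and it assumes the adapter is already in monitor mode):

```python
# Passive sniff of 802.11 frames: no decryption, just who transmits and how much.
# Assumes an adapter already switched to monitor mode; "wlan0mon" is a placeholder name.
from collections import Counter
from scapy.all import sniff, Dot11

bytes_per_transmitter = Counter()

def tally(pkt):
    # addr2 is the transmitter MAC on most 802.11 data/management frames
    if pkt.haslayer(Dot11) and pkt[Dot11].addr2:
        bytes_per_transmitter[pkt[Dot11].addr2] += len(pkt)

# Capture for 60 seconds, then print who sent how much, encrypted or not.
sniff(iface="wlan0mon", prn=tally, store=False, timeout=60)
for mac, nbytes in bytes_per_transmitter.most_common(10):
    print(f"{mac}  {nbytes} bytes")
```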
Sounds like a robot would just steal your job if that was implemented well. (And that is a big IF) Meanwhile you would pay off your AR glasses by watching a constant stream of ads for months.
Even lightweight glasses can be irritating, and the extra weight of steel vs. plastic is noticeable. There will never be AR glasses or goggles that are comfortable to wear all the time.
All of this can be done with AR on a mobile phone.
Only when you need to do this AND have both hands free do AR glasses become necessary. So surgery, bomb defusal, or something niche like that.
This might be the dumbest take I’ve heard today.
Everything your smartphone does your laptop can do, too. Therefore, smartphones are useless!!
Everything AR can do that your smartphone can do today will be a hundred times more convenient because you don’t have to carry a slab of glass with you all the time. You just have to wear glasses. Like I already do anyway.
The only reason for smartphones to still exist in a world where AR is compact will be if we can’t figure out a way to efficiently input data without annoying everyone around us. As soon as that problem’s solved, nobody will be using smartphones anymore.
This might be the dumbest take I’ve heard today.
You’re forgetting that AR headgear requires you to WEAR THAT THING ON YOUR FACE AT ALL TIMES.
No matter how compact (don’t even start talking about some techbro “all contained in a lens” type of shit), there will absolutely, always be people who will refuse to wear it. (Ask any former glasses user who went for contact lenses)
A phone you glance at and is in your pocket only when you need it is a million times more convenient than something that goes over your eyes all of the time.
Your world where external compact computing devices (phone/tablet/smartwatch/a slab of glass) are no longer needed is mostly constructed out of flatulence of the technology brotherhood.
I need a good reason to spend $2000 to do something I can already do with a $100 phone. Using 2 hands is very niche.
Imagine being anyone anywhere whipped like an Amazon worker. Will the waitress have to piss in bottles? Bad for tips I think.
UniFi equipment can already sorta do this! The little dot pattern on the screen is an AR code and you can use the app to see this. It’s pretty cool actually. I’ve never actually used it for real work though, I just look at the dashboard on my laptop and find the port that way.
It would be really really cool to be able to just touch the physical port and be able to change the settings in real space with AR glasses though.
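For what it’s worth, here’s a rough sketch of where that port-to-device mapping could come from today. The UniFi controller has an (unofficial) JSON API that lists clients, and wired clients carry the switch and port they’re plugged into; the endpoint and field names below follow the commonly circulated unofficial docs and may differ between controller versions, so treat them as assumptions:

```python
# Sketch: pull "which device is on which switch port" from a UniFi controller.
# The /api/login and /api/s/<site>/stat/sta endpoints and the sw_mac/sw_port/is_wired
# fields follow the unofficial API docs and may vary by controller version (assumptions).
import requests

BASE = "https://controller.example.local:8443"   # hypothetical controller address
session = requests.Session()
session.verify = False                            # typical self-signed cert; sketch only

session.post(f"{BASE}/api/login",
             json={"username": "readonly", "password": "changeme"})

clients = session.get(f"{BASE}/api/s/default/stat/sta").json().get("data", [])
for c in clients:
    if c.get("is_wired"):
        print(f"switch {c.get('sw_mac')} port {c.get('sw_port')}: "
              f"{c.get('hostname') or c.get('mac')}")
```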
Yep, that’s why I was thinking of that example 😄
This could also be the breakout app for AI. While AR glasses obviously need shape recognition and manipulation, the real world has many, many more things than are ever likely to be codified. How do you deal with that? AI. How do you do arbitrary summaries of whatever you’re looking at? AI. How do you interact with the glasses and the real world? Speech recognition and AI.
You heard it here first, folks. Two hot new technologies with no real use yet will find each other and turn into something useful
I mean, technically, we heard it first at the demonstrations of the Meta and Google glasses, where that is exactly the main use of them demonstrated. But they also do smartphone stuff. Like project directions when looking straight ahead, and a map when glancing downwards. Or translate stuff you are looking at. Their AI stuff was like, “Where did I leave my keys?”, “Can you play me the first song off this album (while holding a record)?”, and they also did more general memory stuff like “what was the title of the white book on the shelf?”.
But yeah, even “indoor” VR headsets have an AI assistant on them now that can help with context aware intelligence. Like “What is this thing I’m looking at?” And it can be used in both the real world and the virtual world. Like, “Is this everything I need to bake a cake?” or “how do I kill this boss?” See, real world and virtual world… lol. Or like, “Can you give me a hint on this puzzle? Not too big of a hint though.”.
I just personally don’t like asking questions out loud.
What should they be pursuing now? They have state-of-the-art chips, tablets, phones, laptops and even all-in-one desktops; the only thing they don’t have is TVs. At this point, why not try to conquer the next frontier, even if it takes a decade?
overlaying ads on literally everything could be the end goal.
We need laws restricting advertising
Apple is not that strong in the overlaying ads over everything department though.
But Apple will happily take a 30% cut of everything bought using the AR glasses.
It’s a mobile phone you don’t need to hold.
It’s a mobile phone that never goes in your pocket.
It’s a mobile phone that is always on and has access to everything you see and hear.
It’s a bummer that those sound like bad things simply because corporate abuse is always a foregone conclusion. If your data was truly private and always entirely under your control and ONLY your control, those would be really attractive features.
Some implementations also have the problem of constantly pointing cameras at non-consenting passers-by.
Totally. I’d also love to train a LLM on my own personal data and preferences, but there is no way I’m trusting a corporation that information.
Sounds like a fucking nightmare, but a wet dream to Big Tech.
That’s the pill. They just have to sugar-coat it enough for everyone to swallow it (like we did with phones).
Exactly, it’s literally just the next step more convenient than a smartphone. You know how many people have neck and back problems now from smartphones? Not having to look at your hands or even hold anything in your hands is going to be so much better. Not having to pull your phone out of your pocket for a map or a web search or a text or to translate stuff (visual or audio). Having both hands free while doing the things your current phone does, or new things a current phone can’t do.
It’s going to be so much nicer, and sure, the first one is gonna be expensive and not perfect, but it only needs nerds to start with anyway. We’ll make sure it gets to a point where it doesn’t annoy normal people and offers real value. And while the most popular ones will inevitably be the ones made with walled gardens like apple and meta, there will be good ones too for us nerds to move to once we have finished beta testing the mass market ones for you guys.
It’s the same as every tech product cycle. You know the main thing preventing wider adoption of VR/MR/XR right now? Headsets don’t look cool… so, once they are a pair of glasses, or sunglasses, the main barrier is gone. Can’t say people wouldn’t spend $500 to $2,000 on something as unnecessary as a smartphone every couple of years. They very much do. And if you no longer need to buy or carry a smartphone, all of a sudden you’ve got exactly that amount of money in your pocket.
I want an open source version of this https://www.evenrealities.com/
Yeah, open source third party ones come a little later. But they will come.
It’s also a device that can literally put your imagination in front of you in the real world.
A corporate marketer’s imagination. Yes.
or what I chose to 3D paint in my living room… it’s not all just corporate hellscape you know
I’d really just like some glasses that simulate multiple monitors without needing special software. That’s all I want
Yep, and that seems to be the route Apple was going. Screens you can place anywhere in your visual field.
Gotta need some insane resolution for that right? And 1000hz refresh to make things good I guess.
I mean for text editing, coding etc.
Yep, I’ve played with virtual monitors in VR space and I don’t even like watching movies on them; the loss in resolution and the dynamic aspect of it (using a moving screen to simulate a static screen) make it a shitty solution. Eventually it’ll be good enough to watch TV in, but I can’t imagine doing serious work in it.
Quest 3 lenses and displays actually are nice to look at, I coded for 5 hours in it the other day, and the only glaring flaw was the weight. My forehead hurt afterwards from the pressure, and I wasn’t even using the stock strap. The stock strap on Quest headsets is known to be terrible. Tbf I only have a 1080p monitor for comparison, but it’s nice.
If you tried on anything lower than a Quest 3 with Virtual Desktop, you were right.
Quest 3 was the first VR headset to make virtual screens worth it. The clarity of pancake lenses cannot be overstated. The Quest pro technically had them too, but it wasn’t quite good enough in some of the other aspects.
A Quest 3 with Virtual Desktop has replaced my TV and monitor because it was an upgrade to both. Even if all I did was place those screens statically exactly where they used to be in real life. But of course, they can be anywhere, any size. The screens are 4k 120hz, good enough for pretty much anything. Once you get to about 80 degrees field of view, every pixel of a 4k 60hz signal can be temporally represented. Your head micromoves enough that you aren’t missing any detail between each frame of the source taking up two of the headset’s frames. And when playing a game in actual 120 fps, you won’t notice that you aren’t seeing every single pixel directly physically represented every single frame; it looks good. Worth doing. 4k still looks much nicer than 1440p, which can be fully properly represented at that size and framerate.
Using anything other than Virtual Desktop, there is no need to set a monitor any higher than 1080p since they can’t even draw that well enough to be properly represented. Virtual desktop is the only one that uses timewarp layers. If you were around for Carmack, you’ll know that was always his first advice to every piece of VR software he reviewed, “please use timewarp layers for anything you want to look clear face-on” it’s a huge difference.
Interesting. Ok, I will give it another go at some point. I had an Oculus Rift and there was a ton of promise but the tech was just not ready.
Oh yeah, for sure. The Rift was great for its time, but it is straight up garbage compared to what is out now. Wireless is now even more stable than the Rift was at tracking, and the screens are so high res and they can decode at such speed that a wireless feed is almost as low latency and is much higher fidelity than what the Rift could do. There are still wired headsets that would be more clear nowadays, but with Virtual Desktop, the downsides to streaming wirelessly are pretty minimal.
Definitely get a demo of a Quest 3 if you can, or better. Though keep in mind the 3s isn’t better, despite being newer, it is “s” in the same sense that smart phones tend to use it, it’s a newer generation, but a cheaper lower end headset. A really good value. But it doesn’t have pancake lenses, the most important part of the Quest 3, and clearly most expensive part, lol.
Wireless headsets can just be used anywhere, especially when you are in AR mode or playing something mixed reality. But they are still at their best when using your computer through them. Although, you don’t have to. Their standalone games are basically xbox 360/PS3 level graphics, not amazing, but not really a problem. Most of what graphics have advanced by since then is just less “faking” stuff to look almost exactly right anyway and more rendering it in insanely computationally demanding ways to make it look 10% more right.
With Virtual Desktop, my computer is now in every room of my house, including the ones where I get to lay back in a recliner. And my computer is also at all my friends and family’s houses. And with cell-phone tethering, it can be on a bus, or a hotel room where I don’t want to use their wi-fi. Sometimes the cell connection is bad enough that I have to lower the resolution or framerate, but often times 4k 120hz is still viable on cell. Just has a bit more latency, so some game types are contraindicated. A 4k 120hz stream only needs about 25mbit to be clear enough to be worth using over a lower resolution or framerate. And cell latency can be as low as 5ms nowadays. 4g could only go as low as 200ms, 5g can theoretically go as low as 1ms, but obviously in practice that is almost impossible.
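Back-of-the-envelope on those numbers, taking the ~25 Mbit figure above at face value: an uncompressed 4k 120hz RGB feed is around 24 Gbit/s, so the codec is doing roughly a 1000x reduction, and 5 ms of latency is well under one display frame at 120hz while 200 ms is about 24 frames:

```python
# Rough arithmetic on the stream described above: 4k at 120hz over ~25 Mbit/s.
width, height, fps, bits_per_pixel = 3840, 2160, 120, 24

raw_bps = width * height * fps * bits_per_pixel   # uncompressed RGB feed
stream_bps = 25e6                                 # figure quoted in the comment

print(f"raw feed:    {raw_bps / 1e9:.1f} Gbit/s")
print(f"stream:      {stream_bps / 1e6:.0f} Mbit/s")
print(f"compression: ~{raw_bps / stream_bps:.0f}x")

# Latency expressed in display frames at 120hz.
for latency_ms in (5, 200):
    print(f"{latency_ms} ms ≈ {latency_ms / (1000 / fps):.1f} frames at {fps} Hz")
```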
I did have fun with the novelty of moving multiple screens around like Minority Report but it really is just a novelty at this point
The resolution thing is actually almost solved IMO. I used my Quest 3 in AR mode almost every single day and the screens are perfectly fine for reading text or having a video on in the background.
Yeah there’s still some screen door effect but it’s really only noticeable when I look for it, it disappears in normal use.
And I genuinely can’t think of a reason you would need 1000hz displays. Human eyes start to get steady motion at like 50-60 and 90-150 is when the normal eye starts to hit the limit.
It depends on what you mean by special software, but current VR headsets already do that out of the box, it’s just that their built-in multi monitor stuff is not amazing. Without any special software, you could have multiple apps open, and those apps could be any android app(including browsers or relatively bad desktop experiences like dex). The third party stuff you can download or buy is just way better. And it’s also way better when the multiple monitors are your computer’s monitors. Cuz then they have 50x the horsepower behind them. For current headsets, generally the best option is Virtual Desktop, if you don’t need more screens than can be handled by high quality timewarp layers. You can get clear 4k or 5740x1080, or anything smaller. With other multi desktop options, you can get more total screens, but there is no point to picking anything above 1080p since even that is already not rendered clearly.
Solutions for current VR/MR/XR headsets will follow to VR/MR/XR glasses, since headsets and glasses are slowly meeting in the middle. Headsets will continue to shrink while packing in the same or more tech, and the glasses will slowly be able to handle more and more tech in their tiny frames.
There will always be full-size headsets, but they will essentially be the PC equivalent to the glasses being the smartphone equivalent. We will also likely still have PCs, but it’s conceivable that a smartphone won’t be necessary for most people anymore. And even for the people that would still want a smartphone, a “processing puck” for the glasses would be the more likely solution. Give them pocket-computer-level power instead of smart-watch-level. So you can play good games on them, like PC game graphics from 10-15 years ago.
I want a GTA style HUD at all times 🤪
Current wanted level by the police would be quite handy
Health bar, bank account balance, number of steps in the day, calories burned, my next calendar event
Arguably just gps instructions and step tracking superimposed on reality would be a great use of AR
Definitely. AR has a lot of potential, also could go sideways and have ads shoved in your peripheral view all the time too.
https://immersed.com/
That is literally special software lmao
I’m not sure that wish will ever be granted. You need some sort of specific software to get the hardware communicating.
It’s been over a decade since the Oculus Rift came out and there hasn’t been much improvement.
A Quest 3 isn’t “insane.” It does AR just fine for a few hundred bucks. There ARE real world applications and more coming all the time. The education and medical fields in particular can benefit greatly from such tech.
Maybe it’s as simple as the next big product. When smartphones were new, nobody foresaw just how huge they’d become. Nobody could have foreseen what a force they’d turn Apple into. But now improvements are simply iterative, the market is nearing saturation, and there’s not much room left to expand. What’s next?
Maybe AR. It’s a really cool technology just now becoming practical to implement. Think of them as where smartphones were 15 years ago. Maybe they won’t go anywhere but imagine if they did! Imagine being the company most associated with the next hit tech product!
Apple risks stagnating if they don’t find a next hit product
That’s the point. They want to set themselves up so that when the issues are shed and it becomes a realistic product, they’re already in a place where their product can be the one that takes over the market. If you wait until a product is viable before starting on development, you’re too late.
Agree on all that. In addition, headsets would become so very unhealthy if they took off. Just imagine the addictiveness of phones combined with the sedentary qualities of TV, with both dialed up to 11. People’s vision would get all fucked up, and they would start dying on their couches plugged in. It’s simply not a vision for the future that has any legs.
More often than not, I’m burning calories in VR.
It was in the movies they liked when they were kids. Or at least in the movies they think users want to see brought to reality.
As in an answer to the question “what’s cool and futuristic”. Solving medieval barbarism and wars is futuristic, but turns out to not be achievable. Same with floating/underwater oceanic cities, blooming deserts, Mars colonies and 20 minutes on train from Moscow to New Delhi. At the same time the audience has been promised by advertising over years that future will be delivered to them. So - AR. For Apple this is the most important part, I think.
Also to augment something you have to analyze it, and if you have to analyze it, you are permitted to scan and analyze it. That’s a general point of attraction, I think. They are just extrapolating what led them to current success.
Also in some sense popular things were toys or promises of future for businesses and individuals alike, in the last 10-15 years. The audience is getting tired of toys and promises, while these companies don’t know how to make something else.
So let Tim Apple care about anything from AR in front of him to apples in his augmented rear; he surely knows what he wants. As another commenter says, one use is as a source of instructions and hints for a human walking drone, with visualization. I’m not sure that’s good, because if you can get that information to the machine, having a human there seems unnecessary. And if that information is not reliable enough, then it may not improve the human’s productivity and error rate.
And the most important part is that humans learn by things being hard to do. It’s like working out in an exoskeleton: what’s the purpose? And if training and work are separated here, then it seems more effort is spent in total. Not sure.
It’s for real time facial recognition for LEO so they can easily identify and round up immigrants and dissidents. They want the government contracts