Which also has the additional benefit for homeowners of local backup power in the case of a blackout :)
Which also has the additional benefit for homeowners of local backup power in the case of a blackout :)
Selling user data, selling ad placement, subscriptions for paid services, enterprise-grade support contracts, and the like.
They could also take an approach similar to Google, branching back out from being just a browser into a suite of related tools that Chrome can then convince users to switch to (similar to how Chrome gets users to not just use Google search, but also services like Gmail too.)
This is an order to sell, not break up.
Currently, it’s still recommended actions to the court. Nothing has actually been finalized in terms of what they’re going to actually end up trying to make Google do.
Google must not remain in control of Chrome.
While divestiture is likely, they could also spin-off, split-off, or carve-out, which carry completely different implications for Google, but are still an option if they are unable to convince the court to make Google do their original preferred choice.
A split-off could prevent Google from retaining shares in the new company without sacrificing shares in Google itself, and a carve-out could still allow them to “sell” it, but via shares sold in an IPO instead of having to get any actual buyout from another corporation.
By “sell,” they could also mean ending up having Chrome just split off from Google, as a new, independent entity that is its own company, without anybody needing to buy it in the first place.
They definitely will, since they don’t even support any of Google’s standard restore features by default.
They use Seedvault instead, which doesn’t have the capability to restore app logins. I have a feeling Seedvault may end up adding that as a feature in the future, though.
I’m excited for the future, but not as excited for the transition period.
I have similar feelings.
I discovered LLMs before the hype ever began (used GPT-2 well before ChatGPT even existed) and the same with image generation models barely before the hype really took off. (I was an early closed beta tester of DALL-E)
And as my initial fascination grew, along with the interest of my peers, the hype began to take off, and suddenly, instead of being an interesting technology with some novel use cases, it became yet another technology for companies to show to investors (after slapping it in a product in a way no user would ever enjoy) to increase stock prices.
Just as you mentioned with the dotcom bubble, I think this will definitely do a lot of good. LLMs have been great for asking specialized questions about things where I need a better explanation, or rewording/reformatting my notes, but I’ve never once felt the need to have my email client generate every email for me, as Google seems to think I’d want.
If we can just get all the over-hyped corporate garbage out, and replace it with more common-sense development, maybe we’ll actually see it being used in a way that’s beneficial for us.
IPFS seems similar to what you’re looking for.
(See: A copy of Wikipedia on IPFS being censorship-resistant, and globally distributed)
I like ArchiveBox, but in my experience, it kept on running into issues saving pages, and stopped functioning after it worked the first few times. I really wish there was a more streamlined application that did a similar thing somewhere out there.
I’ve been looking at Linkwarden’s page archiving solution, but it crashes whenever I try importing any large number of links, so that’s a bust too.
Computers are a fundamental part of that process in modern times.
If you were taking a test to assess how much weight you could lift, and you got a robot to lift 2,000 lbs for you, saying you should pass for lifting 2000 lbs would be stupid. The argument wouldn’t make sense. Why? Because the same exact logic applies. The test is to assess you, not the machine.
Just because computers exist, can do things, and are available to you, doesn’t mean that anything to assess your capabilities can now just assess the best available technology instead of you.
Like spell check? Or grammar check?
Spell/Grammar check doesn’t generate large parts of a paper, it refines what you already wrote, by simply rephrasing or fixing typos. If I write a paragraph of text and run it through spell & grammar check, the most you’d get is a paper without spelling errors, and maybe a couple different phrases used to link some words together.
If I asked an LLM to write a paragraph of text about a particular topic, even if I gave it some references of what I knew, I’d likely get a paper written entirely differently from my original mental picture of it, that might include more or less information than I’d intended, with different turns of phrase than I’d use, and no cohesion with whatever I might generate later in a different session with the LLM.
These are not even remotely comparable.
Assuming the point is how well someone conveys information, then wouldn’t many people better be better at conveying info by using machines as much as reasonable? Why should they be punished for this? Or forced to pretend that they’re not using machines their whole lives?
This is an interesting question, but I think it mistakes a replacement for a tool on a fundamental level.
I use LLMs from time to time to better explain a concept to myself, or to get ideas for how to rephrase some text I’m writing. But if I used the LLM all the time, for all my work, then me being there is sort of pointless.
Because, the thing is, most LLMs aren’t used in a way that conveys info you already know. They primarily operate by simply regurgitating existing information (rather, associations between words) within their model weights. You don’t easily draw out any new insights, perspectives, or content, from something that doesn’t have the capability to do so.
On top of that, let’s use a simple analogy. Let’s say I’m in charge of calculating the math required for a rocket launch. I designate all the work to an automated calculator, which does all the work for me. I don’t know math, since I’ve used a calculator for all math all my life, but the calculator should know.
I am incapable of ever checking, proofreading, or even conceptualizing the output.
If asked about the calculations, I can provide no answer. If they don’t work out, I have no clue why. And if I ever want to compute something more complicated than the calculator can, I can’t, because I don’t even know what the calculator does. I have to then learn everything it knows, before I can exceed its capabilities.
We’ve always used technology to augment human capabilities, but replacing them often just means we can’t progress as easily in the long-term.
Short-term, sure, these papers could be written and replaced by an LLM. Long-term, nobody knows how to write papers. If nobody knows how to properly convey information, where does an LLM get its training data on modern information? How do you properly explain to it what you want? How do you proofread the output?
If you entirely replace human work with that of a machine, you also lose the ability to truly understand, check, and build upon the very thing that replaced you.
Schools are not about education but about privilege, filtering, indoctrination, control, etc.
Many people attending school, primarily higher education like college, are privileged because education costs money, and those with more money are often more privileged. That does not mean school itself is about privilege, it means people with privilege can afford to attend it more easily. Of course, grants, scholarships, and savings still exist, and help many people afford education.
“Filtering” doesn’t exactly provide enough context to make sense in this argument.
Indoctrination, if we go by the definition that defines it as teaching someone to accept a doctrine uncritically, is the opposite of what most educational institutions teach. If you understood how much effort goes into teaching critical thought as a skill to be used within and outside of education, you’d likely see how this doesn’t make much sense. Furthermore, the heavily diverse range of beliefs, people, and viewpoints on campuses often provides a more well-rounded, diverse understanding of the world, and of the people’s views within it, than a non-educational background can.
“Control” is just another fearmongering word. What control, exactly? How is it being applied?
Maybe if a “teacher” has to trick their students in order to enforce pointless manual labor, then it’s not worth doing.
They’re not tricking students, they’re tricking LLMs that students are using to get out of doing the work required of them to get a degree. The entire point of a degree is to signify that you understand the skills and topics required for a particular field. If you don’t want to actually get the knowledge signified by the degree, then you can put “I use ChatGPT and it does just as good” on your resume, and see if employers value that the same.
Maybe if homework can be done by statistics, then it’s not worth doing.
All math homework can be done by a calculator. All the writing courses I did throughout elementary and middle school would have likely graded me higher if I’d used a modern LLM. All the history assignment’s questions could have been answered with access to Wikipedia.
But if I’d done that, I wouldn’t know math, I would know no history, and I wouldn’t be able to properly write any long-form content.
Even when technology exists that can replace functions the human brain can do, we don’t just sacrifice all attempts to use the knowledge ourselves because this machine can do it better, because without that, we would be limiting our future potential.
This sounds fake. It seems like only the most careless students wouldn’t notice this “hidden” prompt or the quote from the dog.
The prompt is likely colored the same as the page to make it visually invisible to the human eye upon first inspection.
And I’m sorry to say, but often times, the students who are the most careless, unwilling to even check work, and simply incapable of doing work themselves, are usually the same ones who use ChatGPT, and don’t even proofread the output.
Just like how the moment their videotape rental history was exposed, that was when privacy became an absolute must in the case of video rental services.
I mean, there are definitely people in the government working on it, but those often require much more substantial reforms and systemic changes before the changes could functionally work. (i.e. banning data brokers would kill off most free services, or banning targeted ads would kill most ad-funded news networks)
If you haven’t already, I recommend using the EFF’s Action Center to let your representatives know about specific changes you would and would not want made to our laws to protect privacy, free speech, and digital innovation, according to what they’ve found to be the most pressing issues at the moment.
I find it indescribably funny that no matter what, every news site somehow manages to always put a mobile app install screen with the company’s product as the banner image for their articles, even in this case, when I think most people would have probably never even thought of Steam as a mobile app, only as PC software.
Just because an LLM sounds smart and human-like doesn’t mean it will magically solve climate change after being directly implicated in resource consumption we know from actual scientists, today, will make the problem worse.
They don’t believe it.
They just think their investors will.
It’s mildly effective in the sense that it will decimate click-through rates, but if enough people did it, they would start filtering by IP, and you’d need to change how many ads it clicks on so it looks more human.
It also still gives advertisers your data, since it still has to load the ads on your system to click them, so it’s not as privacy-preserving as a full-on adblocker that outright blocks every advertisement and tracker related network request in the first place.
Better than completely allowing capital to do whatever it wants without even attempting to push back.
As Cory Doctorow put it, “An app is just a web-page wrapped in enough IP to make it a felony to add an ad-blocker to it.”
Just as someone already mentioned in this thread, I can vouch for Immich as well. I self host it (currently via Umbrel on a Pi 5 purely for simplicity) and the duplicate detection feature is very handy.
Oh, and the AI face detection feature is great for finding all the photos you have of a given person, but it sometimes screws up and thinks the same person is two different people, but it allows you to merge them anyways, so it’s fine.
The interface is great, there’s no paywalled features (although they do have a “license,” which is purely a donation) and it generally feels pretty slick.
I would warn to anyone considering trying it that it is still in heavy development, and that means it could break and lose all your photos. Keep backups, 3-2-1 rule, all that jazz.