Imagine an AGI (Artificial General Intelligence) that could perform any task a human can do on a computer, but at a much faster pace. This AGI could create an operating system, produce a movie better than anything you’ve ever seen, and much more, all while being limited to SFW (Safe For Work) content. What are the first things you would ask it to do?
The obvious answer is to use it to create an AGPLv3-or-later clean-room implementation of itself, then use that to do whatever I want.
Discuss the notion, and the evidence, that we are living in an approximate recreation of Earth circa the 2020s, as recreated by a future version of said superintelligence.
If I’m having an existential crisis, it should too.
I was thinking about this a few days ago. GANs and the Simulation Hypothesis: An AI Perspective
I think a lot of the things proposed here could not be done by an AGI on a computer, no matter how intelligent it is. Consider this alternative scenario: you have an exceptionally intelligent young adult with a computer, locked in a room. They have no specialized education or anything; they are just extremely intelligent. What could you achieve through such a person?
Discovery of new physics is out of the question. That would need experiments.
Locked in a room with an internet connection? A lot. But without any contact with the outside world? Not nearly as much. It could have other people running experiments for it with an internet connection, but not without one.
Anyway, whether or not the AGI can interact with the real world misses the point of my explicit statement in the question. I specifically said it only operates as a human on a computer. I didn’t say it could acquire a physical body, so let’s assume it can’t, and can’t use other people to do physical labor either.
Tell it to figure out zero-point energy or whatever other sci-fi-type free energy is possible. Then tell it to figure out the cheapest, easiest way to implement the technology. Then have it disseminate those plans worldwide to everyone.
This sounds like science fiction. Even if the AGI were capable of creating plans for a fusion reactor, for example, you would still need to execute those plans. So, what’s the point of everyone having access to the plans if the same electrical companies will likely be responsible for constructing the reactor?
If that exists, it’s curtains for humanity. Not because of the AGI itself killing us all, necessarily, but because that means human labor is forever obsolete and the vast majority of humans, including me, will soon starve to death on the street or be imprisoned for vagrancy.
So, I wouldn’t ask it anything, except maybe to recommend a suicide method.
Hopefully there are some people more positive than that, willing to change society so AGI doesn’t make most humans starve to death or be imprisoned.
Come up with a very low cost power generator and open source the whole thing.
The kind that uses gas? I honestly wouldn’t have thought someone would be interested in open-sourcing this. I would prefer if it designed an open-source Roomba or, while we’re at it, a robot body so that it could perform more tasks. But you would still have to build it yourself.
Not gas, something more environmental for sure.
I heard disruptive science is slowing down, which I think means pretty much everything possible has already been thought of. So, talking about things that exist, do you mean a cheaper solar panel or a wind/water turbine? Or are we talking about science fiction like an Arc Reactor?
You’re assuming a human could do that on a computer, though. It’s kind of hard to improve on that basic and very mature technology.
We’re talking super intelligence here.
I put more weight on the description text, but yes that was in the title.
Even if we assume it’s a god, though, I’m not sure there’s a way to improve on most kinds of generators more than incrementally. I don’t expect it would improve on “the wheel” either.
I’m sure there are methods of generating electricity that we haven’t even stumbled on.
How would that work? Electrons are very well understood, as are the ways of getting them to move.
I think we’re pretty far from the peak understanding of almost everything. There are so many discoveries still to be made.
Based on what? Sure, I’m guessing we’re just getting started with planetary science and cosmology, but power generation has been explored to death, and we’re still using the same basic alternator design Tesla was.
I’d want a familiar/daemon that was running an AI personality to act as a personal assistant, friend and interactive information source. It could replace therapy and be a personalized tutor, and it would always be up to date on the newest science and global happenings.
I honestly think that with an interesting personality, most people would drastically reduce their Internet usage in favor of interacting with the AGI. It would be cool if you could set the percentage of humor and other traits, similar to the way it’s done with TARS in the movie Interstellar.
That’s possible now. I’ve been working on such a thing for a while, and it can generally do all of that, though I wouldn’t advise using it for therapy (or medical advice), mostly for legal reasons rather than ability. When you create a new agent, you can tell it what type of personality you want. It doesn’t just respond to commands; it also figures out what needs to be done and does it independently.
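The personality-setting idea above (including the TARS-style humor dial mentioned earlier) is usually done by folding the trait settings into the agent’s system prompt. Here is a minimal, hypothetical sketch; the function name `build_persona_prompt` and the trait names are my own invention, not anything from the commenter’s project.

```python
# Hypothetical sketch: turn adjustable personality "dials" into a
# system prompt that a chat-style LLM agent would be initialized with.
def build_persona_prompt(traits):
    """traits: dict mapping trait name -> level from 0 to 100."""
    lines = ["You are a personal assistant with these personality settings:"]
    for trait, level in sorted(traits.items()):
        # Each trait becomes one instruction line, e.g. "- humor: 75% ..."
        lines.append(f"- {trait}: {level}% (0 = none, 100 = maximum)")
    lines.append("Stay in character in every reply.")
    return "\n".join(lines)

# Example: a TARS-like configuration.
prompt = build_persona_prompt({"humor": 75, "honesty": 90})
print(prompt)
```

In a real agent, this string would be passed as the system message of each conversation, so the same underlying model can present very different personalities per user.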
Yeah, I haven’t played with it much, but it feels like ChatGPT is already getting pretty close to this kind of functionality. It makes me wonder what’s missing to take it to the next level over something like Siri or Alexa. Maybe it needs to be proactive rather than just waiting for prompts?
I’d be interested to know if current AI would be able to recognize the symptoms of different mental health issues and utilize the known strategies to deal with them. Like if a user shows signs of anxiety or depression, could the AI use CBT tools to conversationally challenge those thought processes without it really feeling like therapy? I guess just like self-driving cars this kind of thing would be legally murky if it went awry and it accidentally ended up convincing someone to commit suicide or something haha.
That last bit already happened: an AI (allegedly) told a guy to commit suicide, and he did. A big part of the problem is that while GPT-4, for instance, knows all about the things you just mentioned and can probably do what you’re suggesting, nobody can guarantee it won’t get something horribly wrong at some point. It’s sort of like how self-driving cars can handle 95% of situations correctly, but the 5% of unexpected stuff, which takes extra context a human has and the car was never trained on, is very hard to get past.
Thanks for the link, that sounds like exactly what I was asking for but gone way wrong!
What do you think is missing to prevent these kinds of outcomes? Is AI simply incapable of categorizing topics as ‘harmful to humans’ on its own, without a human’s explicit guidance? It seems like the philosophical nuances of things like consent or dependence or death would be difficult for a machine to learn if it isn’t itself sensitive to them. How do you train empathy in something so inherently unlike us?
Truth be told, I don’t know what I would ask. That said, I don’t foresee AI replacing us right away; it’s got quite a ways to go before that happens. But when it does, I expect a massive disruption in society, because joblessness and homelessness are going to skyrocket when there’s just no assistance in a hypercapitalist world.
I wouldn’t be surprised if corporations just asked the AI to make as much money as possible at the expense of everything else. But people like living in capitalist countries anyways, while complaining about the lack of safety nets. Otherwise they would move to countries like China, North Korea or Cuba.
how to tie my shoe?