It’s getting old telling people this, but… the AI that we have right now? Isn’t even really AI. It’s certainly not anything like in the movies. It’s just pattern-recognition algorithms. It doesn’t know or understand anything and it has no context. It can’t tell the difference between a truth and a lie, and it doesn’t know what a finger is. It just paints amalgamations of things it’s already seen, or throws together things that seem common to it— with no filter nor sense of “that can’t be correct”.
I’m not saying there’s nothing to be afraid of concerning today’s “AI”, but it’s not comparable to movie/book AI.
Edit: The replies annoy me. It’s just the same thing all over again— everything I said seems to have gone right over most people’s heads. If you don’t know what today’s “AI” is, then please stop making assumptions about what it is. Your imagination is way more interesting than what we actually have right now. This is why we should never have called what we have now “AI” in the first place— same reason we should never have called things “black holes”. You take a misnomer and your imagination goes wild, and none of it is factual.
THANK YOU. What we have today is amazing, but there’s still a massive gulf to cross before we arrive at artificial general intelligence.
What we have today is the equivalent of a four-year-old given a whole bunch of physics equations and then being told “hey, can you come up with something that looks like this?” It has no understanding besides “I see squiggly shape in A and squiggly shape in B, so I’ll copy squiggly shape onto C”.
I really think the only thing to be concerned about is human bad actors with AI, not AI itself. AI alignment will be significantly easier than human alignment, as we are for sure not aligned and it is not even in our nature to be aligned.
I’ve had this same thought for decades now, ever since I first heard of AI takeover sci-fi stuff as a kid. Bots just perform set functions. People in control of bots can create mayhem.
Not at all.
They just don’t like being told they’re wrong and will attack you instead of learning something.
Strong AI vs weak AI.
We’re a far cry from real AI
Isn’t that also referred to as Virtual Intelligence vs Artificial Intelligence? What we have now is just very well trained VI. It’s not AI because it only outputs variations of what it’s been trained on, using algorithms, right? Actual AI would be capable of generating information entirely distinct from any inputs.
GAI - General Artificial Intelligence is what most people jump to. And, for those wondering, that’s the beginning of the end-game type. That’s the kind that will understand context. The ability to ‘think’ on its own with little to no input from humans. What we have now is basically autocorrect on super steroids.
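To make the “autocorrect on super steroids” point concrete, here’s a minimal toy sketch (the corpus and the bigram counting are invented for illustration; real LLMs learn far richer statistics over far more text, but the loop is the same idea: predict the next word from what came before):

```python
from collections import Counter

# Toy corpus, made up purely for this sketch.
corpus = "the machine does not think the machine only predicts the next word".split()

# Count which word tends to follow which.
next_word = {}
for a, b in zip(corpus, corpus[1:]):
    next_word.setdefault(a, Counter())[b] += 1

# Generate by always taking the most common continuation -- no understanding involved.
word, output = "the", ["the"]
for _ in range(6):
    if word not in next_word:
        break
    word = next_word[word].most_common(1)[0][0]
    output.append(word)

print(" ".join(output))  # e.g. "the machine does not think the machine"
```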
deleted by creator
Not much, because it turns out there’s more to AI than a hypothetical sum of what we already created.
deleted by creator
That’s not what they said.
What people are calling “AI” today is not AI in the sense that laypeople understand it. Personally I hate the use of the term in this context and think it would have been much better to stick with Machine Learning (often just ML). Regardless, the point is that you cannot get from these systems to what you think of as AI. To get there would require new, different systems. Or changing these systems so thoroughly as to make them unrecognizable from their origins.
If you put e.g. ChatGPT into a robotic body with sensors… you’d get nothing. It has no concept of a body. No concept of controlling the body. No concept of operating outside of the constraints within which it already operates. You could debate if it has some inhuman concept of language, but that debate is about as far as you can go.
Actual AI in the sense of how we conceive of it at a societal level is something else. It may very well be that many years down the line historians will look back at the ML advancements happening today as a major building block for the creation of that “true” AI of the future, but as-is they are not the same thing.
To put it another way: what happens if you connect the algorithms controlling a video game NPC to a robotic body? Absolutely nothing. Same deal here.
It’s not about improvement, it’s about actual AI being completely different technology, and working in a completely different way.
Not the guy you were referring to, but it’s not so much “improve” as “another paradigm shift is still needed”.
A “robotic body with sensors” has already been around since 1999. But no matter how many sensors, no matter how lifelike and no matter how many machine learning algorithms/LLMs are thrown in, it is still not capable of independent thought. Anything that goes wrong is still due to human error in setting parameters.
To get to a Terminator level intelligence, we need the machine to be capable of independent thought. Comparing independent thought to our current generative AI technology is like comparing a jet plane to a horse drawn carriage - you can call it “advancement”, yes, but there are many intermediate steps that need to happen. Just like an internal combustion engine is the linkage between horse-drawn carriages and planes, some form of independent thought is the link between generative AI and actual intelligent machines.
True but that doesn’t keep it from screwing a lot of things up.
Yes, sure. I meant things like employment, quality of output
That applies to… literally every invention in the world. Cars, automatic doors, rulers, calculators, you name it…
With a crucial difference - the inventors of all those knew how the invention worked. The inventors of current AIs do NOT know the actual mechanism by which they work. Hence, the output is unpredictable.
Lol could you provide a source where the people behind these LLMs say they don’t know how it works?
Did they program it with their eyes closed?
They program it to learn. They can tell you exactly how it learns, but not what it learned (there are some techniques to give some small insights, but not even close to the full picture).
Problem is, how it behaves depends on how it was programmed and what it learned during training. Since what it learned is a black box, we cannot explain its behaviour.
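A rough sketch of that “we know how it learns, not what it learned” distinction (a toy example, nothing to do with any real system): the update rule fits in a couple of lines, but what it produces is just an opaque pile of numbers.

```python
import random

# A tiny "model": a pile of parameters initialised at random.
weights = [random.uniform(-1.0, 1.0) for _ in range(10_000)]

def train_step(w, lr=0.01):
    # The learning rule itself is completely explicit: nudge every weight
    # downhill on a stand-in objective (here, just the sum of squares).
    return [x - lr * (2.0 * x) for x in w]

for _ in range(100):
    weights = train_step(weights)

# We can state exactly how every weight was updated, yet the trained "model"
# is still just 10,000 floats; nothing in them explains why it behaves as it does.
print(weights[:5])
```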
Yes I can. example
As opposed to other technology, nobody knows the internal structure. Input A does not necessarily produce output B.
Whether you like it or not is irrelevant.
“Whether you like it or not is irrelevant.”
That’s a very hostile take.
I just think it’s wild they wouldn’t know how it works when they’re the ones who created it. How do you program something that you don’t understand?! It’s crazy.
Sounds like you described a baby.
Yeah, I think there’s a little bit more to consciousness and learning than that. Today’s AI doesn’t even recognize objects, it just paints patterns.
Regardless of whether it’s true AI or not (I understand it’s just machine learning), Cameron’s sentiment is still mostly true. The Terminator in the original film wasn’t some digital being with true intelligence, it was just a machine designed with a single goal. There was no reasoning or planning really, just an algorithm that said "get weapons, kill Sarah Connor". It wasn’t far off from a Boston Dynamics robot using machine learning to complete a task.
You don’t understand. Our current AI? Doesn’t know the difference between an object and a painting. Furthermore, everything it perceives is “normal and true”. You give it bad data and suddenly it’s broken. And “giving it bad data” is way easier than it sounds. A “functioning” AI (like a Terminator) requires the ability to “understand” and scrutinize— not just copy what others tell it without any context or understanding, and combine results.
That type of reductionism isn’t really helpful. You can describe the human brain as also just being pattern recognition algorithms. But doing that many times, at different levels, apparently gets you functional brains.
But his statement isn’t reductionism.
I just listened to 2 different takes on AI by true experts and it’s way more than what you’re saying. If the AI doesn’t have good goals programmed in, we’re fucked. It’s also being controlled by huge corporations that decide what those goals are. Judging from the past, this is not good.
You seem to have completely missed the point of my post.
Could you explain to me how?
It isn’t AI. It’s just a digital parrot. It just paints up text or images based on things it already saw. It has no understanding, knowledge, or context. Therefore it doesn’t matter how much data you feed it, it won’t be able to put together a poem that doesn’t sound hokey, or digital art where characters don’t have seven fingers or three feet. It doesn’t even understand what objects are and therefore how many of them there should be. They’re just pixels to the tech.
This technology will not be able to guide a robot to “think” and take actions accordingly. It’s just not the right technology— it’s not actually AI.
When they built a new building at my college they decided to use “AI” (back when SunOS ruled the world) to determine the most efficient route for the elevator to take.
The parameter they gave it to measure was “how long does each person wait to get to their floor”. So it optimized for that and found it could get the number down to 0 by never letting anyone on, so they never got to their floor, so their wait time was unset (which = 0).
They tweaked the parameters to ensure everyone got to their floor and as far as I can tell it worked well. I never had to wait much for an elevator.
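If it helps, here’s a hypothetical toy version of that objective (the field names and numbers are invented; only the failure mode is the point): counting an unset wait as zero makes “serve nobody” the optimum, and the fix is to penalise anyone who never reaches their floor.

```python
# Hypothetical re-creation of the elevator metric described above.
passengers = [
    {"wait": 30, "delivered": True},
    {"wait": 45, "delivered": True},
    {"wait": None, "delivered": False},  # never picked up: wait time is "unset"
]

def naive_score(people):
    # Original metric: average wait, where an unset wait counts as 0.
    return sum((p["wait"] or 0) for p in people) / len(people)

def fixed_score(people):
    # Tweaked metric: anyone who never reaches their floor gets a huge
    # penalty, so "never let anyone on" stops being the optimum.
    return sum(p["wait"] if p["delivered"] else 10_000 for p in people) / len(people)

nobody_served = [{"wait": None, "delivered": False}] * 3

print(naive_score(passengers), naive_score(nobody_served))  # 25.0 0.0  <- gamed to "perfect"
print(fixed_score(passengers), fixed_score(nobody_served))  # ~3358.3 10000.0
```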
That’s valid, but it has nothing to do with general intelligent machines.
An AI can’t be controlled by corporations, an AI will control corporations.
Mate, a bad actor could put today’s LLMs, face recognition software and functionality into an armed drone, show it a picture of Sarah Connor and tell it to go hunting, and it would be able to handle the rest. We are just about there. Call it what you want.
That sure sounds nice in your head.
LLM stands for Large Language Model. I don’t see how a model for processing text is going to match faces out in the field. And unless that drone is flying at chest height, it had better recognize people by their hair patterns (balding Sarah Connors beware, or wear hats!).