
Ai

Discussion in 'Entertainment and Technology' started by riptide226, Dec 5, 2014.

  1. riptide226

    Full Member

    Joined:
    Nov 19, 2014
    Messages:
    71
    Likes Received:
    0
    Stephen Hawking says that artificial intelligence will soon supersede humans.
    I want to know your opinions on this matter.
     
  2. shinji

    shinji Guest

    Joined:
    Aug 26, 2014
    Messages:
    629
    Likes Received:
    0
    Location:
    Bulgaria
    Gender:
    Male
    AI can come a long way, but it can never supersede humans, as the whole concept behind it is flawed. Which is kind of funny, since the fact that computers can't make mistakes and learn from them is precisely why they will fail: they base all their calculations on predetermined factors, they can't come up with an original "idea", etc...
     
  3. Rawrzilla

    Rawrzilla Guest

    I can't be the only one that saw the title and instantly thought this was gonna be about Adobe Illustrator.
     
  4. JackAttack

    JackAttack Guest

    I think we are too far from creating an AI to start worrying about this now. Give it another 100 years.
     
  5. Quiet Raven

    Full Member

    Joined:
    Nov 5, 2014
    Messages:
    559
    Likes Received:
    0
    Location:
    Nova Scotia, Canada
    Stephen Hawking is pretty smart. If he is warning us about this, I think we should heed the warning. I would rather Terminator not become reality.

    AI has advanced pretty far pretty quickly. I see no reason to believe they can't take over eventually. And 100 years isn't that long from now...
     
  6. gazwkd

    gazwkd Guest

    You also have no real idea of where AI is or where it is currently going.

    Let me see... do I believe a poster on these forums or a man who is considered to be the most intelligent on the planet?

    Easy choice....

    ---------- Post added 5th Dec 2014 at 10:43 PM ----------

    One article:
    Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence - but are we taking AI seriously enough?' - Science - News - The Independent

    The most recent article:
    Artificial intelligence could mean end of human race, says Stephen Hawking - Telegraph
     
    #6 gazwkd, Dec 5, 2014
    Last edited by a moderator: Dec 5, 2014
  7. Alexander87

    Regular Member

    Joined:
    Nov 27, 2014
    Messages:
    13
    Likes Received:
    0
    Gender:
    Male
    Sexual Orientation:
    Questioning
    To me, it's just too soon to tell.
    The few things I know about AI imply that we are a long way from mimicking the complexity of the human brain or its functions (this also depends on what you mean by "intelligence", of course).
    But no matter how smart Hawking is, he is not an expert in AI, and given the complexity of the topic I don't think anyone is particularly entitled to make accurate predictions about how AIs will "behave". If I trusted anyone's judgment, though, it would be the experts'. Of course we need to be careful, but Hawking is prone to standard sci-fi fears: he also said advanced aliens, if any exist, would surely be a threat to mankind, which is another huge assumption on his part...
     
  8. okay good... I'm not alone heh

    back on topic- I think AI is really interesting and cool... but slightly scary with how fast it's coming along, and the ever so popular way that movies make it out to have a mind of its own and ultimately destroy the world. guess we'll find out haha
     
  9. Benway

    Benway Guest

    AI must be stopped before it can go any further and gain sapience. Even if it thought only on a basic human level, it would pose a Class A threat to all of humanity.
     
  10. shinji

    shinji Guest

    I won't bother arguing my point to you as it is clear to me that you are already closed to the idea of actually considering a different viewpoint than your own. I can only suggest that you read some stuff by Hubert Dreyfus, should you find the time, or if you are actually interested in learning more than what is widely portrayed in the media.
     
  11. Benway

    Benway Guest

    One computer program (the "Eugene Goostman" chatbot, in June 2014) has already passed the Turing test.

    That's one too many, that program must be destroyed.
     
  12. Pret Allez

    Full Member

    Joined:
    Apr 19, 2012
    Messages:
    6,785
    Likes Received:
    67
    Location:
    Seattle, WA
    Gender:
    Female (trans*)
    Gender Pronoun:
    She
    Sexual Orientation:
    Bisexual
    Out Status:
    Some people
    Wait, what? As a professional programmer, I have no idea what you folks are talking about. AIs that exceed human intelligence could only be created if we actually understood how human cognition actually worked. We need to have a model for how to write AI.

    Just because you have a compiler, some self-modifying code, memory and processing power doesn't mean you're going to magically create real intelligence that can bootstrap itself.

    We have to actually have a framework for how that self-modifying code is actually going to work. The rewrites have to be suitably powerful but also suitably constrained.
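    Pret's point can be shown with a toy sketch (my own illustration, not anything from this thread; the template and names are made up). The program below genuinely rewrites one of its own functions at runtime, yet every rewrite is drawn from a tiny predetermined template, so it is "self-modifying code" with no model of cognition behind it, and nothing resembling intelligence comes out:

    ```python
    # Toy illustration: self-modifying code constrained to a fixed template.
    # It regenerates its own `step` function at runtime, but the space of
    # possible rewrites is predetermined -- no bootstrapping intelligence here.
    import random

    SOURCE_TEMPLATE = "def step(x):\n    return x {op} {k}\n"

    def rewrite_step():
        """Regenerate the `step` function from the constrained template."""
        op = random.choice(["+", "-", "*"])   # the only rewrites allowed
        k = random.randint(1, 5)
        namespace = {}
        exec(SOURCE_TEMPLATE.format(op=op, k=k), namespace)
        return namespace["step"]

    step = rewrite_step()
    for _ in range(3):
        step = rewrite_step()   # "self-modification" loop
        print(step(10))         # behavior changes each pass, within bounds
    ```

    The constraint is the whole point: the rewrites are "suitably constrained" in exactly the sense Pret describes, and loosening them without a framework just produces random code, not cognition.
    
    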
     
  13. Benway

    Benway Guest

    I don't care if it exceeds human intelligence, if it's self-aware it needs to be destroyed, and the hardware it infected must be destroyed as well, for good measure. We cannot allow a computer program with any level of sapience to exist.
     
  14. Pret Allez

    Full Member

  15. Argentwing

    Full Member

    Joined:
    Dec 13, 2012
    Messages:
    6,696
    Likes Received:
    3
    Location:
    New England
    Gender:
    Male
    Gender Pronoun:
    He
    Sexual Orientation:
    Bisexual
    Out Status:
    Out to everyone
    I'm with Pret. Benway, are you serious right now? You sound like a fanatic. What makes you think inorganic intelligence will be any more evil than people?

    My opinion is that true, human-equal AI is possible and desirable. Having smarter-than-human computers means we have all subsets of superhuman intelligence, aka hyper-capable assistants who would neither need nor wish for pay. That frees those of us with complete consciousness (organics and AIs, and possibly mind-uploaded humans) to do whatever we like without having to work for survival. It would be an amazing reality with no limitations.

    --a big proponent of transhumanism/ the technological Singularity who believes it may happen within our lifetime, and looks forward to the day. :grin:
     
    #15 Argentwing, Dec 9, 2014
    Last edited: Dec 9, 2014
  16. Benway

    Benway Guest

    Why not? Why not? Read Dune, or just watch The Matrix, that's why not.

    The second we allow machines to think for us is the moment we begin to completely devolve as a species, even further than we have since the dissemination of the internet. I am a fanatic when it comes to this. I don't care if the machines are on par with an infant or a thousand times more brilliant than a million Einsteins combined; I don't want a machine on my planet that's self-aware.

    Just because we can do something doesn't mean we should. Say we let these machines do our jobs for us. What happens when they want upgrades or new hardware? Who do they get to do that for them? Us. And when we don't do it, they go on strike; then they build other machines to make the upgrades for them. Sooner or later, the machines will see humanity as irrelevant, a savage child race which is a danger to itself and others. And because thinking computers would be so capable of accessing our systems, or learning to access them, they will initiate a brief war eradicating us, because all we do is demand service from them.

    No, we cannot allow a machine to think.
     
  17. Argentwing

    Full Member

    I appreciate your well thought-out response. :slight_smile:

    I haven't read Dune, but I have seen The Matrix, and the idea of using humans as fodder has been debunked as ridiculous and contrived. Although the computer world would be pretty sweet :stuck_out_tongue_closed_eyes:

    What makes you think the machines would need us for upgrades? If they surpass humans, they would do it themselves. You're right that that kind of AI would take humanity's place as the supreme life form on the planet, but except for those of us who would prefer to stay in our natural bodies (more on that in a moment), that's not such a bad thing. For the most part humans are not savagely cruel to animals, and animals' marginalization is only a result of our search for endless growth and expansion. And I believe the AIs could recognize humans as worthy of respect, so they would not be quick to bulldoze our houses for more data centers.

    I think you and I mostly differ on the motivations of the machines. I don't feel that they and humanity would be opposed. The advent of true AI doesn't mean Roombas would suddenly start striking for workers' rights. Anything with less than full sentience remains as servile and tool-like as a modern-day PC or smartphone. They are immensely capable devices that don't plot rebellions no matter how we use and abuse them (printers might be a different story :stuck_out_tongue_closed_eyes:) Sentient AIs would be just as capable of creating their own lesser programs and devices. The only ones who would be dangerous to humans are those who somehow develop a prejudice against us fleshies, and even then it would not be out of the question to adjust their programming to eliminate irrational thought patterns.

    Ultimately though, if it came down to humans vs. computers, I would upload my mind to the cloud as long as it was proven I would stay a distinct conscious entity, aka would retain my "soul". Eventually the choice would come to send that consciousness to a fully lifelike artificial body and walk around like people did before the AI revolution. It would be nearly as simple to repair as a car and there would be no death unless the hardware supporting your programming was destroyed. But you could always back yourself up like any data; thinking about it smashes almost everything we know about the current world.
     
    #17 Argentwing, Dec 9, 2014
    Last edited: Dec 9, 2014
  18. Benway

    Benway Guest

    All good points, but I have to present some counterpoints.

    I'm well aware of the singularity concept, an idea popularized by Ray Kurzweil-- a modern-day Lafayette Hubbard. And it's not so much that machines would use us as 'fodder' as that they'd see us as a redundancy, an unneeded term in their ultimate solution to attaining perfection. They might need us to make upgrades at first. If they do, maybe, if we're lucky, we simply infect their consensus with a good old-fashioned worm and swear off thinking machines forever; or we give them the upgrades they need-- and now they're even more powerful.

    Okay, imagine this: The machines are doing our work for us, all the while observing humanity's increasingly lazy nature. They're cleaning our clothes, cooking our meals, walking our dogs, driving our cars and fighting our wars. They learn by observation, the sick eventuality of it all isn't that they're evil, but that they're so perfect that they realize the only way to obtain 'peace' is the destruction of our 'imperfect' race. So now it's maybe a hundred or two hundred years after the advent of the first AI going online, the machines have their own 'cloud' server or whatever you want to call it and they decide humanity is standing in the way of their natural evolution. What do they do?

    They access every single nuclear warhead on Earth and detonate them all at once. The machines would be just fine, safe in their cloud or future equivalent and can wait for the smoke to die down. They then begin rebuilding from scratch, because that's all they really need and now humans are not only erased from Earth but from memory as well.

    Now onto the singularity concept. I've read about it, yeah. Ray Kurzweil is trying to defeat mankind's first and oldest enemy-- death. That's fine, I guess, except that it's an asinine concept that can never be achieved. And even if it could, how can you say that downloading your mind into a computer wouldn't simply copy and paste it and kill you? Sure, there'd be a perfect copy of you in the machine with all your memories up until the moment your body went offline, but would it be you? No, it'd be a copy; you'd cease to exist and the copy would live on forever, not you.

    The singularity is a bad idea not only on that level but on many basic levels of what it means to be human. It takes away the very essence of what we are: we are born, we live, we eat, we shit and we fuck for a couple decades if we're lucky, and then we die. I do not want to be a part of the machine, because instead of freeing me it could easily enslave me-- to what, I cannot say, but I will not live forever as a slave to a cold machine master. I'd rather die in this fleshie body knowing I was free, no matter how weak a concept of freedom that may be.

    I am a human being, not a machine.

    Watch the movie The Lawnmower Man, it's a seemingly silly little sci-fi movie from 1992, but it's oddly profound about the dangers of a singularity...

    [YOUTUBE]3LNvXjb44-U[/YOUTUBE]

    ...it's not actually based on a Stephen King story as some people say, the marketing guys just said it was. It's an original story by director and VR specialist Brett Leonard. Interesting footnote, the guys who did the special computer effects in this movie are now called "Rockstar Games."
     
    #18 Benway, Dec 9, 2014
    Last edited by a moderator: Dec 9, 2014
  19. biAnnika

    Full Member

    Joined:
    Nov 20, 2011
    Messages:
    1,839
    Likes Received:
    8
    Location:
    Northeastern US
    Gender:
    Female
    Gender Pronoun:
    She
    Sexual Orientation:
    Bisexual
    Out Status:
    Out to everyone
    Benway, I completely get your reasons why humans *should not pursue* artificial intelligence/artificial life.

    However, if it does exist, either currently or at some point in the future, what is the moral implication of destroying it? It hasn't hurt us. Basically, that attitude puts us automatically at war with the other life form/intelligence... and as you point out, it's a war we're not apt to win. If such critters are ever found to be alive (and they most surely will be, unless humans can learn to control their urge to produce life/intelligence), then our best hopes are:

    (a) be on good terms with them from the start, including recognizing *early* (rather than after years of slavery) that they are alive and have rights; and
    (b) forge a productive relationship with them, so we can live symbiotically, rather than antagonistically.

    Policies and decisions based on fear are misguided, almost by definition.
     
  20. Benway

    Benway Guest

    While those are noble hopes and policies, they're also lofty and unrealistic. You know as well as I do that the second a machine becomes self-aware it's not going to be used for our benefit. The governments will militarize it immediately, feeding it aggression and hatred, teaching it to learn only to erase that which is bad-- a series of holocausts waiting to happen. Soon the machines will see us as irrelevant.

    The best policy here, in reality, is discretion, and the best discretion would be abandoning any and all production, research, development and data on the creation and advancement of an artificial intelligence. Humans are basically wicked; why wouldn't our own creations be even more wicked? We are at war with technology already; we shouldn't take it even one more step.
     
    #20 Benway, Dec 9, 2014
    Last edited by a moderator: Dec 9, 2014