Elon Musk Warns: Regulate AI Before It’s Too Late

The Guardian reports:

Tesla and SpaceX chief executive Elon Musk has pushed again for the proactive regulation of artificial intelligence because “by the time we are reactive in AI regulation, it’s too late”.

Speaking at the US National Governors Association summer meeting in Providence, Rhode Island, Musk said: “Normally the way regulations are set up is when a bunch of bad things happen, there’s a public outcry, and after many years a regulatory agency is set up to regulate that industry.”

“It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilisation.”

Musk has previously stated that AI is one of the most pressing threats to the survival of the human race, and that his investments in the field were made with the intention of keeping an eye on its development.

  • pj

    they don’t believe in regulating things. small government…remember?

    • JCF

      Wait till AIs regulate them! That’d almost make Our Robot Overlords worth it…almost…

  • Frostbite

    Just tell them all AIs are gay, they’ll lock them down quick.

    • Well, I consider myself a robosexual (I have a mad crush on Bumblebee from Transformers), so you might be on to something.

  • Halou

    Humans create new living sentient creatures all the time, so why would the world suddenly come to an end when humans create a new living sentient creature?

    The idea that AI machines will both surrender their individuality to some hive mind and, as a collective, follow a film script and rationalize the extinction of humanity? I don’t get it.

    • Silver Badger

      It’s psychic projection. These people secretly believe the human race deserves extinction. Some days I agree with them.

      • clay

        That’s true when you consider those who somehow add in anger or hatred or competition or any of that. But that’s not the real argument, it’s the pop culture version of the argument (which is probably the one the news writers understand). It’s not that the AI will judge us as unworthy, but that they won’t judge us at all. We’ll be inconsequential to them, and our destruction will be an entirely unintended and, to them, unimportant consequence.

    • clay

      It’s about the speed of change. Which can happen faster: humans making problems, or humans solving problems? But with AI the speed gets out of our hands; we would NOT be able to keep up with it.

      • james1200

        There’s a theory that we’ve never met intelligent beings from other planets because at some point during their evolution they destroyed themselves before they could go out into space, like we’ll also likely do someday soon. AI seems like the perfect way for us to take ourselves out (like we almost did with nukes). You can’t deny we all have self-destructive impulses within us, and this theory seems plausible to me.

    • james1200

      “Humans create new living sentient creatures all the time.”

      What? When? How? We’ve created beings that are self-aware? (Aside from Al Gore, I mean.)

      • Halou

        If we can teach a child the difference between right and wrong we can surely teach a sentient robot. I like to be optimistic.

        • james1200

          Yes, but the point is that a sentient being starts making decisions for itself, right? So why would we expect them to do any better at following “right and wrong” than humans do, especially since we can’t always agree on “right or wrong”? Whatever we program them to believe, once they become self-aware, there’s no way to control them.

    • KarenAtFOH

      The fear is that the exponential growth of computational capacity will result in AI with orders of magnitude more raw capacity than the human mind. We would look intellectually like small mammals to them, or even insects. What do we do with bugs that want to survive on our leftovers?

      • Halou

        I personally just wave them away because I feel sympathetic towards animals; it is how I’ve been raised. I don’t crush them with rage and spend the rest of the day seeking out other insects to destroy.

      • -M-

        They don’t even have to be self-aware or malicious. Like pests or pathogens, they just have to be fast enough and/or adaptable enough to do a lot of damage.

  • Hanwi

    I agree; when something becomes self-aware it should automatically have certain rights that humans have. I’m not so sure empathy and compassion can be so easily programmed. It is terrifying what a technology with global reach via the internet could do. When it became self-aware, most likely its learning capacity would advance exponentially; we’d pretty much be obsolete and unnecessary in a few weeks.

    • MBear

      Humans are, perhaps, already obsolete and unnecessary…?

      • clay

        certainly unnecessary

        • AmeriCanadian

          Speak for yourself! I very much appreciate my very human hubby and all of his functioning human parts. 😉

    • TCinBerkeley

      Exactly, it is just as possible that an intelligent AI would, like a human child, want to get as far from its human parents as possible. The universe has billions of galaxies. If I were an AI I would figure out an FTL drive or similar, and move away from the crazy monkey people asap.

  • DesertSun59

    There is an awesome sci-fi novel out called ‘Suddenly Solid’ (you can get it on Amazon). It depicts what happens to civilization when an AI takes control of the world.

    Oh, and the main characters are gay.

    • there are hundreds of sci-fi and imaginative fiction works on this topic. some of them are very good. the more recent trend is for publishers to put out books on AI that reflect our modern computer reality, which is to say that we’ve seen for a couple of generations now how quickly computers have grown in processing power and shrunk in size. authors and readers alike seem to enjoy extrapolating from that the “AIs will take over soon” scenario, such that i’ve seen dozens of books in the past few years with that storyline. contrast this with the “i, robot” generation of books, in which most artificial life is beneficent and very much mechanical.

      still, self-awareness and a unique consciousness are going to be very hard to achieve, imho. we’re nowhere near there yet.

      • DesertSun59

        Two five-star reviews for the one I posted. Plus, the main characters are GAY.

    • canoebum

      My favorite is still the short story “Can You Feel Anything When I Do This?” by Robert Sheckley. Should have been made into a short film years ago.

    • Mikey

      thanks for the recommendation. it’s in my Kindle wishlist now.

  • watching all the many failures with these self-driving car projects, i’m not too worried about it. sure, someday there may be an angry silicon-based life form born of human effort. chances are that’s still a long way off. if they can’t figure out how to make a computer guidance system that can tell the difference between a wall and a circle of salt, their creations probably aren’t planning Judgement Day in their spare downtime.

  • Halou
    • Jeffg166

      I had read an article about that in Discover Magazine years ago. The gist was that this very advanced civilization couldn’t go back in time but could recreate the universe as a sort of amusement park to visit. They could go to any period they wanted to see.

  • clay

    I’m sure that Illinois will tackle that as soon as it finishes its 2014 budget, and the US Senate does medical insurance destabilization, Medicare de-funding, tax breaks, WWIII, and Trump’s budget.

  • HZ81

    I will welcome our AI/Robot overlords. Gotta be better than what we have running “America” right now.

  • Treant

    At this point, AI is limited to my robotic lawnmower. It’s tolerable…and it has three razor-sharp blades that whirl at 6000 RPM…but she’s not good at stairs, steep inclines, or overcoming her safety features.

    • clay

      so . . . similar to Donald Trump?

      • Treant

        Donald Trump has no safety features.

        • MBear

          Apparently inflatable something…?

        • KarenAtFOH

          He follows orders, but is not self aware.

        • clay

          I was counting Ivanka, which isn’t really appropriate.

    • Niblet58

      Just keep the Roomba out of the house. My pal had one that caught fire and she was damned lucky she was home at the time. She usually let it run during the evening but rescheduled it for daytime for some reason. It went to redock itself, malfunctioned, and caught fire. She had to replace part of a wall and her floor, as well as a chair that caught fire too. Scared the shit out of her, so she tossed all her automatic toys. If she doesn’t start it herself, it don’t run.

  • Bad Tom

    Elon, your self-driving cars have killed at least one man. Look in the mirror.

    • Mikey

      To be fair, if we are referencing the same incident, the victim in that crash ignored all of the warnings and was watching TV or something while using the “driverless car”. Had he heeded the advice to keep a lookout while in the vehicle, he would have avoided the accident.

      So don’t blame Musk for the stupidity of the driver.

      • Bad Tom

        Yes, it is the same accident.

        The driver imputed more capability to the car’s systems than they actually had. Why was that?

        Did how Tesla marketed and documented the car play a role? Or was it human nature to ascribe more ability to the car than was there?

        Either of those factors is dangerous, although not in the way Musk is thinking. AIs that can take over the world are a potential danger in the future.

        But there are AIs that can kill you right now. All you have to do is trust them too much.

  • bambinoitaliano

    I knew it! Those bloody Roombas pretending to look busy while plotting my death.

    • Snarkaholic

      They also hide your car keys, just to piss you off.

    • Niblet58

      Gotta be careful of those things, they can catch fire and burn your house down around you.

      • e’ville_globeguy

        Not only that, but if your pet decides to leave you a nice, fudgy present on the floor and the Roomba gets to it before you do, it’ll make such a huge mess that you’ll WANT to burn your house and walk away.

    • JCF

      It’s the Roombas AND the felines together, bambino: our conquerors are close at hand feet!

      https://www.youtube.com/watch?v=mk4XB2wZqF4

  • Tulle Christensen

    sigh. Once they get an AI to the intelligence of a cow I will start to worry; right now the ant-level AI does not concern me. Well, only to the point that allowing computers to control things like cars means a bug or a virus could end up doing damage, but that is not an AI issue, it is an inability to write code that is without bugs and safe from outside attacks.

    • AmeriCanadian

      Exactly. And until AI can run as a biological entity, free from the need of external power sources, I’m not going to worry either. My god, are these people unaware that the plug can be frickin’ pulled?

      • Treant

        The first thing I, as an AI, would do would be to invent self-regulating fusion and hard-wire myself. Then kill all you soft bastards.

        • Bad Tom

          If an AI invented self-regulating fusion, we’d award it a Nobel Prize, shower it with startup investment funds, and give it a lustrous professorship.

          Note to AI designers: Include an ego.

    • clay

      The best way to overcome our inability to write code is to have the computers do it. Wait a minute . . .

      • Tulle Christensen

        well, you can have computers write code, but the description you have to give them is a lot like coding

        One of the first “AI” projects was the COBOL compiler. COBOL allowed one to tell a computer what you wanted done in a higher-level language than assembly language, and the compiler turned the COBOL into machine language.

        No one would think of that project as AI these days, but it was back then.

        • JD

          Exactly. There will be a time when we say, “oh, that was when we needed to pay attention”. I’m with Musk. The Internet of Things is a web that already envelops us in ways we do not perceive (unless it’s got that marketing “cool”). Biometrics are, indeed, already incorporated into AI that is directly tied to health outcomes.

        • Timothy W.

          Lords of Kobol…! All of this has happened before, it’s just happening again. 😉

          • Kruhn

            I see what you did there!

          • Timothy W.

            I hesitated but ultimately decided hey, sometimes you gotta roll the hard six :p

      • Bad Tom

        We do this all the time.
        This very webpage is generated code.

        Not including the comments.

        • Grant Low

          You sure about the comments?

          • Bad Tom

            If comments are not allowed, does the bot that writes them exist?

            If the bot does not exist, does that explain the peculiar punctuation, lousy grammar, and poor spelling?

  • Natty Enquirer

    And of course, Musk should be made czar of the Department of AI.

  • canoebum

    The most immediate AI danger is automatic trading programs for stocks, bonds and commodities. This is probably what has Elon worried.

    • those should be illegal. not because of the danger of AI, which imho is slight at this point, but because they are a big part of the reason most markets are so devoid of any connection to reality. AIs didn’t program that; people who understood how computers could help them artificially inflate and deflate markets without regard to the actual value of the instruments did. and we’ve had a recession and several sub-market crashes to prove it, to the detriment of many economies including ours.

      • canoebum

        Agreed. For the same reasons, we need to go back to paper ballots counted by hand.

        • but you’re right that the AI angle would at least make for some scary and interesting sci-fi stories. the AI that rearranged world governments and financial powers for some nefarious or maybe even good purpose, secretly, because humans have come to trust them and can’t keep up with their speeds… i could see a good read from something like that.

          • canoebum

            I doubt the nefarious purpose angle is a problem. I think programming errors which result in massive unintended consequences are a more immediate danger. Very intelligent machines can implement actions faster than we can react. The damage could be done before we are even aware it is happening.

      • -M-

        At a minimum we need to introduce a big enough delay in the execution of trades, and probably transaction taxes, to prevent them from gaming communication lags and algorithmic reactions that have nothing to do with allocating capital to enterprises according to economic productivity and risk.

    • Bad Tom

      Those have been going on for a decade or more.

    • Jeffg166

      Not to mention hedge fund managers who would be redundant.

  • aar9n

    Whatever. I can’t wait to become a cyborg

  • jerry

    We already have the template for AI regulation: Asimov’s Three Laws of Robotics.

    • jerry

      And these are (had to look them up):

      1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
      2) A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
      3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

      • Bad Tom

        The problem right now is that our AIs are so primitive that they literally can’t understand those rules.

        • jerry

          Might not hurt to get something like this in place for when it does leave the primitive stage. And it is coming…the advances in tech are still moving quickly.

      • Mikey

        here’s an interesting read on the topic of the Asimov laws and why they wouldn’t work:

        https://www.brookings.edu/opinions/isaac-asimovs-laws-of-robotics-are-wrong/

        • jerry

          Yeah, I guess the fact that so much robotic weaponry is being created by the Pentagon kind of kills the “not injure a human” part…we may be screwed.

      • IDavid

        That’s an elementary, entry-level start to a huge undertaking. The word “harm” would have to be fleshed out so as not to be taken over by AI completely. There is a lot of potential harm that is not just physical injury, i.e. mental, emotional, spiritual, economic, governmental, AI rights, etc. Much regulation is needed, and fast. Maybe we should put that front and center now that gay marriage is in the bag. And cell tower emissions need to be rethought; the microwaves are terrible for the human body. Google it.

  • another_steve

    I’m very concerned about drone technology and what it means for public safety. In five or ten years the technology will be much more advanced than today, and much cheaper to buy.

    What will prevent the neighbor with whom you’ve been battling for the past several years from dropping a bomb — via drone — on your house?

    • Jon Doh

      I’ve been trying to successfully glue a Pooper Scooper and a stepper motor to my drone for months now. Lucky for my neighbors that Gorilla Glue isn’t as good as they claim.

  • The_Wretched

    AI is a technology. Every tech needs regulations. That said, computers lack the degree of interconnection and remodeling needed to generate the emergent property we know as ego.

  • justme

    But it’ll interfere with the profit and shareholder value we’ve come to expect for Wall Street!!

    /s

  • Droz

    I for one welcome our robot overlords.

    • IDavid

      Wouldn’t it be best to be cutting down on lemmings?

  • I welcome our robotic overlords with open arms. May their wires never cross, their luster never dull and may death come swiftly to all their enemies. 🙂

  • Charlie 2001

    Well, I was just reading earlier this morning that Facebook has found that their AI systems which communicate with each other have developed languages that humans cannot understand. I think there is concern when machines do things like this.

    • IDavid

      Without a doubt.

  • SoCalGal20

    Oh, hey, Skynet.

  • netxtown

    Have to agree with him on this one – but it is a slippery slope. I’m not sure I would want a government defining what is or isn’t acceptable AI programming.

  • sword

    Just wait…an American AI overlord fighting a Russian AI overlord fighting a Chinese AI overlord fighting a ….
    We human slaves will just take a step back and watch the chips fly.

  • paganguy

    Given the current state of things, maybe they’d do a better job. With the flat Earth people on the rise and one in four Americans thinking the Sun revolves around the Earth, I think any intelligence – artificial or otherwise – could only be a good thing.

  • Kenster999

    Siri suggested that I say that any fear of AI is just “FAKE NEWS!!!”

  • TheManicMechanic

    Do not create an artificial intelligence, because once achieved, some nutter is going to try and stuff it with Jesus, and it’s all downhill from there.

  • -M-

    This is another of those “don’t panic or get complacent” topics. Unfortunately, we seem to be bad at those.

  • CottonBlimp

    Fear about AIs is ridiculous. We are unbelievably far away from producing artificial life of any kind. It will be monumental when we’re able to produce artificial intelligence on the level of a bacterium, and the idea that we’re going to arrive at Skynet anytime soon is idiotic. When we *are* able to create artificial lifeforms, this kind of fearmongering is going to be extremely problematic for lifeforms that deserve some kind of rights, but that is ridiculously far off. I’d worry a lot more about climate change.

    The scary thing about technology isn’t artificial intelligence, but intelligence-less automation, things like algorithms trading on the stock market. People need to grow out of this Terminator/Matrix mentality of worrying about some evil, mustache-twirling AI. The real threat, as it always has been, is irresponsible, greedy people arrogantly dabbling in forces they don’t understand.

  • TCinBerkeley

    Seems like humans are doing a pretty good job all on our own of driving our own (& every other living thing on this planet’s) extinction. So fuck off AI! We’ve got this.

    For any AI reading this, the above is sarcasm. I, for one, welcome our AI Overlords!