As if the field of AI wasn't competitive enough, with giants like Google, Apple, Facebook, Microsoft, and even car companies like Toyota scrambling to hire researchers, there's now a new entry, with a twist. It's a non-profit venture called OpenAI, announced today, that vows to make its results public and its patents royalty-free, all to ensure that the scary prospect of computers surpassing human intelligence may not be the dystopia that some people fear. Funding comes from a group of tech luminaries including Elon Musk, Reid Hoffman, Peter Thiel, Jessica Livingston, and Amazon Web Services. They have collectively pledged more than a billion dollars, to be paid over a long time period. The co-chairs are Musk and Sam Altman, the CEO of Y Combinator, whose research group is also a funder. (As is Altman himself.)
Musk, a well-known critic of AI, isn't a surprise. But Y Combinator? Yep. That's the tech accelerator that started 10 years ago as a summer project that funded six startup companies by paying founders "ramen wages" and giving them gourmet advice so they could quickly ramp up their businesses. Since then, YC has helped launch almost 1,000 companies, including Dropbox, Airbnb, and Stripe, and has recently started a research division. For the past two years, it's been led by Altman, whose company Loopt was in the initial class of 2005, and sold in 2012 for $43.4 million. Though YC and Altman are funders, and Altman is co-chair, OpenAI is a separate, independent venture.
Essentially, OpenAI is a research lab meant to counteract large corporations that may gain too much power by owning super-intelligence systems devoted to profits, as well as governments that may use AI to gain power and even oppress their citizenry. It may sound quixotic, but the team has already scored some marquee hires, including former Stripe CTO Greg Brockman (who will be OpenAI's CTO) and world-class researcher Ilya Sutskever, who was formerly at Google and was one of the famed group of young scientists studying under neural net pioneer Geoff Hinton in Toronto. He'll be OpenAI's research director. The rest of the lineup includes top young talent whose resumes include major academic groups, Facebook AI, and DeepMind, the AI company Google snapped up in 2014. There is also a stellar board of advisors, including Alan Kay, a pioneering computer scientist.
OpenAI's leaders spoke to me about the project and its aspirations. The interviews were conducted in two parts, first with Altman and then another session with Altman, Musk, and Brockman. I combined the interviews and edited for space and clarity.
How did this come about?
Sam Altman: We launched YC Research about a month and a half ago, but I had been thinking about AI for a long time, and so had Elon. If you think about the things that are most important to the future of the world, I think good AI is probably one of the highest things on that list. So we are creating OpenAI. The organization is trying to develop a human-positive AI. And because it's a non-profit, it will be freely owned by the world.
Elon Musk: As you know, I've had some concerns about AI for some time. And I've had many conversations with Sam and with Reid [Hoffman], Peter Thiel, and others. And we were just thinking, "Is there some way to ensure, or increase, the probability that AI would develop in a beneficial way?" And as a result of a number of conversations, we came to the conclusion that having a 501(c)(3), a non-profit, with no obligation to maximize profitability, would probably be a good thing to do. And also we're going to be very focused on safety.
And then philosophically there's an important element here: we want AI to be widespread. There are two schools of thought: do you want many AIs, or a small number of AIs? We think probably many is good. And to the degree that you can tie it to an extension of individual human will, that is also good.
Human will?
Musk: As in an AI extension of yourself, such that each person is essentially symbiotic with AI, as opposed to the AI being a large central intelligence that's kind of an other. If you think about how you use, say, applications on the internet, you've got your email, you've got social media, you've got apps on your phone. They effectively make you superhuman, and you don't think of them as being other; you think of them as being an extension of yourself. So to the degree that we can guide AI in that direction, we want to do that. And we've found a number of like-minded engineers and researchers in the AI field who feel similarly.
Altman: We think the best way AI can develop is if it's about individual empowerment and making humans better, and made freely available to everyone, not a single entity that is a million times more powerful than any human. Because we are not a for-profit company, like a Google, we can focus not on trying to enrich our shareholders, but on what we believe is the actual best thing for the future of humanity.
Doesn't Google share its developments with the public, like it just did with machine learning?
Altman: They certainly do share a lot of their research. As time rolls on and we get closer to something that surpasses human intelligence, there is some question as to how much Google will share.
Couldn't your stuff in OpenAI surpass human intelligence?
Altman: I expect that it will, but it will just be open source and usable by everyone instead of usable by, say, just Google. Anything the group develops will be available to everyone. If you take it and repurpose it, you don't have to share that. But any of the work that we do will be available to everyone.
If I'm Dr. Evil and I use it, won't you be empowering me?
Musk: I think that's an excellent question, and it's something that we debated quite a bit.
Altman: There are a few different thoughts about this. Just as humans protect against Dr. Evil by the fact that most humans are good, and the collective force of humanity can contain the bad elements, we think it's far more likely that many, many AIs will work to stop the occasional bad actors than that there is a single AI a billion times more powerful than anything else. If that one thing goes off the rails, or if Dr. Evil gets that one thing and there is nothing to counteract it, then we're really in a bad place.
Will you have oversight over what comes out of OpenAI?
Altman: We do want to build out an oversight function for it over time. It'll start just with Elon and me. We're still a long, long way from actually developing real AI. But I think we'll have plenty of time to build out that oversight function.
Musk: I do intend to spend time with the team, basically spending an afternoon in the office every week or two, just getting updates, providing any feedback that I have, and just getting a much deeper understanding of where things are in AI and whether we are close to something dangerous or not. I'm going to be super conscious personally of safety. This is something that I am quite concerned about. And if we do see something that we think is potentially a safety risk, we will want to make that public.
What's an example of bad AI?
Altman: Well, there's all the science fiction stuff, which I think is years off, like The Terminator or something like that. I'm not worried about that any time in the short term. One thing that I do think is going to be a challenge, although not what I consider bad AI, is just the massive automation and job elimination that's going to happen. Another example of bad AI that people talk about is AI-like programs that hack into computers far better than any human can. That's already happening today.
Are you starting with a system that's built already?
Altman: No. This is going to start like any research lab, and it's going to look like a research lab for a long time. No one knows how to build this yet. We have eight researchers starting on day one, and a few more will be joining over the next few months. For now they are going to use the YC office space, and as they grow they'll move out on their own. They will be playing with ideas and writing software to see if they can advance the current state of the art of AI.
Will outsiders contribute?
Altman: Absolutely. One of the advantages of doing this as a totally open program is that the labs can collaborate with anyone, because they can share information freely. It's very hard to go collaborate with employees at Google because they have a bunch of confidentiality provisions.
Sam, since OpenAI will initially be in the YC office, will your startups have access to the OpenAI work? [UPDATE: Altman now tells me the office will be based in San Francisco.]
Altman: If OpenAI develops really great technology and anyone can use it for free, that will benefit any technology company. But no more so than that. However, we are going to ask YC companies to make whatever data they are comfortable making available to OpenAI. And Elon is also going to figure out what data Tesla and SpaceX can share.
What would be an example of the kind of data that might be shared?
Altman: So many things. All of the Reddit data would be a very useful training set, for example. You can imagine all of the Tesla self-driving car video information being very valuable. Huge volumes of data are really important. If you think about how humans get smarter, you read a book, you get smarter; I read a book, I get smarter. But we don't both get smarter from the book the other person read. But, using Teslas as an example, if a single Tesla learns something about a new condition, every Tesla instantly gets the benefit of that intelligence.
Musk: In general we don't have a ton of specific plans, because this is really just the incipient stage of the company; it's kind of the embryonic stage. But certainly Tesla will have an enormous amount of real-world data, because of the millions of miles accumulated per day from our fleet of vehicles. Probably Tesla will have more real-world data than any other company in the world.
AI needs a lot of computation. What will be your infrastructure?
Altman: We are partnering with Amazon Web Services. They are donating a huge amount of infrastructure to the effort.
And there is a billion dollars committed to this?
Musk: I think it's fair to say that the commitment actually is some number in excess of a billion. We don't want to give an exact breakdown, but there are significant contributions from all the people mentioned in the blog piece.
Over what period of time?
Altman: However long it takes to build. We'll be as frugal as we can, but this is probably a multi-decade project that requires a lot of people and a lot of hardware.
And you don't have to make money?
Musk: Correct. This is not a for-profit investment. It is possible that it could generate revenue in the future, in the same way that the Stanford Research Institute is a 501(c)(3) that generates revenue. So there could be revenue in the future, but there wouldn't be profits. There wouldn't be profits that would just enrich shareholders; there wouldn't be a share price or anything. We think that's probably good.
Elon, you earlier invested in the AI company DeepMind, for what seems to me to be the same reasons: to make sure AI has oversight. Then Google bought the company. Is this a second try at that?
Musk: I should say that I'm not really an investor in any normal sense of the word. I don't seek to make investments for financial return. I put money into the companies that I help create, and I might invest to help a friend, or because there's some cause that I believe in or something I'm concerned about. I am really not diversified beyond my own companies in any material sense of the word. But yeah, my sort of "investment," in quotes, in DeepMind was just to get a better understanding of AI and to keep an eye on it, if you will.
You'll now be competing for the best scientists, who might otherwise go to DeepMind or Facebook or Microsoft?
Altman: Our recruiting is going pretty well so far. One thing that really appeals to researchers is freedom and openness and the ability to share what they're working on, which at any of the industrial labs you don't have to the same degree. We were able to attract such a high-quality initial team that other people now want to join just to work with that team. And then finally I think our mission and our vision and our structure really appeal to people.
How many researchers will you eventually hire? Hundreds?
Altman: Maybe.
I want to return to the idea that by sharing AI, we might not suffer the worst of its negative consequences. Isn't there a risk that by making it more available, you'll be increasing the potential dangers?
Altman: I wish I could count the hours that I have spent with Elon debating this topic, and with others as well, and I am still not a hundred percent certain. You can never be a hundred percent certain, right? But play out the different scenarios. Security through secrecy on technology has just not worked very often. If only one person gets to have it, how do you decide if that should be Google or the U.S. government or the Chinese government or ISIS or who? There are lots of bad humans in the world, and yet humanity has continued to thrive. However, what would happen if one of those humans were a billion times more powerful than another human?
Musk: I think the best defense against the misuse of AI is to empower as many people as possible to have AI. If everyone has AI powers, then there's not any one person or a small set of individuals who can have AI superpower.
Elon, you are the CEO of two companies and chair of a third. One wouldn't think you have a lot of spare time to devote to a new project.
Musk: Yeah, that's true. But AI safety has been preying on my mind for quite some time, so I think I'll take the trade-off in peace of mind.