I would bet that in the upside case, which I think has a reasonable chance of happening, the metaverse turns out to be something on the order of the iPhone: a new container for software, a new way of interacting with computers. And AI turns out to be something on the order of a legitimate technological revolution. So I think the question is more how the metaverse is going to fit into this new world of AI than how AI fits into the metaverse. But low confidence; TBD.

All right, questions.

Hi there. How do you see foundational technologies like GPT-3 affecting the pace of life science research (you can group medical research in there), specifically in quickening the iteration cycles? And what do you see as the rate limiter in life science research, the point we won't be able to get past because of laws of nature?

Yeah, something like that. I think the currently available models are not good enough to have made a big impact on the field; at least, that's what most life sciences researchers have told me. They've all looked at it, and they say it's a little helpful in some cases. There's been some promising work in genomics, but work on a benchtop hasn't really been impacted. I think that's going to change, and
I think this is one of those areas where new hundred-billion- to trillion-dollar companies will get started. Those areas are rare, but if you can really make, say, a future pharma company that is hundreds of times better than what's out there today, that's going to be really different. As you mentioned, there will still be the rate limit that biology has to run at its own pace, and human trials take however long they take. So an interesting cut of this is: where can you avoid that? The synthetic bio companies I've seen that have been most interesting are the ones that find a way to make the cycle time super fast. That benefits an AI that's giving you a lot of good ideas, but you've still got to test them, which is where things are right now. I'm a huge believer that for startups, the thing you want is low cost and fast cycle times; if you have those, you can compete as a startup against the big incumbents. So I wouldn't pick cardiac disease as my first thing to go after right now with this new kind of company, but using bio to manufacture something? That sounds great. The other thing is that the simulators are still so bad; if I were a bio-meets-AI startup, I would certainly try to work on that somehow.

When do you think AI tech will help create itself? Almost like self-improvement, helping make the simulators significantly better?

People are working on that now. I don't know quite how it's going, but very smart people are very optimistic about it.

Other questions? I can keep going on questions; I just want to make sure you all get a chance. Ah, here, yes. Great, the mic is coming.

Thank you. I was curious: what aspects of life do you think won't be changed by AI?

Sort of all of
the deep biological things. I think we will still really care about interaction with other people. We'll still have fun. The reward systems of our brains are still going to work the same way. We're still going to have the same drives to create new things, compete for silly status, form families, and so on. So the stuff that people cared about 50,000 years ago is more likely to be the stuff people care about 100 years from now than the stuff of 100 years ago.

As an amplifier on that, before we get to the next question: what do you think are the best utopian science fiction universes so far?

Good question. Star Trek is pretty good, honestly. I do like all of the ones where we turn our focus to exploring and understanding the universe as much as we can. This one is not utopian, well, maybe it is: I think "The Last Question" is an incredible short story.

I was expecting you to say Iain Banks and the Culture.

Those are great. With science fiction, there's not one sci-fi universe I could point to and say I think all of this is great, but the collective optimistic corner of sci-fi, which is a smallish corner, I'm excited about. Actually, I took a few days off to write a sci-fi story, about the optimistic case of AGI, and I had so much fun doing it that it made me want to go read a bunch more. So I'm looking for recommendations, particularly the less-known stuff, if you have anything.

I will get you some recommendations. In a similar vein, one of my favorite sci-fi books is Childhood's End by Arthur C. Clarke, from the 1950s. The one-sentence summary is: aliens come to Earth to try to save us, and they just take our kids and leave everything else. So it's slightly
more optimistic than that, but yes.

I mean, the ascension into the Overmind is meant to be more utopian, but yes.

Okay, you may not read it that way, but yes. Also, in our current situation, a lot of people think about family building and fertility, and different people have different ways of approaching this. But from where you stand, what do you see as the most promising solutions? It might not be a technological solution, but I'm curious what you think, other than everyone having ten kids. How do you see family building coexisting with AGI and high tech?

This is a question that comes up at OpenAI a lot: how should one think about having kids? I think there's no consensus answer. There are people who say, I always thought I was going to have kids, and now I'm not going to because of AGI, for all the obvious reasons and, I think, some less obvious ones. There are people who say, well, it's going to be the only thing for me to do in fifteen or twenty years, so of course I'm going to have a big family; that's what I'm going to spend my time doing, I'll raise great kids, and I think that's what will bring me fulfillment. As always, it's a personal decision. I get very depressed when people say they're not having kids because of AGI. The EA community says, I'm not doing that because the kids are all going to die. The techno-optimists say, I want to merge into the AGI and go off exploring the universe; it's going to be so wonderful, and I want total freedom. I find all of those quite depressing. I think having a lot of kids is great. I want to do that now even more than I
did when I was younger, and I'm excited for it.

What do you think will be the way most users interact with foundation models in five years? Do you think there will be a number of verticalized AI startups that have essentially adapted and fine-tuned a foundation model to an industry, or do you think prompt engineering will be something many organizations have as an in-house function?

I don't think we'll still be doing prompt engineering in five years. This will be integrated everywhere: either with text or voice, depending on the context, you will just interface in language and get the computer to do whatever you want. That will apply to generating an image, where maybe we still do a little bit of prompt engineering, but mostly it will be: go off and do this research for me and do this complicated thing; or be my therapist and help me figure out how to make my life better; or go use my computer for me and do this thing; or any number of other things. I think the fundamental interface will be natural language.

Let me push on that a little bit before we get to the next question. To some degree, just as we have a wide range of human talents right now, take DALL-E, for example: a great visual thinker can get a lot more out of DALL-E, because they know how to think, how to iterate the loop through the tests. Don't you think that will be a general truth about most of these things? So it isn't just that natural language is the way you're doing it; there will be an evolving set of human talents for going that extra mile.

One hundred percent. I just hope it's not figuring out how to hack the prompt by adding one magic word to the end that changes everything else. What will matter is the quality of the ideas
and the understanding of what you want. So the artist will still do the best with image generation, not because they figured out how to add one magic word at the end, but because they were able to articulate it with a creative eye that I don't have.

What they have is a vision, and their way of visual thinking and iterating through it.

Yeah. Obviously it's that magic word or prompt for now, but it will iterate toward that.

All right, we have a question here.

Thanks so much. The term AGI is thrown around a lot, and I've noticed in my own discussions that sources of confusion often come from people having different definitions of AGI. It can be the magic box onto which everyone projects their own ideas. I want to get a sense from you: how would you define AGI, and how do you think you'll know when we have it?

We should have defined that earlier; it's a great point. I think there are a lot of valid definitions, but for me, AGI is basically the equivalent of a median human that you could hire as a co-worker. They could do anything that you'd be happy with a remote co-worker doing behind a computer, which includes learning how to go be a doctor, or learning how to go be a very competent coder. There's a lot of stuff a median human is capable of getting good at, and I think one of the skills of an AGI is not any particular milestone but the meta-skill of learning to figure things out, so that it can decide to get good at whatever you need. For me, that's AGI. Superintelligence is when it's smarter than all of humanity put together.

Thanks. What would you say, over the next twenty or thirty years, are some of the main societal issues that will arise as AI continues to grow, and what can we do today to
mitigate those issues?

The economic impacts are obviously huge, and if it is as divergent as I think it could be, with some people doing incredibly well and others not, I think society just won't tolerate it this time. We're going to disrupt so much economic activity, and even if it's not all disrupted twenty or thirty years from now, I think by then it'll be clear that it's all going to be. So what is the new social contract? My guess is that the things we'll have to figure out are how we think about fairly distributing wealth; access to AGI systems, which will be kind of the commodity of the realm; and governance, how we collectively decide what they can do and what they don't do. Figuring out the answers to those questions is going to be huge. I'm optimistic that people will figure out how to spend their time and be very fulfilled; I think people worry about that in a little bit of a silly way. I'm sure what people do will be very different, but we always solve this problem. But I do think the concepts of wealth, access, and governance are all going to change, and how we address them will be huge.

I don't know what level of detail you can share, but one of the things I love about what OpenAI and you are doing is that you think about these questions a lot yourselves, and you've initiated some research on this.

Yeah. We run the largest UBI experiment in the world; we have about a year and a quarter left in a five-year project. I don't think that's the only solution, but I think it's a great thing to be doing, and we should have ten more experiments like it that we try. We also try different ways to get input from the groups that we think will be most affected, and see how we can
do that early in the cycle. We've also explored more recently how this technology can be used for reskilling people who are going to be impacted early. We'll try to do a lot more stuff like that.

So the organization is in fact addressing these questions and doing a bunch of interesting research on them. Next question.

Hi. Creativity came up today in several of the panels, and it seems to me that the way it's being framed, you have tools for human creators to expand human creativity. Where do you think the line is between tools that allow a creator to be more productive and artificial creativity itself?

I think, and we're seeing this now, that tools for creatives are going to be the great application of AI in the short term. People love them; they're really helpful. At least in what we're seeing so far, they are mostly not replacing but enhancing: replacing in some cases, but for the majority of the work that people in these fields want to be doing, enhancing. I think we'll see that trend continue for a long time. Eventually, sure, if we look out 100 years, maybe it can do the whole creative job. It's interesting that if you had asked people ten years ago how AI was going to have an impact, most people would have confidently said: first it's going to come for the blue-collar jobs, working in the factories, truck drivers, whatever; then the low-skill white-collar jobs; then the very high-skill, really high-IQ white-collar jobs, like programming; and then, very last of all and maybe never, the creative jobs. It's going exactly the other direction. I think there's an interesting reminder in here, generally, about how
hard predictions are, but more specifically about how we're not always very aware, maybe even about ourselves, of which skills are hard and which are easy, what uses most of our brain and what doesn't, or how difficult bodies are to control or make.

We have one more question over here.

Thanks for being here. You mentioned that you'd be skeptical of any startup trying to train its own language model, and I'd love to understand more. What I've heard, which might be wrong, is that large language models depend on data and compute; any startup can access the same data, because it's just internet data, and as for compute, different companies have different amounts, but the big players can simply buy more. So how would one large language model startup differentiate from another?

I think it'll be this middle layer. In some sense the startups will train their own models, just not from the beginning. They will take base models that have been hugely trained with a gigantic amount of compute and data, and then train on top of those to create the model for each vertical. So in some sense they are training their own models, just not from scratch: they're doing the one percent of training that really matters for whatever their use case is going to be. Those startups, I think, will be hugely successful and very differentiated, but the differentiation will come from the data flywheel the startup is able to build and all of the pieces on top of and below, which could include prompt engineering for a while, not from the core base model. That's just going to get too complex and too expensive, and the world also just doesn't make enough chips.

Sam has a work thing he
needs to get to. As you can probably tell, this was a very far-ranging conversation; Sam always expands my boundaries a little bit. And when you're feeling depressed, whether about kids or anything else, you're the person I always turn to.

I appreciate that. I think no one knows; we're sitting on this precipice of AI, and it's either going to be really great or really terrible. You've got to plan for the worst; it's certainly not a strategy to say it's all going to be okay. But you may as well emotionally feel like we're going to get to the great future, and work as hard as you can to get there, play for it, rather than act from a place of fear and despair all the time. If we acted from a place of fear and paranoia, we would not be where we are today.

So let's thank Sam for spending dinner with us.

Thank you. [Music]
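As a postscript, the "middle layer" idea from the language-model question (a startup training only the small fraction of parameters that matters for its vertical, on top of a frozen base model) can be sketched in toy form. Everything below is a hypothetical stand-in: random features play the role of the pretrained base, and a tiny linear adapter plays the role of the vertical fine-tune. This is not any real model or API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a huge pretrained base model: a frozen random feature
# map. (Toy sizes; a real base model is vastly larger.)
base_W = rng.normal(size=(512, 512))   # frozen, never updated

# The startup's trainable piece: a small head on top of the base.
adapter = np.zeros(512)

# Synthetic "vertical" task data.
X = rng.normal(size=(64, 512))
y = rng.normal(size=64)

# Features from the frozen base can be computed once and reused.
H = np.tanh(X @ base_W.T)

def loss(a):
    return float(np.mean((H @ a - y) ** 2))

before = loss(adapter)
for _ in range(300):
    # Gradient descent on the adapter only; base_W stays fixed.
    grad = 2.0 * H.T @ (H @ adapter - y) / len(y)
    adapter -= 0.01 * grad
after = loss(adapter)

frac = adapter.size / (base_W.size + adapter.size)
print(f"trainable fraction of parameters: {frac:.2%}")  # about 0.2%
print(f"task loss: {before:.3f} -> {after:.3f}")
```

The point of the sketch is the ratio: the trainable piece is a tiny fraction of the total parameters, yet it is the part that fits the vertical's data, which is one way to read "the one percent of training that really matters."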