The Rise and Fall… and RISE Of Sam Altman Has GRAVE Implications For AI Research

Episode 42: The Rise and Fall and RISE of Sam Altman

From the sidelines, reading the media reports on the weekend of November 17, it seemed a series of irrepressible events unfolded that weekend and in the days that followed. It was a tumultuous weekend of uncontrolled outcomes, each with its own significance for OpenAI and its employees. While the world watched the drama unfold online, no one would have guessed the outcome of the events that transpired over the course of those days: the firing of Sam Altman… the quitting of OpenAI president Greg Brockman and the threatened resignations of 700 OpenAI employees… the requested resignation of the board… the hiring of Brockman and Altman as new Microsoft employees… and the final decision to reinstate Altman as CEO of OpenAI with new board members in tow – all in the course of five days.

These events at OpenAI and Microsoft signal some unprecedented implications for the AI community and its researchers – and, more importantly, far-reaching impacts on the society for which AI is built.

So what does this all mean – when a nonprofit board with no financial incentives is unable to effectively oust an executive over legitimate safety concerns? When a structure that was put in place to mitigate the risk of unilateral control of advanced AI was essentially usurped by the very capitalistic forces it was designed to prevent?

We welcome Christoph Schuhmann, co-founder of LAION (Large-scale Artificial Intelligence Open Network), a nonprofit organization with a large community of volunteers, data scientists, researchers, and practitioners developing applications in this field, to weigh in on the events of November 17–21, 2023 and what they mean for the AI community.

Transcript

Hessie Jones

So November 17th to the 20th is actually a time that will be forever embedded in my memory. I was watching on Twitter, I was watching the news, and from the sidelines, reading all the media reports on the weekend of November 17th, it seemed to be this irrepressible set of events that unfolded that weekend and in the coming days. Nobody would have guessed the outcome that transpired during that period: from the firing of Sam Altman, who was the CEO of OpenAI; the quitting of Greg Brockman, who was the President of OpenAI; the threatened resignations from 700 employees of OpenAI; through the requested resignation of the board; to the hiring of Brockman and Altman as Microsoft's new employees; to the final decision to reinstate Altman as CEO of OpenAI with new board members – all of this happened in the course of five days. Hi everyone, my name is Hessie Jones, and welcome to Tech Uncensored. These events with OpenAI and Microsoft signal some unprecedented implications for the AI community, its researchers, and more importantly, far-reaching impacts for the society that AI is being built for. So let's talk about the economics. Microsoft owns 49% of OpenAI. They poured in more than $10 billion since 2019, with a reported investment of over $13 billion. So now the valuation of OpenAI is somewhere around $29 billion, but Microsoft has no board seats. In 2019, when Microsoft invested in OpenAI, it was a nonprofit corporation at the time, and the company's status was changed to for-profit when Microsoft invested their money. What remained of the nonprofit entity was a governance layer, where the board was charged with making decisions that impacted broader humanity. This nonprofit group had no shares in the company; they had no financial interest in OpenAI. And this was the group that ousted Sam Altman.
So the structure was actually designed to enable OpenAI to raise the dollars it needed to build AGI while at the same time preventing capitalistic forces from controlling AGI. And this was essentially, according to reports, a ticking time bomb, because the minute that Microsoft put $10 billion into OpenAI this past January, things would have gone awry regardless. So what does this all mean? When a nonprofit with no financial incentives is unable to effectively oust an executive because of safety concerns, and when the structure that was put into place to mitigate the risk of unilateral control of advanced AI was essentially usurped by the same capitalistic forces – AKA Microsoft – that this structure was designed to prevent, something is terribly wrong. I've spoken to many people within the AI research community, mainly from open source and academia, to weigh in on these events, and I'm happy today to introduce Christoph Schuhmann, who is the co-founder of LAION, the Large-scale Artificial Intelligence Open Network. They are a nonprofit organization with a large community of volunteers, data scientists, researchers, and practitioners developing applications in the field of large language models. Welcome, Christoph.

 

Christoph Schuhmann

Hello Hessie, I’m so happy to be here. Really.

Hessie Jones

Thank you. So first tell us a little bit about Laion.

 

Christoph Schuhmann

Yeah, LAION is a grassroots community with many, many people who share this dream of making top AI open source and accessible for everyone. I remember when this community was founded, I already followed it, and I was so excited about what they had been doing, and I would never have dreamed of being a part of this AI development community. But for some reason it happened – these communities on Discord where you have these groups of scientists and students and hackers discussing: how can we recreate what OpenAI is doing, or maybe surpass them in some areas where they are not pushing the boundaries? And yeah, this is such a magical place, and I happened to be a part of it at a time when it was easy to become an integral part of this community. It's still easy – everyone is needed. Everyone who is watching this and wants to be a part of the truly open AI revolution is happily invited to come over to our Discord server. And yeah, I mean, we cannot compete in every domain with the big tech companies, because we simply do not have the resources, but it's surprising that we can compete with, or maybe complement, what they are doing in some ways – surprisingly well – simply because so many thousands of people, from a Google developer to a high school student to a Stanford professor, all share this dream, without hierarchy, being in this community, helping each other. And this is a magical place.

 

Hessie Jones

Can I ask, from that perspective – you said there are thousands of people. LAION actually has a page on Wikipedia, which means you are obviously a force to be reckoned with. So in what context are we talking: how many volunteers, in general? How many models have you been developing? What kind of activity has LAION been involved in?

 

Christoph Schuhmann

Oh yeah, we have this Discord server, and technically we have more than 20,000 members, but in practice it's maybe one to two hundred people who are regularly contributing. On paper we are a nonprofit, but this core nonprofit is really just a mailbox and a small bank account. We don't have many resources in the bank account. But the people who come are affiliated with some university or some supercomputing lab or whatever, and then they don't care so much about bureaucracy. It's more like: oh yeah, we can maybe get some GPU hours from this company, or from this supercomputer there, because a buddy of mine has a grant there and they have access to 500,000 GPU hours. So, OK, let's figure out how we can find whoever is competent to run it, and get them there. And so the connection between resources – talent and GPUs and storage and ideas – is really quick, not much bureaucracy. Not like in a university, where you have to write an application and talk to your professor and your supervisor. It's more like: I woke up, I have an idea, I'm talking to some random people on Discord, and they think, oh yeah, it's a great idea. I just invite people to join. And maybe I start with a proof of concept, or other people start with a proof of concept, and then it becomes alive, because the community is there to basically push it forward.

 

Hessie Jones

That's great. Was there a recent article – was it Fortune that named you one of the top 50 AI companies? What was that?

 

Christoph Schuhmann

It's crazy, right? Fortune magazine made a list of the top 50 AI companies that are driving progress in AI at the moment. They reached out to us and included us – and I'm a high school teacher. I'm not making any money with this. I'm doing it really because I like to do it. And I have to say, in Germany I'm a tenured high school teacher, and my salary is not super good, but it's also not bad. So I'm pretty much carefree. I have enough money, I'm happy, I have two wonderful kids, and I basically devote 90% of my free time to this, because I think it really matters to democratize this AI.

 

Hessie Jones

Absolutely. OK, so let's go back to the events of November 17th. I guess it was over a week ago now. What were you thinking when all this happened? I'm sure the community was reeling. So at the end of the weekend, when Sam Altman was put back in control of OpenAI, from your perspective, what was going to change there?

 

Christoph Schuhmann

Oh yeah, if you ask me honestly, in the first moment I was thinking: what the ****? But then, after thinking about it, I was thinking, yeah, I mean, it will continue the way it was going previously, with some small cosmetics. So basically, I didn't know Sam Altman and Greg Brockman when I started to read more about what's going on in the AI world, like five, six, seven years ago. These people had been like idols to me – I looked up to them because they were starting OpenAI and they were so cool, and I was just a high school teacher. Now, I must say, I was a little bit disappointed when they went for-profit with OpenAI, but I understood the reasons behind it. And right now, at this moment, I think OpenAI and Sam Altman and Greg Brockman will have the confidence – the people are behind them, the money is behind them, the employees are behind them. And they will probably feel a little bit scared by the critics constantly looking at them now more carefully. So they will pretend to be a little bit more safety oriented, and they will behave a little bit more like politicians, thinking about every word they say, giving no vector of attack for the critics. But in the long term, they will just continue, because they will feel validated in whatever they have been doing. And honestly, the moment OpenAI signed those papers saying they would basically give away the research results to Microsoft, it was clear that the nonprofit would not have anything important to say, because at any point everyone could just say goodbye and move over to Microsoft. It was so clear.

 

Hessie Jones

So you're saying that no matter what structure was originally put in place so that Microsoft would not be seen to have control, or would not influence the trajectory of OpenAI, that was not the case at the end of the day. So what's your opinion on the new members of the board that they installed? I mean, there's still going to be a nonprofit side that presumably is going to keep the for-profit side accountable, you would think. What do you think?

 

Christoph Schuhmann

I don't know the people on the board right now, and there's a very high likelihood that all of them are just nice people. I don't know. I am an optimist by nature. My critics sometimes say, oh yeah, Christoph is so naive – the people who consider themselves existential-risk effective altruists, or the doomers, or the safety people, they say, oh yeah, Christoph, he's so naive. But actually, it all boils down, in the end, to your core beliefs about human nature. If you believe that the world is a dangerous place, and that mistakes have more severe negative consequences than opportunities to learn, to get feedback and to improve, then the consequence is that you should avoid mistakes as your highest priority. And this leads to a pretty gloomy world view, where control is the main objective. But if you think, yeah, maybe there are some really bad mistakes, but overall the majority of people are positive and the majority of mistakes are feedback and chances to improve yourself, then in the end – with AI, or parenting, or whatever – things will probably be good, if you're open minded and don't give up. And if you have this world view, like me for example, then control is not the main objective; the main objective is how to make sure that people can flourish by pursuing their own objectives. Because you don't think that the majority of people are psychopaths – you think that the majority of people are kind of decent. Maybe not angels, but kind of decent, like a swarm of more or less mildly positive agents. So you want them to be resilient, and to empower people. So my perspective on this is: instead of thinking too much about regulation and putting boundaries around OpenAI and Google and all the potential sources of risk,
We should think much more about how to empower people – people like you, like me, like the researchers and the scientific community, the viewers here right now. Maybe you're a high school student, maybe you're a tech reporter or a housewife, I don't know, maybe an AI researcher. All of you should become more powerful, in my opinion, more resilient. Because people are talking about risks, and the risks that we can foresee right now are not the risks that will be there in one year. Trying to extinguish all risks and put boundaries and regulations around this is a flawed approach, in my opinion. We should think: how can we make you, the viewer, so powerful, so resilient, with cool AI superpowers, basically, that you will be capable of mitigating the risks of tomorrow that we cannot even foresee right now? And this is basically the core belief behind LAION. I would say that we try to replicate what these big tech companies are doing – and what, in my opinion, national laboratories should be doing; they should have been leading this all along, but they have not been. So we try to replicate what these top research institutions are doing and make it accessible, so that you and me and everyone out there will be empowered to be resilient to the risks of the future.

 

Hessie Jones

So right now, could you talk about access? You've talked about the fact that LAION exists without funding – LAION develops the things it's doing without the same level of resources, GPUs, and top talent that Microsoft has, and yet you are competing almost on the same level, according to Fortune magazine. So my question to you – and this is something I think is also one of the fallouts of what's happened with OpenAI in the last week or so – is about the emergence of the communities you spoke about earlier. There are the effective altruists, and there are the AI optimists. And when we talk about the doomers, I don't know if we can put them in the same light as the effective altruists. But there is this belief within some of the big tech communities that they need to protect their investments, and so that could have been a veiled response of: hey, I want to pause AI, because we're the only ones who can provide the right technology to save humanity from this existential threat. What's your view on this? Because you're considered an AI optimist, and you believe that we don't have to pause AI, that we can have regulation in lockstep with how we develop responsibly.

 

Christoph Schuhmann

Yeah, I think that slowing down is a really bad idea. Why? Because – and this is so obvious – if one participant in this race of breakthroughs slows down, others won't. If the US banned AI research, and Germany and the EU did too, do you honestly believe that China would slow down, or Russia, or whoever? They wouldn't. It's obvious that they wouldn't. Do we want China and Russia to lead this AI breakthrough race? I think no. I think if you are really concerned about what could go wrong, and this is the main thing you're focusing on, then it's like a fight-or-flight response that focuses you automatically on thinking: oh, how can we avoid mistakes? And avoiding mistakes is really, really bad for long-term research and for making well-founded long-term decisions. I think what we need is to think about how we, the Western societies – governments, citizens, everyone – can be good role models. Instead of leading with regulations and criticizing OpenAI and pointing fingers at whoever said anything or researched anything, we should look at ourselves and think, when we wake up: how can we be good role models? Because some people say, oh yeah, Christoph is so naive – if you open source everything, then someone will come up with some bad, evil application of it, and that will be bad for all of us. But in the end, almost no one thinks about the risk of holding back AI progress. I mean, think about it: if there were a cure for cancer that could be researched with the help of AI – with open source AI, for example – maybe three years earlier than if you banned it or regulated it or whatever, then what would be the cost? What is the worth of those people who cannot be saved in those three years?
And this sounds a little bit unfair – it sounds a little bit like an ethical sledgehammer, or whatever. But I have to say there is some truth to it. I don't want to die of cancer. My sister died of cancer when she was 46 – I'm 41 now. She left behind a seven-year-old daughter. And my mom almost died of cancer; she just survived chemotherapy. I'm now 41, and I'm still feeling young – I'm still feeling like I did when I was 30 or 25 – but I'm seeing that I'm getting a little bit older every year. And I don't want to die of cancer. I want to reach maybe 100 years in a healthy way. And this should not be achieved by investing some money into some tech company and then maybe having some super-expensive injections. No, it should be achieved by progressing this technology to a level where it will be so cheap that everyone could live up to 100 or 120 years in a good, decent way. And I'm damn serious about this. I think everyone should have the right to benefit from this. So I don't want to live in a future where I can be 120 years old, living healthily and happily,

and making science fiction movies, only because I'm a billionaire and I have this luxury. I want a future where every teacher and every cashier from the supermarket will have these superpowers – where it will be free for everyone. Imagine a science fiction movie from the future. The dystopian future would be one where these AI, Matrix, OASIS powers are in the hands of AWS and Microsoft, and they give it to us so that we become addicted to holodecks and computer games, virtual worlds and AI companions, and everything has a hidden objective to generate revenue for these big companies. But the utopian version would be one where the power to have your own holodeck, your own virtual assistants, your own companions, your own teachers, runs locally, and it is not aligned to what the OpenAI safety people think is good, but to what you think is good. And then everyone could choose: do I want a virtual assistant that gives me superpowers and helps me be a kinder, more empathetic human being, but with a hidden objective to, in the end, make me buy Amazon products? Or do I want the same thing – with maybe a little bit more effort to maintain it, running locally – without any hidden objective, just for me? And in the end I could choose: do I want the holodeck experience where I will just be playing every moment, because I have so much fun? Or do I want the holodeck experience where I have lots of fun, but in the end the AI will say: hey Christoph, you haven't spent time with your kid, you should spend a little bit more time with your kid – like now, he's coming right now,

he's running around here. I want AI assistants to be like my buddy, like a good friend who takes care of me, aligned with my values, locally on my hardware. And this should be a human right. This should be something that the UN is doing, not OpenAI. And I'm not scrutinizing them – I'm not saying OpenAI or Microsoft is bad. I'm saying they are doing whatever makes sense for them. I think we are to blame, because we are not pushing our governments to be good role models. We should push our governments to make sure we have something like an AI United Nations, or a CERN for AI, or whatever. We would need this in the US, in the EU, maybe in India – even in China, I don't care. We should have several big governmental institutions that basically provide these foundation models. We should have this as a national service.

 

Hessie Jones

Yes, I would agree with that. So here's the thing: what you're advocating for is the decentralization of AI, so that it's not in the hands of a few for-profit companies whose motivations are very different from trying to make it safe, make it accessible, make it available for everyone in the world. That's a very, very different goal, and I think that's the reason why government is needed. So can you talk about closed versus open, and the real risks of, let's say, a Microsoft and an OpenAI – which is almost an oxymoron, because it's not open. What is the risk to the world if we continue to allow closed systems that a lot of people still depend on?

 

Christoph Schuhmann

So the big danger, I think, at the moment is the danger of centralization – centralization of power over knowledge and over this AI technology. Because at the moment we only have a few players, like Microsoft and Google, and maybe Tencent or Amazon, and maybe even the Chinese government, which is really pushing hard at the moment to catch up and to train these foundation models. In the end, if there are only five or six players providing these services, all data will go there, and even if they say, yeah, we have the best safety team, and everything will be private on Azure cloud in a container or whatever – in the end, you're dependent on the goodwill of five players. And there's another problem: if OpenAI or DeepMind or whoever says, yeah, we have the best safety team and therefore we have the safest models, then you have to trust that this for-profit company hires the best safety researchers. And do you really think their team is better than all the safety experts from the scientific community? I mean, if you are someone who believes that human nature is in general more bad, and that risks are more expensive – that mistakes are more expensive than all the benefits of learning from failure – then of course you should be scared and should say, yeah, we need the best experts, and they have to guardrail it and put it into a box and control it, and we need to prevent every misbehavior. OK, but this will ultimately lead to the surveillance state. There's one example. I really, really like Geoffrey Hinton. I remember, five, six, seven years ago, I always watched his lectures when a new one came out on YouTube somewhere. But recently he resigned from Google and began to warn about the dangers of AI. And in one lecture – I think it was an Oxford lecture or a Royal Society lecture, something like this.
He said that misinformation is really super dangerous for our democracies, and that there's a big risk in people pretending to be other people – having a fake voice, or a fake image of a politician: a deepfake. And therefore his logical conclusion was that everyone who puts generated content out and pretends it's real should be punished the same way someone would be punished for using or producing counterfeit money – hard prison sentences. And I was thinking: are you serious? There will be so many students, high school kids, who will do this. Do you want to build a surveillance Internet police to hunt them all down and bring them all to justice? This cannot be real – I wasn't understanding it. And then, during the Q&A session of this lecture, he basically said something like: he was disillusioned by capitalism, and deep in his heart he was thinking that in the end we would need something like socialism – a government that controls and sets clear rules – and he was not convinced that capitalism was secure. And I was thinking: oh my God, he's seriously saying that he is basically proposing socialism – like Star Trek socialism, where you have a central government basically banning the individual freedom to be resilient and capable. And I was thinking: yeah, this makes sense if you really hold this political standpoint and this view of human nature. OK, then the logical consequence could be that we need the government to step in and control the Internet. But this is not what I want to have in my life. I want governments to lead the research, to empower the citizens. I don't want governments to lead
AI by regulating the citizens and the companies. They should be free to do whatever is OK under standard law – like, don't defraud others. I mean, if a kid calls someone and says, oh yeah, I'm Santa Claus, I'm your husband, blah blah blah, and they're making a joke – it's basically a kid making a joke. It shouldn't lead to a prison sentence. If someone does it with the intent to rip off your money, or to deceive you in a serious way, to put you to harm, that should be punished the same way it would have been without AI. I don't think we need much more regulation. We need good role models to empower the general public to become resilient.

 

Hessie Jones

Do you think, with what happened in the last week, there is more of a risk of the centralization of power, because of what Microsoft was successfully able to do in reinstalling Altman back into his seat?

 

Christoph Schuhmann

Yes and no. Yes, from the perspective that OpenAI will now really push hard to put out the best products and potentially be the first to develop AGI – whatever that means. On the other hand, I think this was a wake-up call for many governmental institutions and other companies and critics. And now the pressure to create alternatives will be stronger, and I have some hope that something good will grow out of this.

 

Hessie Jones

Thank you, Christoph. I could go on for hours, because this is a big topic. As you know, developing AGI is scary and exciting at the same time, and everybody's watching it and trying to be careful – trying to make sure that the next Terminator doesn't exist in our near future. I know that LAION is very committed to keeping things open and transparent, so that doesn't happen. So thank you.

 

Christoph Schuhmann

Let me maybe put a statement at the end. If you're watching this about OpenAI, and maybe you're scared or hopeful or whatever, and you're thinking, oh yeah, I'm just a programmer, or maybe I'm just a housewife or a teacher or a high school student and I cannot make a difference – I have to tell you, it's not true. You can make a difference. I know someone who got into AI like nine months ago, and he was just watching from the outside, thinking: oh yeah, what LAION is doing, and what the other open source people are doing, is so cool, I really should get into it. And now he has built his own online Discord community, and he's in talks to raise money. He's actually the guy behind the OpenOrca effort. The OpenOrca effort is really famous within the ML community, because Orca was a project from Microsoft last summer where they used data from GPT-4 to distill it into a small 13-billion-parameter model – much smaller than GPT-4 – and make it much more powerful than it had been before. So it was not as powerful as GPT-4, but it was a lot more powerful than before. And this was closed-sourced by Microsoft. Then this guy was reading about it, and he was thinking: oh yeah, there are these communities like LAION and EleutherAI and whatever. And he started to work on this, and he built this data set, and he built a community around it. And I know there are many people in this big LLM community who are thinking about how we can take existing models and merge them together, how we can repurpose whatever we have in a way that OpenAI would never do. Because for OpenAI, it's easy to just throw 20,000 GPUs at it. It's easy – you have the GPUs, you have the money, you have the developers. You don't need a really new, creative way.
But I'm convinced there are many, many ways to make breakthroughs with little resources – maybe a few GPUs, or maybe a few talented people. And people can make these breakthroughs, and the only reason OpenAI has not made them is because they are so busy pursuing the path of throwing more resources at it. For them, the path forward is clear: just scale it up, hire the best minds, have them work on what is obvious. But there are many, many paths and many avenues that are not obvious to OpenAI, or to me, that will probably, in three to four years from now, lead to much more efficient AI. And this is where everyone can make a difference, even if you don't have a university degree. Just go there, read tutorials, come to the Discord communities, make a little positive difference. Start with yourself, and don't point fingers.

 

Hessie Jones

Thank you, that's great. Thank you, Christoph, because I think that's what we need. If humanity is going to benefit from this, we all need access, and we all need a voice in where this technology goes. It's moving super fast – from 3G to 5G – and we already don't all have access. But if the technology is built with humans in mind, then hopefully there is hope for all of us. So thank you. That's all we have time for today. Thank you everyone for joining us, and thank you, Christoph, so much for your insight into the events of the last week or so. For our audience: if you have any questions about future topics, please email us at communications@altitudeaccelerator.ca. You can also find Tech Uncensored wherever you get your podcasts. In the meantime, everyone, until next time: have fun and be safe.

Host Information
Hessie Jones

Hessie Jones is an Author, Strategist, Investor and Data Privacy Practitioner, advocating for human-centred AI, education and the ethical distribution of AI in this era of transformation. 

She currently serves as the Innovations Manager at Altitude Accelerator, where she provides the necessary support for Altitude Accelerator's programs, including Incubator and Investor Readiness. She acts as the liaison among key stakeholders to provide operational support and ultimately drive founder success.

LinkedIn

You can also listen to this podcast on Transistor.

Please subscribe to our weekly LinkedIn Live newsletters.