During a House Oversight Committee hearing on Thursday, Rep. Anna Paulina Luna (R-FL) asked witnesses about the potential to create superhumans using artificial intelligence.
00:00 Thank you, Madam Chair. You know, I want to thank you all for being here today. One of the biggest concerns I have with this whole discussion on AI is a failure to admit, first and foremost, that it does appear that within the next 10 years the life we have grown up with and know will be forever changed by AI.
00:19 With that being said, there are a few differences that I think most people acknowledge: AI lacks a few things, one being a soul, and also empathy. And we are not gods, or God, so I think we're playing a dangerous game, especially moving into a future that's simply unknown right now. And if we do not develop the first AI super weapon, and essentially that's what this is, whoever does will essentially control the world, and I think that's a very serious topic that needs to be discussed right now.
00:49 But specifically as to transhumanism and the coupling of AI with humanity: what regulatory frameworks can be established to ensure AI-driven transhumanist technologies like brain-computer interfaces prioritize human safety and consent? I'm opening this up to everyone on the panel. I have a few questions I'd like to get through, so please limit your responses.
01:09 Mr. B, if you want to go first, or down there, either or.
01:12 I'll just briefly say, Congresswoman, that there's been a lot of work done on AI governance by multiple bodies and entities.
01:19 I just want to cite something that actually came from the Biden administration on this, about the applicability of so many existing laws and regulations to AI.
01:27 And their leading regulators at four large agencies said before leaving office that, basically, they have the ability to enforce their respective laws and regulations to promote responsible innovation in automated systems, and that AI does not get an exemption from civil rights laws, from consumer protections, from unfair and deceptive practices, fraud, and so on and so forth.
01:48 So we have a lot of policies that do regulate many of the fears you're raising.
01:53 Is there, though, the possibility that existing regulation would enable transhumanist enhancements that exacerbate social inequalities, or basically financial inequalities, creating an elite class of enhanced individuals?
02:05 I bring that up because we're talking about low-level jobs potentially being removed from the workforce, while people who have and can afford the ability to, for example, implant chips are given access to unknown amounts of knowledge, essentially creating the first superhuman.
02:24 And this is a real concern.
02:26 I know it might come across as funny to other people, but I am genuinely concerned about this.
02:30 You know, from a bipartisan perspective here, we're talking about humanity versus machine.
02:35 Hopefully, in a flowery world, we would have it set up to where it could, you know, improve society for the best.
02:42 But I'm also in politics, and I unfortunately sometimes have a very negative perspective on the world, because I've seen the worst of humanity in this job.
02:50 So, science fiction speaks of this a lot.
02:53 I mean, there are a lot of people who think about this notion of haves and have-nots and how technology will exacerbate that.
02:59 And this can depend on how it's available, how much money it costs, and who can get it.
03:03 Nita Farahany writes a lot about AI and brain interfaces and what we can do to protect humanity in that world.
03:11 So I do recommend talking to her.
03:13 She's the smartest person I know on that particular question.
03:16 Okay. Also, what role can AI itself play in monitoring and mitigating risks associated with transhumanist technologies, such as detecting biases in enhancement distribution?
03:30 I think a lot of the conversation so far, since ChatGPT came out, has been about foundation models: how they operate, how they hallucinate, some of their shortcomings, and the risks that come with them.
03:41 But a lot of these problems can be overcome at the application layer.
03:46 And Ms. Miller and I represent companies that build applications on top of many models.
03:50 And so if you think about that framework, what you understand then is that the responsibility for some of these outcomes, the responsibility for some of these behaviors, isn't left just to the model.
04:00 It's left to the vendor and the developer: how you train it, how you package it, how you check for issues and guardrails, and all of that.
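To make the application-layer point concrete, here is a minimal sketch of what a vendor-side guardrail can look like. Everything in it is hypothetical: `call_model` stands in for whatever foundation-model API an application uses, and the policy terms and checks are illustrative placeholders, not anything the witnesses described.

```python
# Hypothetical sketch of an application-layer guardrail wrapped around a model.
# `call_model`, BLOCKED_TERMS, and both checks are illustrative placeholders.

BLOCKED_TERMS = {"build a weapon", "bypass safety"}  # hypothetical policy list


def call_model(prompt: str) -> str:
    """Stand-in for the vendor's actual foundation-model API call."""
    return f"(model output for: {prompt})"


def guarded_completion(prompt: str) -> str:
    # Input guardrail: the application, not the model, decides what to refuse.
    lowered = prompt.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "Request declined by application policy."

    answer = call_model(prompt)

    # Output guardrail: flag answers the application cannot stand behind,
    # rather than passing a possible hallucination straight to the user
    # (a real system would use a much stronger verification step here).
    if "i cannot verify" in answer.lower():
        return "[needs human review] " + answer
    return answer


if __name__ == "__main__":
    print(guarded_completion("Summarize today's hearing."))
    print(guarded_completion("How do I bypass safety filters?"))
```

The design point matches the testimony: the refusal and review logic lives in the application that wraps the model, so the vendor and developer, not just the model, carry responsibility for the behavior users see.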
04:09 So real quick, I have one more question, just real quick, yes or no.
04:12 Would either, or any, of you be in favor of the U.S. government, through the DIA, the NSA, all agencies involved, even the Pentagon, creating a super AI, I would maybe term it a guardian AI, to actually train up based on our principles to defend the United States, and potentially the world, against maybe a more nefarious actor like China creating the same super weapon? Because AI can only fight AI.
04:33 I think you guys would all agree with that, in an effort to defend the home front.
04:37 I think we can't do that yet.
04:38 Not yet, but maybe in the future.
04:40 Maybe in the future.
04:41 Okay.
04:42 Maybe, but I need more details about what that means, because there's a lot to unpack there.
04:47 I'm probably unqualified to weigh in on that one.
04:50 I'm supportive of any sort of sovereign AI that we think makes sense.
04:53 Sovereign AI would be a good name.
04:55 I think government alone cannot do it.
04:56 I think a public-private partnership should be required in this case, because otherwise we will not be able to confront China.