During a House Oversight Committee hearing on Thursday, Rep. Dave Min (D-CA) spoke about the 10-year moratorium on the enforcement of AI regulations by state governments in the Republican budget.
I appreciate the opportunity to discuss artificial intelligence in the federal government. Of course, AI has been around for a while, but it's recently hit, I think we all agree, a tipping point of exponential and profound growth. Innovation and advancement are now being measured in days, or even minutes or seconds, as opposed to years. And this rapid development of AI is poised to radically transform our society and our economy, and that's going to create a lot of great benefits, I believe, but also a lot of challenges, including to our workforce, our national security, data privacy, and a whole host of other issues. And these benefits and challenges of AI are also important to consider as we contemplate bringing more AI into our federal government.
And one important point, and I know it's been made, but I just want to reemphasize this: AI is only as good as the inputs it receives, the inputs it's trained on, the people who train it, and the objectives it's trained to try to achieve. And that's something that I do think is being lost right now. In the rush to adopt AI, I think we have to think about what some of the goals are. A lot of mention has been made of the rash and reckless actions taken by the Trump administration, specifically Elon Musk and DOGE, around AI. And again, I ask: what are we looking at as far as goals? I think we want to make sure that the missions of our different agencies, the tasks they are meant to accomplish, are what's in mind, and not just simply cutting for the sake of cutting.
And so, for example, when we think about AI: DOGE may be using AI right now to surveil communications at federal agencies and to inform potential cuts, including to staff and programs. They have stated that they want to use generative AI to automate much of the work previously done by civil servants. But there's a lot of potential for mistakes there, as well as the data privacy aspects that have been touched on in previous questions. And on data privacy, we don't actually know at this point what Elon Musk and DOGE have been able to scrape. They certainly have had access to a lot of our data, illegally so, I would add. And what have they done with that? We've been told that they've downloaded it onto private servers. This committee has not done any oversight on this question, but I think it's an important one. As we think about AI, are Grok or Elon Musk's other AI models being trained on data that is only available because of his position at DOGE? This creates a lot of national security risks as well.
And so, I guess my first question is to Commissioner Schneier. From a national security perspective, what safeguards should we be thinking about implementing to make sure that we're protecting sensitive data while still promoting artificial intelligence design and innovation?
I mean, we have the safeguards. The regulations are in place that protect the data. We know how to do it. We have been doing it for years, for decades, and it's a matter of implementing them. The problem, as you said, is that what DOGE did was illegal. It was contrary to the laws. And when I think about using AI in organizations, it's a matter of bringing the technology in and using it in a way that makes sense. There's a lot of talk about using it to find fraud. That seems like a good use. I would want to add, I see Mr. Timmons left, but how about tax fraud? That is like the for-profit arm of the government, and we get a lot of bang for our buck looking for tax fraud. I think AI would be great as a first pass, but it's a matter of keeping the data in the organizations, with the laws that protect it. And I think Isaac Asimov wrote a book about this a long time ago, but something similar: we ought to be training AI, as we think about it, to follow basic rules and principles that we want it to follow, first and foremost, instead of just thinking about the end goals.
Now, again, I want to get back to the point of goals being important. When we look at Social Security, when we look at programs like Medicare and Medicaid, it's easy to simply cut costs. But if we are diminishing service, if we're just looking at cutting costs and not at the service being provided, or the efficiency of the service being provided, we end up with huge mistakes. At the same time, if you have a bunch of coders out there who don't have any context about the program or how it's been administered, you end up with huge problems.
As we saw, DOGE flew into the administration without even understanding the basics of how the Social Security Administration was coding its different entries in COBOL. And so in that review, they noticed entries that made it seem like the administration was paying benefits to individuals aged 150 or older. Trump and Musk actually made this claim, falsely. And again, it was because they relied on computer wizard geniuses like the 22-year-old coder named Big Balls instead of actual administration professionals who understood what was going on. Similarly with Medicare and Medicaid, the goal here should be to maximize health care in an efficient way, not simply to cut costs. The cheapest way to administer health care, of course, is just to let everyone die. That's cheaper than administering health care. And so we can't lose the forest for the trees here.
So I see I'm out of time. The last point I just want to make is about the 10-year moratorium on AI regulation, something that is outrageous, something that was slipped in in the dark of night. It's a shocking provision to include, so shocking that my colleague Marjorie Taylor Greene said she would vote against the entire big, beautiful bill if this provision wasn't