AI companies like OpenAI are shifting responsibility to users, raising tough questions about risk, regulation, and accountability, according to legal and data privacy academic Simon Chesterman.
Transcript
00:00How many of you have accepted terms and conditions online that you didn't read?
00:06We have no choice.
00:07I've done books on data protection. I never read the terms and conditions.
00:10We have no choice.
00:10And so, following on from what Maymay was saying, there is this question, who should be responsible?
00:15The developer? The deployer?
00:17And then both those groups have decided, actually, it would be great if the users could be responsible.
00:22And so that's why, if you use ChatGPT, it now has a little disclaimer saying ChatGPT makes mistakes,
00:26which is lawyer speak for, don't blame us if things go wrong, that's on you.
00:31And so one of the risks is that the lawyers will say, okay, you manage risk by pushing it onto the user
00:37and saying, if you rely on this system that we know you're going to rely on, don't blame us.
00:43And it's only in regulated areas like banks, like medicine, that I think you will see regulators stepping in
00:50because elsewhere it's going to be very, very hard to micromanage those kinds of contracts.
