NYC’s AI Chatbot Has Been Telling Businesses to Break the Law
New York City's AI chatbot was created to help small business owners, but it's been dispensing concerning advice, AP reports.
For example, many small business owners have been given inaccurate information about local policies or been encouraged to break the law. However, the city is not removing the chatbot from its official website.
Instead, it has provided a disclaimer stating that the chatbot may "occasionally produce incorrect, harmful or biased" information.
Critics say that the situation highlights the dangers of AI being used by governments without proper guardrails.
"They’re rolling out software that is unproven without oversight. It’s clear they have no intention of doing what’s responsible," Julia Stoyanovich, computer science professor and director of the Center for Responsible AI at New York University, via statement.
"There’s a different level of trust that’s given to government. Public officials need to consider what kind of damage they can do if someone was to follow this advice and get themselves in trouble," Jevin West, a professor at the University of Washington and co-founder of the Center for an Informed Public, via statement.
Microsoft, which powers the chatbot, said it is working with the city "to improve the service and ensure the outputs are accurate and grounded on the city’s official documentation."
On April 2, Mayor Eric Adams said that letting users find issues with the chatbot is just part of sorting out the new technology.
"Anyone that knows technology knows this is how it’s done. Only those who are fearful sit down and say, ‘Oh, it is not working the way we want, now we have to run away from it all together.’ I don’t live that way," Mayor Eric Adams, via statement.
Stoyanovich referred to Adams' approach as "reckless and irresponsible," AP reports.