  • 6/4/2025

Transcript
00:00 If your so-called live news looks like a rerun of last year's viral video,
00:04 you're probably chatting with a bot that's winging it.
00:07 Not every digital assistant is Sherlock Holmes.
00:10 Sometimes, they're more like the friend who confidently gets every trivia answer wrong.
00:15 So, here's the scoop.
00:16 During the recent India-Pakistan conflict,
00:19 AI chatbots like Grok, ChatGPT, and Gemini
00:22 were supposed to help us separate fact from fiction.
00:25 Instead, they sometimes did the opposite.
00:28 Grok, for example, claimed an old Sudan airport video was a new missile strike in Pakistan.
00:34 Oops.
00:35 Why's this happening?
00:36 Tech companies have cut back on human fact-checkers, leaving bots to play referee.
00:40 But AI hallucinates.
00:42 Basically, it guesses when it doesn't know, and does so confidently.
00:45 That's like asking your dog for stock tips.
00:48 Experts warn over-relying on AI can actually make misinformation spread faster,
00:53 especially during crises or elections.
00:56 The fix?
00:56 Always double-check with trusted sources,
00:59 and don't let a chatbot be your only filter for truth.
01:02 Stay skeptical.
01:04 Your feed and your sanity will thank you.
