Philly Sheriff's Office takes down fake AI-generated 'news' headlines – NBC Philadelphia

The campaign team behind Philadelphia’s embattled sheriff acknowledged Monday that a series of positive “news” stories posted to its site were generated by ChatGPT.
Sheriff Rochelle Bilal’s campaign removed more than 30 stories created by a consultant using the generative AI chatbot. The move came after a Philadelphia Inquirer story on Monday reported that local news outlets – including NBC10 – could not find the stories in their archives.
The Bilal story list, which the site dubbed her “Record of Accomplishments,” had ended with a disclaimer — which the Inquirer called new — stating that the site “makes no representations or warranties of any kind” about the accuracy of the information.
The list, which included purported publication dates, attributed four news stories to the Inquirer, none of which are in the paper’s archives, spokesperson Evan Benn said. The others were attributed to three local broadcast stations — NBC10, WHYY, and CBS 3.
The fake headlines included multiple stories the website claimed NBC10 ran regarding Bilal winning the Philadelphia Sheriff’s race, the sheriff’s department handing out free gun locks and the suspension of evictions during the COVID pandemic. 
While NBC10 published multiple stories on evictions being suspended, none of them mentioned the Philadelphia Sheriff’s Office, contrary to what the headlines on the office’s website claimed.
NBC10 also posted a video on the sheriff’s office handing out free gun locks. However, that video was posted in 2016, before Bilal took office, while the fake headline included Bilal’s name. 
NBC10 also posted a video on Bilal winning the sheriff’s race though it had a different headline from the one that the sheriff’s office website used. 
Some, including a fired whistleblower in Bilal’s office, fear such misinformation could confuse voters and contribute to ongoing mistrust and threats to democracy.
“I have grave concerns about that,” said Brett Mandel, who briefly served as her finance chief in 2020 and spoke before the campaign issued the statement.
“I think we have seen at the local and national level, not only a disregard for truth and the institutions we have thought of as being the gatekeepers to truth,” he said, “but I think we have eroded all trust in this area.”
Mandel filed one of several whistleblower suits lodged against the office. He alleged he was fired for raising concerns about office finances. Bilal has been criticized during her tenure over office spending, campaign finance reports, the reported loss of hundreds of weapons and other issues.
NBC10 reached out to the Philadelphia Sheriff’s Office about the headline controversy. The office sent us the following statement Tuesday morning.
Mike Nellis, founder of the AI campaign tool Quiller, called the campaign consultant’s use of AI “completely irresponsible.”
“It’s unethical,” he said. “It’s straight up lying.”
But he said OpenAI is responsible for enforcing its policies, which don’t allow people to share output from its products in order to scam or mislead people.
OpenAI also does not allow people to use its systems to build applications for political campaigning or lobbying, though there’s no evidence that happened in this instance. OpenAI didn’t immediately respond to a request for comment.
Nellis said local, state and federal regulation of AI tools in politics is also needed as the technology advances. Though bipartisan discussions in Congress have stressed the need for such legislation, no federal law has passed yet.
Large language models like OpenAI’s ChatGPT work by repeatedly predicting the most plausible next word in a sentence. That makes them good at completing challenging prompts in seconds, but it also causes them to make frequent errors known as hallucinations.
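The prediction loop described above can be illustrated with a toy sketch. This is not how ChatGPT actually works internally — real models use large neural networks over tokens — but a simple word-frequency table (a bigram model, with a made-up corpus) shows the same idea of repeatedly choosing the most plausible next word:

```python
# A minimal sketch of "predict the most plausible next word," using a toy
# bigram table built from a tiny invented corpus. Real LLMs use neural
# networks, but the repeated-prediction loop is the same in spirit.
from collections import Counter, defaultdict

corpus = "the sheriff won the race and the sheriff kept the office".split()

# Count how often each word follows each other word in the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_plausible_next(word):
    """Return the word that most often followed `word` in the corpus."""
    return following[word].most_common(1)[0][0]

# Repeatedly predict the next word, the way an LLM extends a prompt.
text = ["the"]
for _ in range(4):
    text.append(most_plausible_next(text[-1]))
print(" ".join(text))
```

Because the model only ever picks the statistically likely continuation, it produces fluent-sounding output with no notion of whether that output is true — which is why hallucinated "facts," like invented headlines, read so plausibly.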
Many Americans have started using these tools to write work emails, website copy and other documents more quickly. But that can lead to trouble if they don’t prioritize accuracy or carefully fact-check the material.
Two lawyers had to apologize to a judge in Manhattan federal court last year, for example, after they used ChatGPT to hunt for legal precedents and didn’t immediately notice that the system made some up.

