digdeeper

Backstory here: https://www.404media.co/ars-technica-pulls-article-with-ai-fabricated-quotes-about-ai-generated-article/

Personally I think this is a good response. I hope they stay true to it in the future.

This is an automated archive made by the Lemmit Bot.

The original was posted on /r/pcmasterrace by /u/Quegyboe on 2026-02-15 22:36:07+00:00.

This is an automated archive made by the Lemmit Bot.

The original was posted on /r/pcmasterrace by /u/Mental-Yogurtcloset8 on 2026-02-15 19:43:45+00:00.

A story about an AI-generated article itself contained fabricated, AI-generated quotes.

Archived version: https://archive.is/20260215215759/https://www.404media.co/ars-technica-pulls-article-with-ai-fabricated-quotes-about-ai-generated-article/

This is an automated archive made by the Lemmit Bot.

The original was posted on /r/pcmasterrace by /u/Bmacthecat on 2026-02-15 21:41:48+00:00.

This is an automated archive made by the Lemmit Bot.

The original was posted on /r/pcmasterrace by /u/de4thqu3st on 2026-02-15 20:55:47+00:00.

For the first time, speech has been decoupled from consequence. We now live alongside AI systems that converse knowledgeably and persuasively—deploying claims about the world, explanations, advice, encouragement, apologies, and promises—while bearing no vulnerability for what they say. Millions of people already rely on chatbots powered by large language models, and have integrated these synthetic interlocutors into their personal and professional lives. An LLM’s words shape our beliefs, decisions, and actions, yet no speaker stands behind them.

This dynamic is already familiar in everyday use. A chatbot gets something wrong. When corrected, it apologizes and changes its answer. When corrected again, it apologizes again—sometimes reversing its position entirely. What unsettles users is not just that the system lacks beliefs but that it keeps apologizing as if it had any. The words sound responsible, yet they are empty.

This interaction exposes the conditions that make it possible to hold one another to our words. When language that sounds intentional, personal, and binding can be produced at scale by a speaker who bears no consequence, the expectations listeners are entitled to hold of a speaker begin to erode. Promises lose force. Apologies become performative. Advice carries authority without liability. Over time, we are trained—quietly but pervasively—to accept words without ownership and meaning without accountability. When fluent speech without responsibility becomes normal, it does not merely change how language is produced; it changes what it means to be human.

This is not just a technical novelty but a shift in the moral structure of language. People have always used words to deceive, manipulate, and harm. What is new is the routine production of speech that carries the form of intention and commitment without any corresponding agent who can be held to account. This erodes the conditions of human dignity, and this shift is arriving faster than our capacity to understand it, outpacing the norms that ordinarily govern meaningful speech—personal, communal, organizational, and institutional.

Dating apps exploit you, dating profiles lie to you, and sex is basically something old people used to do. You might as well consider it: can AI help you find love?

For a handful of tech entrepreneurs and a few brave Londoners, the answer is “maybe”.

No, this is not a story about humans falling in love with sexy computer voices – and strictly speaking, AI dating of some variety has been around for a while. Most big platforms have integrated machine learning and some AI features into their offerings over the past few years.

But dreams of a robot-powered future – or perhaps just general dating malaise and a mounting loneliness crisis – have fuelled a new crop of startups that aim to use the possibilities of the technology differently.

Jasmine, 28, had been single for three years when she downloaded the AI-powered dating app Fate. With popular dating apps such as Hinge and Tinder, things were “repetitive”, she said: the same conversations over and over.

“I thought, why not sign up, try something different? It sounded quite cool using, you know, agentic AI, which is where the world is going now, isn’t it?”

Is there anything we can't outsource?

Whether you agree with the Guardian’s conclusions or not, the underlying issue they’re pointing at is broader than any one company: the steady collapse of ambient trust in our information systems.

The Guardian ran an editorial today warning that AI companies are shedding safety staff while accelerating deployment and profit seeking. The concern was not just about specific models or edge cases, but about something more structural. As AI systems scale, the mechanisms that let people trust what they see, hear, and read are not keeping up.

Here’s a small but telling technology-adjacent example that fits that warning almost perfectly.

Ryan Hall, Y’all, a popular online weather forecaster, recently introduced a manual verification system for his own videos. At the start of each real video, he bites into a specific piece of fruit. Viewers are told that if a video of “him” does not include the fruit, it may not be authentic.

This exists because deepfakes, voice cloning, and unauthorized reuploads have become common enough that platform verification, follower counts, and visual familiarity no longer reliably signal authenticity.

From a technology perspective, this is fascinating.

A human content creator has implemented a low-tech authentication protocol because the platforms hosting his content cannot reliably establish provenance. In effect, the fruit is a shared secret between creator and audience: a physical gesture standing in for the cryptographic signature that the platform does not provide.
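
To make the analogy concrete, here is a minimal sketch of what that missing signature could look like, assuming Python and the `cryptography` package. The keys, function names, and workflow are illustrative only, not anything Ryan Hall or any platform actually uses: the creator signs a hash of each upload once, and anyone holding the published public key can check it.

```python
# Illustrative sketch of creator-level content signing (Ed25519).
# Not a real platform API -- just the standard signature pattern
# the fruit gesture is standing in for.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Creator side: generate a keypair once, publish the public key
# somewhere viewers already trust (website, channel description, etc.).
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

def sign_video(video_bytes: bytes) -> bytes:
    """Sign the SHA-256 digest of the video file."""
    return private_key.sign(hashlib.sha256(video_bytes).digest())

def verify_video(video_bytes: bytes, signature: bytes) -> bool:
    """Viewer side: check the signature against the published key."""
    try:
        public_key.verify(signature, hashlib.sha256(video_bytes).digest())
        return True
    except InvalidSignature:
        return False

video = b"...raw video bytes..."
sig = sign_video(video)
print(verify_video(video, sig))         # True: authentic upload
print(verify_video(video + b"x", sig))  # False: altered copy (note that a
                                        # platform re-encode also breaks it)
```

The cryptography here is the easy part; key distribution, handling re-encodes, and platform adoption are the unsolved parts, which is exactly why the burden currently lands on a man biting fruit on camera.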

This is not about weather forecasting credentials. It is about infrastructure failure.

When people can no longer trust that a video is real, even when it comes from a known figure, ambient trust collapses. Not through a single dramatic event, but through thousands of small adaptations like this. Trust migrates away from systems and toward improvised social signals.

That lines up uncomfortably well with the Guardian’s concern. AI systems are being deployed faster than trust and safety can scale. Safety teams shrink. Provenance tools remain optional or absent. Responsibility is pushed downward onto users and individual creators.

So instead of robust verification at the platform or model level, we get fruit.

It is clever. It works. And it should worry us.

Because when trust becomes personal, ad hoc, and unscalable, the system as a whole becomes brittle. This is not just about AI content. It is about how societies determine what is real in moments that matter.

TL;DR: A popular weather creator now bites a specific fruit on camera to prove his videos are real. This is a workaround for deepfakes and reposts. It is also a clean example of ambient trust collapse. Platforms and AI systems no longer reliably signal authenticity, so creators invent their own verification hacks. The Guardian warned today that AI is being deployed faster than trust and safety can keep up. This is what that looks like in practice.

Question: Do you think this ends with platform-level provenance becoming mandatory, or are we heading toward more improvised human verification like this becoming normal?


The CIA mic sabotage technology hits again!


After successfully recuperating TikTok, politicians are once again going to exploit pseudo-science to outlaw the "infinite scroll." Get ready for the comeback of the pager. Thanks, libs!

Hacker News.

Just a decade after the Snowden revelations of mass domestic surveillance triggered a global backlash, the state-corporate dragnet is stronger and more invasive than ever.
