digdeeper

Security researchers have discovered more than 300 Chrome extensions that leak browser data, spy on their users, or outright steal users’ data.

An analysis of the network traffic generated by Chrome extensions uncovered 287 applications transmitting the user’s browsing history or search engine results pages (SERPs).

Some of them, security researcher Q Continuum explains, essentially expose that data over unsecured networks, while others send it to collection servers, whether as intended functionality, for monetization, or with malicious intent.

The extensions have over 37.4 million users, the researcher says. Of these, roughly 27.2 million users installed 153 extensions that were confirmed to leak browser history upon installation.


Parents across the country are taking steps to stop their children from using school-issued Chromebooks and iPads, citing concerns about distractions and access to inappropriate content that they fear hampers their kids’ education.

One middle schooler had been begging to opt out, citing headaches from the Chromebook screen and a dislike of the AI chatbot recently integrated into it.


HEVC video codec patent ruling


cross-posted from: https://lemmy.world/post/43189320


The Guardian ran an editorial today warning that AI companies are shedding safety staff while accelerating deployment and profit-seeking. The concern was not just about specific models or edge cases, but about something more structural: as AI systems scale, the mechanisms that let people trust what they see, hear, and read are not keeping up.

Whether you agree with the Guardian’s conclusions or not, the underlying issue it points at is broader than any one company: the steady collapse of ambient trust in our information systems.

Here’s a small but telling technology-adjacent example that fits that warning almost perfectly.

Ryan Hall, Y’all, a popular online weather forecaster, recently introduced a manual verification system for his own videos. At the start of each real video, he bites into a specific piece of fruit. Viewers are told that if a video of “him” does not include the fruit, it may not be authentic.

This exists because deepfakes, voice cloning, and unauthorized reuploads have become common enough that platform verification, follower counts, and visual familiarity no longer reliably signal authenticity.

From a technology perspective, this is fascinating.

A human content creator has implemented a low-tech authentication protocol because the platforms hosting his content cannot reliably establish provenance. In effect, the fruit is a shared secret between creator and audience: a physical gesture standing in for a cryptographic signature that the platform does not provide.
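For concreteness, here is a minimal sketch of the machine equivalent of the fruit: a shared-secret signature over the video file, of the kind a platform could verify automatically. Everything here is illustrative (the secret, the function names, the byte string); it is not any platform's actual API.

```python
import hmac
import hashlib

# Illustrative only: a secret known to the creator and the platform,
# playing the role the fruit plays between creator and audience.
SECRET = b"creator-shared-secret"

def sign_video(video_bytes: bytes) -> str:
    """Creator side: produce an HMAC tag over the exact video bytes."""
    return hmac.new(SECRET, video_bytes, hashlib.sha256).hexdigest()

def verify_video(video_bytes: bytes, signature: str) -> bool:
    """Viewer/platform side: does the tag match this exact file?"""
    expected = hmac.new(SECRET, video_bytes, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking the answer via timing differences.
    return hmac.compare_digest(expected, signature)

video = b"...raw video bytes..."
tag = sign_video(video)
print(verify_video(video, tag))         # authentic upload -> True
print(verify_video(video + b"x", tag))  # tampered or re-encoded -> False
```

Any re-encode, edit, or deepfake changes the bytes and fails the check, which is exactly the property the fruit approximates socially rather than cryptographically.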

This is not about weather forecasting credentials. It is about infrastructure failure.

When people can no longer trust that a video is real, even when it comes from a known figure, ambient trust collapses. Not through a single dramatic event, but through thousands of small adaptations like this. Trust migrates away from systems and toward improvised social signals.

That lines up uncomfortably well with the Guardian’s concern. AI systems are being deployed faster than trust and safety can scale. Safety teams shrink. Provenance tools remain optional or absent. Responsibility is pushed downward onto users and individual creators.

So instead of robust verification at the platform or model level, we get fruit.

It is clever. It works. And it should worry us.

Because when trust becomes personal, ad hoc, and unscalable, the system as a whole becomes brittle. This is not just about AI content. It is about how societies determine what is real in moments that matter.

TL;DR: A popular weather creator now bites a specific fruit on camera to prove his videos are real. This is a workaround for deepfakes and reposts. It is also a clean example of ambient trust collapse. Platforms and AI systems no longer reliably signal authenticity, so creators invent their own verification hacks. The Guardian warned today that AI is being deployed faster than trust and safety can keep up. This is what that looks like in practice.

Question: Do you think this ends with platform-level provenance becoming mandatory, or are we heading toward more improvised human verification like this becoming normal?


After successfully recuperating TikTok, politicians are once again going to exploit pseudo-science to outlaw the "infinite scroll." Get ready for the comeback of the pager. Thanks libs!


Hacker News.

Just a decade after the Snowden reporting on mass domestic surveillance triggered a global backlash, the state-corporate dragnet is stronger and more invasive than ever.


Adafruit: From Ultimate Driving Machine to Ultimate Rent-Seeking Machine: The BMW Logo Screw Patent.

If you haven’t already heard, BMW’s R&D teams have been busy “innovating.” Unfortunately, they aren’t focusing on the things that actually matter—like stellar engine performance or the legendary driving dynamics that gearheads love. Instead, the C-suite execs decided that the best use of their engineering budget was to design a proprietary security screw specifically intended to prevent BMW drivers from fixing their own cars.


Archive.today.

Anthropic’s artificial-intelligence tool Claude was used in the U.S. military’s operation to capture former Venezuelan President Nicolás Maduro, highlighting how AI models are gaining traction in the Pentagon, according to people familiar with the matter.


Dr. Mehmet Oz is pitching a controversial fix for America's rural health care crisis: artificial intelligence.

"There's no question about it — whether you want it or not — the best way to help some of these communities is gonna be AI-based avatars," Oz, the head of the Centers for Medicare and Medicaid Services, said recently at an event focused on addiction and mental health hosted by Action for Progress, a coalition aimed at improving behavioral health care. He said AI could multiply the reach of doctors fivefold — or more — without burning them out.

The AI proposal is part of the Trump administration's $50 billion plan to modernize health care in rural communities. That includes deploying tools such as digital avatars to conduct basic medical interviews, robotic systems for remote diagnostics, and drones to deliver medication where pharmacies don't exist.


Stable Video Infinity: Infinite-Length Video Generation with Error Recycling.

Links

Today, anyone can create realistic images in just a few clicks with the help of AI. Generating videos, however, is a much more complicated task. Existing AI models can only produce coherent video for less than 30 seconds before it degrades into randomness, with incoherent shapes, colors, and logic. The problem is called drift, and computer scientists have been working on it for years.

At EPFL, researchers at the Visual Intelligence for Transportation (VITA) Laboratory have taken a novel approach, working with the errors instead of circumventing or ignoring them, and developed a video-generation method that essentially eliminates drift. Their method recycles errors back into the AI model so that it learns from its own mistakes.
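A toy sketch of the general principle may help: in autoregressive generation, each output frame becomes the next input, so small per-step errors compound into drift; "error recycling" means harvesting the model's own drifted outputs and pairing them with the true frames as training data, so the model learns to recover from its own mistakes. This is purely an illustration of that idea, not the VITA lab's actual method or code.

```python
import random

def generate_next(frame: float) -> float:
    """Stand-in 'model': predicts the next frame value with a small
    systematic bias, the per-step error that causes drift."""
    return frame + random.uniform(-0.1, 0.15)

def rollout(start: float, steps: int) -> list[float]:
    """Autoregressive generation: each output is fed back as the next
    input, so per-step errors compound -- this compounding is drift."""
    frames = [start]
    for _ in range(steps):
        frames.append(generate_next(frames[-1]))
    return frames

# Error recycling, conceptually: instead of training only on clean
# ground truth, collect the model's own erroneous rollout and build
# (drifted_frame, true_frame) pairs, so the model sees -- and learns
# to correct -- the very errors it produces at generation time.
true_frames = [0.0] * 31
drifted = rollout(0.0, 30)
training_pairs = list(zip(drifted, true_frames))
print(len(training_pairs))  # 31 (input, target) pairs harvested
```

The key design point is that the training distribution now matches the generation-time distribution (the model's own imperfect outputs), which is what lets the approach attack drift directly rather than hoping errors stay small.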


Google Play has delisted UpScrolled, the "censorship-resistant" social media app founded by Issam Hijazi, following its rapid growth to over 2.5 million users and its brief stint as a top-ranked alternative to TikTok.

While the app remains available for existing users, Google has not provided a specific reason for the removal. UpScrolled's team confirmed they are working with the Play Store toward reinstatement while maintaining their commitment to unfiltered content.

This development follows the app's rapid ascent in popularity, particularly amid concerns over content moderation on competing platforms like TikTok.


Co-ops are often dismissed as attempts to create islands of socialism. But building democratically controlled tech infrastructure can be part of a wider movement for working-class power.


Google has criticized the European Union’s plans to achieve digital sovereignty through open-source software. The company warned that Brussels’ policies aimed at reducing dependence on American tech companies could harm competitiveness. According to Google, replacing current tools with open-source programs would not contribute to economic growth.

Kent Walker, Google’s president of global affairs and chief legal officer, warned of a competitive paradox that Europe is facing. According to the Financial Times, he said that creating regulatory barriers would be harmful in a context of rapid technological advancement. His remarks came just days after the European Commission concluded a public consultation assessing the transition to open-source software.

Google’s chief legal officer clarified that he is not opposed to digital sovereignty, but recommended making use of the “best technologies in the world.” Walker suggested that American companies could collaborate with European firms to implement measures ensuring data protection. Local management or servers located in Europe to store information are among the options.

The EU is preparing a technological sovereignty package aimed at eliminating dependence on third-party software, such as Google’s. After reviewing proposals, it concluded that reliance on external suppliers for critical infrastructure entails economic risks and creates vulnerabilities. The strategy focuses not only on regulation but also on adopting open-source software to achieve digital sovereignty.

According to Google, this change would represent a problem for users. Walker argues that the market moves faster than legislation and warns that regulatory friction will only leave European consumers and businesses behind in what he calls “the most competitive technological transition we have ever seen.” As it did with the DMA and other laws, Google is playing on fear. Kent Walker suggested that this initiative would stifle innovation and deny people access to the “best digital tools.”

The promotion of open-source software aims to break dependence on foreign suppliers, especially during a period of instability caused by the Trump administration. The European Union has highlighted the risks of continuing under this system and proposes that public institutions should have full control over their own technology.

According to a study on the impact of open-source software, the European Commission found that it contributes between €65 billion and €95 billion annually to the European Union’s GDP. The executive body estimates that a 10% increase in contributions to open-source software would generate an additional €100 billion in growth for the bloc’s economy.
